# DT Fourier Series with a single MATLAB command!

Calculating Fourier series by hand can often become time consuming and error prone. MATLAB has an easy and fast built-in function for computing discrete-time Fourier series coefficients. Unfortunately, this won't help you on exams, but it might save you considerable time on homework assignments. The command is ifft. It takes in a vector representing your signal and produces a vector of the Fourier series coefficients. Two examples are provided below.

Example 1: The signal is represented by the graph below and is periodic for all time. This signal can be represented by a vector. Each element in the vector corresponds to the value that the signal takes at each time interval. At time 0 the value is 2, at time 1 the value is 1, and for times 2 and 3 the value is 0. This can be represented by the vector below:

[2,1,0,0]

To find the Fourier series coefficients, we would use the following MATLAB code:

signal = [2,1,0,0];
fouriercoefs = ifft(signal)

The output gives:

fouriercoefs =
   0.7500   0.5000 + 0.2500i   0.2500   0.5000 - 0.2500i

This means that the Fourier series coefficients are: a0 = .75, a1 = .5 + .25j, a2 = .25, a3 = .5 - .25j.

Example 2: The signal is represented by the graph below and is periodic for all time. This signal can be represented by a vector like before. You may recognize this signal as x(t) = t for 0<=t<=2 in continuous time. Since we're in discrete time, the vector below represents the signal:

[0,1,2]

Note that this is NOT the same as [0,.5,1,1.5,2]. In that case, the continuous-time example would be x(t) = .5t for 0<=t<=4.

To find the Fourier series coefficients, we would use the following MATLAB code:

signal = [0,1,2];
fouriercoefs = ifft(signal)

The output gives:

fouriercoefs =
   1.0000   -0.5000 - 0.2887i   -0.5000 + 0.2887i

This means that the Fourier series coefficients are: a0 = 1, a1 = -.5 - .2887j, a2 = -.5 + .2887j.

Hopefully this makes taking DT Fourier series in MATLAB easier and faster. These are examples which you can easily verify by hand.

--shaun.p.greene.1, Wed, 26 Sep 2007 22:46:19:
Good find in the functions library. Although I'm not sure that ifft() does exactly what we really want, I tried it on the homework, and it gives me pretty much the same answer, so I definitely think it's close to what we need. I found a function called dftmtx() that will generate the k and n matrix that is needed for finding the ak values. I apologize in advance because I'm not good in LaTeX. We have

$\displaystyle a_k = \frac{1}{N}\sum_{n=0}^{N-1} x[n]\, e^{-\jmath\frac{2\pi}{N} k n}$

The dftmtx() command should give you the matrix that has your exp(......). Then, all you have left to do to find the ak's is ak = signal * dftmtx(N); which gives a vector of ak values that is almost identical to what the ifft() command gives you. Thanks again for the good start with the ifft function, it got me moving on this homework.

--shaun.p.greene.1, Wed, 26 Sep 2007 22:43:05:
sry, that paragraph is really hard to read. help?

--andrew.c.daniel.1, Thu, 27 Sep 2007 00:04:44:
if you used ifft() in the homework and it didn't work, you could try transposing the [1xn] vector wavread gives you to an [nx1] column vector. MATLAB syntax: A_transposed = A';

--tom.l.moffitt.1, Thu, 27 Sep 2007 00:15:22:
If you do it on the homework, make sure you take it over one period.
Since with voice recordings each period won't be exactly the same, it's a good idea to just do it over one.

A note about fft vs ifft --ross.a.howard.1, Thu, 27 Sep 2007 09:31:01:
If you look at the help fft page, it gives the equations it represents with fft and ifft. There are some differences between them and our book. fft has the correct sign in the complex exponential, but is not multiplied by 1/N. ifft finds the conjugate of the ak's (which does not matter when you plot abs(ak)) and has the 1/N term. For finding the ak's using fft: aks = aks./length(aks);

One should also note the helpful function fftshift. It is useful because the vector returned from fft starts at index 1 but contains a0, index 2 contains a1, and so on until it reaches the highest k, then it starts counting down. This means that when you plot the ak's, they will not be in the right order. The fftshift function correctly puts the negative ak's on the left of a0 (aks = fftshift(aks);). Note: if you used ifft to find the ak's, then use ifftshift instead. Hope this helps. :)
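To summarize the conventions discussed above (a reading aid added here, assuming the usual textbook DTFS convention with $e^{-\jmath 2\pi kn/N}$ in the analysis equation and a real-valued signal $x[n]$ of period $N$):

$\displaystyle a_k = \frac{1}{N}\sum_{n=0}^{N-1} x[n]\, e^{-\jmath\frac{2\pi}{N}kn}, \qquad \mathtt{fft(x)}_{\,k+1} = N\,a_k, \qquad \mathtt{ifft(x)}_{\,k+1} = a_k^{*},$

where the index offset $k+1$ accounts for MATLAB's 1-based indexing and $a_k^{*}$ denotes the complex conjugate. For the first example, the analysis equation gives $a_1 = 0.5 - 0.25\jmath$ and ifft reports its conjugate $0.5 + 0.25\jmath$, which is harmless when plotting $|a_k|$.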
# State space model with constant offset for harmonic balance Given a linear state space model as $$\begin{split} \dot{x}_1 &= -50\,x_1 + 5\,x_2 - 0.15\,u + 250 \\ \dot{x}_2 &= 5 - x_1 \\ y &=0.2\,x_1 - 1 \,. \end{split}$$ I would now like to analyse this model with harmonic balance, given a symmetric nonlinear curve $$u = f(e)$$ with $e = w - y$ and $w = 0$, as usual with harmonic balance, so $$u = f(-y) = -f(y)\,.$$ My problem here is the constant terms in the state equations and the output equation. How can I deal with those? The $f(\cdot)$ curve is point symmetric to the origin, so I cannot just omit the $-1$ in the output equation, otherwise the symmetry required for application of harmonic balance is no longer given at the equilibrium $x_1 = 5$. If your system is of the form $$\dot{x} = A\,x + B\,u + f \\ y = C\,x + D\,u + g \tag{1}$$ with $f$ and $g$ constant vectors, then you can do a coordinate transformation $z=x+\alpha$ and $v=u+\beta$ with $\alpha$ and $\beta$ constant vectors which satisfy $$\begin{bmatrix} A & B \\ C & D \end{bmatrix} \begin{bmatrix} \alpha \\ \beta \end{bmatrix} = \begin{bmatrix} f \\ g \end{bmatrix}. \tag{2}$$ This can always be solved if the combined $(A,B,C,D)$ matrix is full rank; if it is not full rank, a solution still exists when the vector $(f,g)$ lies in the range (column space) of that matrix. After this transformation the dynamics will simply be $$\dot{z} = A\,z + B\,v \\ y = C\,z + D\,v. \tag{3}$$ However, if it is required that $u=f(y)$, with $f(0)=0$ and $f(-y)=-f(y)$, then this transformation will only work if the value found for $\beta$ is zero. If $\beta\neq0$, or the system of equations in $(2)$ is not solvable, then you could resort to extending the state space by one state $\xi$, whose time derivative is always zero. If the initial condition of $\xi$ is one, then the same dynamics as $(1)$ will be obtained when using the following extended state space model $$\begin{bmatrix} \dot{x} \\ \dot{\xi} \end{bmatrix} = \begin{bmatrix} A & f \\ 0 & 0 \end{bmatrix} \begin{bmatrix} x \\ \xi \end{bmatrix} + \begin{bmatrix} B \\ 0 \end{bmatrix} u \\ y = \begin{bmatrix} C & g \end{bmatrix} \begin{bmatrix} x \\ \xi \end{bmatrix} + D\,u. \tag{4}$$
• Thanks, I have an additional question: Is there any disadvantage in using the second method right away? It seems that the second method always works, while the first method requires the system $(2)$ to be solvable. – SampleTime Dec 10 '17 at 10:28
• @SampleTime Depending on what analysis you plan to do, there might be a disadvantage in adding an uncontrollable, marginally stable mode to the system. It is always good to have multiple options. And my first method might also be useful in other situations where it is not required that $u=f(y)$ with $f(0)=0$. – Kwin van der Veen Dec 10 '17 at 16:00
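As a concrete check (an editorial worked example, not part of the original question or answer), the transformation in $(2)$ applied to the system from the question uses
$$A=\begin{bmatrix}-50 & 5\\ -1 & 0\end{bmatrix},\quad B=\begin{bmatrix}-0.15\\ 0\end{bmatrix},\quad f=\begin{bmatrix}250\\ 5\end{bmatrix},\quad C=\begin{bmatrix}0.2 & 0\end{bmatrix},\quad D=0,\quad g=-1.$$
Equation $(2)$ then reads $-50\,\alpha_1+5\,\alpha_2-0.15\,\beta=250$, $-\alpha_1=5$ and $0.2\,\alpha_1=-1$. The last two equations are consistent and give $\alpha_1=-5$; the first then reduces to $5\,\alpha_2=0.15\,\beta$, so $\beta=0$, $\alpha=(-5,0)$ is an admissible choice. With $z_1=x_1-5$, $z_2=x_2$ and $v=u$, the constant offsets disappear, $y=0.2\,z_1$, and the odd symmetry $u=f(-y)$ is preserved, so the harmonic-balance analysis can proceed on the shifted model $(3)$.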
# Mathematical Sciences Research Institute

# Workshop: The Commutative Algebra of Singularities in Birational Geometry: Multiplier Ideals, Jets, Valuations, and Positive Characteristic Methods

May 06, 2013 - May 10, 2013

Registration Deadline: May 24, 2013

Parent Program: Commutative Algebra

Organizers: Craig Huneke (University of Virginia), Yujiro Kawamata (University of Tokyo), Mircea Mustata (University of Michigan), Karen Smith (University of Michigan), Kei-ichi Watanabe (Nihon University)

## Description

The workshop will examine the interplay between measures of singularities coming both from characteristic p methods of commutative algebra, and invariants of singularities coming from birational algebraic geometry. There is a long history of this interaction, which arises via the "reduction to characteristic p" procedure. It is only in the last few years, however, that very concrete objects from both areas, namely generalized test ideals from commutative algebra and multiplier ideals from birational geometry, have been shown to be intimately connected. This workshop will explore this connection, as well as other topics used to study singularities such as jet schemes and valuations.

Bibliography (pdf)

## Funding & Logistics

### Funding

To apply for funding, you must register by the funding application deadline displayed above. Students, recent Ph.D.'s, women, and members of underrepresented minorities are particularly encouraged to apply. Funding awards are typically made 6 weeks before the workshop begins. Requests received after the funding deadline are considered only if additional funds become available.

### Lodging

MSRI has preferred rates at the Rose Garden Inn, depending on room availability. Reservations may be made by calling 1-800-992-9005 or directly on their website. Click on Corporate at the bottom of the screen and, when prompted, enter code MATH (this code is not case sensitive). By using this code a new calendar will appear and will show the MSRI rate on all room types available.

MSRI has preferred rates at the Hotel Durant. Reservations may be made by calling 1-800-238-7268. When making reservations, guests must request the MSRI preferred rate. If you are making your reservations online, please go to this link and enter the promo/corporate code MSRI123. Our preferred rate is $129 per night for a Deluxe Queen/King, based on availability.

MSRI has preferred rates of $149 - $189 plus tax at the Hotel Shattuck Plaza, depending on room availability.
Guests can either call the hotel's main line at 510-845-7300 and ask for the MSRI - Mathematical Science Research Inst. discount, or go to www.hotelshattuckplaza.com and click Book Now. Once on the reservation page, click "Promo/Corporate Code" and input the code: msri.

MSRI has preferred rates of $110 - $140 at the Berkeley Lab Guest House, depending on room availability. Reservations may be made by calling 510-495-8000 or directly on their website. Select "I am an individual traveler affiliated with MSRI".

Additional lodging options may be found on our short term housing page.

## Schedule

### May 06, 2013 (Monday)
- 09:00 AM - 09:30 AM: Welcome
- 09:30 AM - 10:20 AM: Resolutions of dlt pairs, János Kollár (Princeton University)
- 10:30 AM - 11:00 AM: Tea
- 11:00 AM - 11:50 AM: Recent results on the grading of local cohomology modules, Gennady Lyubeznik (University of Minnesota Twin Cities)
- 12:00 PM - 02:00 PM: Lunch
- 02:00 PM - 02:50 PM: The core of an ideal, Claudia Polini (University of Notre Dame)
- 03:00 PM - 03:30 PM: Tea
- 03:30 PM - 04:20 PM: ACC for the log canonical threshold and termination of flips, James McKernan (Massachusetts Institute of Technology)

### May 07, 2013 (Tuesday)
- 09:00 AM - 09:50 AM: Multiplicities of graded families of linear series, Steven Cutkosky (University of Missouri)
- 10:00 AM - 10:30 AM: Tea
- 10:30 AM - 11:20 AM: Ordinary varieties and the comparison between multiplier ideals and test ideals, Vasudevan Srinivas (Tata Institute of Fundamental Research)
- 11:30 AM - 12:20 PM: F-singularities in families, Karl Schwede (Pennsylvania State University)
- 12:30 PM - 02:00 PM: Lunch
- 02:00 PM - 02:50 PM: On the local uniformization of Abhyankar valuations using toric maps, Bernard Teissier (Institut mathématique de Jussieu-PRG)
- 03:00 PM - 03:30 PM: Tea
- 03:30 PM - 04:20 PM: A local Lefschetz theorem, Bhargav Bhatt (Institute for Advanced Study)
- 04:30 PM - 06:20 PM: Reception

### May 08, 2013 (Wednesday)
- 09:00 AM - 09:50 AM: The Nash problem on families of arcs, Tommaso de Fernex (University of Utah)
- 10:00 AM - 10:30 AM: Tea
- 10:30 AM - 11:20 AM: F-Signature and Relative Hilbert-Kunz Multiplicity, Kevin Tucker (Princeton University)
- 11:30 AM - 12:20 PM: Some computations of Hilbert-Kunz functions, Vijaylaxmi Trivedi (Tata Institute of Fundamental Research)
- 12:30 PM - 02:00 PM: Lunch
- 02:00 PM - 02:25 PM: F-pure thresholds of quasi-homogeneous polynomials, Emily Witt (University of Minnesota Twin Cities)
- 02:00 PM - 02:25 PM: Pluri-canonical maps in positive characteristic, Yuchen Zhang (University of Utah)
- 02:30 PM - 02:55 PM: Degrees of relations, the Weak Lefschetz Property, and top socle degrees in positive characteristic, Adela Vraciu (University of South Carolina)
- 02:30 PM - 02:55 PM: Generic linkage and regularity of algebraic varieties, Wenbo Niu (Purdue University)
- 03:00 PM - 03:30 PM: Tea
- 03:30 PM - 03:55 PM: Multiplier ideals and test ideals of complete intersection binomial ideals, Takafumi Shibuta (Kyushu University)
- 03:30 PM - 03:55 PM: Asymptotic test ideals and their possible applications to resolution problems, Angelica Benito (University of Michigan)
- 04:00 PM - 04:25 PM: Frobenius splitting of orbit closures associated to type A quivers, Jenna Rajchgot (MSRI - Mathematical Sciences Research Institute)
- 04:00 PM - 04:25 PM: Dual F-signature, Akiyoshi Sannai (Nagoya University)

### May 09, 2013 (Thursday)
- 09:00 AM - 09:50 AM: Globally F-regular and Frobenius split surfaces, Shunsuke Takagi (Graduate School of Mathematical Sciences)
- 10:00 AM - 10:30 AM: Tea
- 10:30 AM - 11:20 AM: The index of a threefold canonical singularity, Masayuki Kawakita (Kyoto University)
- 11:30 AM - 12:20 PM: Uniform Izumi's theorem, Charles Favre (École Polytechnique)
- 12:30 PM - 02:00 PM: Lunch
- 02:00 PM - 02:50 PM: Singularities with respect to Mather-Jacobian discrepancies, Shihoko Ishii (University of Tokyo)
- 03:00 PM - 03:30 PM: Tea
- 03:30 PM - 04:20 PM: The monodromy conjecture for motivic and related zeta functions, Willem Veys (Katholieke Universiteit Leuven)

### May 10, 2013 (Friday)
- 09:30 AM - 10:20 AM: Mather multiplier ideals, Lawrence Ein (University of Illinois at Chicago)
- 10:30 AM - 11:00 AM: Tea
- 11:00 AM - 11:50 AM: Something is irrational in Hilbert-Kunz theory, Holger Brenner (Universität Osnabrück)
- 12:00 PM - 02:00 PM: Lunch
- 02:00 PM - 02:50 PM: The Singularities of the Moduli Spaces of Vector Bundles over Curves in characteristic p, Vikram Mehta (IIT, Bombay)
- 03:00 PM - 03:30 PM: Tea
- 03:30 PM - 04:20 PM: Stabilization of the Frobenius push-forward and the F-blowup sequence, Nobuo Hara (Tohoku University)
Tutorials on Advanced Math and Computer Science Concepts

# Problem 4 - The Median of Two Sorted Arrays (Hard)

Code for this problem can be found at https://github.com/scprogramming/LeetcodeExplained/tree/master/Problem%20Four%20-%20Median%20of%20Two%20Sorted%20Arrays%20(Hard)

Problem: Given two sorted arrays of numbers, determine the median of the two sorted arrays. The overall runtime complexity must be O(log(min(m,n))).

If we were to remove the runtime complexity requirement, there would be several solutions to this problem that could work. A first thought might be to merge the two arrays, then find the median of the merged array. The issue with this strategy is that, although determining where to insert into the array is O(log(n)) with a binary search, the actual act of inserting into an array is O(n), so this goes over our required time complexity. Seeing this, we now understand that we can't combine the arrays in any way while maintaining an O(log(min(m,n))) time complexity. In order to solve this problem, we will need to think a little more creatively.

First, let's think about exactly what a median of two arrays is giving us. A median is a midpoint in a set of numbers, such that half the numbers are on one side, and the other half are on the other side. More importantly, with sorted arrays, a median will have the property that everything on the left-hand side will be smaller than everything on the right-hand side. This is because if a median splits an array in half, and the array is sorted, naturally all the smaller elements will be on one side, and all the larger will be on the other side. This fact is the key to the solution we need for this problem.

Let's consider a simple example of two arrays:

A1 = [1,2,3,4,10]
A2 = [5,6,7,8,9]

If we wanted to find the median, we would first need to get all the elements into sorted order, which would give us a result like this:

A = [1,2,3,4,5,6,7,8,9,10]

From here, we calculate the median. Since there is an even number of elements, we use the formula (5+6)/2 = 5.5. We are just taking the two elements in the middle, 5 and 6, and taking their average. If the array were of odd size, we could just pick the middle element.

From this example, we can take a few facts that will help us solve the problem.

1. We need to take n elements from the first array A1. This means that there would be (len(A1) + len(A2) + 1)/2 - n elements to pick from the second array. This formula just represents the number of elements required to create two equally sized halves, so that our median uses all the elements available.
2. We need to ensure that the left-hand side of the median is less than the right-hand side of the median. Since the arrays are sorted, we can do this by comparing the largest value on the left to the smallest value on the right. If the property is satisfied for these values, then our median choice is valid.
3. If the property in 2 is not satisfied, we simply pick a new point to partition, and try again.

Let's take a look at the code to get an intuition on how we can program this, and how we can derive the time complexity from the algorithm. To start, we want a1 to be holding the shorter array of the two. This is done to minimize the area that we need to search. In addition to this, we also set up variables to hold the lengths of each array, as well as a low which will start at the beginning of the array, and a high which is the length of the smaller array.
From here, we will iterate while the low is less than or equal to the high. We are going to calculate our partitions as discussed earlier. The partition of the first array will be (low + high)/2, the midpoint of the current search range. The partition of the second array will be (len1 + len2 + 1)/2 - partition1, to gather the remaining elements needed for the left half.

From here, we are going to gather the values at the edges of each array's partition. To find the max of the left part of the first array, we look at the value just before the partition, as long as the partition is bigger than 0. Recall that everything to the left of the partitions should end up on the left of the median, and we want the largest of these values to be no bigger than the smallest value to the right of the other array's partition. We find the other maximums and minimums in a similar fashion, taking the element just after a partition for a minimum, or just before it for a maximum. In any instance where such a value does not exist because a partition sits at the very start or end of an array, we use a sentinel default (effectively negative or positive infinity).

From here, we just need to check the maximums and minimums that we found to determine whether we need to partition again or have a solution. If the max of the left part of array 1 is no bigger than the min of the right part of array 2, and the max of the left part of array 2 is no bigger than the min of the right part of array 1, we have a valid partition. From here we just calculate the median based on the partition and we are done. Otherwise, we move partition 1 to the left or right of its current position (by adjusting high or low) and start again.

This code gives us a good solution to the problem, so the final thing to discuss is the time complexity. I've asserted that this satisfies the condition of an O(log(min(m,n))) solution, where m and n are the lengths of the first and second array respectively. Let's prove that this is true. Consider that the search space for this problem is the shorter of the two arrays, so it has size min(m,n). Now consider how we adjust low and high while maintaining low <= high: on each iteration we either set high = partition1 - 1 or low = partition1 + 1, with partition1 = (low + high)/2. This means that each iteration discards roughly half of the remaining search space. Since our search space is halving each time, we end up in the typical case of logarithmic time complexity. Each iteration halves a search space of size min(m,n), therefore we have a time complexity of O(log(min(m,n))).
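The repository linked above contains the author's full implementation; below is a minimal, self-contained Python sketch of the partition approach described in this walkthrough (an editorial illustration, not the repository code):

```python
def find_median_sorted_arrays(a1, a2):
    # Ensure a1 is the shorter array so the binary search space is min(m, n).
    if len(a1) > len(a2):
        a1, a2 = a2, a1
    len1, len2 = len(a1), len(a2)
    low, high = 0, len1

    while low <= high:
        # partition1 elements of the left half come from a1, the rest from a2.
        partition1 = (low + high) // 2
        partition2 = (len1 + len2 + 1) // 2 - partition1

        # Edge values around each partition; sentinels (+/- infinity) cover the
        # cases where a partition sits at the very start or end of an array.
        max_left1 = a1[partition1 - 1] if partition1 > 0 else float('-inf')
        min_right1 = a1[partition1] if partition1 < len1 else float('inf')
        max_left2 = a2[partition2 - 1] if partition2 > 0 else float('-inf')
        min_right2 = a2[partition2] if partition2 < len2 else float('inf')

        if max_left1 <= min_right2 and max_left2 <= min_right1:
            # Valid partition: everything on the left is <= everything on the right.
            if (len1 + len2) % 2 == 0:
                return (max(max_left1, max_left2) + min(min_right1, min_right2)) / 2
            return max(max_left1, max_left2)
        elif max_left1 > min_right2:
            high = partition1 - 1  # took too many elements from a1; move left
        else:
            low = partition1 + 1   # took too few elements from a1; move right


print(find_median_sorted_arrays([1, 2, 3, 4, 10], [5, 6, 7, 8, 9]))  # prints 5.5
```

Running it on the worked example above returns 5.5, matching the hand calculation.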
If $\alpha, \beta, \gamma$ are the roots of $x^{3}+2x^{2}-4x-3=0$, then the equation whose roots are $\dfrac{\alpha}{3}, \dfrac{\beta}{3}, \dfrac{\gamma}{3}$ is:

(A) $x^{3}+6x^{2}-36x-81=0$
(B) $9x^{3}+6x^{2}-4x-1=0$
(C) $9x^{3}+6x^{2}+4x+1=0$
(D) $x^{3}-6x^{2}+36x+81=0$
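The item is given without a solution; as a quick worked check (an editorial addition), if $\alpha$ is a root of $p(x)=x^{3}+2x^{2}-4x-3$, then $\alpha/3$ is a root of
$$p(3x) = 27x^{3} + 18x^{2} - 12x - 3 = 3\left(9x^{3}+6x^{2}-4x-1\right),$$
so the required equation is $9x^{3}+6x^{2}-4x-1=0$, which is option (B).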
Seminar

# Postdoc day, Seminar 4: On the high-dimensional rational cohomology of special linear groups

#### Robin Sroka - McMaster University

Work of Borel–Serre implies that the rational cohomology of $\operatorname{SL}_n(\mathbb{Z})$ satisfies a duality property, which is analogous to Poincaré duality for manifolds. In particular, the rational cohomology of $\operatorname{SL}_n(\mathbb{Z})$ vanishes in all degrees above its virtual cohomological dimension $v_n = {n \choose 2}$. Surprisingly, the highest two possibly non-trivial rational cohomology groups also vanish if $n \geq 3$. In the top degree $v_n$ this is a result of Lee–Szczarba, and in codimension one $v_n - 1$ a theorem of Church–Putman. In this talk, I will discuss work in progress with Brück–Miller–Patzt–Wilson on the rational cohomology of $\operatorname{SL}_n(\mathbb{Z})$ in codimension two $v_n - 2$.

Organizers: Gregory Arone (Stockholm University), Tilman Bauer (KTH Royal Institute of Technology), Alexander Berglund (Stockholm University), Søren Galatius (University of Copenhagen), Jesper Grodal (University of Copenhagen), Thomas Kragh (Uppsala University)

# Program Contact

Alexander Berglund [email protected], Søren Galatius [email protected], Thomas Kragh [email protected]

# Other information

For practical matters at the Institute, send an e-mail to [email protected]
# High School Math : The Unit Circle and Radians ## Example Questions ### Example Question #41 : The Unit Circle And Radians If angle  equals , what is the equivalent angle in radians (to the nearest hundredth)? Explanation: To convert between radians and degrees, it is important to remember that: With this relationship in mind, we can convert from degrees to radians with the following formula: ### Example Question #42 : The Unit Circle And Radians If equals , what is the equivalent angle in radians (to the nearest hundredth)? Explanation: To convert between radians and degrees, it is important to remember that: With this relationship in mind, we can convert from degrees to radians with the following formula: ### Example Question #43 : The Unit Circle And Radians If is , what is the equivalent angle in radians (to the nearest hundredth)? Explanation: To convert between radians and degrees, it is important to remember that: With this relationship in mind, we can convert from degrees to radians with the following formula: ### Example Question #44 : The Unit Circle And Radians If an angle is , what is the equivalent angle in radians (to the nearest hundredth)? Explanation: To convert between radians and degrees, it is important to remember that: With this relationship in mind, we can convert from degrees to radians with the following formula: ### Example Question #45 : The Unit Circle And Radians What angle below is equivalent to ? Explanation: To convert between radians and degrees, it is important to remember that: With this relationship in mind, we can convert from degrees to radians with the following formula: ### Example Question #46 : The Unit Circle And Radians If an angle is 2.43 radians, what is the equivalent angle in degrees? Explanation: To answer this question, it is important to remember the relationship: From here, we can set up the following formula: Multiplying across, we see that the units of radians cancel out and we are left with units in degrees. ### Example Question #47 : The Unit Circle And Radians If an angle equals 5.75 radians, what is the equivalent angle in degrees? Explanation: To answer this question, it is important to remember the relationship: From here, we can set up the following formula: Multiplying across, we see that the units of radians cancel out and we are left with units in degrees. ### Example Question #48 : The Unit Circle And Radians If an angle equals 15.71 radians, what is the equivalent angle in degrees? Explanation: To answer this question, it is important to remember the relationship: From here, we can set up the following formula: Multiplying across, we see that the units of radians cancel out and we are left with units in degrees. ### Example Question #49 : The Unit Circle And Radians If an angle equals 6.4 radians, what is the equivalent angle in degrees? Explanation: To answer this question, it is important to remember the relationship: From here, we can set up the following formula: Multiplying across, we see that the units of radians cancel out and we are left with units in degrees. ### Example Question #50 : The Unit Circle And Radians If an angle is 3.55 radians, what is the equivalent angle in degrees? Explanation: To answer this question, it is important to remember the relationship: From here, we can set up the following formula: Multiplying across, we see that the units of radians cancel out and we are left with units in degrees.
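The relationship and conversion formula referred to (but not rendered) in each explanation above are presumably the standard ones:
$$180^{\circ} = \pi \text{ radians}, \qquad \theta_{\text{radians}} = \theta_{\text{degrees}} \cdot \frac{\pi}{180^{\circ}}, \qquad \theta_{\text{degrees}} = \theta_{\text{radians}} \cdot \frac{180^{\circ}}{\pi}.$$
For instance, $2.43 \text{ rad} \cdot \dfrac{180^{\circ}}{\pi} \approx 139.2^{\circ}$, which is the kind of conversion the later questions ask for.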
# An LTI system has a wide-sense stationary (WSS) input signal with zero mean. Its output is:

This question was previously asked in ESE Electronics 2014 Paper 1: Official Paper

1. non-zero mean and non-WSS signal
2. zero mean and WSS signal
3. non-zero mean and WSS signal
4. zero mean and non-WSS signal

Correct answer: Option 2 (zero mean and WSS signal)

## Detailed Solution

Analysis: The output of the filter is Y(t) = h(t) * X(t), where h(t) is the impulse response of the filter. Let the output mean be $\mu_Y$. Then

$$\mu_Y = \mathrm{E}[Y(t)] = \mathrm{E}[h(t) * X(t)] = \mathrm{E}\left[\int_{-\infty}^{\infty} h(\tau)\, X(t-\tau)\, \mathrm{d}\tau\right] = \int_{-\infty}^{\infty} h(\tau)\, \mathrm{E}[X(t-\tau)]\, \mathrm{d}\tau.$$

Now, if X(t) is WSS, a shift in time does not affect its mean, i.e. $\mathrm{E}[X(t)] = \mathrm{E}[X(t-\tau)] = \mu_X$. Therefore

$$\mu_Y = \int_{-\infty}^{\infty} h(\tau)\, \mu_X\, \mathrm{d}\tau = \mu_X \int_{-\infty}^{\infty} h(\tau)\, \mathrm{d}\tau = \mu_X\, H(0), \qquad \text{since } \int_{-\infty}^{\infty} h(\tau)\, \mathrm{d}\tau = H(0).$$

Since H(0) is a finite constant and $\mu_X = 0$, the output mean is also zero. By a similar argument the output autocorrelation depends only on the time difference, so the output has the same nature as the input signal: it is WSS with zero mean.
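A quick numerical sanity check of the conclusion (an editorial illustration, not part of the original solution; the filter coefficients below are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.standard_normal(200_000)      # zero-mean WSS (white Gaussian) input
h = np.array([0.5, 0.3, 0.2])         # some LTI impulse response (hypothetical)
y = np.convolve(x, h, mode='valid')   # output of the LTI system

# Both sample means are close to 0, illustrating mu_Y = mu_X * H(0) = 0.
print(np.mean(x), np.mean(y))
```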
## Intermediate Algebra (6th Edition) $x=-2$ A vertical line has an equation of the form $x=c$. Since the line contains the point, $(-2,-\frac{3}{4})$, the equation for this line must be $x=-2$. The x-coordinate will be -2 for all values of y.
Which of the following is not due to total internal reflection?
1. Difference between apparent and real depth of the pond
2. Mirage on hot summer days
3. Brilliance of the diamond
4. Working of optical fibre
Subtopic: Total Internal Reflection

A ray of light travelling in a transparent medium of refractive index $\mu$ falls on a surface separating the medium from the air at an angle of incidence of 45°. For which of the following values of $\mu$ can the ray undergo total internal reflection?
1. $\mu = 1.33$
2. $\mu = 1.40$
3. $\mu = 1.50$
4. $\mu = 1.25$
Subtopic: Total Internal Reflection

The speed of light in media M1 and M2 is $1.5 \times 10^{8}$ m/s and $2.0 \times 10^{8}$ m/s respectively. A ray of light enters from medium M1 to M2 at an incidence angle i. If the ray suffers total internal reflection, the value of i is:
1. Equal to or less than $\sin^{-1}\left(\frac{3}{5}\right)$
2. Equal to or greater than $\sin^{-1}\left(\frac{3}{4}\right)$
3. Less than $\sin^{-1}\left(\frac{2}{3}\right)$
4. Equal to $\sin^{-1}\left(\frac{2}{3}\right)$
Subtopic: Total Internal Reflection

A small coin is resting on the bottom of a beaker filled with a liquid. A ray of light from the coin travels up to the surface of the liquid and moves along its surface (see figure). How fast is the light travelling in the liquid?
1.
2.
3.
4.
Subtopic: Total Internal Reflection
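Quick checks for the two numerically determined items above (editorial additions; the site's own explanations are not included in this excerpt): total internal reflection at a 45° incidence requires $\sin 45^{\circ} \geq \sin\theta_c = \dfrac{1}{\mu}$, i.e. $\mu \geq \sqrt{2} \approx 1.414$, so of the listed values only $\mu = 1.50$ qualifies. For the two media, $\sin\theta_c = \dfrac{v_1}{v_2} = \dfrac{1.5\times10^{8}}{2.0\times10^{8}} = \dfrac{3}{4}$, so total internal reflection requires $i$ equal to or greater than $\sin^{-1}\left(\dfrac{3}{4}\right)$.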
# OpenAccess

2017-11-10 11:28 [FZJ-2017-07202] Journal Article, et al.
Magnetically induced ferroelectricity in Bi$_2$CuO$_4$
Physical review / B 96(5), 054424 (2017) [10.1103/PhysRevB.96.054424]
The tetragonal copper oxide Bi$_2$CuO$_4$ has an unusual crystal structure with a three-dimensional network of well separated CuO$_4$ plaquettes. The spin structure of Bi$_2$CuO$_4$ in the magnetically ordered state below $T_N \sim 43$ K remains controversial. [...]
External links: OpenAccess

2017-11-10 11:28 [FZJ-2017-07201] Journal Article, et al.
Simulation and optimization of a new focusing polarizing bender for the diffuse neutron scattering spectrometer DNS at MLZ
Journal of physics / Conference Series 862, 012018 (2017) [10.1088/1742-6596/862/1/012018]
We present the concept and the results of the simulations of a new polarizer for the diffuse neutron scattering spectrometer DNS at MLZ. The concept of the polarizer is based on the idea of a bender made from a stack of silicon wafers with a double-side supermirror polarizing coating and absorbing spacers in between. [...]
External links: OpenAccess

2017-11-10 11:28 [FZJ-2017-07200] Journal Article, et al.
Single-crystal growth and physical properties of 50% electron-doped rhodate Sr$_{1.5}$La$_{0.5}$RhO$_4$
Physical review materials 1(4), 044005 (2017) [10.1103/PhysRevMaterials.1.044005]
Centimeter-sized single crystals of Sr$_{1.5}$La$_{0.5}$RhO$_4$ were grown by the floating zone method at oxygen pressures of 20 bar. The quality of our single crystals was confirmed by x-ray Laue, powder and single-crystal x-ray diffraction, neutron and x-ray absorption spectroscopy measurements. [...]
External links: OpenAccess

2017-11-10 11:28 [FZJ-2017-07008] Journal Article, Wuttke, J.
No case against scattering theory
In a series of papers, Frauenfelder et al. (1-3) propose a radical reinterpretation of incoherent neutron scattering by complex systems, specifically by protein hydration water, drawing into doubt the "currently accepted model, used for >50 y" (3). [...]
External links: OpenAccess

2017-11-10 11:28 [FZJ-2017-06819] Journal Article, et al.
Nanostructure of HT-PEFC Electrodes Investigated with Scattering Methods
ECS transactions 80(8), 19-25 (2017) [10.1149/08008.0019ecst]
The nanostructure of the electrode layer of a HT-PEFC is studied with neutron and X-ray small-angle scattering techniques. The different contrasts of the two probes provide a view on different aspects of the electrode layer. [...]
External links: Fulltext; OpenAccess

2017-09-22 11:28 [FZJ-2017-06677] Journal Article, et al.
Anomalous phonons in CaFe$_2$As$_2$ explored by inelastic neutron scattering
Journal of physics / Conference Series 251, 012008 (2010) [10.1088/1742-6596/251/1/012008]
Extensive inelastic neutron scattering measurements of phonons on a single crystal of CaFe$_2$As$_2$ allowed us to establish a fairly complete picture of phonon dispersions in the main symmetry directions. The phonon spectra were also calculated by density functional theory (DFT) in the local density approximation (LDA). [...]
External links: OpenAccess

2017-09-22 11:28 [FZJ-2017-06675] Journal Article, et al.
Pressure-driven Phase Transition in CaFeAsF at 40 and 300 K
Journal of physics / Conference Series 377, 012034 (2012) [10.1088/1742-6596/377/1/012034]
We carried out a systematic investigation of the high-pressure crystal structure and structural phase transition up to 46 GPa at 40 K and 25 GPa at 300 K in CaFeAsF using powder synchrotron x-ray diffraction experiments. Rietveld analysis of the diffraction data at 40 K reveals a structural phase transition from an orthorhombic to a monoclinic phase at Pc = 13.7 GPa, while increasing pressure. [...]
External links: OpenAccess

2017-09-16 12:24 [FZJ-2017-06043] Journal Article, et al.
Spinon confinement in a quasi-one-dimensional anisotropic Heisenberg magnet
Physical review / B 96(5), 054423 (2017) [10.1103/PhysRevB.96.054423]
Confinement is a process by which particles with fractional quantum numbers bind together to form quasiparticles with integer quantum numbers. The constituent particles are confined by an attractive interaction whose strength increases with increasing particle separation and, as a consequence, individual particles are not found in isolation. [...]
External links: OpenAccess

2017-09-16 12:24 [FZJ-2017-05842] Journal Article, et al.
Discussion on data correction for Polarization Analysis with a $^3$He spin filter analyzer
Journal of physics / Conference Series 862, 012001 (2017) [10.1088/1742-6596/862/1/012001]
Fully polarized neutron reflectometry and grazing incidence small angle neutron scattering are effective methods to explore magnetic structures on the nm to μm length scales. This paper is an outline of how to fully correct for the polarization analysis (PA) inefficiencies of such an instrument and to determine the error contributions of the neutron polarizer and analyzer. [...]
External links: OpenAccess

2017-09-10 11:29 [FZJ-2017-06416] Journal Article, et al.
Magnetic targeting to enhance microbubble delivery in an occluded microarterial bifurcation
Physics in medicine and biology 62(18), 7451-7470 (2017) [10.1088/1361-6560/aa858f]
Ultrasound and microbubbles have been shown to accelerate the breakdown of blood clots both in vitro and in vivo. Clinical translation of this technology is still limited, however, in part by inefficient microbubble delivery to the thrombus. [...]
External links: OpenAccess
## Chemistry (4th Edition)

The law of mass action was an empirically developed equation that allows us to write the equilibrium expression for any reaction, as long as we have the balanced equation for it. Its representation for the reaction: $$aA+bB\longrightarrow cC$$ is: $$K_c = \frac{[C]^c}{[A]^a[B]^b}$$
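As an illustration (an editorial addition, not part of the textbook answer), applying the same pattern to the ammonia synthesis equilibrium gives $$\mathrm{N_2} + 3\,\mathrm{H_2} \rightleftharpoons 2\,\mathrm{NH_3}, \qquad K_c = \frac{[\mathrm{NH_3}]^2}{[\mathrm{N_2}][\mathrm{H_2}]^3}.$$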
# Why is setting 'predisplaypenalty' to 0 a wrong idea?

In "Vertical space between section header and text is too big", user egreg said that setting \predisplaypenalty=0 is a wrong idea. Why is that? Could you provide an example? If not '0', what value could be used?

## 1 Answer

The parameter \predisplaypenalty is the "cost" that TeX assigns to inserting a page break immediately before the start of a displayed equation (or multiple equations). On p. 189 of the TeXbook, Knuth writes: "Plain TeX sets \predisplaypenalty=10000, because fine printers traditionally shun displayed formulas at the very top of a page." [Asides: (i) If a TeX penalty parameter is set to 10,000, it's the functional equivalent of infinity. (ii) LaTeX also sets \predisplaypenalty=10000.]

If one were to set \predisplaypenalty=0, this cost parameter would be 0, i.e., TeX would have no incentive to look for alternative page break points rather than the one it may have "found" immediately before the start of a displayed equation.

As with virtually all typographic "rules," this rule is not absolute. There may be some circumstances in which it's better to violate this rule than it is to violate some other, even more important typographic rule. Nevertheless, disregarding this typographic rule entirely -- by setting the cost parameter to 0 -- cannot be the best approach.

- Why do you suppose it became a "rule" not to start a page with a displayed equation? My guess is that it's because it looks bad, to me, for a page to end with a line that doesn't go to the right margin and that doesn't end with a sentence-ending punctuation mark. For instance, if I see "Thus we see that [rest of line is blank]" at the bottom of page 305, I think for a moment that something got cut out, even if page 306 begins with the displayed equation that logically follows. But that's just my guess. – MSC Mar 14 '13 at 19:26
- @MSC - In addition to your guess (which seems entirely plausible to me), I'd venture to guess that this "rule" also reflects the general strong aversion to (typographic!) "widows", i.e., starting a page with a single line before the next paragraph break occurs. Starting a page with a displayed equation can look very much like having a typographic widow. Cast in these terms, your conjecture might be (re)stated as saying that having a page break right before a displayed equation not only creates a widow on that page but also an "orphan" (or "semi-orphan"?) on the preceding page... – Mico Mar 14 '13 at 19:39
- It all depends, I think. At my current project, I'm probably content with \predisplaypenalty=\@highpenalty. That allows for some pagebreaks above equations, where the preceding page would otherwise contain huge white gaps around equations or similar uglinesses. Short two-line paragraphs between equations are not beautiful either, when they get broken across the pagebreak: real widow and real orphan in one go. – Blackface May 20 '14 at 14:31
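If one wants to relax the default rather than remove the cost entirely, a large but finite penalty (in the spirit of the last comment) leaves TeX free to break before a display only when the alternatives are worse. A minimal sketch; the specific value here is a judgment call, not a recommendation from the answer:

```latex
% In the preamble: discourage, but no longer absolutely forbid, a page break
% immediately before a displayed equation.
% 10000 (the plain TeX / LaTeX default) means "never break here"; 0 makes such
% a break cost-free.
\predisplaypenalty=9000
```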
• Volume 77, Issue 3, September 2011, pages 405-597 • Preface • On the Lie point symmetry analysis and solutions of the inviscid Burgers equation Lie point symmetries of the first-order inviscid Burgers equation in a general setting are studied. Some new and interesting solutions are presented. • Isometric embeddings in cosmology and astrophysics Recent interest in higher-dimensional cosmological models has prompted some significant work on the mathematical technicalities of how one goes about embedding spacetimes into some higher-dimensional space. We survey results in the literature (existence theorems and simple explicit embeddings); briefly outline our work on global embeddings as well as explicit results for more complex geometries; and provide some examples. These results are contextualized physically, so as to provide a foundation for a detailed commentary on several key issues in the field such as: the meaning of 'Ricci equivalent' embeddings; the uniqueness of local (or global) embeddings; symmetry inheritance properties; and astrophysical constraints. • Gravitational collapse with decaying vacuum energy The effect of dark energy on the end state of spherical radiation collapse is considered within the context of the cosmic censorship hypothesis. It is found that it is possible to have both black holes and naked singularities. • On the enigmatic 𝛬 – A true constant of spacetime Had Einstein followed the Bianchi differential identity for the derivation of his equation of motion for gravitation, 𝛬 would have emerged as a true new constant of spacetime on the same footing as the velocity of light. It is then conceivable that he could have perhaps made the most profound prediction that the Universe may suffer accelerated expansion some time in the future! Further we argue that its identification with the quantum vacuum energy is not valid, as it should have to be accounted for, like the gravitational field energy, by enlarging the basic framework of spacetime and not through a stress tensor. The acceleration of the expansion of the Universe may indeed be measuring its value for the first time observationally. • A note on the interplay between symmetries, reduction and conservation laws of Stokes' first problem for third-grade rotating fluids We investigate the invariance properties, nontrivial conservation laws and interplay between these notions that underlie the equations governing Stokes' first problem for third-grade rotating fluids. We show that a knowledge of this leads to a number of different reductions of the governing equations and, thus, a number of exact solutions can be obtained and a spectrum of further analyses may be pursued. • Higher-order symmetries and conservation laws of multi-dimensional Gordon-type equations In this paper a class of multi-dimensional Gordon-type equations are analysed using a multiplier and homotopy approach to construct conservation laws. The main focus is the analysis of the classical versions of the Gordon-type equations and obtaining higher-order variational symmetries and corresponding conserved quantities. The results are extended to the multi-dimensional Gordon-type equations, with the two-dimensional Klein–Gordon equation in particular yielding interesting results. • Relativistic stellar models We obtain a class of solutions to the Einstein–Maxwell equations describing charged static spheres.
Upon specifying particular forms for one of the gravitational potentials and the electric field intensity, the condition for pressure isotropy is transformed into a hypergeometric equation with two free parameters. For particular parameter values we recover uncharged solutions corresponding to specific neutron star models. We find two charged solutions in terms of elementary functions for particular parameter values. The first charged model is physically reasonable and the metric functions and thermodynamic variables are well behaved. The second charged model admits a negative energy density and violates the energy conditions. • Temperature evolution during dissipative collapse We investigate the gravitational collapse of a radiating sphere evolving into a final static configuration described by the interior Schwarzschild solution. The temperature profiles of this particular model are obtained within the framework of causal thermodynamics. The overall temperature evolution is enhanced by contributions from the temperature gradient induced by perturbations as well as relaxational effects within the stellar core. • Charged fluids with symmetries We investigate the role of symmetries for charged perfect fluids by assuming that spacetime admits a conformal Killing vector. The existence of a conformal symmetry places restrictions on the model. It is possible to find a general relationship for the Lie derivative of the electromagnetic field along the integral curves of the conformal vector. The electromagnetic field is mapped conformally under particular conditions. The Maxwell equations place restrictions on the form of the proper charge density. • A note on the Lie symmetries of complex partial differential equations and their split real systems Folklore suggests that the split Lie-like operators of a complex partial differential equation are symmetries of the split system of real partial differential equations. However, this is not the case generally. We illustrate this by using the complex heat equation, wave equation with dissipation, the nonlinear Burgers equation and nonlinear KdV equations. We split the Lie symmetries of a complex partial differential equation in the real domain and obtain real Lie-like operators. Further, the complex partial differential equation is split into two coupled or uncoupled real partial differential equations which constitute a system of two equations for two real functions of two real variables. The Lie symmetries of this system are constructed by the classical Lie approach. We compare these Lie symmetries with the split Lie-like operators of the given complex partial differential equation for the examples considered. We conclude that the split Lie-like operators of complex partial differential equations are not in general symmetries of the split system of real partial differential equations. We prove a proposition that gives the criteria when the Lie-like operators are symmetries of the split system. • Effects of non-uniform interfacial tension in small Reynolds number flow past a spherical liquid drop A singular perturbation solution is given for small Reynolds number flow past a spherical liquid drop. The interfacial tension required to maintain the drop in a spherical shape is calculated. When the interfacial tension gradient exceeds a critical value, a region of reversed flow occurs on the interface at the rear and the interior flow splits into two parts with reversed circulation at the rear. 
The magnitude of the interior fluid velocity is small, of order the Reynolds number. A thin transition layer attached to the drop at the rear occurs in the exterior flow. The effects could model the stagnant cap which forms as surfactant is added but the results apply however the variability in the interfacial tension might have been induced. • Chandrasekhar: The all rounder This paper provides a very brief introduction to three of Chandrasekhar's famous books on Stellar Structure, Hydrodynamics and Black Holes. In particular we summarize Chandra's treatment of the "Thermal Instability" which plays such a crucial role in the understanding of convection zones in stellar atmospheres. We also outline three important ideas in fluid dynamics which are inexplicably omitted from Chandrasekhar's Hydrodynamic and Hydromagnetic Stability; the first is the Brunt–Väisälä frequency which appears in internal gravity waves and is closely related to Schwarzschild's stability criterion; the second is the baroclinic instability which is important in atmospheric dynamics and meteorology, and the third is the conservation of potential vorticity which is central to the understanding of the planetary scale – Rossby waves. • Transient heat transfer in longitudinal fins of various profiles with temperature-dependent thermal conductivity and heat transfer coefficient Transient heat transfer through a longitudinal fin of various profiles is studied. The thermal conductivity and heat transfer coefficients are assumed to be temperature dependent. The resulting partial differential equation is highly nonlinear. Classical Lie point symmetry methods are employed and some reductions are performed. Since the governing boundary value problem is not invariant under any Lie point symmetry, we solve the original partial differential equation numerically. The effects of realistic fin parameters such as the thermogeometric fin parameter and the exponent of the heat transfer coefficient on the temperature distribution are studied. • An investigation of embeddings for spherically symmetric spacetimes into Einstein manifolds Embeddings into higher dimensions are very important in the study of higher-dimensional theories of our Universe and in high-energy physics. Theorems which have been developed recently guarantee the existence of embeddings of pseudo-Riemannian manifolds into Einstein spaces and more general pseudo-Riemannian spaces. These results provide a technique that can be used to determine solutions for such embeddings. Here we consider local isometric embeddings of four-dimensional spherically symmetric spacetimes into five-dimensional Einstein manifolds. Difficulties in solving the five-dimensional equations for given four-dimensional spaces motivate us to investigate embedded spaces that admit bulks of a specific type. We show that the general Schwarzschild–de Sitter spacetime and Einstein Universe are the only spherically symmetric spacetimes that can be embedded into an Einstein space of a particular form, and we discuss their five-dimensional solutions. • Exact solutions of the generalized Lane–Emden equations of the first and second kind In this paper we discuss the integrability of the generalized Lane–Emden equations of the first and second kinds. We carry out their Noether symmetry classification. Various cases for the arbitrary functions in the equations are obtained for which the equations have Noether point symmetries.
First integrals of such cases are obtained, and reduction to quadrature of the corresponding Lane–Emden equations is also presented. New cases are found. • Invariance analysis and conservation laws of the wave equation on Vaidya manifolds In this paper we discuss symmetries of classes of wave equations that arise as a consequence of some Vaidya metrics. We show how the wave equation is altered by the underlying geometry. In particular, a range of consequences on the form of the wave equation, the symmetries and number of conservation laws, inter alia, are altered by the manifold on which the model wave rests. We find Lie and Noether point symmetries of the corresponding wave equations and give some reductions. Some interesting physical conclusions relating to conservation laws such as energy, linear and angular momenta are also determined. We also present some interesting comparisons with the standard wave equations on a flat geometry. Finally, we pursue the existence of higher-order variational symmetries of equations on nonflat manifolds. • Estimate of stellar masses from their QPO frequencies Kilohertz quasiperiodic oscillations (kHz QPOs) are observed in binary stellar systems. For such a system, the stellar radius is very close to the marginally stable orbit $R_{\text{ms}}$ as predicted by Einstein's general relativity. Many models have been proposed to explain the origin of the kHz QPO features in the binaries. Here we start from the work of Li et al (Phys. Rev. Lett. 83, 3776 (1999)) who in 1999, from the unique millisecond X-ray pulsations, suggested SAX J1808.4−3658 to be a strange star, from an accurate determination of its rotation period. It showed kHz QPOs eight years ago and so far it is the only set that has been observed. We suggest that the mass of four compact stars SAX J1808.4−3658, KS 1731−260, SAX J1750.8−2900 and IGR J17191−2821 can be determined from the difference in the observed kHz QPOs of these stars. It is exciting to be able to give an estimate of the mass of the star and three other compact stars in low-mass X-ray binaries using their observed kHz QPOs. • Linearization of systems of four second-order ordinary differential equations In this paper we provide invariant linearizability criteria for a class of systems of four second-order ordinary differential equations in terms of a set of 30 constraint equations on the coefficients of all derivative terms. The linearization criteria are derived by the analytic continuation of the geometric approach of projection of two-dimensional systems of cubically semi-linear second-order differential equations. Furthermore, the canonical form of such systems is also established. Numerous examples are presented that show how to linearize nonlinear systems to the free particle Newtonian systems with a maximally symmetric Lie algebra relative to $\mathfrak{sl}(6, \mathbb{R})$ of dimension 35. • List of Participants
# User Details

User Since: Oct 31 2014, 9:09 AM (438 w, 3 d)
Availability: Available
LDAP User: Unknown
MediaWiki User

# Oct 17 2022

fredw renamed T320910: Remove namedspaces length values in MathML from "Deprecate/Remove namedspaces length values in MathML" to "Remove namedspaces length values in MathML".
fredw added a comment to T320910: Remove namedspaces length values in MathML.
To elaborate a bit, this was just an example. More generally, we should use the substitution described here:

# Sep 7 2017

fredw added a comment to T175060: [itex] does not render correctly on Android app preview.

# Feb 8 2017

On 08/02/2017 at 19:13, Physikerwelt wrote: @Jdlrobson https://phabricator.wikimedia.org/p/Jdlrobson/ @fredw https://phabricator.wikimedia.org/p/fredw/ is still actively maintaining the package as far as I know... how would you change the selector?
Adapting the CSS and updating the extension would be straightforward; hopefully this can be done in a backward compatible way (i.e. the new version of the addon would still work with previous versions of MediaWiki).

# Oct 13 2016

fredw added a comment to T147319: No MathML rendering in Firefox - SVG or PNG is used.
On 13/10/2016 at 11:58, Physikerwelt wrote: However, I think we should enable MathML without plugin (at least for some known good browser / OS combinations). @fredw https://phabricator.wikimedia.org/p/fredw/ What are your thoughts on that?
I don't think things have changed much since the last time you asked. This will require coordination between various actors, so we should probably discuss that in the MathML association.

# Jun 2 2016

fredw added a comment to T136709: Make math using MathML / SVG / PNG mode print properly.
Just a note: some people use CSS to force native MathML to be displayed (in a user stylesheet, addon etc), so the optimization of not loading SVG images still makes sense for them.

# Apr 24 2016

fredw added a comment to T132607: Increase font size for MathML.
On 24/04/2016 at 20:32, Edokter wrote: @fredw, the current release version (45) still uses the internal font stack, which is why I find it odd we override it in the first place. But if the internal stack is removed, we may as well leave it.
I think the rule was added in MediaWiki before https://bugzilla.mozilla.org/show_bug.cgi?id=947654 was fixed. The internal stack is unlikely to be removed soon (except if, in the very long term, all text fonts get a math companion and then Gecko can switch to the math font "automatically").
fredw added a comment to T132607: Increase font size for MathML.
On https://de.wikipedia.org/wiki/MediaWiki_Diskussion:Common.css#Schriftgr.C3.B6.C3.9Fe_f.C3.BCr_MathML, there is a reference to https://dxr.mozilla.org/mozilla-central/source/modules/libpref/init/all.js and to the x-math properties.

# Apr 13 2016

fredw added a comment to T132607: Increase font size for MathML.
Note that this should not be needed if the math font style is consistent with the one of the text font (e.g. Latin Modern with Latin Modern Math). The percentage is a bit arbitrary and won't work for everybody. Unfortunately very few standard fonts have a math companion and few math fonts are available on operating systems at the moment...

# Mar 5 2016

fredw added a comment to T128950: Make SVG with PNG fallback mode.
Also, note that one can use CSS to select between MathML, SVG or PNG:

# Jan 26 2016

fredw added a comment to T105311: The MathML code for \not should not use <mpadded> hack.
menclose updiagonalstrike represents some content that is struck out and this can be exposed as it into the accessible tree. ATs do not need any heuristics to transmit that presentation to the user and following's Abraham Nemeth idea it is then up to the user to deduce the mathematical meaning. This is the best you can do with presentation MathML to make things accessible. As a comparison, MathJax's solution is really a visual-only hack: a zero-width BIG SOLIDUS followed by the content. # Dec 25 2015 fredw added a comment to T122400: Increased line height when use formula inside a paragraph. Note that this is already fixed in Debian testing. I'm note familiar with Ubuntu process and how they import Debian's packages but the next release Ubuntu 16.04 LTS will be published next April: https://en.wikipedia.org/wiki/List_of_Ubuntu_releases#Ubuntu_16.04_LTS_.28Xenial_Xerus.29 # Dec 24 2015 fredw added a comment to T122400: Increased line height when use formula inside a paragraph. This is a bug in the Latin Modern Math version distributed by Ubuntu. # Nov 10 2015 I think unicode-math is also based on David Carlisle's unicode.xml so that should be the same commands. (but please check) I don't have preference, but please check http://ctan.org/pkg/unicode-math and their GitHub repo. # Nov 2 2015 fredw added a comment to T106890: smallmatrix not very small in MathML. Again, there are two bugs here: # Nov 1 2015 fredw added a comment to T106890: smallmatrix not very small in MathML. I checked the http://www.mathmlcentral.com/Tools/FromMathML.jsp to generate a png from the first mathml input and it looks about right. fredw added a comment to T106890: smallmatrix not very small in MathML. See https://phabricator.wikimedia.org/T106890#1483157 for how force an increment of scriptlevel (btw, it would probably more convenient if the scriptlevel attribute was allowed on the mtable element...) fredw added a comment to T106890: smallmatrix not very small in MathML. Any updates here https://bugzilla.mozilla.org/show_bug.cgi?id=1187682 seems to be stalled. # Sep 23 2015 fredw added a comment to T106973: Always use Latin Modern Math for native MathML. I removed my comment about Windows line spacing bug has it is fixed in Gecko 41. Latin Modern Math should now work on all platforms. For details, see https://lists.w3.org/Archives/Public/www-math/2015Sep/0031.html fredw updated the task description for T106973: Always use Latin Modern Math for native MathML. # Sep 2 2015 If you are talking about the "SVG fallback" of the MathML mode, then that is expected: different CSS properties are necessary to change the color of SVG paths. This does not happen in when "MathML" is really used since it just inherits the text colors. # Aug 7 2015 fredw added a comment to T99369: Remove client-side MathJax rendering mode. Why is SVG fallback now broken for Chrome-like browsers? fredw renamed T99369: Remove client-side MathJax rendering mode from Remove MathJax rendering mode to Remove client-side MathJax rendering mode. # Jul 27 2015 fredw added a comment to T99369: Remove client-side MathJax rendering mode. Finally, I realize we have a history and I don't want to start a fight. But @fredw, could you please *not* speculate about MathJax technology or opinions of MathJax team members? That can mislead people (just like inaccurate comments about Firefox MathML support can). If there's a question about how MathJax or MathJax-node works, we're happy to answer it. 
# Jul 26 2015 fredw updated the task description for T106973: Always use Latin Modern Math for native MathML. fredw added a project to T106973: Always use Latin Modern Math for native MathML: Math. fredw added a comment to T106890: smallmatrix not very small in MathML. In MathML the size reduction of \begin{smallmatrix} compared to \begin{matrix} is only about 75%. In the SVG mode its more like 65% and in PNG mode its close to 50%. It looks like the font size is not decreased at all, whereas in other modes a smaller font is used. fredw added a comment to T106890: smallmatrix not very small in MathML. Just to be sure to chatch the right one you mean this https://github.com/mathjax/MathJax/issues/839 bug? fredw added a comment to T106890: smallmatrix not very small in MathML. Not beforehand, but if you search "displaystyle" on the MathJax github issue or on the MathML mailing list, you should be able to find it. fredw added a comment to T106890: smallmatrix not very small in MathML. @fredw: So to fix this problem we could include math { line-height: normal; } To our custom math.css? fredw added a comment to T106890: smallmatrix not very small in MathML. Regarding the font-size itself: this seems to be a bug in MathJax. If I set <mstyle scriptlevel="+1"> around the <mtable>, I get the expected text size reduction in Gecko. fredw added a comment to T106890: smallmatrix not very small in MathML. line-height: normal also improves the linespacing of https://en.wikipedia.org/wiki/E8_%28mathematics%29 but I'm not sure what's expected for the font-size. fredw added a comment to T106890: smallmatrix not very small in MathML. So I'm wondering whether it's a bug in Gecko or whether the style on the Wiki page is interfering with the native MathML rendering... fredw added a comment to T106890: smallmatrix not very small in MathML. Here is what I get for https://en.wikipedia.org/wiki/Hecke_operator using Gecko: # Jul 25 2015 fredw added a comment to T99369: Remove client-side MathJax rendering mode. What is stopping us from using MathJax to generate HTML/CSS and send that to the browser? fredw added a comment to T99369: Remove client-side MathJax rendering mode. @SalixAlba: Thank you for your comment. # Jul 24 2015 fredw added a comment to T106855: Extra space after equations in MathML mode.. I think it was https://phabricator.wikimedia.org/T74806 fredw added a comment to T106855: Extra space after equations in MathML mode.. Moritz: I think I already reported this bug in the past. IIRC, it was due to the fact that mathoid puts space in the source code around the math output... fredw updated subscribers of T106855: Extra space after equations in MathML mode.. fredw added a comment to T99369: Remove client-side MathJax rendering mode. As I read the comments, it seems that people have limited knowlegde & distorted information about the MathJax / KaTeX / MediaWiki / Browser developments and, sadly, know almost nothing about the technical implementation details... so I'm not sure it is very efficient to try arguing or countering the falsehoods about MathML. Instead, I'll just write one post with a few (hopefully helpful) remarks on MediaWiki math and then go back to doing more constructive work for math rendering on the Web... # Jul 22 2015 fredw added a comment to T105316: Need a way to write multiscripts around an expression. I don't know what you mean by "how MathJax looks now and how it should look like". 
As said in the first comment, it's not a problem with the visual rendering but with the way the generated MathML, which has bad semantics and so makes things hard for assistive technologies. fredw added a comment to T105320: Weird examples of limits on Help:Displaying_a_formula. Does anyone know about this special notation for limits? This looks a TeX hack to achieve a purely visual distinction for "alternative limits style". # Jul 9 2015 fredw renamed T74141: The MathML mode does not work well when doing touch screen navigation in VoiceOver+WebKit from The MathML mode does not work well with VoiceOver + touch screen to The MathML mode does not work well when doing touch screen navigation in VoiceOver+WebKit. So since this bug was confusely used by the MathJax team to justify their refusal to provide MathML in the DOM by default in order to help assistive technologies, I'd like to clarify the issue: # Apr 8 2015 fredw committed rGMATa350d2a271c9: Add TeX annotation to MathML again (authored by fredw). Add TeX annotation to MathML again fredw committed rGMAT759c26e7c87a: Apply PNG post-processing compression to image fonts. #44 (authored by fredw). Apply PNG post-processing compression to image fonts. #44 fredw committed rGMAT5828d635986c: Merge branch 'master' into bidi (authored by fredw). Merge branch 'master' into bidi fredw committed rGMATa2ca632cac1b: Merge pull request #662 from fred-wang/issue611 (authored by fredw). Merge pull request #662 from fred-wang/issue611 fredw committed rGMATef36d4b2a5ce: Repack and combine. #534 (authored by fredw). Repack and combine. #534 fredw committed rGMAT705548fc371e: Update README-branch.txt (authored by fredw). fredw committed rGMAT337361c1cd4c: Regenerate the fonts. #611 (authored by fredw). Regenerate the fonts. #611 Regenerate the font data (no longer in sync with the font files!) #611 fredw committed rGMAT74c270f3c9db: ... and now fix the issue myself. #650 (authored by fredw). ... and now fix the issue myself. #650 fredw committed rGMAT7364fffae2c8: Add caligraphic-bold and tex-oldstylebold. #612, #611 (authored by fredw). Add caligraphic-bold and tex-oldstylebold. #612, #611 fredw committed rGMAT6448c0ce914c: Merge pull request #643 from fred-wang/issue530 (authored by fredw). Merge pull request #643 from fred-wang/issue530 fredw committed rGMATb3867893b2bd: Adding remap to latin modern and gyre fonts. #611 (authored by fredw). Adding remap to latin modern and gyre fonts. #611 fredw committed rGMAT767225cd3e21: Revert third-party fix for lack of CLA #650 (authored by fredw). Revert third-party fix for lack of CLA #650 fredw committed rGMAT9702afba15b2: Increase version numbers for 2.3 ; update languages. #534 (authored by fredw). Increase version numbers for 2.3 ; update languages. #534 fredw committed rGMATd1822362f2e2: Merge pull request #660 from fred-wang/issue656 (authored by fredw). Merge pull request #660 from fred-wang/issue656 fredw committed rGMAT62afeb2ad516: Update fontdata to improve the list of stretchy operators. #611 (authored by fredw). Update fontdata to improve the list of stretchy operators. #611 fredw committed rGMATe3a431826ccb: Add some remap for Euler fonts. #611 (authored by fredw). Add some remap for Euler fonts. #611 fredw committed rGMAT133df1675c47: Fix many issues with Asana Font. #611 (authored by fredw). Fix many issues with Asana Font. #611 fredw committed rGMAT53b1201b54d7: Merge pull request #659 from fred-wang/issue636 (authored by fredw). 
Merge pull request #659 from fred-wang/issue636 MathMenu: fix typo in STIXLocal key, add strings for the new Web fonts. #656 fredw committed rGMAT0c85be88e0bb: Merge pull request #655 from jdh8/develop (authored by fredw). Merge pull request #655 from jdh8/develop fredw committed rGMAT53b3160dfd17: MathJax.isPacked and MathJax.AuthorConfig mixup. #636 (authored by fredw). MathJax.isPacked and MathJax.AuthorConfig mixup. #636 fredw committed rGMAT6cd37d02f989: Merge pull request #652 from fred-wang/issue530 (authored by fredw). Merge pull request #652 from fred-wang/issue530 fredw committed rGMAT2b903c431056: Localisation updates from http://translatewiki.net. #530 (authored by fredw). Add the matchFontHeight options to unpacked/config/default.js #639 fredw committed rGMAT139b799dbff8: Localisation updates from http://translatewiki.net. #530 (authored by fredw). fredw committed rGMATad33b1f40cf1: Merge pull request #635 from fred-wang/v2.3-beta (authored by fredw). Merge pull request #635 from fred-wang/v2.3-beta fredw committed rGMAT359c61a82749: Merge pull request #637 from fred-wang/bidi (authored by fredw). Merge pull request #637 from fred-wang/bidi fredw committed rGMATb73eafd431cb: Bump version numbers. #534 (authored by fredw). Bump version numbers. #534 fredw committed rGMAT1ca86fbc6b32: Remove extra comma. #627 (authored by fredw). Remove extra comma. #627 fredw committed rGMAT90749a7afefc: Fix WARNINGs generated by the MathJax packer. #534 (authored by fredw). Fix WARNINGs generated by the MathJax packer. #534 fredw committed rGMAT72aa13ee8912: Localisation updates from http://translatewiki.net. #530 (authored by fredw). fredw committed rGMAT28afef1a95ef: mglyph messages: pass the def to the MML.Error function. #627 (authored by fredw). mglyph messages: pass the def to the MML.Error function. #627 fredw committed rGMAT542bce301d42: Enable English strings in the localization test page. #626 (authored by fredw). Enable English strings in the localization test page. #626 fredw committed rGMATa259bcba8865: Merge pull request #634 from fred-wang/bidi (authored by fredw). Merge pull request #634 from fred-wang/bidi fredw committed rGMAT6f3051e98916: follow-up (authored by fredw). follow-up fredw committed rGMAT461f9b46140d: Update Farsi localization. #530 (authored by fredw). Update Farsi localization. #530 fredw committed rGMAT4400d42d5777: Merge pull request #633 from fred-wang/issue530 (authored by fredw). Merge pull request #633 from fred-wang/issue530 fredw committed rGMATdc1364ec2574: Merge pull request #629 from fred-wang/issue530 (authored by fredw). Merge pull request #629 from fred-wang/issue530
My Math Forum – Calculus: Scalar Line Integral

January 4th, 2017, 06:52 PM #1 Newbie   Joined: Jan 2017 From: United States Posts: 4 Thanks: 0

Scalar Line Integral

Hello all, I'm trying to understand the scalar line integral well enough that I should be able to do it without the formula. But I got kind of confused at the end of the first image (attached). I was thinking that adding up the lengths of all of the tangent vectors would give the total length of the curve C, but then I realized I would be missing the (dt). I did figure it out by looking at the units, but it still feels like I'm missing some part of the big picture. Can somebody please help me?

Also, I think that my geometric interpretation is probably wrong. I was thinking that the scalar line integral gave the area between a surface and a curve (Wikipedia calls it the area of the "curtain"). But then I realized that f(x,y,z) is not even a surface, and if f(x,y,z) = 1 then integrating it over a curve C will just give the length of C. But what about the plane z(x,y) = 1? If C was some curve, wouldn't the line integral of z = 1 over C give the area between a plane and a curve?

Last edited by skipjack; January 4th, 2017 at 10:53 PM.

January 5th, 2017, 08:10 AM #2 Math Team   Joined: Jan 2015 From: Alabama Posts: 2,492 Thanks: 632

You say, "I'm trying to understand the scalar line integral, enough so that I should be able to do it without the formula". I would recommend NOT worrying about the basic definition in terms of Riemann sums, just as you do not use Riemann sums to do integrals in Calculus I. To integrate, say, $f(x, y, z)= xy$ over the curve $y= 3x^2$, $z= 2x$, from (0, 0, 0) to (1, 3, 2), write the curve in terms of a single parameter. Here, we can use x itself as the parameter: $x= t$, $y= 3t^2$, and $z= 2t$ with t from 0 to 1. Then $dx= dt$, $dy= 6t\,dt$, and $dz= 2\,dt$, so $ds= \sqrt{dx^2+ dy^2+ dz^2}= \sqrt{1+ 36t^2+ 4}\,dt= \sqrt{5+ 36t^2}\,dt$ and $f(x,y,z)= xy= (t)(3t^2)= 3t^3$. The path integral is $\int_0^1 3t^3\sqrt{36t^2+ 5}\,dt$.

January 5th, 2017, 11:28 AM   #3 Newbie Joined: Jan 2017 From: United States Posts: 4 Thanks: 0

Quote: Originally Posted by Country Boy
... write the curve in terms of a single parameter. ... The path integral is $\int_0^1 3t^3\sqrt{36t^2+ 5}\,dt$.

Thank you for putting it for me in that way. One of my favorite things in math is being able to look at something in multiple different ways. The reason I wanted to do it by the definition is because I spent a good part of my summer trying to figure out what an integral actually was, and so I ended up teaching myself about summations and properties of summations etc., so that I could calculate an integral by actually taking the limit of a sum. But anyway, that's why I want to do it this way. Thank you.
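In case it helps to connect the parametrized formula above with the Riemann-sum picture the original poster wants, here is a small MATLAB sketch (my own, not from the thread) that evaluates the same integral both ways; the missing dt shows up as the chord length ds, roughly |r'(t)| dt, between neighbouring sample points.

% Curve C: x = t, y = 3t^2, z = 2t for 0 <= t <= 1; integrand f(x,y,z) = x*y.
% (1) Riemann-sum version, close to the limit-of-a-sum definition:
N  = 1e5;                              % number of chords
t  = linspace(0, 1, N+1);
r  = [t; 3*t.^2; 2*t];                 % sample points on the curve
dr = diff(r, 1, 2);                    % chord vectors between neighbours
ds = sqrt(sum(dr.^2, 1));              % chord lengths, roughly |r'(t)| dt
tm = (t(1:end-1) + t(2:end))/2;        % midpoints where f is evaluated
f  = tm .* (3*tm.^2);                  % f = x*y along the curve
riemann = sum(f .* ds)                 % approximate line integral
% (2) The reduced single-variable integral from the post, done numerically:
reduced = integral(@(u) 3*u.^3 .* sqrt(36*u.^2 + 5), 0, 1)
% The two numbers agree to several decimal places.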
January 5th, 2017, 02:38 PM #4 Math Team   Joined: Jan 2015 From: Alabama Posts: 2,492 Thanks: 632

Well, it is good to do it that way once to see how it works. But once you have done that, in one dimension, you shouldn't have to do it again!
## March 30, 2020 ### Of the Evaluation of Expertise ("I am not so good for that as an old roofer") Attention conservation notice: A 2000-word reaction to conferences in 2011 where too many people wanted impossible things of social science and data mining, and too many people seemed eager to offer those impossible things. To long by more than half, too pleased with itself by much more than half, it lacks constructive suggestions and even a proper ending. To the extent there's any value to these ideas, you'd be better off getting them from the source I am merely parroting. Left to gather dust in my drafts folder for years, posted now for lack of new content. Q: You have an old house with a slate roof, right? A: You know that perfectly well. Q: Does the roof ever need work? A: You just said it's old and slate, and I live in Appalachia. (In the Paris of Appalachia; but still, in Appalachia.) Of course it does. Q: Do you do the work yourself? A: I have no idea how; I hire a roofer. Q: How do you know the roofer knows what he's doing? A: I am not sure what you mean. He fixes my roof. Q: Well, does he accurately estimate where leaks will occur over the next year? A: No. Q: Does he accurately estimate how much the roof is going to leak each year? A: No. Q: Does he accurately estimate how many slate tiles will crack and "need replaced" each winter? A: No. Q: OK, so he's not much into point forecasts, I can get behind that. Does he give you probability forecasts of any of these? If so, are they properly calibrated? A: No. I don't see where this is going. Q: Everyone agrees that the ability to predict is a fundamental sign of scientific and technological knowledge. It sounds like your roofer can't predict much of anything, so what can they know? You should really hire someone else, preferably someone well-calibrated. Does Angie's list provide roofers' Brier scores? If not, why not? A: I don't believe they do, I can't see why they should, and I really can't see how knowing that help me pick a better roofer. Q: It's your business if you want to be profligate, but wouldn't it help people who do not enjoy wasting money to know whether supposed experts actually deserve to be taken seriously? A: Well, yes, but if you will only listen to roofers who are also soothsayers, I foresee an endless succession of buckets under leaking ceilings. Q: So you maintain a competent (never mind expert) roofer needn't be able to predict what will happen to your roof, not even probabilistically? A: I do so maintain it. Q: And how do you defend such an obscurantist opinion? Do you suppose that a good roofer is one who enters into a sympathetic human understanding with the top of the house, and can convey the meaning of a slate or a gutter? A: Humanist-baiting is cheap even for you. No, when it comes to roofs, I am all about explanation, and to hell with understanding. (People are different.) But an expert roofer no more needs to predict what happens to the roof than an expert engineer needs to predict how a machine they have designed and built will behave. Indeed, it would be a bizarre miracle if they could make such predictions. Q: And why would a plain, straightforward prediction be such a wonder? A: However expert the roofer is about the roof, or the engineer about the machine, what happens to them depends not just on the object itself, but the big, uncontrolled environment in which it's embedded. You will allow, I hope, that what happens to my roof depends on how much rain we get, how much snow, etc.? 
Q: Not being a roofer, I don't really know, but that sounds reasonable. A: Might it not also matter how many sunny days with freezing nights we have, turning snow on the roof to ice? Q: Sure. A: And so on, through contingencies I'm too impatient, and ignorant, to run through. But then, to predict damage to the roof, doesn't the roofer need to not only know what condition it started in, but also all the insults it will be subjected to? Q: That does seem reasonable. (But aren't you asking a lot of questions for "A"?) A: (Shut up, I explain.) So your soothsaying roofer must be a weather-prophet, as well as knowing about roofs. And the same with the engineer and their machine: they would need to foresee not just the environment in which it will be put, but also the demands which its users will place upon it. It sounds very strange to say that such prophetic capacities are a necessary part of expertise in roofing, or even engineering. Q: It might sound strange; many true things sound strange; indeed, doesn't every science become more and more un-intuitive and strange-sounding as it progresses? It sounds strange to lay-people to say that the economy consists of one immortal, lazy, greedy, infinitely calculating person, who does all the work, owns all the assets, and consumes all that the goods, but just think of the triumphs of successful prediction which the macroeconomists have achieved on this basis! If we abandon the criterion of prediction just because it sounds strange, how shall we ever distinguish an expert roofer from a mere pundit of the slates? A: Perhaps by seeing if they can do the things roofers are supposed to do. Q: Such as? A: Well, when something goes wrong with the roof, they should be able to diagnose what caused the problem, and in favorable cases prescribe a course of action which will fix it, and even carry out the operation. Q: Do you only call in the roofer when something has gone wrong? Your house must be in a sad state if so. A: No, a good roofer should also be able to diagnose situations which, while they cause no immediate problem, are apt to lead to problems later. Q: There can be few of those if you should have a winter without rain, which, you must admit, is possible. Are you not just sneaking back in my probabilistic forecasts, which you poured such scorn on before, with your "sooth-sayers" and your "weather prophets"? A: Not at all; it's enough to recognize conditions which would cause problems under a broad range of circumstances which there is some reason to fear. Apprehension about the effects of a meter of snow sitting atop the roofer for three months on end is not reasonable in Pittsburgh; expecting even ten days in the winter without some precipitation is folly. At the very most, I am calling for some ability to say, conditional on typical weather, what consequences should be expected; the problem of giving a distribution for the weather is no part of the roofer's expertise. Q: And next I suppose that you will pretend that prescription doesn't rest on prediction? A: It certainly requires knowledge of the form If we do X, then condition Y will result; but if we do X', then Y'. Such conditional knowledge, about roofs, is actually immensely easier for a roofer to acquire, than for them to learn the whole huge multidimensional distribution of all the environmental factors which could influence the state of a roof, and their dependencies over time. 
But it is the latter which you are presuming, with your Brier scores and your notion of what sort of prediction is an adequate sign of knowledge. And what the roofer cannot foretell about the roof, neither can the engineer about the machine, nor the doctor about the patient, nor the natural scientist about their object of study. Q: You insist that scientists do not make predictions? A: In your unconditional, categorical, absolute sense, certainly not. Scientists certainly possess all sorts of conditional knowledge, about what would happen, if certain conditions were to be imposed, certain manipulations were to be made. Unconditional predictions, even unconditional probabilistic predictions, are for the most part beyond them. Q: So a chemist cannot predict the course of a chemical reaction? Don't bother dodging with contaminants, or mis-labeled reagents. A: Me, quibble? No. But even with pure, known reagents in a sealed reaction vessel at STP --- well, you were a student at Berkeley, where the chemistry labs are a block or two downhill of the Hayward Fault. I don't think you could have said what would have happened if there'd been a tremor in the middle of one of your experiments. Q: You can't think it's fair to ask a chemist to predict earthquakes, can you? A: My point exactly. To put it formally, in terms of Pearl, you could know $\Pr\left(Y\middle |\mathrm{do}(X=x)\right)$ exactly, for all $x$, but not know $\Pr\left(Y\middle|X=x\right)$, still less $\Pr(Y)$. (The Old Masters of Pittsburgh would say: you can know the distribution of $Y$ after "surgery" to remove incoming edges to $X$, without knowing the un-manipulated distribution of $Y$.) But it is the last which you are insisting on, with your calibrated forecasts and prediction as the sign of expertise. Q: Well, astronomers make such predictions, don't they? A: I defy you to find a single other science which can also do so. Even then, our astronomy's successes merely testify to our engineering's weaknesses. When our descendants (or the cockroaches'; no matter) become able to move around comets, or move planets, or build Dyson spheres and Shkadov thrusters, even the predictions of celestial mechanics will become contingent on the interventions of sentient (I will not say "human") beings. Q: Even in those science-fictional scenarios, the choices of human (or post-human or non-human) beings would be functions of their microscopic molecular state, and so physically predictable, so shouldn't — A: I indulged in science fiction as a rhetorical flourish, but now you are seriously arguing on the basis of technical impossibilities, sheer metaphysical conjecture and even mythology. Q: Never mind then. Suppose, for the sake of argument, that I accept that someone could be a knowledgeable expert without making predictions. Surely you would agree, though, that someone who makes a lot of predictions which turn out to be wrong definitely doesn't know what they're doing? A: I can think of at least one way in which that fails, which is that when they also have effective control. Q: Better control means un-predictability? This I have to hear. A: Suppose I want to maintain a constant temperature in my house. I look at the sky, guess the day's weather, and turn on the air conditioner or the furnace as needed. 
If I express myself by saying "It's miserably humid and the sun isn't so much shining as pounding, the house will be intolerably hot", you will point out, at the end of a day during which the air conditioner labored heroically and the electric bill mounted shockingly, that the house was in fact entirely comfortable, and say not only that I can't predict anything worthwhile, but that, empirically, you see no particular relationship between the weather outside the house and the temperature inside it. Q: Aren't you creating an air of paradox, simply by being sloppy in expressing the prediction? It's really "The house will be intolerably hot today, unless I run the AC". A: Granted, but we can only ever check one branch of the condition. And I can be completely accurate in predictions about what would happen, absent imposed controls, even if none of those things actually happen, because of my forecast-based control. --- I never did figure out how to end this. Posted at March 30, 2020 09:50 | permanent link ## March 10, 2020 ### Ebola, and Mongol Modernity Attention conservation notice: An old course slide deck, turned in to prose on the occasion of a vaguely-related news story from 2014. Not posted at the time because it felt over-dramatic. I have, of course, no authority to opine about either world history or epidemiology, and for that matter no formal training in networks. #### Exhibit A One of the books which most re-arranged my vision of the past was Janet Abu-Lughod's Before European Hegemony: The World System A.D. 1250--1350. It gave me the sense, as few other things have, of historical contingency, or more exactly of modernity as a belated phenomenon, and changed my teaching. She depicts an integrated (part-of-the-) world economy, an "archipelago of towns" linked by trade-routes stretching from Flanders to Hangzhou and centered in the Indian Ocean. This archipelago is where modernity should have begun. Beyond the market-oriented, urban-centered economy, China has the beginnings of an industrial revolution (a point explored by Mark Elvin in his Pattern of the Chinese Past, and his sources in Japanese scholars of Chinese economic history, and emphasized by William McNeill in his Pursuit of Power); the beginnings of a truly global perspective. All of this was politically supported by the unification of the most economically and technologically advanced regions (namely China and the Islamic world) under the Mongol Empire, admittedly at the cost of the occasional "shock and awe" campaign, destruction of Baghdad, etc. #### Exhibit B So what, according to Abu-Lughod, happened? What happened was Yersinia pestis, the bubonic plague, a bacterium transmitted by fleas that live on rodents. It long has been, and is, endemic to the rodents of Central Asia, such as the giant gerbil {\em Rhombomys opimus}, which seems to be perpetually perched at the edge of the epidemic threshold. The Mongol Empire didn't just unify the most advanced parts of Eurasiafrica; it brought them into intimate contact with Central Asia. And then, as usual, the plague followed the routes of trade and imperial travel: It's impossible to know just how deadly it was, but estimates put it at around 25% of global population; up to 90% in some regions. It destroyed (according to Abu-Lughod) the world economy, that "archipelago of towns", leaving isolated and barely-functional fragments which could be dominated and re-purposed by western European pirates/traders poking into the post-pandemic landscape. 
One aspect to this is how slowly, and how progressively, the plague spread. It took decades to spread from Central Asia to the peripheral region of western Europe, where it chewed steadily across the landscape: As my old friend and sometime co-author Mark Newman and collaborators puts it, this is strong evidence that "the small-world effect is a modern phenomenon". The small-world effect, after all, is that the maximum distance between any two people in the social network of size $n$ grows like $O(\log{n})$. This implies that the number of people reachable in $d$ steps grows exponentially with $d$, which is hardly compatible with the steady geographic progress of the disease. The argument here is so pretty that I can't resist sketching it. Suppose that every infected person passes it on to any one of their contacts with probability $t$, at least on average. We start the infection at a random person, say Irene, who selects a random one of their acquaintances, say Joey, for passing it on. The probability that Irene, or any other random person, has $k$ contacts is, by convention, $p(k)$. But Joey isn't a random person; Joey is someone reachable by following an edge in the social network. Joey's degree distribution is $\propto k p(k)$, since people with more contacts are more reachable. Specifically, Joey's degree distribution is $k p(k) / \langle K \rangle$, where $\langle K \rangle \equiv \sum_{k=0}^{\infty}{k p(k)}$, the average degree. If Joey gets infected, the number of additional infections he could create is up to $k-1$. So our initial random infection of Irene creates, on average, $t \sum_{k=1}^{\infty}{k(k-1)p(k)}/\langle K \rangle = t \langle K^2 - K\rangle / \langle K \rangle$ at one step away. At $d$ steps, we've reached $\left(t \langle K^2 - K\rangle / \langle K \rangle \right)^d$ nodes. So long as $t > \langle K \rangle / \langle K^2 - K\rangle$, this is exponential growth: If $t$ is above this critical level, the only way to avoid exponential growth is to have lots and lots of overlapping routes to the same nodes. [Some people might've had 1024 ancestors ten generations back, but most of us didn't.] Geographic clustering will do this, but the small-world effect makes it very hard to avoid. And the small-world effect is very much a part of modernity; whether the diameter of the social network is six or twelve or twenty is secondary to the fact that it's not a thousand. #### Exhibit C Zeynep Tufekci, "The Real Reason Everyone Should Panic: Our Global Institutions are Broken" (23 October 2014): The conventional (smart) wisdom is that we should not panic about Ebola in the United States (or Europe). That is certainly true because, even with its huge warts, US and European health-care systems are well-equipped to handle the few cases of Ebola that might pop up. However, we should panic. We should panic at the lack of care and concern we are showing about the epidemic where it is truly ravaging; we should panic at the lack of global foresight in not containing this epidemic, now, the only time it can be fully contained; and we should panic about what this reveals about how ineffective our global decision-making infrastructure has become. Containing Ebola is a no-brainer, and not that expensive. If we fail at this, when we know exactly what to do, how are we going to tackle the really complex problems we face? Climate Change? Resource depletion? Other pandemics? So, I have been panicking. ... Globalization, in essence, means we really are one big family, in sickness and in health. 
The more connected we are, the easier it is for a virus to spread wide and deep, before we get a chance to contain it. And that is partially why Ebola is now ravaging through three countries in West Africa: it broke through in cities and large-enough settlements, and due to an accumulation of reasons, including recent civil wars, at a time when they were least equipped to handle it. Containing an outbreak requires circumscribing the outbreak (isolating and treating the ill, tracing their contacts, isolating and treating them as well) so that it can no longer find new hosts, and healing those who are ill, or mourning those who die. Circumscribing an outbreak is easier when the cases are a few, or a few dozen, or a few hundred. In fact, we know from previous Ebola outbreaks which parameter brings down the dreaded transmission rate: “the rapid institution of control measures.” It’s that simple. After thousands of cases, this gets harder and harder. After millions, it is practically impossible. Of course, Ebola got under control (in 2014). It took far too much misery and fear and time, but it happened. But it left no sign that the powers that be had learned any lessons, about (to quote Tufekci again) either "basic math [or] basic humanity". The Mongols, at least, had the excuse that they had no idea what they were doing. (It's not as though Nasir al-din al-Tusi had, between doing theology and pioneering Fourier analysis, worked out how network connectivity related to the likelihood of epidemic outbreaks.) Compared to them, our predecessors in globalization, we are as gods; we're just not very good at it. Posted at March 10, 2020 23:46 | permanent link
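For anyone who wants to see the Exhibit B threshold as arithmetic rather than algebra, here is a small MATLAB sketch (my own illustration; both degree distributions are made up for the example). It simply evaluates the critical transmissibility t_c = ⟨K⟩/⟨K²−K⟩ derived above for a narrow and a heavy-tailed contact network.

k = (0:60)';                                % possible numbers of contacts
% Narrow degree distribution: Poisson with mean 3.
p1 = exp(-3) * 3.^k ./ factorial(k);  p1 = p1/sum(p1);
% Heavier-tailed distribution: p(k) proportional to k^(-2.5) for k >= 1.
p2 = [0; (1:60)'.^(-2.5)];            p2 = p2/sum(p2);
tc = @(p) sum(k.*p) / sum((k.^2 - k).*p);   % critical transmissibility
tc_poisson   = tc(p1)                       % = 1/3 for a mean-3 Poisson network
tc_heavytail = tc(p2)                       % much smaller: fat tails spread disease
% Above t_c, the expected number of cases d steps out, (t/t_c)^d, grows exponentially.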
# How to typeset nested radical/root expressions? I'm writing my own math expression typesetting engine. I want to learn how to best typeset radical expressions. The choices actually apply to all radical expressions, however, nested radical expressions make the visual differences more clear. 1. Flush 2. Indent on Left & Right 3. Indent on Bottom 4. Indent on Left & Right & Bottom • I hope this is used only in displays, otherwise there will certainly be a disruption in the baselines. that said, although it may not be traditional, i like the effect of (2) and ((4). (i will seek the opinion of a more skilled math editor, and report if i get an answer.) – barbara beeton Mar 2 at 23:23 • Can we see your MWE to understand better your question, please? – Sebastiano Mar 3 at 12:22 • @Sebastiano are you asking for TeX code? It is not TeX. What's not clear in the question? – an0 Mar 3 at 15:29 • @an0 Why, then, have you ask a question? I have not undersood. – Sebastiano Mar 3 at 15:56 • @Sebastiano 1. It is the most fitting subsite to ask questions about typesetting. I initially planned to post it on Math but they explicitly prohibit questions about typesetting. 2. It is actually TeX related. The math expression can be converted to TeX code like this 0+\sqrt{1+\sqrt{2+\sqrt{3}}}-4, however, the math semantics will be lost. – an0 Mar 3 at 16:13
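For what it's worth, in TeX terms options 1 and 2 differ only in whether the radicand is padded; here is a minimal LaTeX sketch of that comparison (the macro name \psqrt is mine, and the amount of padding is an arbitrary choice).

\documentclass{article}
\begin{document}
% \sqrt sets the radicand flush against the surd and the vinculum (option 1).
% Thin spaces on both sides of the radicand approximate option 2.
\newcommand{\psqrt}[1]{\sqrt{\,#1\,}}
\[ 0+\sqrt{1+\sqrt{2+\sqrt{3}}}-4 \qquad 0+\psqrt{1+\psqrt{2+\psqrt{3}}}-4 \]
\end{document}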
# eBOSS ELG Target Selection with Deep Photometry

## Contact

Johan Comparat [email protected]

## Summary

Photometry deeper than SDSS was used to assess algorithms for selection of Emission Line Galaxies (ELG) for spectroscopic observations.

## Finding Targets

An object whose ANCILLARY_TARGET2 value includes one or more of the bitmasks in the following table was targeted for spectroscopy as part of this ancillary target program. See SDSS bitmasks to learn how to use these values to identify objects in this ancillary target program.

| Program (bit name) | Bit number | Target Description | Number of Fibers |
|---|---|---|---|
| FAINT_ELG | 18 | Blue star-forming galaxy selected from CFHT-LS photometry | 2,588 |

## Description

This program used photometry extending to fainter limits than SDSS to select emission line galaxy (ELG) candidates, with the goal of assessing target selection algorithms for eBOSS. The sample was defined to help evaluate the completeness of the targeting sample and redshift success rates near the faint end of the ELG target population. Data from this ancillary science program are described in Comparat et al. (2014), which measured the evolution of the bright end of the [OII] emission line luminosity function. Favole et al. (2015, in preparation) use these data to derive halo occupation statistics of emission line galaxies.

## Target Selection

Blue star-forming galaxies in the redshift range 0.6 < z < 1.2 were selected from the Canada-France-Hawaii Telescope Legacy Survey (CFHTLS) Wide W3 field photometric redshift catalog T0007 (Ilbert et al. 2006; Coupon et al. 2009). Three plates were observed centered on the same position, and targets were selected at a density of nearly 400 objects per square degree. Selected objects satisfied the following constraints:

• 20 < g < 22.8
• -0.5 < (u – r) < 0.7(g – i) + 0.1

All photometry was based on CFHTLS MAG_AUTO magnitudes on the AB system. A target was excluded if a redshift already existed from previous observations.

## REFERENCES

Comparat, J., et al. 2014, arXiv:1408.1523
Coupon, J., et al. 2009, A&A, 500, 981
Ilbert, O., et al. 2006, A&A, 457, 841
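A minimal sketch of how the two selection cuts above could be applied to a catalogue (my own code; the magnitude variables u, g, r, i are assumptions standing for the CFHTLS MAG_AUTO magnitudes):

% Logical mask for the ELG colour-magnitude selection described above.
isELG = @(u, g, r, i) (g > 20) & (g < 22.8) & ...
        ((u - r) > -0.5) & ((u - r) < 0.7*(g - i) + 0.1);
% Example object with g = 21.5, u - r = 0.3 and g - i = 0.8:
isELG(22.0, 21.5, 21.7, 20.7)     % returns logical 1, i.e. the object passes the cuts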
### RCB and Playoffs solution codechef

Team RCB has earned X points in the games it has played so far in this year's IPL. To qualify for the playoffs they must earn at least a total of Y points. They currently have Z games left; in each game they earn 2 points for a win, 1 point for a draw, and no points for a loss. Is it possible for RCB to qualify for the playoffs this year?

### Input Format

• First line will contain T, the number of test cases. Then the test cases follow.
• Each test case contains a single line of input with three integers X, Y, Z.

### Output Format

For each test case, output in one line YES if it is possible for RCB to qualify for the playoffs, or NO if it is not possible to do so.

Output is case insensitive, which means that "yes", "Yes", "YEs", "no", "nO" – all such strings will be acceptable.

### Constraints

• 1 ≤ T ≤ 5000
• 0 ≤ X, Y, Z ≤ 1000

### Sample Input 1

3
4 10 8
3 6 1
4 8 2

### Sample Output 1

YES
NO
YES

### Explanation

Test Case 1: There are 8 games remaining. Out of these 8 games, if RCB wins 2 games, loses 4 games and draws the remaining 2 games, they will have a total of 10 points, which shows that it is possible for RCB to qualify for the playoffs.

Note: There can be many other combinations which will lead to RCB qualifying for the playoffs.

Test Case 2: There is no way for RCB to qualify for the playoffs.

Test Case 3: If RCB wins all their remaining games, they will end up with 8 points (4 + 2*2 = 8), hence it is possible for them to qualify for the playoffs.
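A minimal sketch of the decision rule (my own code, not the official editorial): the best RCB can do is win every one of the Z remaining games for 2*Z extra points, so qualification is possible exactly when X + 2*Z ≥ Y.

canQualify = @(X, Y, Z) (X + 2*Z) >= Y;    % best case: win all remaining games
samples = [4 10 8; 3 6 1; 4 8 2];          % the three sample test cases
for n = 1:size(samples, 1)
    if canQualify(samples(n,1), samples(n,2), samples(n,3))
        disp('YES')                        % matches the expected sample output
    else
        disp('NO')
    end
end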
# zbMATH — the first resource for mathematics

Boundary concentration for eigenvalue problems related to the onset of superconductivity. (English) Zbl 0982.35077

The authors deal with the asymptotic behaviour of the eigenvalue $$\mu(h)$$ and the corresponding eigenfunction associated with the variational problem $\mu(h)\equiv \inf_{\psi\in H^1(\Omega, \mathbb{C})} \frac{\int_\Omega|(i\nabla+ hA)\psi|^2 \,dx\, dy}{\int_\Omega|\psi|^2 \,dx\, dy}$ in the regime $$h\gg 1$$. Here, $$A$$ is any vector field with curl equal to $$1$$. The authors show that when the domain $$\Omega$$ is not a disc, the first eigenfunction does not concentrate along the entire boundary: it must decay to zero for large $$h$$ somewhere along the boundary, while simultaneously decaying at an exponential rate inside the domain.

##### MSC:
35P15 Estimates of eigenvalues in context of PDEs
35J10 Schrödinger operator, Schrödinger equation
35Q60 PDEs in connection with optics and electromagnetic theory
82D55 Statistical mechanical studies of superconductors

##### Keywords:
eigenvalue; eigenfunction; variational problem; exponential rate
# __________ engines can work in vacuum. This question was previously asked in ISRO IPRC Technical Assistant Mechanical held on 21/04/2018 View all ISRO Technical Assistant Papers > 1. Turbojet 2. Rocket 3. Turboprop 4. Ramjet Option 2 : Rocket Free ISRO VSSC Technical Assistant Mechanical held on 09/06/2019 2333 80 Questions 320 Marks 120 Mins ## Detailed Solution Explanation: Rocket engine • A rocket engine is a pure reaction engine that produces propulsive thrust. • While travelling in fluid rocket engine works on the principle of Newton’s third law of motion which states that every action has an equal and opposite reaction. • The propulsive thrust produced by the engine of the rocket is the action and the reactive force by the fluid medium on the rocket is the reaction which eventually pushes the rocket. • In the case of travelling through a vacuum, there is no fluid medium. In any propulsion system, a working fluid is accelerated by the system and the reaction to this acceleration produces a force on the system. • When a rocket shoots fuel out one end, this propels the rocket forward — no air is required. • A general derivation of the thrust equation shows that the amount of thrust generated depends on the mass flow through the engine and the exit velocity of the gas because reactive force F = ma [a = acceleration of the rocket, m = mass of the rocket] Now, this can be written as $$F = m.\frac{{dv}}{{dt}}$$ $${\rm{Or}},F = \frac{{dm}}{{dt}}.v$$ [where v = velocity at which the exhaust gases exit from the rocket nozzle end] So, while travelling through a vacuum, at the cost of losing the mass of the burnt fuel, the rocket gains forward acceleration or reactive force that pushes the rocket. Turbojet engine • A pure gas turbine engine that generates thrust through its nozzle is called a turbojet engine. • A turbojet is a gas turbine engine in which no excess power (above that required by the compressor) is supplied to the shaft by the turbine. • The available energy in the exhaust gases is converted to the kinetic energy of the jet through its nozzle (i.e. thrust). • In a turbojet engine, the energy that is added to the air by a compressor (high pressure) and by a combustion chamber (high temperature) is divided into two parts. • One part returns back to the compressor and the other one goes to the nozzle. The energy that is supposed to be transferred into the compressor is first absorbed by the turbine and converted into mechanical energy. • Thus, all the mechanical energy that is produced by the turbine is transferred into the compressor via a shaft to increase the incoming air pressure. • The remaining energy of the high-temperature, high-pressure air is transferred into the nozzle. Turboprop engine • A turboprop is a gas turbine engine in which the turbine absorbs power in excess of that needed to drive the compressor. • The excess power is used to drive a propeller. • Although most of the energy in the hot gases is absorbed by the turbine, the turboprop engine still has slight jet thrust generated by the exhaust gas in its nozzle. • Thus, most of the gas energy is extracted by the turbine to drive the propeller. • Here, similar to the turbojet, the inlet air is compressed by an axial-flow compressor, mixed with fuel and burned in the combustor, expanded through a turbine, and then exhausted through a nozzle. • However, unlike the turbojet, the turbine powers not only the compressor but also the propeller. 
• A major problem with the turboprop engine is the very loud noise, which makes it undesirable for carrying passengers. Ramjet engine • In high supersonic speeds (Mach numbers beyond 3), a new type of jet engine, Ramjet, is more efficient than turbojet and turbofan engines. • The ramjet engine has a simple structure and has no moving part (no turbine). • A ramjet is basically a duct with the front end shaped to be the inlet, the aft end designed as a nozzle, and the combustion chamber in the middle. • This type of engine is using the engine’s forward motion to compress incoming air. • Since the high-speed flow has a high stagnation pressure, this pressure will be converted to static pressure in the inlet duct in a slowdown process. • Located in the combustion chamber are flame holders, fuel injection nozzles, and an igniter. • The main drawback of the ramjet engine is that it is initially assisted to accelerate and attain a velocity in excess of about Mach 0.5 before it can be self-sufficient. • Once this speed is reached, there is sufficient combustion pressure to continue firing the engine. • The flame holders located in the combustion chamber provide the necessary blockage in the passage to slow down the airflow so that the fuel and air can be mixed and ignited. • The combustion product is then passed through a nozzle to accelerate it to supersonic speeds. • This acceleration generates forward thrust. • For a supersonic flight Mach number, acceleration is typically achieved via a convergent-divergent nozzle.
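To put a rough number on the momentum-thrust relation F = (dm/dt)·v quoted in the rocket-engine explanation above, here is a one-line MATLAB illustration (the mass flow rate and exhaust velocity are round numbers picked for the example, not data for any particular engine).

mdot = 250;                 % propellant mass flow rate, kg/s (assumed)
v_e  = 3000;                % effective exhaust velocity, m/s (assumed)
F    = mdot * v_e;          % thrust in newtons, independent of any surrounding fluid
fprintf('Thrust = %.0f kN\n', F/1e3)    % prints: Thrust = 750 kN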
## 2011-06-27

### Scales

Thnidu found this nifty animation showing the relative sizes of various celestial bodies and I wanted to put it into relation with something familiar to me, such as the Sweden Solar System. At a scale of $1:20\times 10^{6}$, VY Canis Majoris would have a diameter of 153 175 m, i.e., if superimposed on the Earth, the surface would be in the mesosphere (around the highest X-15 flights), and the scaled distance would be about as far from Stockholm as Mars is from the Sun. Not that these measures are really graspable either.

#### 1 comment:

thnidu said...
I followed your link to the Sweden Solar System -- :-) -- and then had to look up "termination shock". Well placed!
PS: My word verification this time is "hument". Sounds like a very unlikely Middle-Earth crossbreed. Except ...
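A back-of-the-envelope MATLAB check of the scaled diameter quoted in the post (my own arithmetic; the stellar radius is an assumption, and the older estimate of roughly 2200 solar radii for VY Canis Majoris is what reproduces the quoted figure):

Rsun     = 6.957e8;             % solar radius in metres
Rstar    = 2200 * Rsun;         % assumed radius of VY Canis Majoris
scale    = 20e6;                % the 1:20x10^6 scale of the Sweden Solar System
d_scaled = 2*Rstar / scale      % about 1.53e5 m, i.e. roughly 153 km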
# 0.2 General solutions of simultaneous equations  (Page 3/4) Page 3 / 4 In using sparsity in posing a signal processing problem (e.g. compressive sensing), an ${l}_{1}$ norm can be used (or even an ${l}_{0}$ “pseudo norm”) to obtain solutions with zero components if possible [link] , [link] . In addition to using side conditions to achieve a unique solution, side conditions are sometimes part of the original problem. One interesting caserequires that certain of the equations be satisfied with no error and the approximation be achieved with the remaining equations. ## Moore-penrose pseudo-inverse If the ${l}_{2}$ norm is used, a unique generalized solution to [link] always exists such that the norm squared of the equation error ${\epsilon }^{\mathbf{T}*}\epsilon$ and the norm squared of the solution ${\mathbf{x}}^{\mathbf{T}*}\mathbf{x}$ are both minimized. This solution is denoted by $\mathbf{x}={\mathbf{A}}^{+}\mathbf{b}$ where ${\mathbf{A}}^{+}$ is called the Moore-Penrose inverse [link] of $\mathbf{A}$ (and is also called the generalized inverse [link] and the pseudoinverse [link] ) Roger Penrose [link] showed that for all $\mathbf{A}$ , there exists a unique ${\mathbf{A}}^{+}$ satisfying the four conditions: $\mathbf{A}{\mathbf{A}}^{+}\mathbf{A}=\mathbf{A}$ ${\mathbf{A}}^{+}\mathbf{A}{\mathbf{A}}^{+}={\mathbf{A}}^{+}$ ${\left[\mathbf{A}{\mathbf{A}}^{+}\right]}^{*}=\mathbf{A}{\mathbf{A}}^{+}$ ${\left[{\mathbf{A}}^{+}\mathbf{A}\right]}^{*}={\mathbf{A}}^{+}\mathbf{A}$ There is a large literature on this problem. Five useful books are [link] , [link] , [link] , [link] , [link] . The Moore-Penrose pseudo-inverse can be calculated in Matlab [link] by the pinv(A,tol) function which uses a singular value decomposition (SVD) to calculate it. There are a variety of other numerical methodsgiven in the above references where each has some advantages and some disadvantages. ## Properties For cases 2a and 2b in Figure 1, the following $N$ by $N$ system of equations called the normal equations [link] , [link] have a unique minimum squared equation error solution (minimum ${ϵ}^{T}ϵ$ ). Here we have the over specified case with more equations than unknowns.A derivation is outlined in "Derivations" , equation [link] below. ${\mathbf{A}}^{\mathbf{T}*}\mathbf{A}\mathbf{x}={\mathbf{A}}^{\mathbf{T}*}\mathbf{b}$ The solution to this equation is often used in least squares approximation problems. For these two cases ${\mathbf{A}}^{T}\mathbf{A}$ is non-singular and the $N$ by $M$ pseudo-inverse is simply, ${\mathbf{A}}^{+}={\left[{\mathbf{A}}^{\mathbf{T}*}\mathbf{A}\right]}^{-\mathbf{1}}{\mathbf{A}}^{\mathbf{T}*}.$ A more general problem can be solved by minimizing the weighted equation error, ${ϵ}^{\mathbf{T}}{\mathbf{W}}^{\mathbf{T}}\mathbf{W}ϵ$ where $\mathbf{W}$ is a positive semi-definite diagonal matrix of the error weights. The solution to that problem [link] is ${\mathbf{A}}^{+}={\left[{\mathbf{A}}^{\mathbf{T}*}{\mathbf{W}}^{\mathbf{T}*}\mathbf{W}\mathbf{A}\right]}^{-\mathbf{1}}{\mathbf{A}}^{\mathbf{T}*}{\mathbf{W}}^{\mathbf{T}*}\mathbf{W}.$ For the case 3a in Figure 1 with more unknowns than equations, $\mathbf{A}{\mathbf{A}}^{T}$ is non-singular and has a unique minimum norm solution, $||\mathbf{x}||$ . 
The $N$ by $M$ pseudoinverse is simply,

$\mathbf{A}^{+}=\mathbf{A}^{T*}\left[\mathbf{A}\mathbf{A}^{T*}\right]^{-1},$

and the formula for the minimum weighted solution norm $||\mathbf{x}||$ is

$\mathbf{A}^{+}=\left[\mathbf{W}^{T}\mathbf{W}\right]^{-1}\mathbf{A}^{T}\left[\mathbf{A}\left[\mathbf{W}^{T}\mathbf{W}\right]^{-1}\mathbf{A}^{T}\right]^{-1}.$

For these three cases, either [link] or [link] can be directly calculated, but not both. However, they are equal, so you simply use the one with the non-singular matrix to be inverted. The equality can be shown from an equivalent definition [link] of the pseudo-inverse given in terms of a limit by

$\mathbf{A}^{+}=\lim_{\delta \to 0}\left[\mathbf{A}^{T*}\mathbf{A}+\delta^{2}\mathbf{I}\right]^{-1}\mathbf{A}^{T*}=\lim_{\delta \to 0}\mathbf{A}^{T*}\left[\mathbf{A}\mathbf{A}^{T*}+\delta^{2}\mathbf{I}\right]^{-1}.$

For the other 6 cases, SVD or other approaches must be used. Some properties [link] , [link] are:

• $\left[\mathbf{A}^{+}\right]^{+}=\mathbf{A}$
• $\left[\mathbf{A}^{+}\right]^{*}=\left[\mathbf{A}^{*}\right]^{+}$
• $\left[\mathbf{A}^{*}\mathbf{A}\right]^{+}=\mathbf{A}^{+}\mathbf{A}^{*+}$
• $\lambda^{+}=1/\lambda$ for $\lambda \ne 0$, else $\lambda^{+}=0$
• $\mathbf{A}^{+}=\left[\mathbf{A}^{*}\mathbf{A}\right]^{+}\mathbf{A}^{*}=\mathbf{A}^{*}\left[\mathbf{A}\mathbf{A}^{*}\right]^{+}$
• $\mathbf{A}^{*}=\mathbf{A}^{*}\mathbf{A}\mathbf{A}^{+}=\mathbf{A}^{+}\mathbf{A}\mathbf{A}^{*}$

It is informative to consider the range and null spaces [link] of $\mathbf{A}$ and $\mathbf{A}^{+}$:

• $R(\mathbf{A})=R(\mathbf{A}\mathbf{A}^{+})=R(\mathbf{A}\mathbf{A}^{*})$
• $R(\mathbf{A}^{+})=R(\mathbf{A}^{*})=R(\mathbf{A}^{+}\mathbf{A})=R(\mathbf{A}^{*}\mathbf{A})$
• $R(I-\mathbf{A}\mathbf{A}^{+})=N(\mathbf{A}\mathbf{A}^{+})=N(\mathbf{A}^{*})=N(\mathbf{A}^{+})=R(\mathbf{A})^{\perp}$
• $R(I-\mathbf{A}^{+}\mathbf{A})=N(\mathbf{A}^{+}\mathbf{A})=N(\mathbf{A})=R(\mathbf{A}^{*})^{\perp}$

## The cases with analytical solutions

The four Penrose equations in [link] are remarkable in defining a unique pseudoinverse for any $\mathbf{A}$ with any shape, any rank, for any of the ten cases listed in Figure 1. However, only four cases of the ten have analytical solutions (actually, all do if you use SVD).
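As a quick numerical sanity check of the two closed-form cases against Matlab's pinv (a sketch with random test matrices, not code from the text):

rng(0);                                          % reproducible random matrices
A1 = randn(8, 3);                                % overdetermined: more equations than unknowns
err1 = norm(pinv(A1) - (A1'*A1) \ A1')           % matches [A'A]^{-1} A', about 1e-15
A2 = randn(3, 8);                                % underdetermined: more unknowns than equations
err2 = norm(pinv(A2) - A2' / (A2*A2'))           % matches A' [AA']^{-1}, about 1e-15
P = pinv(A1);                                    % the four Penrose conditions, all about 0:
penrose = [norm(A1*P*A1 - A1), norm(P*A1*P - P), norm((A1*P)' - A1*P), norm((P*A1)' - P*A1)]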
# 3.11: Noble Gas Configuration

### How does it feel to be full after a meal?

Envision that you have nearly finished a great meal, but cannot put another bite in your mouth because there is no place for it to go. The noble gases have the same problem—there is no room for any more electrons in their outer shells. They are completely full and cannot handle any more.

## Noble Gas Configuration

Sodium, element number 11, is the first element in the third period of the periodic table. Its electron configuration is $$1s^2 2s^2 2p^6 3s^1$$. The first ten electrons of the sodium atom are the inner-shell electrons and the configuration of just those ten electrons is exactly the same as the configuration of the element neon $$\left( Z=10 \right)$$. This provides the basis for a shorthand notation for electron configurations called the noble gas configuration. The elements that are found in the last column of the periodic table are an important group of elements called the noble gases. They are helium, neon, argon, krypton, xenon, and radon.

A noble gas configuration of an atom consists of the elemental symbol of the last noble gas prior to that atom, followed by the configuration of the remaining electrons. So for sodium, we make the substitution of $$\left[ \ce{Ne} \right]$$ for the $$1s^2 2s^2 2p^6$$ part of the configuration. Sodium's noble gas configuration becomes $$\left[ \ce{Ne} \right] 3s^1$$. Table $$\PageIndex{1}$$ shows the noble gas configurations of the third period elements.

Table $$\PageIndex{1}$$: Electron Configurations of Third-Period Elements

| Element Name | Symbol | Atomic Number | Noble Gas Electron Configuration |
| --- | --- | --- | --- |
| Sodium | $$\ce{Na}$$ | 11 | $$\left[ \ce{Ne} \right] 3s^1$$ |
| Magnesium | $$\ce{Mg}$$ | 12 | $$\left[ \ce{Ne} \right] 3s^2$$ |
| Aluminum | $$\ce{Al}$$ | 13 | $$\left[ \ce{Ne} \right] 3s^2 3p^1$$ |
| Silicon | $$\ce{Si}$$ | 14 | $$\left[ \ce{Ne} \right] 3s^2 3p^2$$ |
| Phosphorus | $$\ce{P}$$ | 15 | $$\left[ \ce{Ne} \right] 3s^2 3p^3$$ |
| Sulfur | $$\ce{S}$$ | 16 | $$\left[ \ce{Ne} \right] 3s^2 3p^4$$ |
| Chlorine | $$\ce{Cl}$$ | 17 | $$\left[ \ce{Ne} \right] 3s^2 3p^5$$ |
| Argon | $$\ce{Ar}$$ | 18 | $$\left[ \ce{Ne} \right] 3s^2 3p^6$$ |

Again, the number of valence electrons increases from one to eight across the third period. The fourth and subsequent periods follow the same pattern, except for the use of a different noble gas. Potassium has nineteen electrons, one more than the noble gas argon, so its configuration could be written as $$\left[ \ce{Ar} \right] 4s^1$$. In a similar fashion, strontium has two more electrons than the noble gas krypton, which would allow us to write its electron configuration as $$\left[ \ce{Kr} \right] 5s^2$$. All elements can be represented in this fashion.

## Summary

- The noble gas configuration system allows some shortening of the total electron configuration by using the symbol for the noble gas of the previous period as part of the pattern of electrons.

## Review

1. What is the element represented by $$\left[ \ce{Ne} \right] 3s^2 3p^2$$?
2. What element has this electron configuration $$\left[ \ce{Ar} \right] 3d^7 4s^2$$?
3. What noble gas would be part of the electron configuration notation for Mn?
4. How would you write the electron configuration for Ba?

3.11: Noble Gas Configuration is shared under a CC BY-NC license and was authored, remixed, and/or curated by LibreTexts.
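For readers who like to see the bookkeeping spelled out, here is a small Python sketch of the shorthand rule described above. It fills subshells in the usual aufbau order, which reproduces the table as well as the potassium and strontium examples; the function and table names are my own, and this naive filling rule ignores well-known exceptions such as chromium and copper.

```python
# Aufbau filling order and subshell capacities (s=2, p=6, d=10, f=14)
AUFBAU = ["1s", "2s", "2p", "3s", "3p", "4s", "3d", "4p", "5s", "4d",
          "5p", "6s", "4f", "5d", "6p", "7s", "5f", "6d", "7p"]
CAPACITY = {"s": 2, "p": 6, "d": 10, "f": 14}
NOBLE_GASES = {2: "He", 10: "Ne", 18: "Ar", 36: "Kr", 54: "Xe", 86: "Rn"}

def configuration(z):
    """Full ground-state configuration by the aufbau rule, as (subshell, count) pairs."""
    config, remaining = [], z
    for sub in AUFBAU:
        if remaining <= 0:
            break
        n = min(CAPACITY[sub[-1]], remaining)
        config.append((sub, n))
        remaining -= n
    return config

def noble_gas_shorthand(z):
    """Noble-gas shorthand: symbol of the previous noble gas plus the remaining subshells."""
    core = max((ng for ng in NOBLE_GASES if ng < z), default=0)
    full = configuration(z)
    core_len = len(configuration(core)) if core else 0
    tail = " ".join(f"{sub}^{n}" for sub, n in full[core_len:])
    return (f"[{NOBLE_GASES[core]}] " if core else "") + tail

print(noble_gas_shorthand(11))   # [Ne] 3s^1
print(noble_gas_shorthand(16))   # [Ne] 3s^2 3p^4
print(noble_gas_shorthand(19))   # [Ar] 4s^1
print(noble_gas_shorthand(38))   # [Kr] 5s^2
```

For example, `noble_gas_shorthand(27)` returns `[Ar] 4s^2 3d^7`, the same set of subshells as the configuration in review question 2, just written in filling order.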
# Derivation of Christoffel symbol

by rolandk
Tags: christoffel, derivation, symbol

P: 2
Has anyone a derivation at hand of the Christoffel symbol by permuting of indices in a free fall system?
Roland

Emeritus Sci Advisor P: 7,620
If you have the equations for geodesic motion in a coordinate basis, you can "read off" the Christoffel symbols from the equation using the geodesic deviation equation, i.e. if $$x^a(\tau)$$ is a geodesic, and we represent the differentiation with respect to $$\tau$$ by a dot, we can write
$$\ddot{x^a} + \Gamma^a{}_{bc} \dot{x^b}\dot{x^c} = 0$$
MTW gives the example on pg 345 in "Gravitation": if
$$\ddot{\theta} - \sin(\theta) \cos(\theta) (\dot{\phi})^2 = 0$$
then
$$\Gamma^{\theta}{}_{\phi \phi} = -\sin(\theta) \cos(\theta)$$
and the other Christoffel symbols are zero. I'm not sure if this is what you're looking for, though.

Sci Advisor HW Helper P: 11,915
Section 6.4 of [1] gives exactly what u want. Isn't that
$$\Gamma^{\sigma}{}_{\mu\rho}=:e_{\mu}{}^{m}D_{\rho}e_{m}{}^{\sigma}$$
, where the covariant derivative uses the ordinary $\partial_{\rho}$ and the spin-connection...?
Daniel.
---------------------------------------------
[1] Pierre Ramond, "Field Theory: A Modern Primer", Addison-Wesley, 2nd ed., 1989

P: 2,954
Derivation of Christoffel symbol

Quote by rolandk: Has anyone a derivation at hand of the Christoffel symbol by permuting of indices in a free fall system? Roland

Yup. See http://www.geocities.com/physics_world/ma/chris_sym.htm
This is strictly a mathematical derivation so there is no mention of free-fall. However the condition which is equivalent to it is in Eq. (3). Please note that one does not "derive" the Christoffel symbols (of the second kind). They are "defined." Once they are defined then one demonstrates relationships between them and other mathematical objects such as the metric tensor coefficients etc. In the link above the Christoffel symbols are defined in the same way Kaplan defines them in his advanced calculus text. In my page that is given in Eq. (8). I know of at least 3 different ways to define them though.
Pete

Sci Advisor HW Helper P: 11,915
Well, Pete, either u or Dirac [1] have it all mixed up. I'd go for you, as Dirac got a Nobel prize and I've been taught GR from his book [1]. Your formula #2 is valid for contravariant vectors ([1], eq. 3.3, page 6) (a.k.a. vectors, which are defined on the tangent bundle to a flat/curved $\mathbb{M}_{4}$)... So how about getting it all done correctly, or don't give that link anymore and exclude it from your post.
Daniel.
--------------------------------------------------
[1] P.A.M. Dirac, "General Relativity", 1975.

P: 2,954
Quote by dextercioby: Well, Pete, either u or Dirac [1] have it all mixed up. I'd go for you, as Dirac got a Nobel prize and I've been taught GR from his book [1]. Your formula #2 is valid for contravariant vectors ([1], eq. 3.3, page 6) (a.k.a. vectors, which are defined on the tangent bundle to a flat/curved $\mathbb{M}_{4}$)... So how about getting it all done correctly, or don't give that link anymore and exclude it from your post. Daniel.

What's with the attitude dude?? I don't delete portions of my web pages due to the comment of a reader who is a bit ignorant on the subject. As for "giving it" to you, I wasn't. I was giving it to Roland. As I said, there are several definitions of the Christoffel symbols. Dirac uses one definition, Kaplan another, Ohanian yet another, Lovelock and Rund yet another.
However, I'm looking at the text you refer to and I don't see what you're talking about. That equation is not directly related to this topic. Also, what you refer to as "valid for contravariant vectors" (really the components of such a vector in a particular coordinate system) is identical in meaning to what I referred to above as the Christoffel symbols of the first kind. The other ones are the Christoffel symbols of the first kind. So how about first learning the subject completely before you make another attempt at correcting me in such a rude manner?

Sci Advisor HW Helper P: 11,915
Alright, u didn't get it... Let's take it slowly: what's formula #2 about...?
Daniel.

P: 2,954
Quote by dextercioby: Alright, u didn't get it...

So why bother with me? Seems that you're unwilling to entertain the possibility that you made an error. In any case Eq. #2 in my page is the transformation properties of the components of a contravariant vector. However I see your point in Dirac. It was confusing since he never named them. Dirac and Ramond, two authors you quote, use different definitions of the Christoffel symbols of the second kind. However it appears that you're having a bad day since I find your comments to be irritating.
Later

P: 2,954
So why bother with me? Seems that you're unwilling to entertain the possibility that you made an error. In any case Eq. #2 in my page is the transformation properties of the components of a contravariant vector. However I see your point in Dirac. It was confusing since he never named them. Dirac and Ramond, two authors you quote, use different definitions of the Christoffel symbols of the second kind. However it appears that you're having a bad day since I find your comments to be irritating. A long time ago I found that discussing anything with someone posting in such a grating manner is not worth posting to. Life's too short and I have many other irritating things which deserve more attention.
Later

Sci Advisor HW Helper P: 11,915
You specifically use "covariant vector", and 2-ice... And now u turn it and use "contravariant" vector... What should I understand, that I'm having a bad day...?
Daniel.

P: 2,954
Quote by rolandk: Has anyone a derivation at hand of the Christoffel symbol by permuting of indices in a free fall system? Roland

There is a minor point I'd like to make to add to those of mine above. There are four symbols in tensor analysis which are tightly related. In certain circumstances they are identical. In most circumstances you'll see in GR they are identical. Two of the symbols are referred to as the Christoffel symbols (of the first and second kind) and the affine connection symbols (of the first and second kind). The affine connection has a capital gamma as a kernel letter. There are also two terms referred to as "affine geometry" and "metric geometry". See http://www.geocities.com/physics_wor...c_geometry.htm
I'm not 100% sure which symbols are which since I'm not sure everyone uses the same definition of each. In any case they are the same when the manifold has both a connection and metric defined on it and the metric geodesics and the affine geodesics are the same.
Pete
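To complement the thread above, here is a small SymPy sketch (my own, not taken from any of the references quoted) that computes Christoffel symbols of the second kind from the usual metric-based formula, for the round metric on the unit 2-sphere. It reproduces the MTW example quoted earlier, $\Gamma^{\theta}{}_{\phi\phi} = -\sin\theta\cos\theta$.

```python
import sympy as sp

theta, phi = sp.symbols('theta phi')
coords = [theta, phi]

# Round metric on the unit 2-sphere: ds^2 = d(theta)^2 + sin(theta)^2 d(phi)^2
g = sp.Matrix([[1, 0], [0, sp.sin(theta)**2]])
ginv = g.inv()

def christoffel(a, b, c):
    """Gamma^a_{bc} = (1/2) g^{ad} (d_b g_{dc} + d_c g_{db} - d_d g_{bc})."""
    return sum(sp.Rational(1, 2) * ginv[a, d] *
               (sp.diff(g[d, c], coords[b]) + sp.diff(g[d, b], coords[c])
                - sp.diff(g[b, c], coords[d]))
               for d in range(2))

print(christoffel(0, 1, 1))  # Gamma^theta_{phi phi} = -sin(theta)*cos(theta)
print(christoffel(1, 0, 1))  # Gamma^phi_{theta phi}  = cos(theta)/sin(theta)
```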
Find the following limit. Notes: Enter "DNE" if limit Does Not Exist. $\displaystyle \lim_{x\to -\infty} \frac{2x^{3}+4x-5}{6x^{2}+2}$ =
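Not part of the original exercise, but as a hedged cross-check of the kind of limit asked for here: for $x \to -\infty$ the ratio behaves like $2x^{3}/(6x^{2}) = x/3$, and a one-line SymPy computation confirms the limiting behaviour.

```python
import sympy as sp

x = sp.symbols('x')
# For large |x| the fraction behaves like x/3, so it diverges as x -> -infinity
print(sp.limit((2*x**3 + 4*x - 5) / (6*x**2 + 2), x, -sp.oo))   # prints -oo
```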
# The importance of velocity in simultaneity

1. Dec 31, 2014 ### sjkelleyjr
Please take this post with a grain of salt as I am a computer scientist and not a mathematician or physicist. I started in on Einstein's Relativity: The Special and The General, and just finished the section on the train thought experiment involving simultaneity. I then started in on the Lorentz section following that, and I feel I understand the math involved fairly well. So I believe fully in what is being said in the thought experiment; however, I feel Einstein had to kind of "dumb it down" for the general populace, and I feel this is leading to my confusion. Here's why.

If we take a snapshot of the train at the moment the lightning strikes hit (i.e. the moment of simultaneity to the embankment observer), why does the velocity of the train have anything to do with this moment? In my mind I'm stopping everything at this exact moment in time (even though there isn't any one "exact" moment) and since the speed of light is constant, wouldn't the lightning bolts reach the midpoint of the train simultaneously regardless of the velocity of the train?

Writing it out now I feel like my problem is my attempt to "snapshot" the train. In reality time is still moving forward, and thus velocity does become a factor. Is this where my confusion lies? Again, I understand the Lorentz transforms and how velocity affects these, and I must admit I feel safer in the realm of math than the thought experiment. As stated in this post, "there is absolutely no ambiguity in the math, and it is quite clear what Einstein is saying," and I agree. Thanks!

2. Dec 31, 2014 ### ShayanJ

3. Dec 31, 2014 ### Staff: Mentor
A snapshot is taken with a camera, and light has to travel from the events being captured in the snapshot to the camera before they can be recorded. So when you look at a snapshot, you are not seeing a picture of everything at the same time, you're seeing a picture in which the more distant stuff happened further back in time than the nearer stuff. Cameras moving relative to one another will capture different snapshots and different notions of what happened "at the same time" even if they are in the same place at the same time when they are triggered.

And if by "snapshot" you mean some magic process that captures everything at the same time without having to allow for light travel times... Well, how are you going to define "at the same time" in a way that works equally well when we assume the train is at rest and the platform is moving backwards and when we assume that the platform is at rest and the train is moving forward?

That is right.

4. Dec 31, 2014 ### Joshua L
When I first started to learn Einstein's theory of relativity, I used a similar, yet different, thought experiment to understand the "issue" of simultaneity. In my opinion, it is easier to analyze. Here are my two cents.

There is a train car moving with a constant velocity, $\vec v$, on a straight railroad. There is an observer outside the train car standing right beside the track. There is another observer standing in the train car. Also, there is a strange device placed in the exact center of the train car; it is a double-sided photon gun. When this photon gun activates, it will fire a photon towards the front end of the train car and it will fire another photon towards the opposite side of the train car. Now, Maxwell's equations imply that all electrodynamic radiation, or in our case, light, travels with a constant speed, $c$ (~3.00e8 m/sec).
But the question at the time was: relative to what? Everything. Light travels at the same speed for any inertial reference frame. This realization revealed humanity's misconception of the idea of simultaneity.

Now back to the thought experiment. The double photon gun is set to activate when the exact center of the train is aligned with the observer outside of the train car. When that happens, the observer inside the train car would see the two photons traveling at the same speed, $c$, in different directions and hit the sides of the train car at the same time. To the observer inside the train car, the two photons would hit the walls simultaneously. This would be expected.

Now, let's imagine the event through the other observer's eyes. When the center of the train car is aligned with this observer, the double photon gun will activate. But the train car is traveling at velocity $\vec v$. What is the speed of each photon relative to this observer? The answer is $c$. The observer would see that the photons are traveling towards the walls of the train car with speed $c$. Since the observer also notices that the walls of the train car are moving at velocity $\vec v$, he/she would find that the back wall of the train car would be struck by a photon before the front wall of the train car. To the observer outside the train car, this event would not be simultaneous.

This is one of my favorite thought experiments.

5. Dec 31, 2014 ### Staff: Mentor
It's actually the exact same thought experiment, except with the light going in the other direction. The firing of your bidirectional light source is the common event analogous to the two flashes reaching the observer's eyes at the same time; and the arrival of the flashes at the two ends of the train is analogous to the two lightning strikes. (This is a comment, not a criticism or a complaint.)

I presume that you've noticed that you could put the bidirectional light source on the platform and get the exact same result: signals reach the ends of the train simultaneously according to the train observer but not the platform observer?

6. Jan 1, 2015 ### sjkelleyjr
But herein lies my intuitive confusion. It seems to me that if the photons fire at $c$ they too would hit each side of the train simultaneously with reference to the inside passenger, albeit farther down the tracks. I think I'm visualizing $\vec v$ as something roughly equal to a real train, in which case $\vec v$ is kind of inconsequential (my first error). I also feel as though $c$ is being added to $\vec v$ and thus traveling with the train (or carrying it down the railway with it, similar to a passenger), rather than being the limiting velocity (my second error).

Yes, this is what I meant.

Whoa... that's an interesting way of looking at the embankment observer vs. passenger perspective (and perhaps a more mature way). But I guess what I'm saying is, if we assume this hadn't all been proven by the math and by Einstein (i.e. we don't assume there are different times with which to take the snapshot), and if we had this ability to take a magic snapshot in both perspectives, wouldn't it show the same thing? Lightning striking at each end of the train simultaneously to the outside observer, and lightning striking the front and the back of the train simultaneously to the inside observer. At the time of this magic snapshot, the velocity of the train would be irrelevant, would it not?
I suppose this point in time is kind of irrelevant though, because the thought experiment is observing $c$ throughout the experiment, and thus we need to "snapshot" even further in time after the initial strikes to understand fully the purpose of the thought experiment. I think all of these responses led me to a better understanding. Thanks to all who responded.

7. Jan 1, 2015 ### Staff: Mentor
Looking at it that way (train at rest and platform moving backwards has to be equivalent to platform at rest and train moving forwards) is one of the essential breakthrough moments in understanding special relativity. It's hard to rid ourselves of the habit of thinking that "at rest relative to the ground under my feet" means "not moving" - but you might consider that a hypothetical Martian, seated comfortably in his chair in front of a telescope on Mars and watching the Earth spinning on its axis and careening around the sun at several kilometers per second, will consider the suggestion that either the platform or the train is "not moving" to be absurd.

8. Jan 1, 2015 ### pervect Staff Emeritus
That would be online at http://www.bartleby.com/173/9.html, correct? Take a look at Figure 1 again, and read the text. The added emphasis on the word embankment is mine. When we compare Einstein's words to yours, we see that you somehow switched from the embankment frame to the train frame. Note that Einstein (and we in his footsteps) are doing the analysis in the embankment frame, and that because of this, the correct conclusions we can draw apply in the embankment frame. When we compare your words to Einstein, we see a crucial difference, an intuitive leap perhaps. The answer is no, the lightning bolts reach the midpoint of the embankment simultaneously.

Now, the next thing you/we need to ask is: when the light rays meet together at the midpoint of the embankment, where is the midpoint of the train? I'm torn between spelling it out and letting you think. As a compromise, I'll add the answer that I think you should be getting at the end of this post.

....

Maybe I should stop there, but I wanted to add something else. An additional resource that might help (let me know if it does): "The challenge of changing deeply-held student beliefs about the relativity of simultaneity", by Scherr et al, available online at http://arxiv.org/ftp/physics/papers/0207/0207081.pdf

Of particular interest and relevance are the following sections. Not quite stated in the above summary is the second observer Beth, at the midpoint of the train. Note that this is similar to Einstein's original presentation, with a few elaborations, the addition of char marks and the specification of a particular observer Alan at the midpoint. Note that it is directly specified here that Alan receives the flashes at the same time (rather than relying on the reader to derive this from the definition of simultaneity). The next section of interest suggests a further refinement, and the reason it was added.

-----

As promised, the answer to my question. The center of the train is not at the center of the embankment when the light flashes meet. The center of the train was at the center of the embankment when the light flashes were emitted (per figure 1), but because the light signals take time to meet, by the time they do meet, the center of the train has moved to the right of the center of the embankment.

9. Jan 1, 2015 ### sjkelleyjr
Yes, I understand all of this.
I was absolutely making "intuitive leaps", perhaps because I was attempting to pretend as though none of this had already been proven, thereby purposefully blinding myself logically. My "snapshot" conception wasn't allowing for this time needed for the light signals to meet, and therein lay my misconception. You spelled it out elegantly with this bolded section. I believe Nugatory also mentioned it in his/her first response, by asking whether these snapshots were camera snapshots or "magic" snapshots. Thanks again for all the responses.

10. Mar 18, 2015 ### Micheth
Hi. What if you have two clocks, situated at the front and rear of the train (traveling at 99+c). Lightning hits both of them simultaneously (as viewed by a person on the station platform). The person on the train sees the front lightning hit much earlier than the rear lightning, but the lightning hitting each clock breaks it, leaving it displaying the respective times at which each was hit (which were exactly equal times, say at exactly 12:00:00:...:00).

The person on the train swears the first one hit first, but when the train comes to a stop and both broken clocks are examined, the values are both exactly the same, no? Reference-frame-based simultaneity might say that the clocks would register different records at the time of their stopping, but what if the person on the platform took a video/picture of the clocks breaking and the times they displayed as viewed in his time frame? They would display the same times, and that can't be different from what they would find upon stopping the train and inspecting the stopped clocks' times, right? To me, this would seem to suggest that the non-simultaneity for the person on the train was merely an illusion and in fact the events occurred with an objective simultaneity...

11. Mar 18, 2015 ### Ibix
To the person on the platform the two clocks were not correctly synchronised, so one would have read 12:00:00 and the other 12:00:01 (or whatever) when they were simultaneously struck. According to the person on the train, the clocks were correctly synchronised but were struck at 12:00:00 and 12:00:01 respectively.

12. Mar 18, 2015 ### Micheth
I don't see why the platform viewer would not see the clocks synchronized? They're both moving at the same speed with respect to him, right? And light from both of them takes the same amount of time to reach him...

13. Mar 18, 2015 ### Ibix
Observers in relative motion disagree on lengths, clock tick rates, and when clocks at different places were zeroed. All inertial observers will agree that the clocks tick at the same rate as each other, although they will not agree what rate that is. They will all agree that the clocks were never set to the same time, except for the ones stationary with respect to the clocks.

This actually follows from the experiment you are proposing. If it weren't the case we'd end up with the result you thought of, with the implication that the platform is in an absolute rest frame. No evidence of any such rest frame has ever been found - every experiment has pointed to all inertial frames being equivalent. Look up "relativity of simultaneity" to find out more detail.

14. Mar 18, 2015 ### A.T.
It's not about what he actually sees with his eyes, but what happens in his frame when signal delays are accounted for. This might help:

Yes, they tick at the same rate in his frame, but with an offset.

That is not relevant. The key is that the signals that start the clocks do not reach them simultaneously in his frame.
15. Mar 18, 2015 ### Micheth
Hm? Why are they at an offset, when they get struck by the lightning at the same moment (in his frame), and are exactly the same distance away from him, moving at the same speed, at the moment they're struck?

16. Mar 18, 2015 ### A.T.
Before you come to the lightning stopping the clocks, you need to start them first. How are the clocks synchronized initially?

17. Mar 18, 2015 ### Micheth
I'm assuming they were synchronized initially when the train was at rest somewhere. Then they were accelerated together to their current uniform velocity of 99+c, so they must still be synchronized, right? The moment they pass him, they are still synchronized, and at exactly equal distances from him, and the lightning is in his own frame, so I would guess he would see both lightning bolts hit both clocks simultaneously and registering the same time when that happened.

18. Mar 18, 2015 ### A.T.
They were accelerated together with the train, which contracts during the acceleration. So the clocks at its ends undergo different accelerations and don't stay synchronized.

19. Mar 18, 2015 ### Staff: Mentor
Even if you had two guys on the ground at the exact locations where the lightning strikes hit recording the exact time that the strikes occurred, and you had two other guys on the train, also at the exact locations on the train where the lightning strikes hit recording the exact time that the two strikes occurred, then if the guys on the ground report that the two strikes occurred simultaneously, the two guys on the train will report that the two strikes did not occur simultaneously. (That is, if all the clocks on the ground are synchronized with one another, while all the clocks on the train are independently synchronized with one another.)
Chet
Last edited: Mar 18, 2015

20. Mar 18, 2015 ### Micheth
Why would the contraction of the train matter...? Could we not dispense with the train altogether and just say the two clocks accelerate in the same direction (they have independent propulsion systems...)? They were at rest in the same frame, synchronized, and accelerated at the same time in the same direction, and register the same times at the instant the guy on the platform sees them at equal distances.

21. Mar 18, 2015 ### bahamagreen
Now maybe I'm confused, but I think your synch / non-synch polarity is backwards and your after-12 stopped clock times can't exist? It looks to me like the data result is that when the train and platform observers meet to examine the two clocks all together in the same place after the experiment, both stopped clocks read 12:00:00.000...; neither clock could have ever been showing a later time to either observer during the run of the experiment, only earlier times up to 12:00:00.000...

According to the person on the train, the strikes were staggered front first, rear second, but at no point does he say he believes the clocks are synchronized. Clearly he must conclude they were not synchronized; the rear clock had to have been running behind the front clock in order to be struck after the first clock and have both clocks stopped reading 12. According to the platform observer the clocks were synchronized and were both struck and stopped when reading 12. If that is not correct, then I am confused.

But to Micheth's question... there is nothing "more real" about the clock synchrony. The Wiki link Shyan posted is the scenario where the clock synchrony is for the train observer rather than for the one on the platform.
You can also take both versions and take the train to be at rest with respect to a moving platform... You can arrange the experiment to make the clock synchrony hold for any third inertial observer and for neither of the train or platform observers... There is nothing special about clock synchrony; it just makes the thought experiments clearer to arrange it for one of the observers.

22. Mar 18, 2015 ### jbriggs444
This acceleration at the same time... is that at the same time according to the track frame or according to the train frame?

23. Mar 18, 2015 ### Micheth
Both, since it is before they accelerate, so they are at rest with respect to the track & platform (though still being far, far away).

24. Mar 18, 2015 ### A.T.
Because its ends move at different speeds during the acceleration & contraction, and thus the clocks desynchronize.

Yes, we can do this, but then the clocks will be desynchronized in their final rest frame.

25. Mar 18, 2015 ### Micheth
Why is that? I'm assuming they both accelerate and decelerate exactly in tandem. (But of course in this example it doesn't matter what happens to them after they get hit by the lightning bolts since they're now broken and just continue to register the time they got hit).
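To put rough numbers on the discussion in this thread, here is a minimal Python sketch (my own) of the standard Lorentz transformation $t' = \gamma(t - vx/c^2)$, assuming a train speed of $0.6c$ and two strikes 500 m on either side of the observer; both values are made up for illustration. Two events that are simultaneous in the embankment frame come out roughly 2.5 microseconds apart in the train frame.

```python
import math

c = 299_792_458.0          # speed of light, m/s
v = 0.6 * c                # assumed train speed in the embankment frame
gamma = 1 / math.sqrt(1 - (v / c) ** 2)

def lorentz(t, x):
    """Transform an event (t, x) from the embankment frame to the train frame."""
    return gamma * (t - v * x / c**2), gamma * (x - v * t)

# Two lightning strikes, simultaneous in the embankment frame (t = 0),
# at the rear (-500 m) and front (+500 m) of the train.
for label, x in [("rear", -500.0), ("front", +500.0)]:
    t_prime, x_prime = lorentz(0.0, x)
    print(f"{label}: t' = {t_prime:.3e} s, x' = {x_prime:.1f} m")
    # front strike comes out about 1.25 microseconds earlier than the rear one
```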
# Rhombus angles add up to

posted in: Uncategorized |

The four interior angles of a rhombus always add up to 360°. A rhombus is a quadrilateral: a two-dimensional shape with four straight sides, all of the same length, with opposite sides parallel. One property that all quadrilaterals share is that their interior angles add up to 360°, because the interior angles of an n-sided polygon sum to (n − 2) × 180° and here n = 4. (For comparison: the interior angles of a triangle add up to 180°, and angles on a straight line add up to 180°.)

Properties of a rhombus:

- All four sides are congruent.
- Opposite sides are parallel and opposite angles are equal.
- Adjacent (consecutive) angles are supplementary: they add up to 180°.
- The diagonals bisect each other at right angles, and each diagonal bisects the vertex angles at its endpoints.
- A square is a rhombus whose angles are all right angles, so all squares are rhombuses, but not all rhombuses are squares.

Because opposite angles are equal and adjacent angles add up to 180°, one interior angle determines the other three; the four angles come in two equal pairs. For example, if a rhombus has two interior angles of 34°, the remaining two angles must together account for 360° − 34° − 34° = 292°, so each of them measures 146°.

Other names for a rhombus include "equilateral quadrilateral", "diamond" and "lozenge". "Lozenge" is sometimes reserved for the rhombus with two 45° and two 135° angles, while the rhombus with a 60° angle is sometimes called a calisson, after the French sweet. Penrose tilings, which are aperiodic tilings of the plane, can be built from two rhombus shapes; here a tiling is a covering of the plane by non-overlapping polygons, and aperiodic means that shifting the tiling by any finite distance, without rotation, cannot reproduce the same tiling.

Related shapes: a rectangle is a four-sided shape in which every angle is a right angle (90°). A parallelogram in general also has opposite angles equal and adjacent angles adding up to 180°; the rhombus adds the requirement that all four sides be congruent. In a trapezium (trapezoid), two adjacent angles on the same leg are supplementary, and the two diagonals intersect each other in the same ratio.

Area of a rhombus: the area can be calculated as the altitude times the side length (Area = altitude × s), as the side length squared times the sine of either interior angle (Area = s² sin A = s² sin B), or as half the product of the diagonals (Area = (p × q)/2). Equivalently, if you draw the surrounding rectangle on the diagonals, the area of the rhombus is half the area of that rectangle.
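The angle bookkeeping above is mechanical enough to express as a few lines of Python (a sketch; the function name is my own): opposite angles are equal and adjacent angles are supplementary, so one interior angle determines the other three, and the four always sum to 360°.

```python
def rhombus_angles(angle_deg):
    """Given one interior angle of a rhombus (in degrees), return all four.

    Opposite angles are equal and adjacent angles are supplementary,
    so the angles are a, 180 - a, a, 180 - a, which always sum to 360.
    """
    a = angle_deg
    b = 180 - a
    angles = [a, b, a, b]
    assert abs(sum(angles) - 360) < 1e-9
    return angles

print(rhombus_angles(34))   # [34, 146, 34, 146], matching the example above
```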
## Welcome

I’m an Inria researcher in the AriC project, which is part of the LIP lab in ENS Lyon. I recently moved there after more than 20 years in the stimulating environment of the Algorithms project in Rocquencourt (near Paris). My field of research is computer algebra. It is the study of effective mathematics and their complexity: how much mathematics can be done by a computer, and how fast? In this area, I am mostly interested in applications to classical analysis (asymptotics, special functions) and combinatorics. See the menus “Software” and “Publications” for more information.

## Recent Work

Gröbner bases are a standard tool in the manipulation and resolution of polynomial systems. Currently, the best known algorithm for their computation is Faugère’s F5 algorithm from 2002. By studying the structure of the matrices it constructs, it is possible to gain insight into the shape of a Gröbner basis in a generic situation, as well as to bound the complexity of the algorithm [1].

This is a walk of 100 steps in the quarter plane, starting at the origin, using only steps N, S, E, W, SE, and ending at the origin. The number of such walks (called excursions) of length n does not satisfy a linear recurrence with polynomial coefficients, while it does for other step sets. A computer-aided proof of this and related facts relies on probability theory (eigenvalues of a Laplacian), number theory (G-functions), and an irrationality result. See [2] for more.

The picture on the right, from the cover of a book by Kenneth Chan on “Spacecraft collision probability”, illustrates the quantity of debris orbiting around Earth and the importance of predicting collisions. We describe a method for the evaluation of the probability of such collisions using a closed form for the Laplace transform, an explicit recurrence for its Taylor coefficients and a technique to evaluate these in a numerically stable way. This method compares favorably to other existing techniques. See [3].

Definite integrals of rational functions in several variables provide elementary representations for a large number of special functions, or combinatorial sequences through their generating functions. These integrals satisfy linear differential equations from which a lot of information about them can be extracted easily. The computation of these integrals is a classical topic, going back at least to Émile Picard, with recent progress based on creative telescoping. See [4] for more.

A combinatorial version of Newton’s iteration leads to fast routines for the enumeration and random generation of series-parallel graphs. This is just an example of a very general approach to a large family of combinatorial structures that can be defined recursively [5].
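For readers who have not met Gröbner bases, the following SymPy sketch computes one for a small made-up system in the grevlex order mentioned in [1]. SymPy uses a Buchberger-style algorithm, not the F5 algorithm analysed in that paper, so this is only an illustration of the objects involved.

```python
import sympy as sp

x, y, z = sp.symbols('x y z')

# An assumed toy system (not taken from the paper)
F = [x**2 + y**2 + z**2 - 1, x*y - z, x - y*z]

# Groebner basis for the graded reverse lexicographic (grevlex) order
G = sp.groebner(F, x, y, z, order='grevlex')
for p in G.exprs:
    print(p)
```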
Next, we analyse more precisely the structure of the polynomials in the Gröbner bases with signatures that $F_5$ computes and use it to bound the complexity of the algorithm. Our estimates show that the version of~$F_5$ we analyse, which uses only standard Gaussian elimination techniques, outperforms row reduction of the Macaulay matrix with the best known algorithms for moderate degrees, and even for degrees up to the thousands if Strassen’s multiplication is used. The degree being fixed, the factor of improvement grows exponentially with the number of variables. @article{BardetFaugereSalvy2014, Arxiv = {abs/1312.1655}, Author = {Bardet, Magali and Faug{\e}re, Jean-Charles and Salvy, Bruno}, Journal = {Journal of Symbolic Computation}, Institution = {arXiv}, Title = {On the Complexity of the {F}5 {G}r{\"o}bner basis Algorithm}, Year = {2014}, doi = {10.1016/j.jsc.2014.09.025}, Note = {To appear.}, Abstract = {We study the complexity of Gröbner bases computation, in particular in the generic situation where the variables are in simultaneous Noether position with respect to the system. We give a bound on the number of polynomials of degree $d$ in a Gröbner basis computed by Faugère's $F_5$ algorithm (2002) in this generic case for the grevlex ordering (which is also a bound on the number of polynomials for a reduced Gröbner basis, independently of the algorithm used). Next, we analyse more precisely the structure of the polynomials in the Gröbner bases with signatures that $F_5$ computes and use it to bound the complexity of the algorithm. Our estimates show that the version of~$F_5$ we analyse, which uses only standard Gaussian elimination techniques, outperforms row reduction of the Macaulay matrix with the best known algorithms for moderate degrees, and even for degrees up to the thousands if Strassen's multiplication is used. The degree being fixed, the factor of improvement grows exponentially with the number of variables. } } 2. A. Bostan, K. Raschel, and B. Salvy, “Non-D-finite excursions in the quarter plane,” Journal of combinatorial theory, series A, vol. 121, pp. 45-63, 2014. arXiv:abs/1205.3300 doi:10.1016/j.jcta.2013.09.005 [BibTeX] [Abstract] We prove that the sequence $(e^{\mathfrak{S}}_n)_{n\geq 0}$ of excursions in the quarter plane corresponding to a nonsingular step set $\mathfrak{S}\subseteq\{0,\pm 1\}^2$ with infinite group does not satisfy any nontrivial linear recurrence with polynomial coefficients. Accordingly, in those cases, the trivariate generating function of the numbers of walks with given length and prescribed ending point is not D-finite. Moreover, we display the asymptotics of $e^{\mathfrak{S}}_n$. @article{BostanRaschelSalvy2014, Arxiv = {abs/1205.3300}, Author = {Bostan, Alin and Raschel, Kilian and Salvy, Bruno}, Date-Modified = {2013-10-04 14:04:09 +0000}, Doi = {10.1016/j.jcta.2013.09.005}, Eprint = {1205.3300}, Institution = {arXiv}, Journal = {Journal of Combinatorial Theory, Series {A}}, Pages = {45--63}, Title = {Non-{D}-finite excursions in the quarter plane}, Volume = {121}, Year = {2014}, Abstract = {We prove that the sequence $(e^{\mathfrak{S}}_n)_{n\geq 0}$ of excursions in the quarter plane corresponding to a nonsingular step set $\mathfrak{S}\subseteq\{0,\pm 1\}^2$ with infinite group does not satisfy any nontrivial linear recurrence with polynomial coefficients. Accordingly, in those cases, the trivariate generating function of the numbers of walks with given length and prescribed ending point is not D-finite. 
Moreover, we display the asymptotics of $e^{\mathfrak{S}}_n$. }} 3. R. Serra, D. Arzelier, M. Joldes, J. Lasserre, A. Rondepierre, and B. Salvy, “A new method to compute the probability of collision for short-term space encounters,” in AIAA/AAS astrodynamics specialist conference, 2014, pp. 1-7. doi:10.2514/6.2014-4366 [BibTeX] [Abstract] This article provides a new method for computing the probability of collision between two spherical space objects involved in a short-term encounter. In this specific framework of conjunction, classical assumptions reduce the probability of collision to the integral of a 2-D normal distribution over a disk shifted from the peak of the corresponding Gaussian function. Both integrand and domain of integration directly depend on the nature of the short-term encounter. Thus the inputs are the combined sphere radius, the mean relative position in the encounter plane at reference time as well as the relative position covariance matrix representing the uncertainties. The method presented here is based on an analytical expression for the integral. It has the form of a convergent power series whose coefficients verify a linear recurrence. It is derived using Laplace transform and properties of D-finite functions. The new method has been intensively tested on a series of test-cases and compares favorably to other existing works. @inproceedings{SerraArzelierJoldesLasserreRondepierreSalvy2014, Author = {Serra, Romain and Arzelier, Denis and Joldes, Mioara and Lasserre, Jean-Bernard and Rondepierre, Aude and Salvy, Bruno}, Booktitle = {{AIAA/AAS} Astrodynamics Specialist Conference}, Date-Modified = {2014-09-01 13:58:59 +0000}, Doi = {10.2514/6.2014-4366}, Pages = {1--7}, Publisher = {American Institute of Aeronautics and Astronautics}, Title = {A New Method to Compute the Probability of Collision for Short-term Space Encounters}, Year = {2014}, Abstract = {This article provides a new method for computing the probability of collision between two spherical space objects involved in a short-term encounter. In this specific framework of conjunction, classical assumptions reduce the probability of collision to the integral of a 2-D normal distribution over a disk shifted from the peak of the corresponding Gaussian function. Both integrand and domain of integration directly depend on the nature of the short-term encounter. Thus the inputs are the combined sphere radius, the mean relative position in the encounter plane at reference time as well as the relative position covariance matrix representing the uncertainties. The method presented here is based on an analytical expression for the integral. It has the form of a convergent power series whose coefficients verify a linear recurrence. It is derived using Laplace transform and properties of D-finite functions. The new method has been intensively tested on a series of test-cases and compares favorably to other existing works.} } 4. A. Bostan, P. Lairez, and B. Salvy, “Creative telescoping for rational functions using the Griffiths-Dwork method,” in Issac ’13: proceedings of the 38th international symposium on symbolic and algebraic computation, 2013, pp. 93-100. arXiv:abs/1301.4313 doi:10.1145/2465506.2465935 [BibTeX] [Abstract] Creative telescoping algorithms compute linear differential equations satisfied by multiple integrals with parameters. We describe a precise and elementary algorithmic version of the Griffiths–Dwork method for the creative telescoping of rational functions. 
This leads to bounds on the order and degree of the coefficients of the differential equation, and to the first complexity result which is simply exponential in the number of variables. One of the important features of the algorithm is that it does not need to compute certificates. The approach is vindicated by a prototype implementation. @inproceedings{BostanLairezSalvy2013, Arxiv = {abs/1301.4313}, Author = {Bostan, Alin and Lairez, Pierre and Salvy, Bruno}, Booktitle = {ISSAC '13: Proceedings of the 38th International Symposium on Symbolic and Algebraic Computation}, Date-Modified = {2013-07-02 11:35:49 +0000}, Doi = {10.1145/2465506.2465935}, Editor = {Kauers, Manuel}, Eprint = {1301.4313}, Pages = {93--100}, Publisher = {ACM Press}, Title = {Creative telescoping for rational functions using the {G}riffiths-{D}work method}, Year = {2013}, Abstract = {Creative telescoping algorithms compute linear differential equations satisfied by multiple integrals with parameters. We describe a precise and elementary algorithmic version of the Griffiths--Dwork method for the creative telescoping of rational functions. This leads to bounds on the order and degree of the coefficients of the differential equation, and to the first complexity result which is simply exponential in the number of variables. One of the important features of the algorithm is that it does not need to compute certificates. The approach is vindicated by a prototype implementation. }, Bdsk-Url-1 = {http://arxiv.org/abs/1301.4313}} 5. C. Pivoteau, B. Salvy, and M. Soria, “Algorithms for combinatorial structures: well-founded systems and Newton iterations,” Journal of combinatorial theory, series a, vol. 119, pp. 1711-1773, 2012. arXiv:abs/1109.2688 doi:10.1016/j.jcta.2012.05.007 [BibTeX] [Abstract] We consider systems of recursively defined combinatorial structures. We give algorithms checking that these systems are well founded, computing generating series and providing numerical values. Our framework is an articulation of the constructible classes of Flajolet and Sedgewick with Joyal’s species theory. We extend the implicit species theorem to structures of size zero. A quadratic iterative Newton method is shown to solve well-founded systems combinatorially. From there, truncations of the corresponding generating series are obtained in quasi-optimal complexity. This iteration transfers to a numerical scheme that converges unconditionally to the values of the generating series inside their disk of convergence. These results provide important subroutines in random generation. Finally, the approach is extended to combinatorial differential systems. @article{PivoteauSalvySoria2012, Arxiv = {abs/1109.2688}, Author = {Pivoteau, Carine and Salvy, Bruno and Soria, Mich{\e}le}, Doi = {10.1016/j.jcta.2012.05.007}, Journal = {Journal of Combinatorial Theory, Series A}, Pages = {1711--1773}, Title = {Algorithms for Combinatorial Structures: Well-founded systems and {N}ewton iterations}, Volume = {119}, Year = {2012}, Abstract = {We consider systems of recursively defined combinatorial structures. We give algorithms checking that these systems are well founded, computing generating series and providing numerical values. Our framework is an articulation of the constructible classes of Flajolet and Sedgewick with Joyal's species theory. We extend the implicit species theorem to structures of size zero. A quadratic iterative Newton method is shown to solve well-founded systems combinatorially. 
From there, truncations of the corresponding generating series are obtained in quasi-optimal complexity. This iteration transfers to a numerical scheme that converges unconditionally to the values of the generating series inside their disk of convergence. These results provide important subroutines in random generation. Finally, the approach is extended to combinatorial differential systems.}, Bdsk-Url-1 = {http://dx.doi.org/10.1016/j.jcta.2012.05.007}} The full collection of my recent (and not so recent) papers is here. LIP – ENS Lyon 46, allée d’Italie 69364 Lyon Cedex 07 France email: [email protected] phone: (+33) (0)4 37 28 76 23 ## Visitors My office is #274 on the 2nd floor of the LUG. Here are directions to get there. Call me if you cannot get inside the building.
# [OS X TeX] Re: Apple PDF viewer bugs on this shading Herbert Schulz herbs at wideopenwest.com Tue Dec 1 00:20:34 CET 2009 On Nov 30, 2009, at 5:02 PM, Ross Moore wrote: > Hi Quark (?) > > On 01/12/2009, at 7:00 AM, Quark67 wrote: > >> Hello, sorry for my english, I'm the original poster from the french newsgroup which Alain Matthes speak. >> >> In your message http://www.tug.org/pipermail/macostex-archives/2009-November/042073.html you have posted a screenshoot attachment and you say "It looks fine for me, in both Preview and TeXshop. " . I suppose you have compiled this with tex+dvi in TexShop ? > > No. It was compiled with pdftex as the engine. > When I do it again using TeX+dvi mode then the squares are more > prominent, as you say. > > > On the MacOS X TeX list the interest seemed to be in the results > obtained with different viewers. I seemed to be getting different > results from what others claimed, but they didn't post any images > to show what was the issue. Now that you have shown what is the > real problem, we can examine it in more detail. > > > Even with pdftex, the squares are visible when blowing-up to > ~400% , and even noticeable at ~200%. > An attached image shows this, where the squares are just discernible > on the right, in Preview, but not on the left using Adobe Reader. > > > <withPdfTeX.png> > > These were done using pdfTeX: > This is pdfTeXk, Version 3.141592-1.40.3 (Web2C 7.5.6) > > > Using TeX+dvi+Ghostscript the squares are much more prominent. > Both PDF viewers seem to give similar results (as in the 2nd attached > image): > > <withDVI+GS.png> > > > > The results that I see are explained reasonably easily. > > With TeX+dvi+GS the Postscript code produces small lines of colour > that are stroked for a particular length: 1.1 setlinewidth > and with equal linewidth: 1.1 0 rlineto stroke > This gives the coloured squares that you see. > > > With pdfTeX, the coding is wrapped up differently: > > 12 0 obj << > /Shading << /Sh << /ShadingType 1 > /ColorSpace /DeviceRGB /Matrix [1 0 0 1 56.69363 56.69363] > /Domain [-56.69363 56.69363 -56.69363 56.69363] /Function 4 0 R >> >> > > /ShadingType 1 sets the shading to be constant in the squares, > with the colour being calculated by /Function 4 0 R > and the grid determined by the /Domain specification. > > It would seem that Adobe handles the /ShadingType 1 a bit more > smoothly than does Preview. (Perhaps it has some extra > anti-aliasing at the edges of the squares?) > > > The conceptual difference between the PDF coding and the > PostScript coding is that with the PDF it is clear that we are > shading an extended region, so the result is supposed to look > smooth. (There is even the possibility of an extra setting > /Antialias /True > which a viewer might implement, but doesn't have to.) > > With the PostScript, on the other hand, it is really just > a collection of separate strokes in different colours. > The renderer does not have the extra hint that these are > meant to be filling a region. In particular there could > be some interaction with the pixel boundaries, leading > to a less-smooth result, visually. > > > > The PDF Spec offers the following alternatives: > > > 8.7.4.3 Shading Dictionaries > > A shading dictionary specifies details of a particular gradient fill, including the type of shading to be used, the geometry of the area to be shaded, and the geometry of the gradient fill. 
Various shading types are available, depending on the value of the dictionary’s ShadingType entry: > > •Function-based shadings (type 1) define the colour of every point in the domain using a mathematical function (not necessarily smooth or continuous). > > •Axial shadings (type 2) define a colour blend along a line between two points, optionally extended beyond the boundary points by continuing the boundary colours. > > •Radial shadings (type 3) define a blend between two circles, optionally extended beyond the boundary circles by continuing the boundary colours. This type of shading is commonly used to represent three-dimensional spheres and cones. > > ... and more ... > > > In an uncompressed version of the PDF you can simply edit > to replace /ShadingType 1 by /ShadingType 2 . > However Adobe Reader reports an error and shows no colour. > > Preview, on the other hand, seems to be ignoring this parameter. > The colour remains, with no perceptible change. > > > But the /Function needs to produce a different collection > of data, to suit the type 2 and type 3 shadings. > Tikz can use type 2 --- with \pgfdeclarehorizontalshading > and \pgfdeclareverticalshading --- and type 3 with > It would be interesting to see the results using these. > > Perhaps the biggest problem with this, though, is that > whereas type 1 uses a single function of 2 parameters, > to assign colours within a 2-dimensional region, > types 2 and 3 use only a single parameter, varying in > a 1-dimensional way. So you would need to create a number > of radial strips (narrow sectors) and setup the shading > separately within each of these. > You would need to setup the sectors in TeX; or do a single > sector multiply, each shaded differently then rotated into > the correct place. > >> >> But, for me, this is not very good : if you look closer, you can see this (compiled with tex+dvi): >> <pgf4.png> >> >> Not smooth shade... I see big squarre (compare picture 1 and picture 3 here : http://quark67.free.fr/pgf.html ). > > > Hope this helps. > > And I'd still like to see what others get with later versions > of different viewer software. > How does it fit with the above analysis? > > > Cheers, > > Ross Howdy, Here are screen shots for pdflatex+Preview, the same pdf file and Adobe Reader and finally latex(->dvips->ps2pdf)+Preview. -------------- next part -------------- A non-text attachment was scrubbed... Name: pdflatex+Preview.pdf Type: application/applefile Size: 25522 bytes Desc: not available URL: <http://tug.org/pipermail/macostex-archives/attachments/20091130/0b6a92d4/attachment.bin> -------------- next part -------------- A non-text attachment was scrubbed... Name: pdflatex+Preview.pdf Type: application/pdf Size: 44505 bytes Desc: not available URL: <http://tug.org/pipermail/macostex-archives/attachments/20091130/0b6a92d4/attachment.pdf> -------------- next part -------------- -------------- next part -------------- A non-text attachment was scrubbed... Type: application/applefile Size: 28747 bytes Desc: not available URL: <http://tug.org/pipermail/macostex-archives/attachments/20091130/0b6a92d4/attachment-0001.bin> -------------- next part -------------- A non-text attachment was scrubbed... Type: application/pdf Size: 174670 bytes Desc: not available URL: <http://tug.org/pipermail/macostex-archives/attachments/20091130/0b6a92d4/attachment-0001.pdf> -------------- next part -------------- -------------- next part -------------- A non-text attachment was scrubbed... 
So there seems to be a difference between what you're seeing and what I (and presumably Alain) are seeing with OS X 10.6.2.

Good Luck,

Herb Schulz
(herbs at wideopenwest dot com)
At the end of last year I announced that I would be working on a series of articles regarding the development of an Advanced Trading Infrastructure. Since the initial announcement I haven't mentioned the project to any great extent. However, in this article I want to discuss the progress I've made to date. At the outset I decided that I wanted to re-use and improve upon as much of the codebases of the event-driven backtester and QSForex as possible in order to avoid duplicated effort. Despite a solid attempt to make use of large parts of these two codebases, I came to the conclusion that it was necessary to "start from scratch" with how the position handling/sizing, risk management and portfolio construction was handled. In effect I've modified the design of QSForex and the event-driven backtesters to include proper integration of position sizing and risk management layers, which were not present in previous versions. These two components provide robust portfolio construction tools that aid in crafting "professional" portfolios, rather than those found in a simplistic vectorised backtester. At this stage I have not actually written any specific risk management or position sizing logic. Since this logic is highly specific to an individual or firm, I have decided that I will provide example modules for straightforward situations such as Kelly Criterion positioning or volatility risk levels, but will leave complicated risk and position management overlays to be developed by users of the software. This allows you to use the "off the shelf" versions that I develop, but also to completely swap out these components with your own custom logic as you see fit. This way the software is not forcing you down a particular risk management methodology. To date I have coded the basics of a portfolio management system. The entire backtesting and trading tool, which I am tentatively naming QSTrader, is far from being production-ready. In fact, I would go so far as to say it is in a "pre-alpha" state at this stage! Despite the fact that the system is early in its development, I have put significant effort into unit-testing the early components, as well as testing against external brokerage-calculated values. I am currently quite confident in the position handling mechanism. However, if edge cases do arrive, they can simply be added as unit tests to further improve robustness. The project is now available (in an extremely early state!) at https://www.github.com/mhallsmoore/qstrader.git under a liberal open-source MIT license. At this stage it lacks documentation, so I am only adding the link for those who wish to browse the code as it stands. ## Component Design In the previous article in the series on Advanced Trading Infrastructure I discussed the broad roadmap for development. In this article I want to discuss one of the most important aspects of the system, namely the Position component, that will form the basis of the Portfolio and subsequently PortfolioHandler. When designing such a system it is necessary to consider how we will break down separate behaviours into various submodules. Not only does this prevent strong coupling of the components, but it also allows much more straightforward testing, as each component can be unit tested separately. The design I have chosen for the portfolio management infrastructure consists of the following components: • Position - This class encapsulates all data associated with an open position in an asset. 
That is, it tracks the realised and unrealised profit and loss (PnL) by averaging the multiple "legs" of the transaction, inclusive of transaction costs.
• Portfolio - The Portfolio class encapsulates a list of Positions, as well as a cash balance, equity and PnL.
• PositionSizer - The PositionSizer class provides the PortfolioHandler (see below) with guidance on how to size positions once a strategy signal is received. For instance, the PositionSizer could incorporate a Kelly Criterion approach.
• RiskManager - The RiskManager is used by the PortfolioHandler to verify, modify or veto any suggested trades that pass through from the PositionSizer, based on the current composition of the portfolio and external risk considerations (such as correlation to indices or volatility).
• PortfolioHandler - The PortfolioHandler class is responsible for the management of the current Portfolio, interacting with the RiskManager and PositionSizer as well as submitting orders to be executed by an execution handler.

Note that this component organisation is somewhat different to how the backtesting system operates in QSForex. In that case the Portfolio object is the equivalent of the PortfolioHandler class above. I've split these into two here. This allows much more straightforward risk management and position sizing with the RiskManager and PositionSizer classes.

We will now turn our attention to the Position class. In later articles we will look at the Portfolio, PortfolioHandler and the risk/position sizing components.

## Position

The Position class is very similar to its namesake in the QSForex project, with the exception that it has initially been designed for use with equity, rather than forex, instruments. Hence there is no notion of a "base" or "quote" currency. However, we retain the ability to update the unrealised PnL on demand, via updating of the current bid and ask prices quoted on the market.

I'll output the code listing in full and then run through how it works. Note that any of these listings are subject to change, since I will be continually making changes to this project. Eventually I hope others will collaborate by providing Pull Requests to the codebase.

### position.py

    from decimal import Decimal

    TWOPLACES = Decimal("0.01")
    FIVEPLACES = Decimal("0.00001")


    class Position(object):
        def __init__(
            self, action, ticker, init_quantity,
            init_price, init_commission, bid, ask
        ):
            """
            Set up the initial "account" of the Position to be
            zero for most items, with the exception of the initial
            purchase/sale.

            Then calculate the initial values and finally update the
            market value of the transaction.
            """
            self.action = action
            self.ticker = ticker
            self.quantity = init_quantity
            self.init_price = init_price
            self.init_commission = init_commission

            self.realised_pnl = Decimal("0.00")
            self.unrealised_pnl = Decimal("0.00")

            self.buys = Decimal("0")
            self.sells = Decimal("0")
            self.avg_bot = Decimal("0.00")
            self.avg_sld = Decimal("0.00")
            self.total_bot = Decimal("0.00")
            self.total_sld = Decimal("0.00")
            self.total_commission = init_commission

            self._calculate_initial_value()
            self.update_market_value(bid, ask)

        def _calculate_initial_value(self):
            """
            Depending upon whether the action was a buy or sell ("BOT"
            or "SLD") calculate the average bought cost, the total bought
            cost, the average price and the cost basis.

            Finally, calculate the net total with and without commission.
""" if self.action == "BOT": self.avg_bot = self.init_price.quantize(FIVEPLACES) self.avg_price = ( (self.init_price * self.quantity + self.init_commission)/self.quantity ).quantize(FIVEPLACES) self.cost_basis = ( self.quantity * self.avg_price ).quantize(TWOPLACES) else: # action == "SLD" self.sells = self.quantity self.avg_sld = self.init_price.quantize(FIVEPLACES) self.total_sld = (self.sells * self.avg_sld).quantize(TWOPLACES) self.avg_price = ( (self.init_price * self.quantity - self.init_commission)/self.quantity ).quantize(FIVEPLACES) self.cost_basis = ( -self.quantity * self.avg_price ).quantize(TWOPLACES) self.net_total = (self.total_sld - self.total_bot).quantize(TWOPLACES) self.net_incl_comm = (self.net_total - self.init_commission).quantize(TWOPLACES) """ The market value is tricky to calculate as we only have Brokers, which means that the true redemption price is unknown until executed. However, it can be estimated via the mid-price of the allows calculation of the unrealised and realised profit and loss of any transactions. """ self.market_value = ( self.quantity * midpoint ).quantize(TWOPLACES) self.unrealised_pnl = ( self.market_value - self.cost_basis ).quantize(TWOPLACES) self.realised_pnl = ( self.market_value + self.net_incl_comm ) def transact_shares(self, action, quantity, price, commission): """ Calculates the adjustments to the Position that occur once new shares are bought and sold. Takes care to update the average bought/sold, total bought/sold, the cost basis and PnL calculations, as carried out through Interactive Brokers TWS. """ prev_quantity = self.quantity prev_commission = self.total_commission self.total_commission += commission # Adjust total bought and sold if action == "BOT": self.avg_bot = ( ).quantize(FIVEPLACES) if self.action != "SLD": self.avg_price = ( ( price*quantity+commission ).quantize(FIVEPLACES) # action == "SLD" else: self.avg_sld = ( (self.avg_sld*self.sells + price*quantity)/(self.sells + quantity) ).quantize(FIVEPLACES) if self.action != "BOT": self.avg_price = ( ( self.avg_price*self.sells + price*quantity-commission )/(self.sells + quantity) ).quantize(FIVEPLACES) self.sells += quantity self.total_sld = (self.sells * self.avg_sld).quantize(TWOPLACES) # Adjust net values, including commissions self.quantity = self.net self.net_total = ( self.total_sld - self.total_bot ).quantize(TWOPLACES) self.net_incl_comm = ( self.net_total - self.total_commission ).quantize(TWOPLACES) # Adjust average price and cost basis self.cost_basis = ( self.quantity * self.avg_price ).quantize(TWOPLACES) You'll notice that the entire project makes extensive use of the decimal module. This is an absolute necessity in financial applications, otherwise you end up causing significant rounding errors due to the mathematics of floating point handling. I've created two helper variables, TWOPLACES and FIVEPLACES, which are used subsequently to define the level of precision necessary for rounding in calculations. TWOPLACES = Decimal("0.01") FIVEPLACES = Decimal("0.00001") The Position class requires a transaction action: "Buy" or "Sell". I've used the Interactive Brokers code BOT and SLD for these throughout the code. In addition, the Position requires a ticker symbol, a quantity to transact, the price of purchase or sale and the commission. 
    class Position(object):
        def __init__(
            self, action, ticker, init_quantity,
            init_price, init_commission, bid, ask
        ):
            """
            Set up the initial "account" of the Position to be
            zero for most items, with the exception of the initial
            purchase/sale.

            Then calculate the initial values and finally update the
            market value of the transaction.
            """
            self.action = action
            self.ticker = ticker
            self.quantity = init_quantity
            self.init_price = init_price
            self.init_commission = init_commission

The Position also keeps track of a few other metrics, which fully mirror those handled by Interactive Brokers. It tracks the PnL, the quantity of buys and sells, the average purchase price and the average sale price, the total purchase price and the total sale price, as well as the total commission spent to date.

    self.realised_pnl = Decimal("0.00")
    self.unrealised_pnl = Decimal("0.00")

    self.buys = Decimal("0")
    self.sells = Decimal("0")
    self.avg_bot = Decimal("0.00")
    self.avg_sld = Decimal("0.00")
    self.total_bot = Decimal("0.00")
    self.total_sld = Decimal("0.00")
    self.total_commission = init_commission

The Position class has been structured this way because it is very difficult to pin down the idea of a "trade". For instance, let's imagine we carry out the following transactions:

• Day 1 - Purchase 100 shares of GOOG. Total 100.
• Day 2 - Purchase 200 shares of GOOG. Total 300.
• Day 3 - Sell 400 shares of GOOG. Total -100.
• Day 4 - Purchase 200 shares of GOOG. Total 100.
• Day 5 - Sell 100 shares of GOOG. Total 0.

This constitutes a "round trip". How are we to determine the profit on this? Do we work out the profit on every leg of the transaction? Do we only calculate profit once the quantity nets out back to zero?

These problems are solved by using a realised and an unrealised PnL. We will always know how much we have made to date and how much we might make by keeping track of these two values. At the Portfolio level we can simply total these values up and work out our total PnL at any time.

Finally, in the initialisation method __init__, we calculate the initial values and update the market value by the latest bid/ask:

    self._calculate_initial_value()
    self.update_market_value(bid, ask)

The _calculate_initial_value method is as follows:

    def _calculate_initial_value(self):
        """
        Depending upon whether the action was a buy or sell ("BOT"
        or "SLD") calculate the average bought cost, the total bought
        cost, the average price and the cost basis.

        Finally, calculate the net total with and without commission.
        """
        if self.action == "BOT":
            self.buys = self.quantity
            self.avg_bot = self.init_price.quantize(FIVEPLACES)
            self.total_bot = (self.buys * self.avg_bot).quantize(TWOPLACES)
            self.avg_price = (
                (self.init_price * self.quantity + self.init_commission) / self.quantity
            ).quantize(FIVEPLACES)
            self.cost_basis = (
                self.quantity * self.avg_price
            ).quantize(TWOPLACES)
        else:  # action == "SLD"
            self.sells = self.quantity
            self.avg_sld = self.init_price.quantize(FIVEPLACES)
            self.total_sld = (self.sells * self.avg_sld).quantize(TWOPLACES)
            self.avg_price = (
                (self.init_price * self.quantity - self.init_commission) / self.quantity
            ).quantize(FIVEPLACES)
            self.cost_basis = (
                -self.quantity * self.avg_price
            ).quantize(TWOPLACES)
        self.net = self.buys - self.sells
        self.net_total = (self.total_sld - self.total_bot).quantize(TWOPLACES)
        self.net_incl_comm = (self.net_total - self.init_commission).quantize(TWOPLACES)

This method carries out the initial calculations upon opening a new position. For "BOT" actions (purchases), the number of buys is incremented, the average and total bought values are calculated, as is the average price of the position. The cost basis is calculated as the current quantity multiplied by the average price paid. Hence it is the "total price paid so far". In addition, the net quantity, net total and net including commission are also calculated.
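To make the arithmetic concrete, take the initial XOM purchase used later in the unit tests: 100 shares bought at $74.78 with $1.00 of commission. A few standalone lines of Decimal arithmetic reproduce the average price and cost basis that _calculate_initial_value produces:

    from decimal import Decimal

    quantity = Decimal("100")
    price = Decimal("74.78")
    commission = Decimal("1.00")

    # The commission is spread across the shares, nudging the average price up by a cent
    avg_price = ((price * quantity + commission) / quantity).quantize(Decimal("0.00001"))
    cost_basis = (quantity * avg_price).quantize(Decimal("0.01"))

    print(avg_price)   # 74.79000
    print(cost_basis)  # 7479.00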
Hence it is the "total price paid so far". In addition, the net quantity, net total and net including commission are also calculated. The next method is update_market_value. This is a tricky method to implement as it relies on a particular choice for how "market value" is calculated. There is no correct choice for this, as there is no such thing as "market value". There are some useful choices though: • Midpoint - A common choice is to take the mid-point of the bid-ask spread. This takes the top bid and ask prices of the order book (for a particular exchange) and averages them. • Last Traded Price - This is the last price that a stock traded at. Calculating this is tricky because the trade may have occurred across multiple prices (different lots at different prices). Hence it could be a weighted average. • Bid or Ask - Depending upon the side of the transaction (i.e. a purchase or sale), the top bid or ask price could be used as an indication. None of these prices are likely to be the one achieved in practice, however. The order book dynamics, slippage and market impact are all going to cause the true sale price to differ from the currently quoted bid/ask. In the following method I have opted for the midpoint calculation in order to provide some sense of "market value". It is important to stress, however, that this value can find its way into the risk management and position sizing calculations, so it is necessary to make sure that you are happy with how it is calculated and modify it accordingly if you wish to use it in your own trading engines. Once the market value is calculated it allows subsequent calculation of the unrealised and realised PnL of the position: def update_market_value(self, bid, ask): """ The market value is tricky to calculate as we only have Brokers, which means that the true redemption price is unknown until executed. However, it can be estimated via the mid-price of the allows calculation of the unrealised and realised profit and loss of any transactions. """ self.market_value = ( self.quantity * midpoint ).quantize(TWOPLACES) self.unrealised_pnl = ( self.market_value - self.cost_basis ).quantize(TWOPLACES) self.realised_pnl = ( self.market_value + self.net_incl_comm ) The final method is transact_shares. This is the method called by the Portfolio class to actually carry out a transaction. I won't repeat the method in full here, as it can be found above and at Github, but I will concentrate on some important sections. In the code snippet below you can see that if the action is a purchase ("BOT") then the average buy price is recalculated. If the original action was also a purchase, the average price is necessarily modified. The total buys are increased by the new quantity and the total purchase price is modified. The logic is similar for the sell/"SLD" side: if action == "BOT": self.avg_bot = ( ).quantize(FIVEPLACES) if self.action != "SLD": self.avg_price = ( ( price*quantity+commission ).quantize(FIVEPLACES) Finally, the net values are all adjusted. The calculations are relatively straightforward and can be followed from the snippet. Notice that they are all rounded to two decimal places: # Adjust net values, including commissions self.quantity = self.net self.net_total = ( self.total_sld - self.total_bot ).quantize(TWOPLACES) self.net_incl_comm = ( self.net_total - self.total_commission ).quantize(TWOPLACES) # Adjust average price and cost basis self.cost_basis = ( self.quantity * self.avg_price ).quantize(TWOPLACES) That concludes the Position class. 
It provides a robust mechanism for handling position calculation and storage. For completeness you can find the full code for the Position class on Github at position.py. ### position_test.py In order to test the calculations within the Position class I've written the following unit tests, checking them against similar transactions carried out within Interactive Brokers. I do fully anticipate finding new edge cases and bugs that will need error correction, but these current unit tests will provide strong confidence in the results going forward. The full listing of position_test.py is as follows: from decimal import Decimal import unittest class TestRoundTripXOMPosition(unittest.TestCase): """ Test a round-trip trade in Exxon-Mobil where the initial trade is a buy/long of 100 shares of XOM, at a price of $74.78, with$1.00 commission. """ def setUp(self): """ Set up the Position object that will store the PnL. """ self.position = Position( "BOT", "XOM", Decimal('100'), Decimal("74.78"), Decimal("1.00"), Decimal('74.78'), Decimal('74.80') ) def test_calculate_round_trip(self): """ After the subsequent purchase, carry out two more buys/longs and then close the position out with two additional sells/shorts. The following prices have been tested against those calculated via Interactive Brokers' Trader Workstation (TWS). """ self.position.transact_shares( "BOT", Decimal('100'), Decimal('74.63'), Decimal('1.00') ) self.position.transact_shares( "BOT", Decimal('250'), Decimal('74.620'), Decimal('1.25') ) self.position.transact_shares( "SLD", Decimal('200'), Decimal('74.58'), Decimal('1.00') ) self.position.transact_shares( "SLD", Decimal('250'), Decimal('75.26'), Decimal('1.25') ) self.position.update_market_value(Decimal("77.75"), Decimal("77.77")) self.assertEqual(self.position.action, "BOT") self.assertEqual(self.position.ticker, "XOM") self.assertEqual(self.position.quantity, Decimal("0")) self.assertEqual(self.position.sells, Decimal("450")) self.assertEqual(self.position.net, Decimal("0")) self.assertEqual(self.position.avg_bot, Decimal("74.65778")) self.assertEqual(self.position.avg_sld, Decimal("74.95778")) self.assertEqual(self.position.total_bot, Decimal("33596.00")) self.assertEqual(self.position.total_sld, Decimal("33731.00")) self.assertEqual(self.position.net_total, Decimal("135.00")) self.assertEqual(self.position.total_commission, Decimal("5.50")) self.assertEqual(self.position.net_incl_comm, Decimal("129.50")) self.assertEqual(self.position.avg_price, Decimal("74.665")) self.assertEqual(self.position.cost_basis, Decimal("0.00")) self.assertEqual(self.position.market_value, Decimal("0.00")) self.assertEqual(self.position.unrealised_pnl, Decimal("0.00")) self.assertEqual(self.position.realised_pnl, Decimal("129.50")) class TestRoundTripPGPosition(unittest.TestCase): """ Test a round-trip trade in Proctor & Gamble where the initial trade is a sell/short of 100 shares of PG, at a price of $77.69, with$1.00 commission. """ def setUp(self): self.position = Position( "SLD", "PG", Decimal('100'), Decimal("77.69"), Decimal("1.00"), Decimal('77.68'), Decimal('77.70') ) def test_calculate_round_trip(self): """ After the subsequent sale, carry out two more sells/shorts The following prices have been tested against those calculated via Interactive Brokers' Trader Workstation (TWS). 
""" self.position.transact_shares( "SLD", Decimal('100'), Decimal('77.68'), Decimal('1.00') ) self.position.transact_shares( "SLD", Decimal('50'), Decimal('77.70'), Decimal('1.00') ) self.position.transact_shares( "BOT", Decimal('100'), Decimal('77.77'), Decimal('1.00') ) self.position.transact_shares( "BOT", Decimal('150'), Decimal('77.73'), Decimal('1.00') ) self.position.update_market_value(Decimal("77.72"), Decimal("77.72")) self.assertEqual(self.position.action, "SLD") self.assertEqual(self.position.ticker, "PG") self.assertEqual(self.position.quantity, Decimal("0")) self.assertEqual(self.position.sells, Decimal("250")) self.assertEqual(self.position.net, Decimal("0")) self.assertEqual(self.position.avg_bot, Decimal("77.746")) self.assertEqual(self.position.avg_sld, Decimal("77.688")) self.assertEqual(self.position.total_bot, Decimal("19436.50")) self.assertEqual(self.position.total_sld, Decimal("19422.00")) self.assertEqual(self.position.net_total, Decimal("-14.50")) self.assertEqual(self.position.total_commission, Decimal("5.00")) self.assertEqual(self.position.net_incl_comm, Decimal("-19.50")) self.assertEqual(self.position.avg_price, Decimal("77.67600")) self.assertEqual(self.position.cost_basis, Decimal("0.00")) self.assertEqual(self.position.market_value, Decimal("0.00")) self.assertEqual(self.position.unrealised_pnl, Decimal("0.00")) self.assertEqual(self.position.realised_pnl, Decimal("-19.50")) if __name__ == "__main__": unittest.main() The imports for this module are straightforward. We once again import the Decimal class, but also add the unittest module as well as the Position class itself, since it is being tested: from decimal import Decimal import unittest For those who have not yet seen a Python unit test, the basic idea is to create a class called TestXXXX that inherits from unittest.TestCase as in the subsequent snippet. The class exposes a setUp method that allows any data or state to be utilised for the remainder of that particular test. Here's an example unit test setup of a "round trip" trade for Exxon-Mobil/XOM: class TestRoundTripXOMPosition(unittest.TestCase): """ Test a round-trip trade in Exxon-Mobil where the initial trade is a buy/long of 100 shares of XOM, at a price of $74.78, with$1.00 commission. """ def setUp(self): """ Set up the Position object that will store the PnL. """ self.position = Position( "BOT", "XOM", Decimal('100'), Decimal("74.78"), Decimal("1.00"), Decimal('74.78'), Decimal('74.80') ) Notice that self.position is set to be a new Position class, where 100 shares of XOM are purchased for 74.78 USD. Subsequent methods, in the format of test_XXXX, allow unit testing of various aspects of the system. In this particular method, after the initial purchase, two more long purchases are made and finally two sales are made to net the position out to zero: def test_calculate_round_trip(self): """ After the subsequent purchase, carry out two more buys/longs and then close the position out with two additional sells/shorts. The following prices have been tested against those calculated via Interactive Brokers' Trader Workstation (TWS). 
""" self.position.transact_shares( "BOT", Decimal('100'), Decimal('74.63'), Decimal('1.00') ) self.position.transact_shares( "BOT", Decimal('250'), Decimal('74.620'), Decimal('1.25') ) self.position.transact_shares( "SLD", Decimal('200'), Decimal('74.58'), Decimal('1.00') ) self.position.transact_shares( "SLD", Decimal('250'), Decimal('75.26'), Decimal('1.25') ) self.position.update_market_value(Decimal("77.75"), Decimal("77.77")) transact_shares is called four times and finally the market value is updated by update_market_value. At this point self.position stores all of the various calculations and is ready to be tested, using the assertEqual method derived from the unittest.TestCase class. Notice how all of the various properties of the Position class are tested against externally calculated (in this case by Interactive Brokers TWS) values: self.assertEqual(self.position.action, "BOT") self.assertEqual(self.position.ticker, "XOM") self.assertEqual(self.position.quantity, Decimal("0")) self.assertEqual(self.position.sells, Decimal("450")) self.assertEqual(self.position.net, Decimal("0")) self.assertEqual(self.position.avg_bot, Decimal("74.65778")) self.assertEqual(self.position.avg_sld, Decimal("74.95778")) self.assertEqual(self.position.total_bot, Decimal("33596.00")) self.assertEqual(self.position.total_sld, Decimal("33731.00")) self.assertEqual(self.position.net_total, Decimal("135.00")) self.assertEqual(self.position.total_commission, Decimal("5.50")) self.assertEqual(self.position.net_incl_comm, Decimal("129.50")) self.assertEqual(self.position.avg_price, Decimal("74.665")) self.assertEqual(self.position.cost_basis, Decimal("0.00")) self.assertEqual(self.position.market_value, Decimal("0.00")) self.assertEqual(self.position.unrealised_pnl, Decimal("0.00")) self.assertEqual(self.position.realised_pnl, Decimal("129.50")) When I execute the correct Python binary from within my virtual environment (under the command line in Ubuntu), I receive the following output: (qstrader)mhallsmoore@desktop:~/sites/qstrader/approot/position\$ python position_test.py .. ---------------------------------------------------------------------- Ran 2 tests in 0.000s OK Note that there are two tests within the full file, so take a look at the second test in the full listing in order to familiarise yourself with the calculations. Clearly there are a lot more tests that could be carried out on the Position class. At this stage we can be reassured that it handles the basic functionality of holding an equity position. In time, the class will likely be expanded to cope with forex, futures and options instruments, allowing a sophisticated investment strategy to be carried out within a portfolio. The full listing itself can be found at position_test.py on Github. ## Next Steps In the following articles we are going to look at the Portfolio and PortfolioHandler classes. Both of these are currently coded up and unit tested in a basic manner. If you wish to "jump ahead" and see the code, you can take a look at the full QSTrader repository here: https://github.com/mhallsmoore/qstrader.
# Does a Call Spread always need to be symmetric?

I have a plot of a Call Spread Option at time $t = 0$, but the graph of the call spread is not completely symmetric. My question is: does it have to be? Here is the plot I'm referring to:

I'm just wondering because at maturity the Call Spread becomes symmetric, so if anyone can provide a bit of information on this I'd really appreciate it. Thanks!

• What do you mean by symmetric? A call spread is not symmetric. You can plot the payoff to observe that. – Gordon May 22 '16 at 23:46
• +1 Indeed. Intuitively, recall that in the limit of an infinitesimally small wedge size $K_2-K_1$, a (normalised) call spread converges to a digital option, i.e. its price is intrinsically tied to the probability of $S_T$ finishing in the money, i.e. to the cumulative distribution function of the random variable of interest. When the distribution of your random variable is symmetric, the cdf is symmetric as well. – Quantuple May 23 '16 at 8:07
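To see Gordon's point concretely, a few lines of Python tabulate the maturity payoff of a long call spread; the strikes below are arbitrary examples:

    # Long call spread payoff at maturity: long a K1 call, short a K2 call (K1 < K2)
    def call_spread_payoff(s, k1=90.0, k2=110.0):
        return max(s - k1, 0.0) - max(s - k2, 0.0)

    for s in (70, 80, 90, 95, 100, 105, 110, 120, 130):
        print(s, call_spread_payoff(s))

The payoff is a monotone ramp, not a mirror-symmetric shape. What it does have is point symmetry about the mid-strike (here payoff(100 + x) + payoff(100 - x) always equals the spread width), which is presumably the "symmetry" visible at maturity. Before maturity, Quantuple's comment explains when such symmetry survives: essentially only when the distribution of the underlying is itself symmetric.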
# zbMATH — the first resource for mathematics

Analysis of varieties – the work of Dr. Yoichi Miyaoka. (Japanese) Zbl 0759.14026

Yoichi Miyaoka won the prize of the Mathematical Society of Japan in spring 1989. This report is an introduction to his main results, in celebration of his receiving the prize. The author explains Y. Miyaoka's contributions to the classification theory of algebraic surfaces and to the proof of the existence of minimal models for 3-folds. The report closes with the famous Miyaoka-Yau inequality and the possibility of its applications to arithmetic geometry.

##### MSC:
14J10 Families, moduli, classification: algebraic theory
01A70 Biographies, obituaries, personalia, bibliographies
14E30 Minimal model program (Mori theory, extremal rays)
14J30 $$3$$-folds
14-03 History of algebraic geometry
Geometrical meaning of the Zeroes of a Polynomial

• Last Updated : 22 Mar, 2021

An algebraic identity is an equality that holds for any value of its variables. Identities are generally used in the factorization of polynomials or the simplification of algebraic calculations. A polynomial is just a bunch of algebraic terms added together; for example, p(x) = 4x + 1 is a degree-1 polynomial. Similarly, polynomials can be of any degree 1, 2, 3, … and so on. Polynomials are basically those mathematical expressions that make calculations easy in real life. There are different types of polynomials depending upon the number of terms present in them; for example, if there are 2 terms, it is known as a binomial.

### Zeros of Polynomials

Let's assume that P(x) is a polynomial. Let x = r be the value of x where our polynomial P(x) becomes zero, i.e., P(x) = 0 at x = r, or P(r) = 0. The process of finding the zeroes of P(x) is simply solving the equation P(x) = 0. We already know how to calculate zeros for first and second-degree polynomials. Let's see some examples of this.

Question 1: P(x) = x² + 2x – 15. Find the roots of this polynomial.

Solution:

Let's put P(x) = 0,

x² + 2x – 15 = 0
⇒ x² + 5x – 3x – 15 = 0
⇒ x(x + 5) – 3(x + 5) = 0
⇒ (x – 3)(x + 5) = 0

So, this expression will be zero for two values of x, i.e. x = 3 and x = –5.

The last example used the factorization method for finding the roots of the polynomial. We can also use other methods, such as the zero-factor property or the Shree Dharacharya quadratic formula, for finding out the roots or zeros of a polynomial. If ax² + bx + c = 0, where a ≠ 0, then the formula for the roots is

x = (–b ± √(b² – 4ac)) / 2a

This formula for finding out the roots of a polynomial is known as the Shree Dharacharya Quadratic Formula.

Question 2: Solve for the roots of the polynomial, x² + 6x – 14 = 0.

Solution:

As the above expression cannot be factorized just by intuition, we shall go for the Dharacharya Formula. Here, a = 1, b = +6, c = –14:

x = (–6 ± √(36 + 56)) / 2 = (–6 ± √92) / 2 = –3 ± √23

### Geometric Meaning of Zeros of a Polynomial

We know what zeros are, but why are they important? Let's look at the graphical interpretation of the meaning behind zeros, starting with the graphs of first and second-degree polynomials.

Let's take the polynomial y = 2x + 1; this is also the equation of a straight line. The figure below represents the graph of this straight line. We will look at the positions where y becomes zero. This graph intersects the x-axis at one position, so there is only one root. We can conclude from the graph that the roots are the positions where the graph of the polynomial intersects the x-axis. So, in general, a polynomial y = ax + b represents a straight line with one root, which lies at the position where the graph cuts the x-axis, i.e. (–b/a, 0).

Now, let's look at graphs of second-degree polynomials. Let p(x) = x² – 3x – 4; the figure below represents its graph. Let's focus on the roots of this polynomial. In this graph, we can see that it cuts the x-axis at two points. So, these two points are the zeros of this polynomial. But will this always be the case? The graph can be upward-facing or downward-facing depending upon the coefficient of x².

In general, the zeroes of a quadratic polynomial ax² + bx + c, a ≠ 0, are precisely the x-coordinates of the points where the parabola representing y = ax² + bx + c intersects the x-axis. Three possible cases can happen to the shape of the graph.

Case (i): The graph cuts the x-axis at two distinct points A and A′. This is the case that's been shown above.
Here the polynomial y = ax² + bx + c has two distinct roots.

Case (ii): The graph cuts the x-axis at exactly one point, i.e., at two coincident points (two equal roots). Here the polynomial y = ax² + bx + c has only one root.

Case (iii): The graph is either completely above the x-axis or completely below the x-axis. Here the polynomial y = ax² + bx + c has no real roots.

Similarly, let's study this for a cubic polynomial.

### Cubic Polynomial

Let's assume a polynomial, P(x) = x³ – 4x. Then

P(x) = x(x² – 4) = x(x – 2)(x + 2)

So the roots of this polynomial are x = 0, 2, –2. We can verify from the graph that these must be the places where the curve cuts the x-axis. But as in the case of second-degree polynomials, there may be more than one possibility.

Let's take the example of the cubic polynomial P(x) = x³. This graph cuts the x-axis at only one point. So, x = 0 is the only root of the polynomial p(x) = x³.

There can be another case; for example, take P(x) = x³ – x². We can see that the graph of this function cuts the x-axis at two places only.

From the above examples, we can say that a cubic polynomial can have at most 3 zeros. Note: in general, a polynomial of degree N can have at most N zeros.

Question 1: Which of the following graphs represents a cubic polynomial?

(A) is the graph of a cubic polynomial because it cuts the x-axis at exactly three points.
(B) does not cut the x-axis at all and is continuously increasing, so it cannot be.
(C) cuts the x-axis at more than 5 points, so it cannot be a third-degree polynomial.
(D) is the graph of a parabola, which we studied earlier, so it is not a cubic polynomial.

Question 2: Show the zeros of the quadratic equation x² – 3x – 4 = 0 on the graph.

Solution:

From the equation, we can tell that there are 2 values of x. Factorizing the quadratic equation to find the values of x:

x² – 4x + x – 4 = 0
x(x – 4) + 1(x – 4) = 0
(x – 4)(x + 1) = 0
x = –1, x = 4

Hence, the graph will be an upward parabola intersecting the x-axis at (–1, 0) and (4, 0).

Question 3: Find the points where the graph intersects the x-axis for the quadratic equation x² – 2x – 8 = 0.

Solution:

Factorize the quadratic equation to find the points:

x² – 4x + 2x – 8 = 0
x(x – 4) + 2(x – 4) = 0
(x – 4)(x + 2) = 0
x = 4, x = –2

Therefore, the graph will cut the x-axis at (4, 0) and (–2, 0).
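The roots found above are easy to double-check with a computer algebra system. A short sympy session (sympy is assumed to be installed) verifies both the quadratic and the cubic examples:

    import sympy as sp

    x = sp.symbols("x")

    print(sp.solve(x**2 + 2*x - 15, x))   # roots 3 and -5
    print(sp.solve(x**2 + 6*x - 14, x))   # roots -3 + sqrt(23) and -3 - sqrt(23)
    print(sp.solve(x**3 - 4*x, x))        # roots 0, 2 and -2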
# Do the exchange operator and Hamiltonian commute for non-identical particles?

Wherever I have read about the exchange operator (P), it is stated that for two identical bosons it introduces a plus sign after exchange, and a minus sign for fermions. P and the Hamiltonian (H) commute for two identical particles. The eigenvalues of P are $\pm1$.

I think it commutes also for non-identical particles. This is what I think. Let the two non-identical particles be in the state $\psi(\vec r_1,\vec r_2)$, and after exchange let them be in the state $\phi(\vec r_1,\vec r_2)$ (I believe even the wave function changes, but I don't have any proof), and let E be the total energy of the system, which remains unchanged even after the exchange.

$H\psi(\vec r_1,\vec r_2)=E\psi(\vec r_1,\vec r_2),~~H\phi(\vec r_1,\vec r_2)=E\phi(\vec r_1,\vec r_2)$

$P\psi(\vec r_1,\vec r_2)=\phi(\vec r_1,\vec r_2),~~P\phi(\vec r_1,\vec r_2)=\psi(\vec r_1,\vec r_2)$

$PH\psi(\vec r_1,\vec r_2)=PE\psi(\vec r_1,\vec r_2)=E\phi(\vec r_1,\vec r_2)$

$HP\psi(\vec r_1,\vec r_2)=H\phi(\vec r_1,\vec r_2)=E\phi(\vec r_1,\vec r_2)$

$[P,H]=0$

• Let's say my non-identical particles have different masses and I place them in a uniform (classical) gravitational field with the heavier particle higher up than the lighter one. If I exchange the particles, is the energy unchanged? – By Symmetry Aug 22 '18 at 10:01
• Nope, the energy changes. – Asit Srivastava Aug 22 '18 at 16:35
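A compact way to see the point made in the comments (a standard textbook argument, not part of the original question) is to write the two-particle Hamiltonian explicitly and conjugate it with the exchange operator:

    $$H = \frac{p_1^2}{2m_1} + \frac{p_2^2}{2m_2} + V(\vec r_1,\vec r_2), \qquad P H P^{-1} = \frac{p_1^2}{2m_2} + \frac{p_2^2}{2m_1} + V(\vec r_2,\vec r_1).$$

So $PHP^{-1}=H$ (equivalently $[P,H]=0$) only when $m_1=m_2$ and the potential is symmetric under $\vec r_1 \leftrightarrow \vec r_2$, i.e. for identical particles. For the gravitational example in the comments the potential term $m_1 g z_1 + m_2 g z_2$ is not exchange-symmetric when $m_1 \neq m_2$, so the energy does change and P does not commute with H.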
## How to make a QLabel widget display a QImage object, without first converting it to a QPixmap. I have been studying the Qt v5.7.1 GUI Library on a private, informal basis, and making progress. As I was doing this, I observed a fact which is generally thought to be true, but which may not always be so. GUI Libraries such as Qt have a set of object classes defined, which allow geometries to be drawn onto an abstract surface that exists in software, such geometries being Ellipses, Arcs, Rectangles, Polygons, Polylines, Chords etc.. Under Qt5, this is performed by way of the ‘QPainter’ class, together with a ‘QBrush’ and a ‘QPen’ class, and one out of several subclasses of the ‘QPaintDevice’ class. The crux of the operation is, that the ‘QPaintDevice’ class’s subclass needs to specify some sort of pixel buffer, so that the classes ‘QPixmap’ and ‘QImage’ certainly qualify. Multiple other classes also work. These pixel buffers are necessary physically, for the abstract drawing surface to ‘remember’ what the painting operations left behind, in the form of an image. But what anybody who incorporates this into their Apps’ GUI will want to solve next is, to display the resulting pixel buffer. And an ugly way to do that, which is often used, is to select the ‘QPixmap’ class as the target, and then to do something like this: void MainWindow::resetQ() { delete sw_pixmap; sw_pixmap = NULL; sw_pixmap = new QPixmap(":/res/images/Unknown.jpg"); int width = size().width(); int height = size().height(); i_label.setPixmap(sw_pixmap->scaled(width - 20, height - 50, Qt::KeepAspectRatio, Qt::SmoothTransformation)); } The main problem with this is, that temporary pixmaps are being generated as arguments to function calls, by what I call ‘Naked Constructor Calls’. The compiler will generate temporary objects, and pass them in to the functions being called. And, because those objects were allocated on the stack, as soon as the function returns, they are just popped off the stack, hopefully in a way that results in no memory leaks. When such temporary objects are simply complex numbers, I can still live with that, But in this case, they are pixel buffers! Actually, I bent the truth slightly here. The way the function ‘setPixmap()’ is really prototyped, it accepts its argument via a reference. If this were not the case, the code above would truly become ridiculous. However, because of the way arguments are nevertheless presented and then forgotten, the ‘QLabel’ object that called this function still needs to create an internal copy of the pixel map, and must also deallocate the memory once replaced. Further, the temporary object created by ‘sw_pixmap->scaled()‘ is in fact a modified copy of ‘sw_pixmap’, and the compiler sees to it, that it gets deallocated, after the containing function call has returned. Whenever (larger) objects are returned from function-calls by value, they are stored in a special region of memory, managed by the compiler. (:4) There has got to be a better way, in certain situations, to display whatever pixel buffer was painted to, from a QLabel object. And there is. (This is a link to the previous exercise.) (Updated 8/20/2020, 17h10… )
Iterator to remove duplicate element

The code implements a custom iterator which skips duplicate integers. The array is expected to be sorted.

    public class GenericIterator<K> implements Iterator<K> {

        private K[] array; // Array will be set here
        private int index;

        GenericIterator(K[] array) {
            this.array = array;
            index = 0;
        }

        /*
         *
         *
         * @see java.util.Iterator#hasNext()
         /** Returns <tt>true</tt> if this
         * iterator has more elements when traversing the array in the forward
         * direction. (In other words, returns <tt>true</tt> if <tt>next</tt> would
         * return an element rather than throwing an exception.)
         *
         * @return <tt>true</tt> if the array iterator has more elements when
         *         traversing the array in the forward direction.
         */
        @Override
        public boolean hasNext() {
            return !(array.length == index);
        }

        /*
         *
         *
         * @see java.util.Iterator#hasNext()
         *
         /** Returns the next element in the array. This method may be called
         * repeatedly to iterate through the array, or intermixed with calls to
         * <tt>previous</tt> to go back and forth. (Note that alternating calls to
         * <tt>next</tt> and <tt>previous</tt> will return the same element
         * repeatedly.)
         *
         * @return the next element in the array.
         */
        @Override
        public K next() {
            K outputElement = array[index++];
            if (index == 0) {
                index++;
            }
            while (hasNext()) {
                if (array[index] == array[index - 1]) {
                    index++;
                } else {
                    break;
                }
            }
            return outputElement;
        }
    }

• Can you specify what exactly you wanted to review, or just ask for a general review? The last sentence is a little short for that. – chillworld May 12 '15 at 7:25
• I don't even understand the purpose of this code. It seems like the array is assumed to be sorted but nowhere in the comments or question does it say this is a prerequisite. Therefore either the code is broken, or the purpose is unclear. – JS1 May 12 '15 at 7:58
• To make life easier for reviewers, please add sufficient context to your question. The more you tell us about what your code does and what the purpose of doing that is, the easier it will be for reviewers to help you. See also this meta question – Mast May 12 '15 at 8:04
• @JS1 I think he wants to have an iterator that skips the next element if that element is the same. – chillworld May 12 '15 at 12:00
• Why use generics and call it GenericIterator if you only expect Integer? – chillworld May 12 '15 at 21:41

The most bizarre aspect of the code is the comments, where the JavaDoc starts in the middle of a non-JavaDoc comment. Why not write standard JavaDoc? The JavaDoc for next() refers to a previous method — but no such method exists. By convention, type variables are named T instead of K.

The body of hasNext() would be slightly better written as return array.length != index;. The contract for next() is that it should throw a NoSuchElementException if there are no more results, not an ArrayIndexOutOfBoundsException. Are you sure that you want to test for equality using == instead of .equals()? A typical use case involving Integer[] or String[] wouldn't behave the way I would expect. The if (index == 0) check can never succeed (unless index wraps around due to integer overflow!), so it's dead code. The flow of control for the while loop is slightly cumbersome, in my opinion. I would personally prefer a for loop to handle all of the index advancement and end-of-array testing.
    public T next() throws NoSuchElementException {
        if (index >= array.length) {
            throw new NoSuchElementException();
        }
        T output = array[index];
        for (index++; index < array.length; index++) {
            if (array[index] == null) {
                if (array[index - 1] != null) {
                    break;
                }
            } else if (!array[index].equals(array[index - 1])) {
                break;
            }
        }
        return output;
    }

• +1 Just saw the == instead of equals as well. Stupid I missed that big issue. – chillworld May 12 '15 at 12:03

Not much to review, but you didn't put the complete class in the question. Because you didn't put the full class, I come to the next thing:

    K outputElement = array[index++];
    if (index == 0) {
        index++;
    }

What the hell are you doing? index is initialised at 0, and the line before is index++, so index is never 0.

Second point: you missed some errors. When you are at the last element and you call next(), you will get an OutOfBoundsException. Rather, it's better to check whether you're at the last element and, if so, just return null. The OutOfBoundsException is an unchecked exception, so nobody who uses your class will expect and handle this error, and then the problems start. Also add JavaDoc describing what the Iterator returns when you are at the last element.

Then another point; the constructor has this line:

    index = 0;

It's obsolete. An int default value is always 0. Because it's not a static variable, it's instantiated when you create the class, so it's 0.

Generics: With the new description in your question, a few more things come up. I don't know if you know how to use generics or it's just a fault, but:

• You expect Integer, so I would rename the class to UniqueIntegerIterator or something like that.
• Because public class GenericIterator<K> implements Iterator<K> means that you can insert any type of Object, it has a lot more usage than Integer only. If you want to make it only for Integer, the declaration should change to public class GenericIterator implements Iterator<Integer>. Of course the K[] should be changed to Integer[].

Normally, when you have some requirements, you check for them and throw an AssertionError when the requirements are not met. Also, when I do this I rename the class again to UniqueSortedIntegerIterator so the people who are going to use this class understand they get unique values from a sorted collection, in this case an array. The Object behind it is Integer and it acts like an Iterator.
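For completeness, here is a small usage sketch of my own (not part of the original question or answers) showing how the iterator from the question is meant to behave on a sorted array with duplicates. The demo class name is hypothetical, and it assumes GenericIterator is in the same (default) package, since its constructor is package-private:

    // Hypothetical driver for the GenericIterator from the question.
    // Assumes GenericIterator is compiled in the same package.
    import java.util.Iterator;

    public class GenericIteratorDemo {
        public static void main(String[] args) {
            // Sorted input with duplicates, as the question assumes.
            Integer[] sortedWithDuplicates = {1, 1, 2, 3, 3, 3, 7};
            Iterator<Integer> it = new GenericIterator<>(sortedWithDuplicates);

            StringBuilder out = new StringBuilder();
            while (it.hasNext()) {
                out.append(it.next()).append(' ');
            }
            System.out.println(out.toString().trim()); // prints: 1 2 3 7
        }
    }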
#### Analyzing and Improving Representations with the Soft Nearest Neighbor Loss

##### Nicholas Frosst, Nicolas Papernot, Geoffrey Hinton

We explore and expand the $\textit{Soft Nearest Neighbor Loss}$ to measure the $\textit{entanglement}$ of class manifolds in representation space: i.e., how close pairs of points from the same class are relative to pairs of points from different classes. We demonstrate several use cases of the loss. As an analytical tool, it provides insights into the evolution of class similarity structures during learning. Surprisingly, we find that $\textit{maximizing}$ the entanglement of representations of different classes in the hidden layers is beneficial for discrimination in the final layer, possibly because it encourages representations to identify class-independent similarity structures. Maximizing the soft nearest neighbor loss in the hidden layers leads not only to improved generalization but also to better-calibrated estimates of uncertainty on outlier data. Data that is not from the training distribution can be recognized by observing that in the hidden layers, it has fewer than the normal number of neighbors from the predicted class.
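For reference (the abstract names the loss but not its form), the soft nearest neighbor loss of a batch $(x_1,\dots,x_b)$ with labels $(y_1,\dots,y_b)$ at temperature $T$ takes roughly the following form. The exact distance and normalization conventions here are my recollection of the paper and should be checked against it:

    % Soft nearest neighbor loss (as recalled; verify against the paper).
    % T is a temperature; small T weights only very close neighbors.
    \[
      \ell_{sn}(x, y; T) \;=\; -\frac{1}{b}\sum_{i=1}^{b}
      \log\left(
        \frac{\sum_{j \neq i,\; y_j = y_i} e^{-\lVert x_i - x_j \rVert^{2}/T}}
             {\sum_{k \neq i} e^{-\lVert x_i - x_k \rVert^{2}/T}}
      \right)
    \]
    % Low values: each point's near neighbors are mostly same-class (low entanglement).
    % High values: neighbors come from many classes (high entanglement), which the
    % paper encourages in hidden layers by maximizing this loss there.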
# zbMATH — the first resource for mathematics ## Banerjee, Abhijit Compute Distance To: Author ID: banerjee.abhijit Published as: Abhijit, Banerjee; Banerjee, A.; Banerjee, Abhijit Documents Indexed: 133 Publications since 2004 Reviewing Activity: 9 Reviews all top 5 #### Co-Authors 39 single-authored 23 Majumder, Sujoy 10 Ahamed, Molla Basir 10 Chakraborty, Bikash 8 Bhattacharjee, Pranab 8 Lahiri, Indrajit 7 Mallick, Sanjay 7 Mukherjee, Sonali 6 Sahoo, Pulak 4 Haldar, Goutam 3 Dhar, Santanu 1 Bhattacharyya, Saikat 1 Deb, Rajat K. all top 5 #### Fields 125 Functions of a complex variable (30-XX) #### Citations contained in zbMATH 66 Publications have been cited 263 times in 121 Documents Cited by Year Meromorphic functions sharing one value. Zbl 1093.30024 Banerjee, Abhijit 2005 Weighted sharing of two sets. Zbl 1103.30017 Lahiri, Indrajit; Banerjee, Abhijit 2006 Weighted sharing of a small function by a meromorphic function and its derivative. Zbl 1152.30321 Banerjee, Abhijit 2007 A uniqueness polynomial generating a unique range set and vice versa. Zbl 1283.30064 Banerjee, Abhijit; Lahiri, Indrajit 2012 Uniqueness of meromorphic functions sharing two sets with finite weight. Zbl 1146.30014 Banerjee, Abhijit 2008 On a question of Gross. Zbl 1115.30029 Banerjee, Abhijit 2007 Uniqueness of meromorphic functions with deficient poles. Zbl 1070.30015 Lahiri, Indrajit; Banerjee, Abhijit 2004 Further investigations on a question of Zhang and Lü. Zbl 1332.30047 Banerjee, Abhijit; Chakraborty, Bikash 2015 Uniqueness of certain non-linear differential polynomials sharing 1-points. Zbl 1218.30073 Banerjee, Abhijit 2011 Uniqueness of meromorphic functions sharing two sets with finite weight. II. Zbl 1213.30052 Banerjee, Abhijit 2010 Uniqueness of meromorphic functions sharing two or three sets. Zbl 1161.30017 Banerjee, Abhijit; Mukherjee, Sonali 2008 Some uniqueness results on meromorphic functions sharing three sets. Zbl 1130.30027 Banerjee, Abhijit 2007 Uniqueness and set sharing of derivatives of meromorphic functions. Zbl 1265.30141 Banerjee, Abhijit; Bhattacharjee, Pranab 2011 On the uniqueness of meromorphic functions that share two sets. Zbl 1159.30018 Banerjee, Abhijit 2008 Uniqueness of meromorphic functions that share two sets. Zbl 1150.30021 Abhijit, Banerjee 2007 A new class of strong uniqueness polynomials satisfying Fujimoto’s condition. Zbl 1331.30019 Banerjee, Abhijit 2015 Uniqueness of meromorphic functions concerning differential monomials sharing the same value. Zbl 1164.30022 Banerjee, Abhijit; Mukherjee, Sonali 2007 On uniqueness for nonlinear differential polynomials sharing the same 1-point. Zbl 1104.30018 Banerjee, Abhijit 2006 On the characterisations of a new class of strong uniqueness polynomials generating unique range sets. Zbl 1364.30035 Banerjee, Abhijit; Mallick, Sanjay 2017 Some uniqueness results related to meromorphic function that share a small function with its derivative. Zbl 1313.30117 Banerjee, Abhijit; Majumder, Sujoy 2014 Uniqueness of meromorphic functions that share three sets. Zbl 1186.30033 Banerjee, Abhijit 2009 A uniqueness result on some differential polynomials sharing 1-points. Zbl 1152.30023 Banerjee, Abhijit 2007 On the uniqueness of a power of a meromorphic function sharing a small function with the power of its derivative. Zbl 1224.30152 Banerjee, A.; Majumder, S. 2010 Uniqueness of derivatives of meromorphic functions sharing two or three sets. 
Zbl 1189.30064 Banerjee, Abhijit; Bhattacharjee, Pranab 2010 Nonlinear differential polynomials sharing a small function. Zbl 1212.30113 Banerjee, Abhijit; Mukherjee, Sonali 2008 Value distribution of a Wronskian. Zbl 1063.30030 Lahiri, Indrajit; Banerjee, Abhijit 2004 On the uniqueness of meromorphic functions and its difference operator sharing values or sets. Zbl 1390.30035 Banerjee, Abhijit; Bhattacharyya, Saikat 2018 Further results on the uniqueness of meromorphic functions and their derivative counterpart sharing one or two sets. Zbl 1362.30041 Banerjee, Abhijit; Chakraborty, Bikash 2016 Certain non-linear differential polynomials having common poles sharing a non zero polynomial with finite weight. Zbl 1412.30109 Banerjee, Abhijit; Haldar, Goutam 2015 Meromorphic function sharing a small function with its differential polynomial. Zbl 1345.30031 Banerjee, Abhijit; Ahamed, Molla Basir 2015 Certain nonlinear differential polynomials sharing a nonzero polynomial with finite weight. Zbl 1330.30035 Banerjee, Abhijit; Sahoo, Pulak 2015 A new type of unique range set with deficient values. Zbl 1331.30020 Banerjee, Abhijit; Chakraborty, Bikash 2015 Non-linear differential polynomials sharing small function with finite weight. Zbl 1323.30035 Banerjee, Abhijit; Majumder, Sujoy 2015 On uniqueness of meromorphic functions sharing three sets with finite weights. Zbl 1333.30043 Banerjee, Abhijit; Ahamed, Molla Basir 2014 Uniqueness of meromorphic functions sharing three sets – further study. Zbl 1329.30016 Banerjee, Abhijit; Majumder, Sujoy 2014 Uniqueness of meromorphic functions when two differential polynomials share one value. Zbl 1260.30015 Banerjee, Abhijit; Sahoo, Pulak 2012 Uniqueness of meromorphic functions whose $$n$$-th derivative share one or two values. Zbl 1274.30111 Banerjee, Abhijit; Mukherjee, Sonali 2009 Meromorphic functions sharing two sets. Zbl 1174.30040 Banerjee, Abhijit 2007 On uniqueness of meromorphic functions when two differential monomials share one value. Zbl 1133.30328 Banerjee, Abhijit 2007 Nonlinear differential polynomials sharing one value. Zbl 1124.30008 Banerjee, Abhijit 2007 Bicomplex modules with indefinite inner product. Zbl 1418.30043 Banerjee, A.; Deb, R. 2019 A note on uniqueness of meromorphic functions and their derivatives sharing two sets. Zbl 1412.30108 Banerjee, Abhijit; Chakraborty, Bikash 2017 Rational function and differential polynomial of a meromorphic function sharing a small function. Zbl 1399.30116 Ahamed, Molla Basir; Banerjee, Abhijit 2017 On the uniqueness of certain types of differential-difference polynomials. Zbl 1399.30117 Banerjee, A.; Majumder, S. 2017 Some further study on Brück conjecture. Zbl 1389.30126 Banerjee, Abhijit; Chakraborty, Bikash 2016 Uniqueness of the power of meromorphic functions with its differential polynomial sharing a set. Zbl 06750231 Banerjee, Abhijit; Chakraborty, Bikash 2016 On a generalization of a result of Zhang and Yang. Zbl 06750066 Banerjee, Abhijit; Majumder, Sujoy 2016 Fujimoto’s theorem – a further study. Zbl 1362.30040 Banerjee, A. 2016 On the generalizations of Brück conjecture. Zbl 1343.30019 Banerjee, Abhijit; Chakraborty, Bikash 2016 On the bi unique range sets for derivatives of meromorphic functions. Zbl 1399.30118 Banerjee, A.; Mallick, S. 2015 Meromorphic function with some power sharing a small function with the differential polynomial generated by the function. 
Zbl 1349.30111 Banerjee, Abhijit; Dhar, Santanu 2015 Uniqueness of meromorphic functions sharing two finite sets in $${\mathbb {C}}$$ with finite weight II. Zbl 1330.30034 Banerjee, Abhijit; Mallick, Sanjay 2015 Uniqueness of meromorphic functions sharing two finite sets in $$\mathbb{C}$$ with finite weight. Zbl 1306.30011 Banerjee, Abhijit; Haldar, Goutam 2014 Meromorphic functions with deficiencies generating unique range sets. Zbl 1302.30037 Banerjee, A.; Majumder, S. 2013 Bi-unique range sets for meromorphic functions. Zbl 1291.30189 Banerjee, Abhijit 2013 Some further results on a question of Yi. Zbl 1289.30176 Banerjee, Abhijit 2012 Some further results on the uniqueness of meromorphic functions sharing three sets. Zbl 1240.30133 Banerjee, Abhijit 2010 On the uniqueness of meromorphic functions sharing two sets. Zbl 1195.30042 Banerjee, Abhijit 2010 Uniqueness of meromorphic functions sharing a small function with their differential polynomials. Zbl 1188.30033 Banerjee, Abhijit 2009 Some uniqueness results on meromorphic functions sharing two or three sets. Zbl 1182.30043 Banerjee, Abhijit; Mukherjee, Sonali 2009 Some uniqueness results related to certain nonlinear differential polynomials sharing the same 1-points. Zbl 1182.30042 Banerjee, Abhijit; Bhattacharjee, Pranab 2009 Uniqueness of meromorphic functions sharing one value with their derivatives. Zbl 1159.30324 Banerjee, Abhijit; Bhattacharjee, Pranab 2008 Uniqueness of certain nonlinear differential polynomials sharing the same value. Zbl 1158.30321 Banerjee, Abhijit 2008 On uniqueness of meromorphic functions sharing three sets. Zbl 1161.30016 Banerjee, Abhijit 2008 Tumura-Clunie theorem concerning differential polynomials. Zbl 1062.30035 Lahiri, Indrajit; Banerjee, Abhijit 2004 Weighted sharing of three values by linear differential polynomials. Zbl 1069.30052 Lahiri, Indrajit; Banerjee, Abhijit 2004 Bicomplex modules with indefinite inner product. Zbl 1418.30043 Banerjee, A.; Deb, R. 2019 On the uniqueness of meromorphic functions and its difference operator sharing values or sets. Zbl 1390.30035 Banerjee, Abhijit; Bhattacharyya, Saikat 2018 On the characterisations of a new class of strong uniqueness polynomials generating unique range sets. Zbl 1364.30035 Banerjee, Abhijit; Mallick, Sanjay 2017 A note on uniqueness of meromorphic functions and their derivatives sharing two sets. Zbl 1412.30108 Banerjee, Abhijit; Chakraborty, Bikash 2017 Rational function and differential polynomial of a meromorphic function sharing a small function. Zbl 1399.30116 Ahamed, Molla Basir; Banerjee, Abhijit 2017 On the uniqueness of certain types of differential-difference polynomials. Zbl 1399.30117 Banerjee, A.; Majumder, S. 2017 Further results on the uniqueness of meromorphic functions and their derivative counterpart sharing one or two sets. Zbl 1362.30041 Banerjee, Abhijit; Chakraborty, Bikash 2016 Some further study on Brück conjecture. Zbl 1389.30126 Banerjee, Abhijit; Chakraborty, Bikash 2016 Uniqueness of the power of meromorphic functions with its differential polynomial sharing a set. Zbl 06750231 Banerjee, Abhijit; Chakraborty, Bikash 2016 On a generalization of a result of Zhang and Yang. Zbl 06750066 Banerjee, Abhijit; Majumder, Sujoy 2016 Fujimoto’s theorem – a further study. Zbl 1362.30040 Banerjee, A. 2016 On the generalizations of Brück conjecture. Zbl 1343.30019 Banerjee, Abhijit; Chakraborty, Bikash 2016 Further investigations on a question of Zhang and Lü. 
Zbl 1332.30047 Banerjee, Abhijit; Chakraborty, Bikash 2015 A new class of strong uniqueness polynomials satisfying Fujimoto’s condition. Zbl 1331.30019 Banerjee, Abhijit 2015 Certain non-linear differential polynomials having common poles sharing a non zero polynomial with finite weight. Zbl 1412.30109 Banerjee, Abhijit; Haldar, Goutam 2015 Meromorphic function sharing a small function with its differential polynomial. Zbl 1345.30031 Banerjee, Abhijit; Ahamed, Molla Basir 2015 Certain nonlinear differential polynomials sharing a nonzero polynomial with finite weight. Zbl 1330.30035 Banerjee, Abhijit; Sahoo, Pulak 2015 A new type of unique range set with deficient values. Zbl 1331.30020 Banerjee, Abhijit; Chakraborty, Bikash 2015 Non-linear differential polynomials sharing small function with finite weight. Zbl 1323.30035 Banerjee, Abhijit; Majumder, Sujoy 2015 On the bi unique range sets for derivatives of meromorphic functions. Zbl 1399.30118 Banerjee, A.; Mallick, S. 2015 Meromorphic function with some power sharing a small function with the differential polynomial generated by the function. Zbl 1349.30111 Banerjee, Abhijit; Dhar, Santanu 2015 Uniqueness of meromorphic functions sharing two finite sets in $${\mathbb {C}}$$ with finite weight II. Zbl 1330.30034 Banerjee, Abhijit; Mallick, Sanjay 2015 Some uniqueness results related to meromorphic function that share a small function with its derivative. Zbl 1313.30117 Banerjee, Abhijit; Majumder, Sujoy 2014 On uniqueness of meromorphic functions sharing three sets with finite weights. Zbl 1333.30043 Banerjee, Abhijit; Ahamed, Molla Basir 2014 Uniqueness of meromorphic functions sharing three sets – further study. Zbl 1329.30016 Banerjee, Abhijit; Majumder, Sujoy 2014 Uniqueness of meromorphic functions sharing two finite sets in $$\mathbb{C}$$ with finite weight. Zbl 1306.30011 Banerjee, Abhijit; Haldar, Goutam 2014 Meromorphic functions with deficiencies generating unique range sets. Zbl 1302.30037 Banerjee, A.; Majumder, S. 2013 Bi-unique range sets for meromorphic functions. Zbl 1291.30189 Banerjee, Abhijit 2013 A uniqueness polynomial generating a unique range set and vice versa. Zbl 1283.30064 Banerjee, Abhijit; Lahiri, Indrajit 2012 Uniqueness of meromorphic functions when two differential polynomials share one value. Zbl 1260.30015 Banerjee, Abhijit; Sahoo, Pulak 2012 Some further results on a question of Yi. Zbl 1289.30176 Banerjee, Abhijit 2012 Uniqueness of certain non-linear differential polynomials sharing 1-points. Zbl 1218.30073 Banerjee, Abhijit 2011 Uniqueness and set sharing of derivatives of meromorphic functions. Zbl 1265.30141 Banerjee, Abhijit; Bhattacharjee, Pranab 2011 Uniqueness of meromorphic functions sharing two sets with finite weight. II. Zbl 1213.30052 Banerjee, Abhijit 2010 On the uniqueness of a power of a meromorphic function sharing a small function with the power of its derivative. Zbl 1224.30152 Banerjee, A.; Majumder, S. 2010 Uniqueness of derivatives of meromorphic functions sharing two or three sets. Zbl 1189.30064 Banerjee, Abhijit; Bhattacharjee, Pranab 2010 Some further results on the uniqueness of meromorphic functions sharing three sets. Zbl 1240.30133 Banerjee, Abhijit 2010 On the uniqueness of meromorphic functions sharing two sets. Zbl 1195.30042 Banerjee, Abhijit 2010 Uniqueness of meromorphic functions that share three sets. Zbl 1186.30033 Banerjee, Abhijit 2009 Uniqueness of meromorphic functions whose $$n$$-th derivative share one or two values. 
Zbl 1274.30111 Banerjee, Abhijit; Mukherjee, Sonali 2009 Uniqueness of meromorphic functions sharing a small function with their differential polynomials. Zbl 1188.30033 Banerjee, Abhijit 2009 Some uniqueness results on meromorphic functions sharing two or three sets. Zbl 1182.30043 Banerjee, Abhijit; Mukherjee, Sonali 2009 Some uniqueness results related to certain nonlinear differential polynomials sharing the same 1-points. Zbl 1182.30042 Banerjee, Abhijit; Bhattacharjee, Pranab 2009 Uniqueness of meromorphic functions sharing two sets with finite weight. Zbl 1146.30014 Banerjee, Abhijit 2008 Uniqueness of meromorphic functions sharing two or three sets. Zbl 1161.30017 Banerjee, Abhijit; Mukherjee, Sonali 2008 On the uniqueness of meromorphic functions that share two sets. Zbl 1159.30018 Banerjee, Abhijit 2008 Nonlinear differential polynomials sharing a small function. Zbl 1212.30113 Banerjee, Abhijit; Mukherjee, Sonali 2008 Uniqueness of meromorphic functions sharing one value with their derivatives. Zbl 1159.30324 Banerjee, Abhijit; Bhattacharjee, Pranab 2008 Uniqueness of certain nonlinear differential polynomials sharing the same value. Zbl 1158.30321 Banerjee, Abhijit 2008 On uniqueness of meromorphic functions sharing three sets. Zbl 1161.30016 Banerjee, Abhijit 2008 Weighted sharing of a small function by a meromorphic function and its derivative. Zbl 1152.30321 Banerjee, Abhijit 2007 On a question of Gross. Zbl 1115.30029 Banerjee, Abhijit 2007 Some uniqueness results on meromorphic functions sharing three sets. Zbl 1130.30027 Banerjee, Abhijit 2007 Uniqueness of meromorphic functions that share two sets. Zbl 1150.30021 Abhijit, Banerjee 2007 Uniqueness of meromorphic functions concerning differential monomials sharing the same value. Zbl 1164.30022 Banerjee, Abhijit; Mukherjee, Sonali 2007 A uniqueness result on some differential polynomials sharing 1-points. Zbl 1152.30023 Banerjee, Abhijit 2007 Meromorphic functions sharing two sets. Zbl 1174.30040 Banerjee, Abhijit 2007 On uniqueness of meromorphic functions when two differential monomials share one value. Zbl 1133.30328 Banerjee, Abhijit 2007 Nonlinear differential polynomials sharing one value. Zbl 1124.30008 Banerjee, Abhijit 2007 Weighted sharing of two sets. Zbl 1103.30017 Lahiri, Indrajit; Banerjee, Abhijit 2006 On uniqueness for nonlinear differential polynomials sharing the same 1-point. Zbl 1104.30018 Banerjee, Abhijit 2006 Meromorphic functions sharing one value. Zbl 1093.30024 Banerjee, Abhijit 2005 Uniqueness of meromorphic functions with deficient poles. Zbl 1070.30015 Lahiri, Indrajit; Banerjee, Abhijit 2004 Value distribution of a Wronskian. Zbl 1063.30030 Lahiri, Indrajit; Banerjee, Abhijit 2004 Tumura-Clunie theorem concerning differential polynomials. Zbl 1062.30035 Lahiri, Indrajit; Banerjee, Abhijit 2004 Weighted sharing of three values by linear differential polynomials. Zbl 1069.30052 Lahiri, Indrajit; Banerjee, Abhijit 2004 all top 5 #### Cited by 92 Authors 33 Banerjee, Abhijit 16 Sahoo, Pulak 14 Waghamore, Harina Pandit 12 Majumder, Sujoy 8 Xu, Hongyan 6 Ahamed, Molla Basir 6 Chakraborty, Bikash 5 Meng, Chao 5 Saha, Biswajit 4 Karmakar, Himadri 4 Mandal, Rajib 4 Seikh, Sajahan 3 Anand, Sangeetha 3 Biswas, Tanmay 3 Maligi, Ramya 3 Mallick, Sanjay 3 Naveenkumar, S. H. 3 Wu, Chun 3 Yi, Hongxun 2 Ali, Sultan 2 An, Vu Hoai 2 Bhattacharjee, Pranab 2 Halder, Samar 2 Husna, V. 2 Khoai, Ha Huy 2 Kumar Datta, Sanjib 2 Liu, Dan 2 Rajeshwari, S. 
2 Sarkar, Arindam 2 Wang, Hua 1 Al-Khaladi, Amer Haider Hussein 1 Anwar, Md T. 1 Banerjee, Arindam 1 Basir Ahamed, M. 1 Bhattacharyya, Saikat 1 Bhoosnurmath, Vijaylaxmi 1 Bilgin, Merve 1 Biswas, Chinmay 1 Biswas, G. P. 1 Cao, Tingbin 1 Charak, Kuldeep Singh 1 Dam, Arup 1 Deng, Bingmao 1 Dhar, Santanu 1 Dyavanal, Renukadevi Sangappa 1 Ersoy, Soley 1 Fang, Mingliang 1 Gao, Zongsheng 1 Giang, Ha Huong 1 Hai, Tran An 1 Haldar, Goutam 1 Harina, Waghamore P. 1 Hien, Nguyen Thi Thanh 1 Korhonen, Risto 1 Lahiri, Indrajit 1 Lai, Nguyen Xuan 1 Lal, Banarsi 1 Li, Jiangtao 1 Li, Xiaomin 1 Li, Xu 1 Liu, Gang 1 Liu, Gang 1 Liu, Gang 1 Lu, Feng 1 Ma, Linke 1 Majumder, Suman 1 Mallick, Sahanous 1 Mallick, Shrestha Basu 1 Mathai, Madhura M. 1 Pramanik, Dilip Chandra 1 Roy, Jayanta 1 Saha, Somnath 1 Sahoo, Pradyumn Kumar 1 Sahoo, Pravati 1 Sarkar, Anjan 1 Sarkar, Debranjan 1 Shilpa, N. 1 Si Duc Quang 1 Vijaylaxmi, S. B. 1 Wang, Songmin 1 Wang, Xinli 1 Wu, Zhaojun 1 Xu, Junfeng 1 Xuan, Zuxing 1 Yang, Degui 1 Yang, Lianzhong 1 Zhan, Tangsen 1 Zhang, Jilong 1 Zhang, Xiaobin 1 Zhao, Liang 1 Zhao, Liang 1 Zheng, Xiumin all top 5 #### Cited in 59 Serials 6 Journal of Contemporary Mathematical Analysis. Armenian Academy of Sciences 6 Tbilisi Mathematical Journal 6 Journal of Applied Mathematics & Informatics 5 Matematychni Studiï 5 Electronic Journal of Mathematical Analysis and Applications EJMAA 4 Computers & Mathematics with Applications 4 Rendiconti del Circolo Matemàtico di Palermo. Serie II 4 Abstract and Applied Analysis 4 Acta Universitatis Sapientiae. Mathematica 4 Journal of Classical Analysis 3 Mathematica Slovaca 3 The Journal of Analysis 3 Vietnam Journal of Mathematics 3 Analysis (München) 3 Applied Mathematics E-Notes 3 Advances in Pure and Applied Mathematics 2 Ukrainian Mathematical Journal 2 Demonstratio Mathematica 2 The Journal of the Indian Mathematical Society. New Series 2 Mathematica Bohemica 2 Turkish Journal of Mathematics 2 Boletín de la Sociedad Matemática Mexicana. Third Series 2 Lobachevskii Journal of Mathematics 2 Computational Methods and Function Theory 2 Advances in Difference Equations 2 Mathematics 2 Bollettino dell’Unione Matematica Italiana 1 Analysis Mathematica 1 Journal of Mathematical Analysis and Applications 1 Lithuanian Mathematical Journal 1 Acta Mathematica Vietnamica 1 Archivum Mathematicum 1 Czechoslovak Mathematical Journal 1 Fasciculi Mathematici 1 Bulletin of the Iranian Mathematical Society 1 International Journal of Mathematics 1 Bulletin of the Polish Academy of Sciences, Mathematics 1 Journal of Mathematical Sciences (New York) 1 Bulletin of the Belgian Mathematical Society - Simon Stevin 1 Advances in Applied Clifford Algebras 1 Journal of Difference Equations and Applications 1 Arab Journal of Mathematical Sciences 1 Annales Academiae Scientiarum Fennicae. Mathematica 1 Journal of Applied Analysis 1 Communications of the Korean Mathematical Society 1 Acta et Commentationes Universitatis Tartuensis de Mathematica 1 Nihonkai Mathematical Journal 1 Portugaliae Mathematica. Nova Série 1 Communications in Mathematical Analysis 1 Frontiers of Mathematics in China 1 $$p$$-Adic Numbers, Ultrametric Analysis, and Applications 1 Annales Universitatis Paedagogicae Cracoviensis. 
Studia Mathematica 1 Afrika Matematika 1 Arabian Journal of Mathematics 1 Journal of Applied Analysis and Computation 1 Palestine Journal of Mathematics 1 Communications in Mathematics and Statistics 1 Journal of Complex Analysis 1 Korean Journal of Mathematics all top 5 #### Cited in 6 Fields 121 Functions of a complex variable (30-XX) 10 Difference and functional equations (39-XX) 3 Several complex variables and analytic spaces (32-XX) 1 Number theory (11-XX) 1 Dynamical systems and ergodic theory (37-XX) 1 Probability theory and stochastic processes (60-XX)
# Equation of velocity with quadratic drag, vertical throw

1. Dec 16, 2012

### Hannibal123

1. The problem statement, all variables and given/known data

I am having trouble deriving the velocity equation for an object moving vertically upwards. The net force will be
$$m\frac{dv}{dt}=-mg-kv^2$$
where k is the drag coefficient. At the time t=0 the object's velocity will be
$$v(0)=v_0$$

2. Dec 16, 2012

### haruspex

3. Dec 16, 2012

### Hannibal123

I'm not sure how they get to that equation, and why they are using y, which seems to be a height. I have tried separating the variables, and have come to this integral
$$-\sqrt{m/(gk)}\int\frac{1}{1+u^2}\,du=\int dt$$
where u is a substitution defined by
$$u=\sqrt{k/(mg)}\,v$$
$$-\sqrt{m/(gk)}\arctan\left(\sqrt{k/(mg)}\,v\right)+c=t$$
However I'm not sure if this is correct, and the constant c must be solved for the condition t=0 and v(0)=v_0. Furthermore this will have to be solved for v, because I need the velocity equation.

4. Dec 16, 2012

### haruspex

That all looks correct. Can you not rearrange it easily enough into the form v = f(t)?

5. Dec 17, 2012

### Hannibal123

For the start condition (t,v)=(0,v_0) I can only cook c down to this
$$c=\sqrt{m/(gk)}\arctan\left(\sqrt{k/(mg)}\,v_0\right)$$
which is going to give a pretty nasty equation when inserted into the previous equation, that I want to solve for v?

6. Dec 17, 2012

### Hannibal123

which I eventually solve to this for v
$$\tan\left(\frac{t-\sqrt{m/(gk)}\arctan\left(\sqrt{k/(mg)}\,v_0\right)}{\sqrt{m/(gk)}}\right)\Big/\sqrt{k/(mg)}$$
or in a prettier picture: http://imgur.com/V53iV
but I'm not sure if it's correct, or if it can be shortened

7. Dec 17, 2012

### haruspex

Schematically (dropping the constants):
arctan(v) = t+c
v = tan(t+c) = (tan(t)+tan(c))/(1 - tan(t)tan(c))
v0 = tan(c)
v = (tan(t)+v0)/(1 - tan(t)v0)

8. Dec 17, 2012

### Hannibal123

I don't really follow you there, would you mind elaborating that?

9. Dec 17, 2012

### haruspex

$$-\sqrt{m/(gk)}\arctan\left(\sqrt{k/(mg)}\,v\right)+c=t$$
$$\arctan\left(\sqrt{k/(mg)}\,v\right)=\frac{c-t}{\sqrt{m/(gk)}}$$
$$\sqrt{k/(mg)}\,v=\tan\left(\frac{c-t}{\sqrt{m/(gk)}}\right)=\tan\left(\frac{c}{\sqrt{m/(gk)}}-\frac{t}{\sqrt{m/(gk)}}\right)$$
$$=\frac{\tan\left(\frac{c}{\sqrt{m/(gk)}}\right)-\tan\left(\frac{t}{\sqrt{m/(gk)}}\right)}{1+\tan\left(\frac{c}{\sqrt{m/(gk)}}\right)\tan\left(\frac{t}{\sqrt{m/(gk)}}\right)}$$
At t=0:
$$\sqrt{k/(mg)}\,v_0=\tan\left(\frac{c}{\sqrt{m/(gk)}}\right)$$
$$\sqrt{k/(mg)}\,v=\frac{\sqrt{k/(mg)}\,v_0-\tan\left(\frac{t}{\sqrt{m/(gk)}}\right)}{1+\sqrt{k/(mg)}\,v_0\tan\left(\frac{t}{\sqrt{m/(gk)}}\right)}$$
$$v=\frac{v_0-\sqrt{mg/k}\,\tan\left(\frac{t}{\sqrt{m/(gk)}}\right)}{1+\sqrt{k/(mg)}\,v_0\tan\left(\frac{t}{\sqrt{m/(gk)}}\right)}$$
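Collecting the thread's result in one place (this summary is mine, not from the thread; the shorthand $v_T=\sqrt{mg/k}$ for the terminal speed and $\tau=\sqrt{m/(gk)}$ for the time constant are introduced here only for compactness), the upward phase has the compact closed form:

    % Summary sketch (not from the thread) for m dv/dt = -mg - k v^2 with v(0) = v_0 > 0.
    % Shorthand: v_T = sqrt(mg/k) (terminal speed), tau = sqrt(m/(gk)) (time constant).
    \[
      v(t) \;=\; v_T \tan\!\left(\arctan\frac{v_0}{v_T} \;-\; \frac{t}{\tau}\right),
      \qquad 0 \le t \le \tau \arctan\frac{v_0}{v_T}.
    \]
    % Check: v(0) = v_0, and dv/dt = -(v_T/tau) sec^2(...) = -g - (k/m) v^2.
    % The object reaches the top (v = 0) at t_top = tau * arctan(v_0/v_T); after that
    % the drag force reverses direction, so the downward phase obeys a different ODE.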
# biblatex-apa: undefined references I cannot, for the life of me, get biblatex's apa style to work. Other biblatex styles work fine, apa, however, does neither print a reference list, nor produce a citation at all (single output is key in boldprint). Warning "undefined references" is produced. \documentclass{article} \usepackage[utf8]{inputenc} \usepackage[german, ngerman]{babel} \usepackage[babel]{csquotes} \usepackage[style=apa, backend=biber]{biblatex} \DeclareLanguageMapping{german}{german-apa} \begin{filecontents}{apa-test-bib.bib} @book{Labov1972, Author = {William Labov}, Publisher = {University of Pennsylvania Press}, Title = {Sociolinguistic Patterns}, Year = {1972}} \end{filecontents} \begin{document} Bla \cite{Labov1972} \printbibliography \end{document} I run the newest version available of all packages. Can someone help, please? Am I just overlooking the obvious? - Just checking, are you running bibtex or biber on the file? (You've specified backend=biber, which means you need to use it to sort the references, not bibtex.) –  Alan Munn Feb 9 '12 at 20:38 I used biber. As I said, it works fine for the other styles. I'm guessing it must be a problem with biblatex-apa. –  die Anne. Feb 9 '12 at 22:00 Your document works fine on my system (TeXLive 2011 on Mac). Have you deleted all the associated .aux and .bcf files etc.? If that doesn't change things can you add \listfiles to your document and add the console output to your question. –  Alan Munn Feb 9 '12 at 23:27 Having just updated my system I can now reproduce your problem. –  Alan Munn Feb 10 '12 at 2:04 This is a bug in biblatex 1.7 and the \RequireBiber[3] setting it appears. This was changed in biblatex-apa style 4.5. You can change the apa.bbx file to \RequireBiber[2] to fix it. I have released version 4.6 of the style with this setting and it should be in TL today. - Cool, looking forward to that. –  die Anne. Feb 10 '12 at 8:50 I recently upgraded MikTek 2.9's packages and now I get biblatex Error: Option 'sorting=apa' invalid. (when not including backend=biber). If I use the example by die Anne I get the same error she had even if I edit apa.bbx to contain \RequireBiber[2] or \RequireBiber[1], as well as if I just replace it by the current v4.6 apa.bbx file. This was with biblatex v1.7 and biber v.0.9.8 –  Víctor H Cervantes Feb 20 '12 at 17:03 Yes, you have to use Biber with the style now because it needs a custom sorting specification not possible with bibtex. –  PLK Feb 21 '12 at 6:48 Just a side note: You can see the version of biblatex-apa distributed by MiKTeX here: miktex.org/packages/biblatex-apa –  matth Feb 21 '12 at 10:52 Version 4.6 - that is the latest version –  PLK Feb 21 '12 at 11:00 I could replicate the error on my TeXLive 2011 Mac installation after updating to current packages via TeX Live Utility (biblatex 1.7, biblatex-apa 4.4, biber 0.9.8). A quick fix for me was a change to the apa.bbx file in line 29: \RequireBiber[3] % Biber is strictly required now due to custom sorting to \RequireBiber[1] % Biber is strictly required now due to custom sorting I couldn't determine whether this is indeed a veritable cause for the "Empty bibliography" error you describe or an artefact due to some underlying problems. However the solution could reliably be repeated after a clean reinstall of all affected packages. - Nice fix. My system hadn't been updated in for a couple of days. –  Alan Munn Feb 10 '12 at 2:03 Great. That did the trick. –  die Anne. Feb 10 '12 at 8:48
# Starting and Stopping SAP Systems

SAP BASIS

To start the SAP R/3 system, log on at the operating system level using the <sid>adm (lowercase) user account that was created during installation. For example, if the SID was defined as DD1, then log in as UNIX user dd1adm and enter the password. To start SAP R/3 on UNIX systems, just execute the startsap command. For example,

    dd1adm> startsap

The startsap command is really a UNIX alias (a symbolic name pointing to something else) that calls the needed start programs. To stop the system, enter stopsap in the command line. The startsap options are

• UNIX
  STARTSAP = startsap R3
  STOPSAP = stopsap R3

• Windows
  STARTSAP = \\$(SAPGLOBALHOST)\sapmnt\$(system)\sys\exe\run\startsap.exe name = <SID> nr = <SYSTEMNR> SAPDIAHOST = <HOST>
  STOPSAP = \\$(SAPGLOBALHOST)\sapmnt\$(system)\sys\exe\run\stopsap.exe name = <SID> nr = <SYSTEMNR> SAPDIAHOST = <HOST>

On Windows NT systems, you can start and stop Web AS from within the SAP Service Manager that is located on the NT Programs menu, within the SAP R/3 programs group.

Starting the SAP R/3 system involves starting the underlying database and all the SAP processes configured to run in all application servers. The type and number of processes are configurable with the start profile and the instance profile parameters. These processes might include the following:

• The operating system and/or network performance collectors
• The central system log collection process
• The CPIC gateway server
• The message server
• The dispatcher processes
• The spool processes
• The dialog and background processes

The SAP R/3 system can be started and stopped by using operating system commands or from within the CCMS utilities. However, for the latter, at least the database server and the central instance must have been started first using the operating system startup commands. In current releases, the database system is not stopped from within Web AS either.

In centralized installations, with just one single server, one start and one stop command are enough for starting or stopping the whole system. However, in distributed configurations, some configuration is needed to start and stop the group of application servers of a SAP system. Starting the SAP system first requires starting the database and then the instance processes. Stopping is the opposite process: first you have to stop the instance processes and then the database background processes. For example, you can write a shell script command file that can start the whole system from a single server. In these cases, many people use remote shell commands to execute the start programs in remote computers. Stopping can be done the same way. Remember that using remote commands (for example, rsh, remsh, or similar) can be a security violation in some systems because a list of permitted hosts is necessary. For this, check with your security manager.

To start or stop the SAP system in a UNIX environment, you must log on as user <sid>adm, for example, for SAP system DD1, as user dd1adm. The following commands are available.

Note: The brackets indicate optional parameters where you can choose just one from the list or none at all.

1. startsap [R3] [DB] [ALL]
• Using the command startsap R3, only the SAP instance is started. It is assumed that the database is already running. Otherwise, the instance will not start successfully.
• With the command startsap DB, only the database is started.
• Using startsap ALL, the system will first start the database and then the SAP instance.
ALL is the default setting and can be omitted. If the database is running, it will just start the instance. 2. stopsap [DB] [R3] [ALL] • Using stopsap R3, all the instance processes are stopped. • With the command, stopsap DB, the system stops just the database. Make sure you first stop the instance processes; otherwise, the SAP processes will "hang" because no update is possible. • Issuing the command, stopsap [ALL], the system stops the SAP instance and then the database. ALL is the default parameter and can be omitted. When in distributed SAP installations with several application servers, pay attention to stopping all the instances before stopping the database, which is only located in the database server. To check if the system has been correctly started or stopped, you can use standard UNIX operating system utilities such as the ps command. From the UNIX system, the SAP processes are prefixed by dw, so, for example, issuing the command dd1adm> ps -eaf | grep dw will show the SAP running processes. If you see no lines from the command output, then no SAP processes are running on this system. Note In different UNIX implementations, the options for the ps command might differ.Another way to check whether the SAP processes in an application server are running correctly is by selecting Tools | Administration | Monitor | _System Monitoring | _Process Overview from the standard SAP monitoring tools. Or, use the CCMS, which permits a check of all the application servers in the system by choosing Tools | _CCMS | Control Monitoring | Global Process Overview. In the Web AS startup process, the startsap script calls the sapstart program with the startup profile as the argument. The startup profile is specified in the variable START_FILES, which is contained in the script. The script can be found under the home directory of the SAP administration user account, <sid>adm. The actual name of the script is usually startsap_<hostname>_<sap_system_number>, for example, startsap_copi01_00; the script startsap is really a UNIX alias defined in the login environment variables for the <sid>adm user. When stopping the SAP system, the stopsap script calls the kill.sap script, which is located under the instance work directory (/usr/sap/<SID>/SYS/<INSTANCE>/work). The kill.sap script activates the shutdown processing in the sapstart process. As can be seen, both the start and the stop process of the SAP R/3 system are initiated from the sapstart program, which is located under the executables directory. The syntax of this program is sapstart pf = <start_profile>. For example, pf=/usr/sap/C11/SYS/profile/START_DVEBMGS00 When the sapstart program is executed, it reads from the start profile to determine the preliminary commands it has to process. These commands are preceded by the Execute_xx keyword, and often they just establish logical links or clean the shared memory. It then launches the SAP processes as described in the Start_program_xx statements. The xx indicates the processing order. However, you should know that sapstart processes the entries asynchronously, which means it will not check the status of one process before proceeding with the next one. The sapstart process is the mother of all the processes running in a SAP R/3 system. For that reason, when this process is shut down, all the child processes are shut down as well. 
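As described above, in distributed installations many administrators drive the whole start/stop sequence from a single host with a small script that issues remote shell commands. The sketch below is illustrative only: the host names, the dd1adm user, and the use of ssh in place of rsh/remsh are assumptions, not part of the original text, and the commands must match your own landscape.

    #!/bin/sh
    # Hypothetical start/stop wrapper for a distributed SAP R/3 system.
    # Assumptions: database and central instance on "sapdb01", application
    # servers on "sapap01" and "sapap02", administrator user "dd1adm",
    # ssh used instead of rsh/remsh. startsap/stopsap must be resolvable
    # in the remote user's non-interactive environment.

    CENTRAL="sapdb01"
    APP_HOSTS="sapap01 sapap02"
    ADM="dd1adm"

    case "$1" in
      start)
        ssh "$ADM@$CENTRAL" "startsap ALL"      # database first, then central instance
        for h in $APP_HOSTS; do
          ssh "$ADM@$h" "startsap R3"           # dialog instances afterwards
        done
        ;;
      stop)
        for h in $APP_HOSTS; do
          ssh "$ADM@$h" "stopsap R3"            # stop application servers first
        done
        ssh "$ADM@$CENTRAL" "stopsap ALL"       # then central instance and database
        ;;
      *)
        echo "usage: $0 start|stop" >&2
        exit 1
        ;;
    esac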
When in shutdown processing, the sapstart program executes the commands in the start profile and it will wait until all of its child processes terminate or it receives a stop signal from the system. The stopsap script works by sending the stop message to the sapstart program by means of the kill.sap script. This script is very simple, and what it contains is simply the PID of the sapstart process running in the system. The SAP processes are also shut down asynchronously and therefore in parallel.

Both the startsap and stopsap procedures are logged into files that are left in the home directory of the SAP administrator user account, <sid>adm. The names of these files are startsap_<hostname>_<sap_system_number>.log and stop_<hostname>_<sap_system_number>.log. The sapstart program itself logs its processing in a log file located under the instance work directory: sapstart.log. This log file can be seen either from the operating system or inside the Web AS system from the monitoring and tracing utilities.

Starting and Stopping SAP WAS for ABAP under Windows

The process for starting or stopping SAP R/3 systems on Windows NT systems is basically the same as under UNIX, except that some of the programs are different, and also Windows NT includes a graphical interface, known as the SAP Service Manager. Additionally, Windows NT reads some of the required SAP R/3 variables directly from the Registry.

Starting Web AS from the SAP Service Manager requires that the SAP R/3 service SAP<SAPSID>_<Instance_number> (for example, SAPK2P_00) be started. This is usually done automatically because the SAP R/3 service is defined for automatic start at system boot by default. In any case, to check whether the SAP R/3 service is running, on the Windows NT server, select Control Panel | Services and make sure that the SAP R/3 service has the status Started. If this is not the case, you will need to start it manually.

If the SAP R/3 service is started, to start the SAP R/3 system, select Programs | SAP R/3 | SAP Service Manager <SID>_<Instance_number>. This program can be located in different places according to the Web AS release. It is recommended that system managers or SAP administrators create shortcuts on their desktops. Press the Start button to start the SAP R/3 system. It will start the database first and then the central instance. If the database was already started, then only the central instance is started. The system is completely started when the stoplights turn green.

Stopping Web AS is also done from the SAP Service Manager, by pressing the Stop button. However, this procedure will not stop the database. In the case of Oracle and Informix, the database can be stopped using sapdba, or the database-specific tools that in Windows NT can be used graphically or from the command line. For Microsoft SQL Server, the database can be stopped from the taskbar.

When the SAP R/3 system includes several instances (application servers), the procedure for starting those instances can be done from the SAP Service Manager of each server, or from the CCMS, once the database and central instance have been started. However, when stopping the full SAP R/3 system, the first things to stop are the application servers, then the central instance, and finally the database.
The process of starting and stopping a full SAP R/3 system with several instances has been simplified since release 4.5 of SAP R/3 because installation of SAP R/3 on Windows NT requires the installation of the Microsoft Management Console, which enables starting and stopping all the instances centrally. Notice, however, that on SAP R/3 installations on Microsoft Server Cluster Services (MSCS), the procedure for starting and stopping the system is quite different on the cluster nodes. Starting and stopping SAP R/3 and the database is done from the Cluster Administration application by selecting the service and choosing the action (Start, Stop, Move, etc.) Further Guidelines for Productive Environments The purpose of this section is to make the people in charge of technically implementing the system aware that installing the system is not the same thing as having it ready for productive day-to-day business work. There are certain aspects of the SAP R/3 system that, from a technical and management point of view, must be carefully considered. Many of them apply to all installations, and others might not be necessary. All these points are discussed in more detail in different sections of this book. These points are as follows: • A backup and recovery strategy for R/3. • Cleaning background jobs. • Definition of daily or periodic tasks for the operation and support teams. • Printing strategy. • System management procedures. • Definition and setup of the CCMS operation modes and alerts. • SAP R/3 monitoring (CCMS) and administration, including performance and tuning.
# How can subtilisin still function without its catalytic triad?

I read chapter 9 in the book Biochemistry (5th edition), by Berg, Tymoczko, and Stryer (provided on the NCBI site here). It describes the mechanism of action of the chymotrypsin enzyme. The catalysis is performed through the catalytic triad consisting of serine-195, histidine-57, and aspartate-102. What intrigued me is the following section on the related enzyme subtilisin, which also has a catalytic triad but with different residue numbers:

9.1.5. The Catalytic Triad Has Been Dissected by Site-Directed Mutagenesis
[...] As expected, the conversion of active-site serine 221 into alanine dramatically reduced catalytic power. [...] The mutation of histidine 64 to alanine had very similar effects. [...] These observations support the notion that the serine-histidine pair act together to generate a nucleophile of sufficient power to attack the carbonyl group of a peptide bond. The conversion of aspartate 32 into alanine had a smaller effect, although the value of $$k_\mathrm{cat}$$ still fell to less than 0.005% of its wild-type value.

My problem is with what it says next:

The simultaneous conversion of all three catalytic triad residues into alanine was no more deleterious than the conversion of serine or histidine alone. Despite the reduction in their catalytic power, the mutated enzymes still hydrolyze peptides a thousand times as rapidly as does buffer at pH 8.6.

A graph of the relative rates can be taken from this link:

My question is: how can it be that after replacing all the triad residues, the enzyme still possesses such a great catalytic activity? What is the mechanism behind that phenomenon?
# dev-c++ compilation errors

here is the situation: i have written a program that uses opengl, glfw, and DevIL. it runs perfectly fine on my machine. i can send the .exe to others and they can run it (provided the devil.dll, ilu.dll, ilut.dll, and all necessary textures and mesh files are there). however, when the same folks try to compile the source code they get errors of the

    main.o(.text+0x78d):main.cpp: undefined reference to `_imp__ilGenImages'
    main.o(.text+0x79c):main.cpp: undefined reference to `_imp__ilBindImage'
    main.o(.text+0x7b0):main.cpp: undefined reference to `_imp__ilLoadImage'

nature. it makes no sense to me since the people attempting to compile the source have the exact same devpaks as i do (same versions from the same places) and yet they receive errors that i do not. i tried to recreate their issue by reinstalling all my devpaks but that didn't do anything because i can still recompile everything (i use the "rebuild all" option so that all the .o's are remade). does anyone know what may be causing this situation and/or a way to remedy it?

nevermind, sorry, figured it out. the people hadn't put in the correct linker commands (even though they said they had). this post can be deleted now if a mod wants to. sorry. :-\

You can delete the thread yourself by deleting the original post.
• #### The Characterization Of The Infrasonic Noise Field And Its Effects On Least Squares Estimation

Localization of the source of an acoustic wave propagating through the atmosphere is not a new problem. Location methods date back to World War I, when sound location was used to determine enemy artillery positions. Since the drafting of the Comprehensive Nuclear-Test-Ban Treaty in 1996 there has been increased interest in the accurate location of distant sources using infrasound. A standard method of acoustic source location is triangulation of the source from multi-array back azimuth estimates. For waves traveling long distances through the atmosphere, the most appropriate method of estimating the back azimuth is the least squares estimate (LSE). Under the assumption of an acoustic signal corrupted with additive Gaussian, white, uncorrelated noise the LSE is theoretically the minimum variance, unbiased estimate of the slowness vector. The infrasonic noise field present at most arrays is known to violate the assumption of white, uncorrelated noise. The following work characterizes the noise field at two infrasound arrays operated by the University of Alaska Fairbanks. The power distribution and coherence of the noise fields were determined from atmospheric pressure measurements collected from 2003-2006. The estimated power distribution and coherence of the noise field were not the white, uncorrelated noise field assumed in the analytic derivation of the LSE of the slowness vector. The performance of the LSE of azimuth and trace velocity with the empirically derived noise field was numerically compared to its performance under the standard noise assumptions. The effect of violating the correlation assumption was also investigated. The inclusion of clutter in the noise field introduced a dependence to the performance of the LSE on the relative signal amplitude. If the signal-to-clutter ratio was above 10 dB, the parameter estimates made with the correlated noise field were comparable to the estimates made with uncorrelated noise. From the results of these numerical studies, it was determined that the assumption of Gaussian, white, uncorrelated noise had little effect on the performance of the LSE at signal-to-noise ratios greater than 10 dB, but tended to over estimate the performance of the LSE at lower signal-to-noise ratios.

• #### The Dynamics And Morphology Of Sprites

In 1999 the University of Alaska Fairbanks fielded a 1000 fields-per-second intensified CCD camera to study sprites and associated upper atmospheric phenomena occurring above active thunderstorms as part of the NASA Sprites99 campaign. The exceptional clarity and definition obtained by this camera the night of August 18, 1999, provides the most detailed image record of these phenomena that has been obtained to date. The result of a frame-by-frame analysis of the data permits an orderly classification of upper atmospheric optical phenomena, and is the subject matter of this thesis. The images show that both elves and halos, which are diffuse emissions preceding sprites, are largely spatially unstructured. Observations of sprites initiating outside of main parts of halos, and without a halo, suggest sprites are initiated primarily from locations of atmospheric composition and density inhomogeneities. All sprites appear to start as tendrils descending from approximately 75 km altitude, and may form other dynamic or stationary features.
Dynamic features include downward developing tendrils and upward developing branches. Stationary features include beads, columns, and diffuse "puffs," all of which have durations greater than 1 ms. Stationary sprite features are responsible for a significant fraction of the total optical emissions of sprites. Velocities of sprite tendrils were measured. After initial speeds of 106--107 m/s, sprite tendrils may slow to 105 m/s. Similarly, on some occasions the dim optical emission left behind by the descending tendrils may expand horizontally, with speeds on the order of 105 m/s. The volume excited by the sprite tendrils may rebrighten after 30--100 ms in the form of one of three different sprite after effects collectively termed "crawlers." A "smooth crawler" consists of several beads moving upward (~105 m/s) without a large vertical extent, with "smooth" dynamics at 1 ms timescale. "Embers" are bead-like forms which send a downward-propagating luminous structure towards the cloudtop at speeds of 106 m/s, and have irregular dynamics at 1 ms timescales. In TV-rate observations, the downward-propagating structure of an ember is averaged out and appears as a vertically-extended ribbon above the clouds. The third kind of crawler, so-called "palm tree," appears similar to an ember at TV-rates, but with a wider crown at top. • #### The effect of rate, frequency, and form of migration on host parasite population dynamics What is the effect of migration on host-parasite population dynamics? Animals live in a landscape where they move between patches. They are also locked in host-parasite conflicts. Host-parasite interactions are modeled with consumer resource functions. I constructed models using two different consumer resource functions (the Lotka Volterra system and the Saturating Type II system). The first model was a conservative system. The second was dissipative and more biologically realistic. I examined the effect of rate of migration, time between migration events, and form of migration. I found that the time between migration events had the largest effect on the synchronization in host-parasites population dynamics between the patches. Decreased time between migration events increased the fraction of simulation to completely synchronize and decreased the time it took to do so. In the first model, I observed simulations with a low rate of migration took a long time to synchronization and with a high rate of migration took a short time to synchronize. There was a phase transition between these two amounts of time it took to synchronize. In the second model, simulations done at low rates of migration did not synchronize while with increased migration rates the fraction of simulations to synchronize increased. I found in some simulations of parasite only migration that the patches synchronized faster. My results imply that parasite only migration to islands could have a greater impact on the extinction risk on islands further from the mainland than other forms of migration. • #### The generalized Ohm's law in collisionless magnetic reconnection Magnetic reconnection is an important process in space environments. As a result of magnetic reconnection, the magnetic field topology changes, which requires the breakdown of the frozen-in condition in ideal magnetofluids. In a collisional plasma, the resistivity associated with Coulomb collisions of charged particles is responsible for the breakdown of frozen-in condition. 
In a collisionless plasma, however, the cause of the breakdown of frozen-in condition remains unanswered. We address this problem by investigating the generalized Ohm's law and the force balance near magnetic neutral lines based on two-dimensional particle simulations. In a particle simulation with one active species, it is found that a weakly anisotropic and skewed velocity distribution is formed near the magnetic X line, leading to the presence of off-diagonal elements of plasma pressure tensor. The gradients of the off-diagonal pressure terms transport plasma momentum away from the X line to balance the reconnection electric field. The presence of the reconnection electric field results in the breakdown of frozen-in condition. The importance of both electron and ion off-diagonal pressure tensor terms in the generalized Ohm's law near neutral lines is further confirmed in full particle simulations. The generation of the off-diagonal pressure terms can be explained in terms of the thermal dispersion of particle motions and the response of particle distribution function in the electric and magnetic fields near the neutral lines. In the particle simulations, we also find the presence of a new dynamo process, in which a large amount of new magnetic flux near the magnetic O line is generated. This dynamo process is not allowed in resistive magnetofluids. However, in a collisionless plasma, the plasma inertia and momentum transport due to the off-diagonal plasma pressure terms can lead to E $\cdot$ J < 0 near the magnetic O line and make the dynamo process possible. • #### The morphology and electrodynamics of the boreal polar winter cusp The major result of this thesis is the magnetic signatures of the dayside cusp region. These signatures were determined by comparing the magnetic observations to optical observations of different energy particle precipitation regions observed in the cusp. In this thesis, the cusp is defined as the location of most direct entry of magnetosheath particles into the ionosphere. Optical observations show that the observing station rotates daily beneath regions of different incident energy particles. Typically, the station passes from a region in the morning of high energy particles into a region near magnetic noon of very low energy precipitation, and then returns to a region of high energy precipitation after magnetic noon. A tentative identification of the cusp is made on the basis of these observations. The optical observations also are used to determine the upward field aligned current density, which is found to be most intense in the region identified as the cusp. The magnetic field measurements are found to correlate with the optical measurements. When the characteristic energy is high, the spectrogram shows large amplitude broad band signals. The Pc5 component of these oscillations is right hand polarized in the morning, and left hand polarized in the afternoon. During the time the optics detect precipitation with a minimum characteristic energy, the magnetic spectrogram shows a unique narrow band tone at 3-5 mHz. The occurrence statistics of the magnetic oscillations are compared to DMSP satellite observations of the cusp and low latitude boundary layer. The pulses that make the narrow band tone are found to come in wave trains that are phase coherent. These trains of coherent pulses are found to be separated by phase jumps from adjacent wave trains. These jumps in phase occur when a new field aligned current appears on the equatorward edge of the cusp. 
This combination of phase-coherent wave trains associated with poleward-propagating auroral forms, which are shown to contain intense field-aligned currents, may be the signature of newly reconnected flux tubes in the ionosphere.

• #### The quasiparallel collisionless shock wave: A simulation study

The structure of the quasi-parallel collisionless shock wave is studied via a numerical simulation model. The model is compared to observations and theoretical predictions and, within its limitations, appears to reproduce the true shock structure reasonably well. Three electron equations of state and their effects on the simulation are examined. It is found that only the isotropic-adiabatic electron equation of state yields acceptable results in the simulation at high Mach numbers. The scale lengths of the shock are measured, normalized by the natural scale lengths of the plasma, and plotted as a function of the Alfvén Mach number. It is found that the wavelength of the upstream waves follows that predicted for a phase-standing whistler quite well, and the scale length of the jump in the magnitude of the magnetic field is generally greater than, but approximately equal to, this wavelength. For Alfvén Mach numbers $M_A > 2.5$, waves are generated in the downstream region. Their wavelength and the scale length of the plasma transition are larger than the natural scale lengths of the plasma. The ion heating is seen to occur in two stages. In the first stage, which occurs upstream of the principal shock ramp, the heating can be characterized by a polytropic power-law equation of state with an exponent much greater than the isentropic-adiabatic rate of $\gamma$ = 5/3. The second stage of heating, which occurs from the principal shock ramp to the downstream region, is characterized by an exponent on the order of the isentropic-adiabatic rate. The results show that the ion heating occurs mainly around the principal density jump near the center of the shock transition region, while the increase in entropy takes place mainly on the upstream side of the shock transition region. It is suggested that the ion heating is a consequence of the non-adiabatic scattering of the ions through the magnetic field of the shock and its upstream precursor wave.

• #### Transient spatiotemporal chaos in a diffusively and synaptically coupled Morris-Lecar neuronal network

Transient spatiotemporal chaos was reported in models for chemical reactions and in experiments for turbulence in shear flow. This study shows that transient spatiotemporal chaos also exists in a diffusively coupled Morris-Lecar (ML) neuronal network, with a collapse to either a global rest state or to a state of pulse propagation. Adding synaptic coupling to this network reduces the average lifetime of spatiotemporal chaos for small to intermediate coupling strengths and almost all numbers of synapses. For large coupling strengths, close to the threshold of excitation, the average lifetime increases beyond the value for only diffusive coupling, and the collapse to the rest state dominates over the collapse to a traveling pulse state. The regime of spatiotemporal chaos is characterized by a slightly increasing Lyapunov exponent and degree of phase coherence as the number of synaptic links increases. In contrast to the diffusive network, the pulse solution need not be asymptotic in the presence of synapses. The fact that chaos can be transient in higher-dimensional systems, such as the one explored in this study, points to its presence in everyday life.
Transient spatiotemporal chaos in a network of coupled neurons and the associated chaotic saddle provide a possibility for switching between metastable states observed in information processing and brain function. Such transient dynamics have been observed experimentally by Mazor, when stimulating projection neurons in the locust antennal lobe with different odors. • #### Transient spatiotemporal chaos in a Morris-Lecar neuronal ring network collapses to either the rest state or a traveling pulse Transient spatiotemporal dynamics exists in an electrically coupled Morris-Lecar neuronal ring network, a theoretical model of an axo-axonic gap junction network. The lifetime of spatiotemporal chaos was found to grow exponentially with network size. Transient dynamics regularly collapses from a chaotic state to either the resting potential or a traveling pulse, indicating the existence of a chaotic saddle. For special conditions, a chaotic attractor can arise in the Morris-Lecar network to which transient chaos can collapse. The short-term outcome of a Morris-Lecar ring network is determined as a function of perturbation configuration. Perturbing small clusters of nearby neurons in the network consistently induced chaos on a resting network. Perturbation on a chaotic network can induce collapse in the network, but transient chaos becomes more resistant to collapse by perturbation when greater external current is applied. • #### Transient spatiotemporal chaos on complex networks Some of today's most important questions regard complex dynamical systems with many interacting components. Network models provide a means to gain insight into such systems. This thesis focuses on a network model based upon the Gray-Scott cubic autocatalytic reaction-diffusion system that manifests transient spatiotemporal chaos. Motivated by recent studies on the small-world topology discovered by Watts and Strogatz, the network's original regular ring topology was modified by the addition of a few irregular connections. The effects of these added connections on the system's transience as well on the dynamics local to the added connections were examined. It was found that the addition of a single connection can significantly effect the transient time of spatiotemporal chaos and that the addition of two connections can transform the system's spatiotemporal chaos from transient to asymptotic. These findings suggest that small modifications to a network's topology can greatly affect its behavior. • #### Two- and three-dimensional study of the Kelvin-Helmholtz instability, magnetic reconnection and their mutual interaction at the magnetospheric boundary Magnetic reconnection and the Kelvin-Helmholtz (KH) instability regulate the transport of magnetic flux, plasma, momentum and energy from the solar wind into the magnetosphere. In this thesis, I use two-dimensional and three-dimensional MHD simulations to investigate the KH instability, magnetic reconnection, and their relationship. Two basic flow and magnetic field configurations are distinguished at the Earth's magnetopause: (1) configurations where the difference in plasma velocity between the two sides of the boundary $\Delta$v (velocity shear) is parallel to the difference of the magnetic field $\Delta$b (magnetic shear), and (2) configurations where the velocity shear is perpendicular to the magnetic shear. For configuration (1), either magnetic reconnection is modified by the shear flow, or the KH instability is modified by the magnetic shear and resistivity. 
The evolution of the basic configuration (2) requires three dimensions. In this case, both processes can operate simultaneously in different planes. If the KH instability grows faster initially, it can wrap up the current layer and thereby initiate a very fast and turbulent reconnection process. The resulting magnetic turbulence can provide the first explanation of the often very turbulent structure of the magnetopause current layer. For the first time, it is quantitatively confirmed that the KH instability operates at the magnetospheric boundary at low latitudes.

• #### Two-dimensional Bernstein-Greene-Kruskal modes in a magnetized plasma with kinetic effects from electrons and ions

Electrostatic structures are observed in various space environments, including the auroral acceleration region, the solar wind region, and the magnetosphere. The Bernstein-Greene-Kruskal (BGK) mode, one of the non-linear solutions to the Vlasov-Poisson system, is a potential explanation for these phenomena. Specifically, two-dimensional (2D) BGK modes can be constructed by solving the Vlasov-Poisson-Ampère system with the assumption of a uniform ion background. This thesis discusses the existence and features of 2D BGK modes with kinetic effects from both electrons and ions. Specifically, we construct electron or ion BGK modes with a finite temperature ratio between ions and electrons. More general cases, electron-ion 2D BGK modes in which both the electron and ion distributions are non-Boltzmann, are constructed and analyzed as well.

• #### Variation of electron and ion density distribution along Earth's magnetic field line deduced from whistler mode (WM) sounding of the IMAGE/RPI satellite below 5000 km altitude

This thesis provides a detailed survey and analysis of whistler mode (WM) echoes observed by the IMAGE/RPI satellite during the years 2000-2005 below the altitude of 5000 km. Approximately 2500 WM echoes have been observed by IMAGE during this period. This includes mostly specularly reflected whistler mode (SRWM) echoes and ~400 magnetospherically reflected whistler mode (MRWM) echoes. Stanford 2D raytracing simulations and the diffusive equilibrium density model have been applied to 82 cases of MRWM echoes, observed during August-December 2005 below 5000 km, to determine electron and ion density measurements along Earth's magnetic field line. These are the first results of electron and ion density measurements from WM sounding covering L-shells ~1.6-4, a wide range of geomagnetic conditions (Kp 0+ to 7), and solar minimum (F10.7 ~70-120) in the altitude range 90 km to 4000 km. The electron and ion density profiles obtained from this analysis were compared with in situ measurements from the IMAGE (passive recording; electron density (Ne)), DMSP (~850 km; Ne and ions), CHAMP (~350 km; Ne), Alouette (~500-2000 km; Ne and ions), ISIS-1, 2 (~600-3500 km; Ne, ions), and AE (~130-2000 km; ions) satellites, with bottomside sounding from nearby ionosonde stations (Ne), and with model values from the GCPM (Global Core Plasma Model) and IRI-2012 (International Reference Ionosphere). Based on this analysis it is found that: (1) Ne shows a decreasing trend from L-shell 1.6 to 4 on both the day and night sides of the plasmasphere up to ~1000 km altitude, which is also confirmed by the GCPM and IRI-2012 models. (2) Above ~2000 km altitude, GCPM underestimates Ne by ~30-90% relative to RPI passive measurements and WM sounding results. (3) Below 1500 km, Ne is higher on the day side than on the night side in MLT (Magnetic Local Time).
Above this altitude, a significant MLT dependence of electron density is not seen. (4) Ion densities from WM sounding measurements are within 10-35% of those from the Alouette, AE, and DMSP satellites. (5) The effective ion mass on the day side is more than two times higher than on the night side below ~500 km altitude. (6) The O⁺/H⁺ and O⁺/(H⁺+He⁺) transition heights on the day side are ~300-500 km higher than on the night side; the transition heights from the IRI-2012 model lie within the uncertainty limit of WM sounding for the night side, but for the day side (L-shell>2.5) they are ~200 km higher than the WM uncertainty limits. (7) foF2 (F2-layer critical frequencies) from ionosonde stations and the IRI-2012 model are ~1.5-3 MHz higher than those from WM sounding during daytime. These measurements are very important as the ion density profile along geomagnetic field lines is poorly known. They can lead to a better understanding of the global cold plasma distribution inside the plasmasphere at low altitude and thereby bridge the gap between high topside ionosphere and plasmasphere measurements. These results will provide important guidance for the design of future space-borne sounders in terms of frequency and virtual range, in order to adequately cover ion density measurements at low altitudes and over a wide range of MLTs and solar and geophysical conditions.

• #### Variational anodic oxidation of aluminum for the formation of conically profiled nanoporous alumina templates

Anodic oxidation of metals, otherwise known as anodization, is a process by which the metal in question is intentionally oxidized via an electrochemical reaction. The sample to be oxidized is connected to the anode, or positive side of a DC power source, while a sample of similar characteristics is attached to the cathode, or negative side, of the same power source. Both leads are then immersed in an acidic solution called the electrolyte and a current is passed between them. Certain metals such as aluminum or titanium anodized in this way form a porous oxide barrier, the characteristics of which are dependent on the anodization parameters, including the type of acid employed as the electrolyte, the pH of the electrolyte, the applied voltage, the temperature, and the current density. Under specific conditions the oxide formed can exhibit highly ordered cylindrical nanopores uniformly distributed in a hexagonal pattern. In this way anodization is employed as a method for the nanofabrication of ordered structures. The goal of this work is to investigate the effects of a varied potential difference on the anodization process; specifically, to effect a self-assembled conical pore profile by changing the applied voltage in time. Although conical pore profiles have been realized via post-processing techniques such as directed wet etching and multi-step anodization, these processes result in pore dimensions generally increasing by an order of magnitude or more. To date there has been reporting on galvanostatic or current variations which directly affected the resulting pore profiles, but to our knowledge there has not been a reported investigation of the effect of potentiostatic or voltage variation on the anodization process. We strive to realize a conical pore profile in-process with the traditional two-step anodization method while maintaining the smallest pore dimensions possible. Pores having diameters below 20 nm with aspect ratios of about 1.0 would be ideal, as those dimensions would be much closer to some of the characteristic lengths governing the quantum-confined spatial domain.
Thus we set out to answer the question of what effect a time-varied potential difference will have on the traditional two-step anodization method, a technique we refer to as variational anodization, and whether conically profiled nanopores can in fact be realized via such a technique.
# Inspired by Calvin Lin: Any part 1

Geometry Level 4

Suppose $$AD, BE, CF$$ are the medians of a triangle $$ABC$$, with side $$BC = 10$$, intersecting at $$G$$. In the quadrilateral $$AEGF$$, suppose $$AE\times AF = 3\times GE\times GF$$; evaluate $$\dfrac{AB\times BE+ AC\times CF}{AD}$$.
Sciencemadness Discussion Board » Special topics » Energetic Materials » capacitor discharge

schatz
Harmless Posts: 14 Registered: 15-12-2009 Member Is Offline Mood: No Mood

capacitor discharge

Hello everyone, I intend to build a capacitor discharge exploder, mainly for SFX squibs, but it might also do double duty in commercial quarrying. I am going for 50 to 100 WS. The high-voltage generation circuitry is no problem since I have some experience in this area; however, I have my doubts about safe triggering. Since switching 100 joules will require a very heavy-duty mechanical switch/relay, I thought of using a power thyristor. My question really is that there must be some way to protect the thyristor, because I am sure that sooner or later somebody is going to dump 100 joules into a shorted wiring circuit; this might damage the thyristor. I wanted to see how it's done in commercial exploders and surfed around a bit for schematics, but found none! I wonder if anybody here can give me some useful hint.

12AX7
Post Harlot Posts: 4803 Registered: 8-3-2005 Location: oscillating Member Is Offline Mood: informative

Spark discharge (ignitron, krytron, etc.). They don't make thyristors that handle the voltage anyway (>4kV). What type of capacitors are you using?

Tim
Seven Transistor Labs LLC http://seventransistorlabs.com/ Electronic Design, from Concept to Layout. Need engineering assistance? Drop me a message!

schatz
Harmless Posts: 14 Registered: 15-12-2009 Member Is Offline Mood: No Mood

capacitor discharge

I haven't built anything yet, but I have a fair idea from SS laser work of what will be suitable for the storage capacitor. I have some low-ESR, 1 kV energy caps in my junkbox I can probably try out first. I have several 1500 volt, 1500 amp thyristors; I am not sure if they will survive the big jolt. In that case, just what is actually used in commercial exploder units? In the past, while playing with laser power unit parts, I fooled with EBW experiments and just enjoyed the supersonic plasma crack, but I always used a big mechanical contactor which pretty much had a short contact life! So what's the switching element in those compact 50 WS exploders? Thyratron?

hinz
Hazard to Others Posts: 200 Registered: 29-10-2004 Member Is Offline Mood: No Mood

A triggered spark gap (Trigatron) is probably the simplest solution, as anyone can manufacture it on a lathe. Probably the rise time is not as short as with a Thyratron, but you can make it yourself and you can hardly destroy it. It would actually be interesting if someone would compare the rise time between a Trigatron and a Thyratron. http://en.wikipedia.org/wiki/Trigatron http://www.rossoverstreet.org/tesla/projects/Pulse/pulse.htm... [Edited on 15-12-2009 by hinz]

aonomus
National Hazard Posts: 361 Registered: 18-10-2009 Member Is Offline Mood: Refluxing

I've built a capacitor bank for fun, but it's huge compared to what this purpose needs. Short of going to complete overkill and using a pair of hockey-puck-sized semiconductors, you can either use a semiconductor, a mechanical switch, or a triggered spark gap.
Voltage, current and capacitance all trade off against each other if you want 100 J: E = (1/2)·C·V² (note: C is in farads, so you must convert units properly). At lower voltages, the resulting voltage drop across your resistive load (the wire to be exploded) is low, and the current rise (dI/dt) is low. If your wire is too beefy, or your capacitors too small, a slow current rise means more current is wasted just heating the wire up without getting to the stage where it's red hot and starts to melt/evaporate.

For a mechanical switch, the key is to simply close it as fast as possible. Use spring-loaded large copper contacts, or use a pair of copper wedges spaced apart and use a pneumatic piston or other system to quickly push another copper wedge into the 'V' space between them.

For a switched spark gap, it's only practical at higher voltages in the kV range, and you can either have 2 stationary electrodes with a third one moving in from the side to reduce the distance between both, or 1 stationary and one moving electrode.

For a semiconductor switch, you can use an SCR (thyristor), but they are only good to about 1 kV. These are expensive and tricky to use, but would give the best result (silent operation, relatively durable). I would personally recommend using an SCR at a reasonable voltage of maybe 600-800 VDC and using a little more than only 100 J. You should be able to find them cheap on eBay, because a lot of old industrial motor drives and phase-angle controllers are being swapped out and replaced with IGBT-based modules.

Oh, and one other thing: design a backup system to discharge the capacitors. I recommend having a high-value resistor directly bolted across the capacitors for slow discharge (capacitors can rebuild their charge over time), and a second, smaller SCR used with a resistive load (i.e. a light bulb) that can be triggered to discharge the capacitors in the event that the main switch or load circuit fails.

schatz
Harmless Posts: 14 Registered: 15-12-2009 Member Is Offline Mood: No Mood

capacitor discharge

I understand your reasoning, hinz, but surely the smaller (50 WS) exploders don't use these kinds of devices, do they? Tried Google for schematics but no real diagrams turned up.

dann2
International Hazard Posts: 1523 Registered: 31-1-2007 Member Is Offline Mood: No Mood

Hello, A book by Cook! Page 369 of the book has lots of info on exploding bridgewire detonators. Thread may be useful too. You need a VERY LOW INDUCTANCE (low ESR too) capacitor for 'real' exploding wire detonators, the ones with (no primary explosives) only PETN or RDX in them at the bridgewire. A photoflash capacitor will not do the job. Regards, [Edited on 16-12-2009 by dann2]

schatz
Harmless Posts: 14 Registered: 15-12-2009 Member Is Offline Mood: No Mood

capacitor discharge

Hey guys, my need is NOT for EBW or similar. I am looking for schematics for blasting exploder machines for ordinary hot-wire e-matches and commercial dets. Power: 50 to 100 WS. I have not yet figured out how to dump 100 or so joules without causing damage to the exploder itself. Perhaps thyristors can be used, but they will probably need protection in case of shorted firing circuits. This is why I was looking for diagrams or schematics of the smaller commercial exploders. Thanks!
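[Editor's note, not from any poster in this thread: to put rough numbers on the E = (1/2)·C·V² relation quoted above, the short sketch below computes the capacitance needed for a given stored energy at a few example charge voltages. The voltages are arbitrary illustrations, not recommendations.]

```python
# Illustrative only: capacitor-bank sizing from E = 0.5 * C * V^2 (SI units).
def required_capacitance(energy_j, voltage_v):
    """Capacitance in farads needed to store energy_j joules at voltage_v volts."""
    return 2.0 * energy_j / voltage_v ** 2

def stored_energy(capacitance_f, voltage_v):
    """Energy in joules stored in a capacitance_f farad bank charged to voltage_v volts."""
    return 0.5 * capacitance_f * voltage_v ** 2

for v in (400, 600, 800):                      # example charge voltages in volts
    c = required_capacitance(100.0, v)         # 100 J target discussed in the thread
    print(f"{v} V -> {c * 1e6:.0f} uF for 100 J")
```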
densest
National Hazard Posts: 359 Registered: 1-10-2005 Location: in the lehr Member Is Offline Mood: slowly warming to strain point

Commercial inexpensive CD units use 10-25 A or so SCRs chosen for their pulse handling rating. A multi-hundred-amp SCR will probably survive a short into most wires. They're very rugged. The big question is whether they turn on fast enough (many microseconds to fractions of a millisecond) for your purposes. It will be plenty fast enough for igniting something. It will probably not be fast enough for EBW. An IGBT would be (50-100 nanoseconds), but they're less robust. Assuming a few feet of wire, there's enough inductance and resistance for many large units to survive. Data sheets for most surplus units can be found online - check for the one-time peak current rating (time duration and current value). Reliable, reproducible triggering requires a fairly stiff (1-10 A depending on what you're triggering), fast rise time (under 1 microsecond for the SCR, under 100 nanoseconds for the IGBT) driver. There are commercial ICs which will do the job for under $1 from the usual surplus sources, under $2 new. Common part numbers start with "TSC42", though there are many, many other types. Use short (under 6 inches) twisted-pair wires from the trigger IC to the power device. Use a Schmitt trigger (74C14, HC, LS, etc.) to drive the gate driver from your switch. An old MC144490P contact bounce eliminator ($3, goldmine-elec.com) is a good thing to use. Put a disable switch shorting the gate to ground for safety. I'd -strongly- advise against using a lash-up like this in a quarry. If a $100 pre-built fireworks-grade CD unit is out of your price range, you can't afford any hospital bills.

quicksilver
International Hazard Posts: 1820 Registered: 7-9-2005 Location: Inches from the keyboard.... Member Is Offline Mood: ~-=SWINGS=-~

Does anyone have schematics with defined components of either the commercial item or a pipe-dream proposal? I MAY have some that are much better than patents - from a company called Ideal Products.

woelen
Posts: 7764 Registered: 20-8-2005 Location: Netherlands Member Is Offline Mood: interested

Quote: Originally posted by aonomus
I've built a capacitor bank for fun, but it's huge compared to what this purpose needs. Short of going to complete overkill and using a pair of hockey-puck-sized semiconductors, you can either use a semiconductor, a mechanical switch, or a triggered spark gap.

This capacitor bank looks very nice, but it is fairly low-voltage and this can be handled with more or less normal components. I myself have three pulse discharge capacitors of huge size (5 kg each) having a capacitance of 40 uF at a voltage of 5000 volts (the max voltage is 7500 volts, but it is recommended not to exceed 6000 volts; I stick to 5000 volts). Discharging such a beast is not easy at all without causing big damage. I once tried using a small bottle with a rubber plug, having two wires in it and making a vacuum such that a spark can be made more easily. I only tried this at 1000 volts and the effect was rather scary (very loud noise from the bottle; I fear explosion of this at higher voltages). A lot of energy is released in the switching device itself and not all of the energy goes to the load. I still have not found a suitable and safe method of discharging the big capacitors in a non-destructive way.

The art of wondering makes life worth living... Want to wonder?
Look at https://woelen.homescience.net

WaveFront
Harmless Posts: 31 Registered: 1-1-2005 Location: Southern Europe Member Is Offline Mood: curious

Quote: Originally posted by woelen
A lot of energy is released in the switching device itself and not all of the energy goes to the load.

...unless the switching device is the load itself. It would be nice to see a common 1000 V diode exploding when the rupture voltage is achieved. With a few thousand volts the explosion could reach 1 "microton", the energy of one gram of TNT. Wonderful capacitors. [Edited on 2-1-2010 by WaveFront]

quicksilver
International Hazard Posts: 1820 Registered: 7-9-2005 Location: Inches from the keyboard.... Member Is Offline Mood: ~-=SWINGS=-~

I have seen VERY expensive (EBW) units. The device was developed from an inverter (& yes, it could run from a truck battery). It had a "reset" that was (ThBoMK) some type of a circuit breaker. It had "ARM" & "FIRE" (strongly shielded) buttons, had several very serious "WARNING" stickers all over it, and was in the two-thousand-dollar range. They would work off of 110, 240 (AC) & 12 V (DC) automotive. One warning stated it delivered lethal amperage/voltage, etc., & to make all personnel aware of that issue to the wire lead-out roll. When casually examining one, I am fairly sure that someone with a bit of electronic engineering background could make one fairly quickly, but it needs a quality inverter and some design room (box space) for extra items and hoods for the buttons & heat sinks. I was asking questions about these things for a while. They CAN be made by the home workshop hobbyist, but the trick is to recognize that to really blow the bridgewire you need some complexities if the thing is to be safe & usable more than a few times. Another one I saw was gas-motor driven and was actually portable (85+ lbs) but was also $3000+. I believe the circuit was an almost typical inverter with a VERY heavy-duty SCR and circuit breaker design (it looked very custom but could be available or substituted). This was not a transformer type but designed like the inverter in a modern Microwave Oven, as the heat sinks for the switching transistors were HUGE. Both units were NOT small, hand-held, easily portable, light cap-discharge units: they were beefy. There were certain demands regarding the shooting wire (thicker than 20). There were unquestionably shorting resistors for the capacitors, which (I was told) had multiple protections for the parts. Everything about them was top grade and not cheap. They had long guarantees & were made (one anyway) in Norway. They took special care to protect the poles, buttons, etc. I was told that a Microwave Oven inverter would be a good starting point for parts. IF you used an old-fashioned transformer from a Microwave Oven and an inverter with short-circuit protections, that may come close. Using two of the BIG 50/60 Hz caps for serious heat/spark, one of the biggest issues would be the safety issue. The rectifiers in Microwaves would handle things just fine. The current from those is .25-.5 amps, but the commercial one will go hotter. All of them would be deadly, however. I've seen quite a few Jacob's ladders and Tesla coils from Microwave parts and the output would pop most any wire - way thicker than most anyone would even consider for an EBW. I have thought of making one, but without the use of seriously expensive stuff, especially since the guts of the modern Microwave have much of what you need.
The way I would imagine it, it needs to withstand a short: NOT for the bridgewire, but in case the long leads were to become abraded & short. I was told that with EBW caps, the formula for required current was unique to the cap manufacturer & generally it was an individual setup for industrial blasting, a totally different thing from the series/parallel formulas of common cap-discharge boxes. Generally, single units were backed up with a standard cap for surety, and Nonel or detcord was used extensively for multiple or timed shots. The EBW issue was used for very safety-oriented, highly controlled agendas.... So a few amps could do it. [Edited on 4-1-2010 by quicksilver]

Contrabasso
National Hazard Posts: 277 Registered: 2-4-2008 Member Is Offline Mood: No Mood

The NATO Shrike Mk V exploder uses 319-400 V and therefore 6.8-12 joules; the Mk IV is 719-800 V, 35-48 joules. The clever thing about the Shrike is that it checks that the firing line resistance is below 400 ohms (Mk V) or 350 ohms (Mk IV) before priming. This means that you cannot fire it into an operator. This sort of discharge is well into cardiac defib range. Even for SFX or blasting, 600 V is plenty. Many times 50 V is plenty.

densest
National Hazard Posts: 359 Registered: 1-10-2005 Location: in the lehr Member Is Offline Mood: slowly warming to strain point

EBW requires a fast rise time. Below 1 microsecond is a figure I've heard; there have been threads on that topic here. Hockey-puck-sized semiconductors are remarkably inexpensive. An IGBT rated at 1200 V and 400 A is usually in the $10-30 range on eBay or from a surplus dealer. Any IGBT from the last 10 years or more will turn on in less than 300 ns if driven from a fast-rise-time source capable of supplying 5 A or so. There are electronic methods for sharpening pulses using things like saturable reactors and other magnetic or capacitor/magnetic units. Saturable reactor cores are in many current computer PSUs. I will admit not knowing how the distributed magnetic ones work - perhaps Mr. 12AX7 might point out a reference? Getting high voltage is easy if you aren't in a hurry. A cold cathode fluorescent light driver (about $5 from a surplus dealer) and a fast-recovery diode rated over 1000 V will get you 500 V or so from 12 VDC at lowish currents. The output is in the microamp-milliamp range, so be prepared to wait for a minute or three to charge 50 uF. It's inherently current-limited, making it fractionally safer than an MWO-derived unit, which can deliver 1 kW or so at high voltage. A fast diode is imperative - a standard 1N4007 does NOT work. A Cockcroft-Walton multiplier would probably get you another kV or so at some cost of complexity (multiple diodes and some high-voltage ceramic capacitors). 400-450 V high-surge-rating capacitors are pretty easy to come by since they are used in a lot of mains-driven electronics in industry. 2 400 uF at 400 V gives you about 32 J. Snubber capacitors used in inverters need to handle high-current pulses with fast rise times. They are usually in the 1-2 uF range, so you'd need to stack 10-100 of them using low-impedance (wide flat ribbon pair) conductors.

12AX7
Post Harlot Posts: 4803 Registered: 8-3-2005 Location: oscillating Member Is Offline Mood: informative

Nonlinear energy storage media, like ferromagnetics and ferroelectrics, are interesting. Ferromagnetics increase the inductance of a transmission line except under large currents (where they saturate), so the average impedance is higher, which isn't too handy for this subject.
Ferroelectrics increase the capacitance of a transmission line except under large voltages (where they saturate), so the average impedance is lower, which is more useful for this subject. I don't have a clue what it would take to make a loaded transmission line blow a bridgewire. It might be interesting to play with. It would take a lot of playing to build something practical, though. For this purpose, it's easier to take a primary energy storage medium (i.e., capacitor) and dump it into the load. Note you need this anyway, plus the transmission line has to store its energy for sharpening, so it's not really useful unless you absolutely need picosecond pulses. As for switching, SCRs are kind of slow, on the order of 10us risetime. For quarter shrinkers, they're just fast enough, not quite performing as well as a trigatron. IGBTs aren't as robust and tend to explode at the slightest avalanche (they don't tolerate excessive forward or reverse voltages). With SCR type latchup all but eliminated from modern devices, you won't get quite as much conduction: even at 20V gate, you may get into current limiting and end up dumping your capacitor into the IGBT instead of the load. Keep in mind it's very easy to get kiloamp discharges from a rather pedestrian capacitor -- it's NOT easy to conduct kiloamps through pedestrian semiconductors! Tim Seven Transistor Labs LLC http://seventransistorlabs.com/ Electronic Design, from Concept to Layout. Need engineering assistance? Drop me a message! quicksilver International Hazard Posts: 1820 Registered: 7-9-2005 Location: Inches from the keyboard.... Member Is Offline Mood: ~-=SWINGS=-~ Tim: You're right on the money! I asked a guy who is really into Tesla coils and he said the same thing. He said the best be is to find junk yards with old arc welders for many parts. My thinking was a cap bank from microwaves, keep the thing at 50/60Hz. & see if the switching could take it. I was told that's most likely too dangerous..... But I was told there were better ideas. He gave me this, which I am passing on. I think something like this could be done, especially with a good parts selection. The layout seems workable. Opinions? Attachment: inverter_transformer_design_and_core_selection.pdf (1.1MB) [Edited on 9-1-2010 by quicksilver] Contrabasso National Hazard Posts: 277 Registered: 2-4-2008 Member Is Offline Mood: No Mood Do you have a specific reason for looking to fire EBW? There are almost NO civil uses for them and the military uses come too close to the UN Non proliferation treaty for comfort (unless a room with bars and a hard bed is your style). A standard exploder for SFX or quarrying HE use will only put out about 20Joules maximum at around 250 - 600vDC there will be an electrolytic cap in there and a small DC -DC converter. Allow 2 seconds to charge each time. The basic circuit is cheap and easy to build, Making it work reliably in all situations and keeping it dry is much harder an adds a lot to the build costs. quicksilver International Hazard Posts: 1820 Registered: 7-9-2005 Location: Inches from the keyboard.... Member Is Offline Mood: ~-=SWINGS=-~ Quote: Originally posted by densest A Cockroft-Walton multiplier would probably get you another KV or so at some cost of complexity (multiple diodes and some high voltage ceramic capacitors). We were thinking along the same lines but I did some looking and a Croft Walton may not be able to handle the current - unless money was no object. 
Parts are a real challenge; even on an intellectual level of "dream-up" a EBW machine. I DO have a source for very serious resistors. All I can come up with for Capacitors would be Microwave-Oven (05/60Hz stuff that COULD handle close to an amp) and some Tesla coil caps that would not be able to handle the current. I thought of a pedestrian idea. but sometimes simple is where you need to be (cost?). Get a small gas driven motor, from a old chain saw, grass trimmer, etc - & put together an inverter for 12v to 110 (depending where you live; I'm in the USA) and simply use the Microwave transformer with two or three caps in parallel with a high quality circuit breaker. Box it up, make it safe and keep testing if it will stand up to multiple wire pops. Make two separate units in small boxes. One would be the gas-motor and inverter; the other would be the electronics to maintain safe distance of the gasoline from ANY spark. Awkward but it could work without a rechargeable battery pack or a vehicle. [Edited on 12-1-2010 by quicksilver] densest National Hazard Posts: 359 Registered: 1-10-2005 Location: in the lehr Member Is Offline Mood: slowly warming to strain point A couple of notes: Mitsubishi published a "how to use our IGBTs" document "powermos4_0.pdf" which has a number of useful instructions on how to make your huge IGBTs last. They specify peak current at 2X continuous current. So if you want 1000A for a millisecond or so, you need at least a 500A IGBT. They also specify drive circuitry, drive voltages, layout, and other crucial details. I'm not sure why people are discussing high power high voltage supplies. Unless you are trying to fire multiple times in a few minutes, a low current supply will charge even a very large capacitor. Diodes, etc. for 5000V are available for a dollar or two, new from a reputable supply house. Tesla is not the way to go for DC because it is difficult to rectify the pulsed high frequency AC. A voltage multiplier for 60Hz is easy to make and not expensive if you are only expecting 50 mA out of it. One for 1000Hz is not too bad. Finding high voltage fast recovery diodes is difficult and expensive which makes high frequency inverter supplies problematic, so bigger magnetics and lower frequencies are necessary. A neon sign transformer, an oil burner igniter transformer, etc. would give more voltage than one would probably need (5KV? 10KV? that's pretty harsh stuff), and those work at mains frequencies so at most four relatively inexpensive diodes would be sufficient. I'd probably use a variac on the input to keep the voltage to "reasonable" levels. It would be instructive to check the self resonant frequency of one's capacitors to determine a pretty hard limit to rise time, as well as the equivalent series resistance for peak current.... Both are usually far worse than one might like. I'm curious about EBW because of the possibility of doing shockwave imprinting on hard metals, for instance, without having to make and detonate high explosives. Emphasis on -without- explosives, even though this is a chemistry forum. 12AX7 Post Harlot Posts: 4803 Registered: 8-3-2005 Location: oscillating Member Is Offline Mood: informative Quote: Originally posted by densest Tesla is not the way to go for DC because it is difficult to rectify the pulsed high frequency AC. It's tempting to put a vacuum tube diode at the top, heater power from a couple turns at the top, big volts DC at the anode (or ground the anode and insulate the other end of the coil if you want positive output). 
But that's no big deal; a flyback transformer running at 15-150 kHz will generate 40 kV at up to 2 mA easily (not that it lasts long at that power level). You can do it resonant (a la TC) or quasiresonant if you like, but an ordinary flyback works at those voltages.

Quote:
It would be instructive to check the self resonant frequency of one's capacitors to determine a pretty hard limit to rise time, as well as the equivalent series resistance for peak current.... Both are usually far worse than one might like.

This is the main reason for high voltage: low capacitance means a high SRF, and a relatively high ratio of sqrt(L/C) means you can couple it to a standard 50 ohm transmission line. There are other techniques to make stupid fast, stupid tall pulses on transmission lines (e.g., diode step recovery), but the short pulse width means the energy is small.

Tim
Seven Transistor Labs LLC http://seventransistorlabs.com/ Electronic Design, from Concept to Layout. Need engineering assistance? Drop me a message!
# How do you simplify (16x^2)/(4x^5)?

Jul 20, 2015

$\frac{16 {x}^{2}}{4 {x}^{5}} = \frac{4}{{x}^{3}}$

#### Explanation:

$\frac{16 {x}^{2}}{4 {x}^{5}}$

Simplify $\frac{16}{4}$ to $4$.

$\frac{4 {x}^{2}}{{x}^{5}}$

Apply the exponent rule ${a}^{m} / {a}^{n} = {a}^{m - n}$.

$\frac{4 {x}^{2}}{{x}^{5}} = 4 {x}^{2 - 5} = 4 {x}^{- 3}$

Apply the exponent rule ${a}^{- m} = \frac{1}{{a}^{m}}$.

$4 {x}^{- 3} = \frac{4}{{x}^{3}}$
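As a quick machine check of the algebra above (an editor's addition, not part of the original answer; it assumes SymPy is available), the same simplification can be reproduced symbolically:

```python
from sympy import symbols, simplify

x = symbols('x', nonzero=True)   # assume x != 0 so the division is defined
expr = (16 * x**2) / (4 * x**5)
print(simplify(expr))            # prints 4/x**3, matching 4/x^3 above
```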
# Knowledge Bank

## University Libraries and the Office of the Chief Information Officer

# THE 10$\mu$m SPECTRUM OF ACETALDEHYDE

Title: THE 10$\mu$m SPECTRUM OF ACETALDEHYDE Creators: Fraser, G. T.; Pate, Brooks H.; Tretyakov, M. Yu. Issue Date: 1992 Abstract: The 10$\mu$m in-plane methyl rock normal mode of acetaldehyde has been measured using a $CO_{2}$ (or $N_{2}O$) laser side band system and the EROS spectrometer. Using both $CO_{2}$ and $N_{2}O$ laser lines we have approximately 80% coverage of the spectrum from $912-9 cm^{-1}$. The beam is formed in a weak He expansion which results in population in both $v=0$ and $v=1$ of the torsion. This study is aimed at understanding the vibrational interactions in a spectroscopically well-characterized molecule containing an internal methyl rotor. The spectrum is found to be quite dense and extensively perturbed. The possible interactions with the bath states and the coupling strengths will be discussed. URI: http://hdl.handle.net/1811/12614 Other Identifiers: 1992-FA-03
# Which excerpt from "My Heart Is Bursting" most contributes to the cultural view that freedom is essential

###### Question:

Which excerpt from "My Heart Is Bursting" most contributes to the cultural view that freedom is essential to happiness?
Poster: No-Regret Learning in Partially-Informed Auctions

Wenshuo Guo · Michael Jordan · Ellen Vitercik

Thu Jul 21 03:00 PM -- 05:00 PM (PDT) @ Hall E #1206

Auctions with partially-revealed information about items are broadly employed in real-world applications, but the underlying mechanisms have limited theoretical support. In this work, we study a machine learning formulation of these types of mechanisms, presenting algorithms that are no-regret from the buyer's perspective. Specifically, a buyer who wishes to maximize his utility interacts repeatedly with a platform over a series of $T$ rounds. In each round, a new item is drawn from an unknown distribution and the platform publishes a price together with incomplete, "masked" information about the item. The buyer then decides whether to purchase the item. We formalize this problem as an online learning task where the goal is to have low regret with respect to a myopic oracle that has perfect knowledge of the distribution over items and the seller's masking function. When the distribution over items is known to the buyer and the mask is a SimHash function mapping $\mathbb{R}^d$ to $\{0,1\}^{\ell}$, our algorithm has regret $\tilde{\mathcal{O}}((Td\ell)^{1/2})$. In a fully agnostic setting when the mask is an arbitrary function mapping to a set of size $n$ and the prices are stochastic, our algorithm has regret $\tilde{\mathcal{O}}((Tn)^{1/2})$.
# Well ordering of reals from linear ordering of power set of reals Working without the axiom of choice (ZF or even ZF + DC), can we show the existence of a well ordering of the real line $\mathbb{R}$ assuming that there is a linear ordering of $\mathcal{P}(\mathbb{R})$? I know that we can construct a linear ordering of $\mathcal{P}(\mathbb{R})$ using a well ordering of $\mathbb{R}$. But I don't see how to prove the converse if it is possible at all.
# Question #8e1f1

Sep 12, 2015

To determine the shape you need to start by drawing the Lewis dot structure, then count the number of electron domains and determine the type; for ${\text{NO}}_{2}$ it is $A {X}_{2} E$, which corresponds to a bent shape. Memorize the shapes for the common types.

#### Explanation:

When you draw the Lewis dot structure of ${\text{NO}}_{2}$, you have 17 valence electrons to work with, and you end up with $\text{N}$ in the middle, double-bonded to oxygen on one side. The $\text{N}$ atom is also single-bonded to the other oxygen atom, with one electron left over, which sits on nitrogen (this is one resonance structure). Basically, using $A X E$ notation, where $A$ is the center atom, there are two bonding domains represented by $X$ and one nonbonding domain given by $E$, so the shorthand is $A {X}_{2} E$. Every molecule can be broken down into an $A X E$ type, so ${\text{H}}_{2} \text{O}$ is $A {X}_{2} {E}_{2}$, because there are two bonds and two lone pairs. Then you just need to memorize the structures for the different types. It is easiest to start with the structures with no lone pairs: $A {X}_{2}$ is linear, $A {X}_{3}$ is trigonal planar, $A {X}_{4}$ is tetrahedral, etc.; lone pairs slightly distort the geometry because their repulsion is different. Keep in mind that the structure the molecule takes is the one that minimizes the amount of electron-electron repulsion. A table with all the possible structures is given here: http://chemistry.ncssm.edu/labs/molgeom/
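To make the counting step concrete, here is a small illustrative sketch (the lookup table and function name are the editor's own and cover only a few common AXE types):

```python
# Minimal AXE-notation lookup; illustrative only, far from exhaustive.
AXE_SHAPES = {
    "AX2":   "linear",
    "AX3":   "trigonal planar",
    "AX2E":  "bent (trigonal planar electron geometry)",
    "AX4":   "tetrahedral",
    "AX3E":  "trigonal pyramidal",
    "AX2E2": "bent (tetrahedral electron geometry)",
}

def shape(bonding_domains, nonbonding_domains):
    """Build the AXE label from the two domain counts and look up the shape."""
    label = "AX%d" % bonding_domains
    if nonbonding_domains == 1:
        label += "E"
    elif nonbonding_domains > 1:
        label += "E%d" % nonbonding_domains
    return AXE_SHAPES.get(label, "not in this small table")

print(shape(2, 1))   # NO2: two bonding domains plus one nonbonding domain -> bent
print(shape(2, 2))   # H2O -> bent
```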
# I know the lower limit of an integration as well as the integral. How to find the upper limit of the integration? I used the following code. NV[x1_, x2_] := NIntegrate[3 x^2, {x, x1, x2}] FindRoot[NV[0, t] == 3, {t, .001}, PrecisionGoal -> 20] Output: NIntegrate::nlim: x = t is not a valid limit of integration. >> {t -> 1.44225} The output is correct. However, I want to remove the comment: "x = t is not a valid limit of integration. >>" from the output. • Use NV[x1_?NumericQ, x2_?NumericQ] := ... instead of NV[x1_,x2_]. – kglr Feb 24 '15 at 11:35 • You can take a look at Quiet if you have faith in the solution. – Yves Klett Feb 24 '15 at 11:38 Nasser gave a fine and simple enough approach. I will propose a different, not that it is better, but just to give another view. So, if in general you have an integral y[t]==Integrate[f[x],{x,0,t}], it is equivalent to the differential equation: y'[t]=f[t] with the initial condition y[0]==0. You might solve it numerically and then find the solution of the equation y[t]==J, where Jis the integral value you are assumed to know. Its solution yields the upper integral limit being looked for. Let f[x]=x^2. Then the solution of the differential equation is nds = NDSolve[{y'[t] == t^2, y[0] == 0}, y, {t, 0, 3}][[1, 1]]; and the upper integral limit is given by the following: FindRoot[(y[t] /. nds) == 1, {t, 0.01}] // Quiet (* {t -> 1.44225} *) I do not see any advantage with respect to the approach of Nasser, but it was interesting to me that one may operate also this way. Have fun! If a highly accurate solution is desired, I would follow kguler's comment; otherwise, here is a variation on Alexei Boulbitch's approach: sol = NDSolveValue[{y'[x] == 3 x^2, y[0] == 0, WhenEvent[y[x] == 3, "StopIntegration"]}, y, {x, 0, 3}]; sol["Domain"][[1, -1]] (* 1.44225 *) Or somewhat more directly: Catch @ NDSolve[{y'[x] == 3 x^2, y[0] == 0, WhenEvent[y[x] == 3, Throw[x]]}, {}, {x, 0, 3}] (* 1.44225 *) • Please comment what is "Domain". I like your approach very much. – Alexei Boulbitch Feb 25 '15 at 9:34 • @AlexeiBoulbitch Please see this answer for information about "Domain" and other properties of InterpolatingFunction. Briefly, "Domain" returns a list of intervals, one for each input to the I-Fn specifying its domain. – Michael E2 Feb 25 '15 at 11:18 You can't do numerical integration with upper limit being symbol. Just use Integrate instead ClearAll[t, x, x1, x2, nv] nv[x1_, x2_] := Integrate[3 x^2, {x, x1, x2}] FindRoot[nv[0, t] == 3, {t, .001}] • But i have some integration where I must have to use NIntegrate. Otherwise it cant be solved – Abhijit Saha Feb 24 '15 at 12:57
# Focal Length And Magnification Of A Thin Lens Lab Report

The purpose of this experiment is to determine the focal length of a thin lens and to measure the magnification. Rays of light from an object very far away from a thin lens are approximately parallel, so a quick estimate of the focal length can be made by moving the screen back and forth until the image of a distant object is as clear as possible and then measuring the distance between the lens and the screen.

Theory for a thin lens: for a certain combination of object and image distances, the thin lens equation relates the object distance $$d_o$$ (also written $$p$$), the image distance $$d_i$$ (also written $$q$$), and the focal length $$f$$:

$\frac{1}{d_o} + \frac{1}{d_i} = \frac{1}{f}$

If we rearrange this as $$\frac{1}{q} = -\frac{1}{p} + \frac{1}{f}$$, notice that it has the same form as the equation of a straight line, $$y = mx + b$$, which is how the focal length is extracted from the measured object and image distances. The magnification of a thin lens is given by the ratio of the image distance to the object distance. These distances are measured from the lens.

The lens maker's equation gives the focal length $$f$$ of a lens in terms of its refractive index and radii of curvature. For a lens in air, $$n_1=1.0$$ and $$n_2≡n$$, so the lens maker's equation reduces to

$\dfrac{1}{f}=(n-1) \left(\dfrac{1}{r_1}-\dfrac{1}{r_2}\right).$

Objectives of the experiment: measure the focal length of a converging lens, measure object and image distances, and calculate the magnification of the lens; determine the focal length of a spherical convex lens both by the "lens formula method" and by the "lens replacement method".
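As a small numerical illustration of the two relations above (an editor's sketch: the focal length and object distance are arbitrary example values, not data from the lab, and m = -d_i/d_o is one common sign convention):

```python
def image_distance(f, d_o):
    """Image distance from the thin-lens equation 1/d_o + 1/d_i = 1/f (same length units)."""
    return 1.0 / (1.0 / f - 1.0 / d_o)

def magnification(d_o, d_i):
    """Lateral magnification, m = -d_i/d_o in the usual sign convention."""
    return -d_i / d_o

f, d_o = 10.0, 25.0                # example: 10 cm focal length, object 25 cm from the lens
d_i = image_distance(f, d_o)       # about 16.7 cm: a real image on the far side of the lens
print(round(d_i, 2), round(magnification(d_o, d_i), 2))   # 16.67 -0.67 (inverted, reduced)
```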
## College Algebra (10th Edition)

Substitute the given value to determine whether it is a solution of $3x-8=0$:
$3\left(\frac{3}{8}\right)-8\overset{?}{=}0$
$\frac{9}{8}-8=-\frac{55}{8}\neq 0$
The equation is not satisfied, so the given value is not a solution (false).
# Probabilistically Checkable Proofs

## Affiliation:
Microsoft Research New England, Cambridge, United States of America

## Time:
Thursday, 11 June 2015, 16:00 to 17:00

## Venue:
• AG-66 (Lecture Theatre)

Abstract: As advances in mathematics continue at the current rate, editors of mathematical journals increasingly face the challenge of reviewing increasingly long, and often wrong, "proofs" of classical conjectures. Often, even when it is a good guess that a given submission is erroneous, it takes excessive amounts of effort on the editor/reviewer's part to find a specific error one can point to. Most reviewers assume this is an inevitable consequence of the notion of verifying submissions, and expect the complexity of the verification procedure to grow with the length of the submission. In this talk I will describe how research, mostly in the 20th century, has allowed us to think about theorems and proofs formally, and how this formal thinking has paved the way for radically easy ways of verifying proofs. In particular I will introduce the "PCP" (for Probabilistically Checkable Proof) format of writing proofs. This is a format that allows for perfectly valid proofs of correct theorems, while any purported proof of an incorrect assertion will be "evidently wrong", so that a reviewer checking the consistency of only a very few parts of the proof (probabilistically) will detect an error. The existence of PCPs may seem purely fictional, but research since the 1990s has shown that such PCPs do exist. In this talk I will explain the concept of a PCP. After giving some background on why theoretical computer scientists are interested in PCPs, and in theorems and proofs in general, I will attempt to give an idea of how such PCP formats and the associated verification methods are designed.

Brief Bio: Prof. Madhu Sudan has been a Principal Researcher at Microsoft Research New England since 2009. He received his B.Tech. in Computer Science and Engineering from IIT-Delhi in 1987 and his Ph.D. from the University of California, Berkeley, in 1992. From 1992 to 1997, he was at the IBM Thomas J. Watson Research Center, after which he moved to MIT as a faculty member in the Electrical Engineering and Computer Science (EECS) department and a member of their Computer Science and Artificial Intelligence Laboratory (CSAIL). Sudan's current research interests lie in the interface of computation and communication, and in particular in the role of errors in this interface. He has made important contributions to theoretical computer science in areas such as probabilistically checkable proofs, non-approximability of optimization problems, list decoding, and error-correcting codes. During his distinguished career, he has won many awards, including the ACM Distinguished Doctoral Dissertation Award (1993), the Gödel Prize (2001), and the Rolf Nevanlinna Prize (2002). He is a fellow of the ACM and the American Mathematical Society. Prof. Madhu Sudan was awarded the Infosys Prize in Mathematical Sciences in 2014.

-----------------------------------------

Infosys Science Foundation lecture by Prof. Madhu Sudan, Principal Researcher, Microsoft Research New England, and Adjunct Professor, Electrical Engineering and Computer Science department and Computer Science and Artificial Intelligence Laboratory, MIT, Cambridge.
# 4.6: Encoding Information in the Frequency Domain

Learning Objectives

• Using a finite Fourier series to represent the encoding of information in time T.

To emphasize the fact that every periodic signal has both a time and frequency domain representation, we can exploit both to encode information into a signal. Refer to the Fundamental Model of Communication. We have an information source, and want to construct a transmitter that produces a signal x(t). For the source, let's assume we have information to encode every T seconds. For example, we want to represent typed letters produced by an extremely good typist (a key is struck every T seconds). Let's consider the complex Fourier series formula in the light of trying to encode information.

$x(t)=\sum_{k=-K}^{K}c_{k}e^{i\frac{2\pi kt}{T}}$

We use a finite sum here merely for simplicity (fewer parameters to determine). An important aspect of the spectrum is that each frequency component $c_{k}$ can be manipulated separately: Instead of finding the Fourier spectrum from a time-domain specification, let's construct it in the frequency domain by selecting the $c_{k}$ according to some rule that relates coefficient values to the alphabet. In defining this rule, we want to always create a real-valued signal x(t). Because of the Fourier spectrum's properties, the spectrum must have conjugate symmetry. This requirement means that we can only assign positive-indexed coefficients (positive frequencies), with negative-indexed ones equaling the complex conjugate of the corresponding positive-indexed ones.

Assume we have N letters to encode: $\{a_{1},...,a_{N}\}$. One simple encoding rule could be to make a single Fourier coefficient be non-zero and all others zero for each letter. For example, if $a_{n}$ occurs, we make:

$c_{n}=1\; and\; c_{k}=0,\; k\neq n$

In this way, the nth harmonic of the frequency 1/T is used to represent a letter. Note that the bandwidth—the range of frequencies required for the encoding—equals N/T. Another possibility is to consider the binary representation of the letter's index. For example, if the letter $a_{13}$ occurs, converting 13 to its base 2 representation, we have $13=1101_{2}$. We can use the pattern of zeros and ones to represent directly which Fourier coefficients we "turn on" (set equal to one) and which we "turn off".

Exercise $$\PageIndex{1}$$

Compare the bandwidth required for the direct encoding scheme (one nonzero Fourier coefficient for each letter) to the binary number scheme. Compare the bandwidths for a 128-letter alphabet. Since both schemes represent information without loss -- we can determine the typed letter uniquely from the signal's spectrum -- both are viable. Which makes more efficient use of bandwidth and thus might be preferred?

Solution

N signals directly encoded require a bandwidth of N/T. Using a binary representation, we need:

$\frac{\log_{2}N}{T}$

For N=128, the binary-encoding scheme has a factor of $\frac{7}{128}\approx 0.055$ smaller bandwidth. Clearly, binary encoding is superior.

Exercise $$\PageIndex{2}$$

Can you think of an information-encoding scheme that makes even more efficient use of the spectrum? In particular, can we use only one Fourier coefficient to represent N letters uniquely?

Solution

We can use N different amplitude values at only one frequency to represent the various letters.

We can create an encoding scheme in the frequency domain to represent an alphabet of letters. But, as this information-encoding scheme stands, we can represent one letter for all time.
However, we note that the Fourier coefficients depend only on the signal's characteristics over a single period. We could change the signal's spectrum every T seconds as each letter is typed. In this way, we turn spectral coefficients on and off as letters are typed, thereby encoding the entire typed document. For the receiver (see the Fundamental Model of Communication) to retrieve the typed letter, it would simply use the Fourier formula for the complex Fourier spectrum for each T-second interval to determine what each typed letter was. Fig. 4.6.1 shows such a signal in the time domain. In this example, only the third and fourth harmonics are used, as shown by the spectral magnitudes corresponding to each T-second interval plotted below the waveforms. Can you determine the phase of the harmonics from the waveform?

In this Fourier-series encoding scheme, we have used the fact that spectral coefficients can be independently specified and that they can be uniquely recovered from the time-domain signal over one "period." Do note that the signal representing the entire document is no longer periodic. By understanding the Fourier series' properties (in particular, that coefficients are determined only over a T-second interval), we can construct a communications system. This approach represents a simplification of how modern modems represent text that they transmit over telephone lines.

## Contributor

• ContribEEOpenStax
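To make the binary-coefficient scheme concrete, here is a minimal numerical sketch (mine, not from the OpenStax module). It assumes a 7-bit letter index, turns each 1-bit into a unit-amplitude harmonic (with the conjugate-symmetric negative-frequency partner, so the signal is real), synthesizes one T-second interval, and recovers the letter by re-computing the coefficients. The parameter values, thresholds, and function names are illustrative assumptions, not part of the text.

```python
import numpy as np

T = 1.0      # symbol interval in seconds (illustrative choice)
FS = 1000    # samples per interval
K = 7        # harmonics = bits per letter (128-letter alphabet)

def encode_letter(index):
    """Map a letter index (0..127) onto Fourier coefficients c_1..c_K, one bit each."""
    bits = [(index >> k) & 1 for k in range(K)]   # least significant bit drives harmonic 1
    t = np.arange(FS) * (T / FS)
    x = np.zeros(FS)
    for k, b in enumerate(bits, start=1):
        if b:
            # c_k = 1 together with c_{-k} = 1 (conjugate symmetry) gives a real 2*cos term
            x += 2 * np.cos(2 * np.pi * k * t / T)
    return x

def decode_letter(x):
    """Recover the index by estimating each c_k over one T-second interval."""
    t = np.arange(FS) * (T / FS)
    index = 0
    for k in range(1, K + 1):
        c_k = np.mean(x * np.exp(-2j * np.pi * k * t / T))  # finite-sum coefficient estimate
        if abs(c_k) > 0.5:                                   # coefficient "turned on"?
            index |= 1 << (k - 1)
    return index

signal = encode_letter(13)    # 13 = 1101 in base 2, so harmonics 1, 3 and 4 are "on"
print(decode_letter(signal))  # prints 13
```

Decoding a whole document would simply repeat `decode_letter` on each successive T-second block, mirroring the receiver described above.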
I moved to Austria about 3 months ago and since then I have written my PhD thesis. Currently it sits with my Prof for corrections.

I wrote my thesis in LaTeX hoping it would make things easier than using Word, which chugs horribly in large documents with dozens of figures, not to mention the pains of layout. I believe LaTeX is a great way to write a thesis, but there are a lot of hurdles I was unprepared for when starting. Here, I will outline solutions to most of them.

There are now online services for writing in LaTeX (Overleaf.com probably being the largest) but as I was expecting to write at times with no/limited internet access, I wanted to do things locally. I used TeXstudio for most of the work, occasionally writing in a text editor with LaTeX highlighting (Sublime Text v3). The basic template that I use came from this website, and while not perfect, it does make a lot of things easier than starting with a blank slate.

## General

I highly recommend learning a little about LaTeX before planning on spending weeks or months writing in it. It requires a certain level of comfort with writing in a markup style, and if this scares you then, given the importance of your thesis, LaTeX may not be for you.

Some things to be aware of: I will state certain packages that need to be included. These should be added above \begin{document} in your "main.tex" file if you're using the Masters/Doctoral thesis template above.

I find it useful to add comments all over my .tex files. Add a package? Comment to explain why you did and what it is for. This allows you to remove it later if you're not using it any more or if it was just for testing. Comments are easy, just add a % to the start of the line. If you are using TeXstudio, you can highlight text in bulk and Ctrl+t to comment out the whole block.

I also recommend writing in chapters and commenting out the chapters you aren't working on, i.e.

\include{Chapters/Chapter1} %intro
% \include{Chapters/Chapter2} %synergy

This is much faster than compiling the whole document every time you want to check a small layout change, which becomes especially important when you have more than a couple of chapters done and the density of figures/tables etc. increases.

## Images

This is a complicated and important topic. Most scientific work is centred around the figures that you make, and so it is really important that this is done right. Before you even consider any LaTeX code for images, I recommend you create your images as SVG files (Inkscape is a great free tool to do this), then export as PDF. I initially read that I should export images as .EPS files (encapsulated PostScript), however my microscopy images were then subjected to compression and artefacts became apparent, which was not acceptable. All software I used to create figures allowed for export as SVG or to Inkscape for manipulation (GraphPad Prism, ImageJ, FlowJo) and I can't imagine that there would be a case where this is not possible. If you have to use .EPS and those figures are not bitmap (i.e. microscopy), you can do so, but you will need to include the epstopdf package with \usepackage{epstopdf} above \begin{document} in your "main.tex" file.

Make your images in Inkscape and save as SVG (for future editing) but export as PDF. You can usually edit imported text here (i.e. the axes of graphs) and I would re-do these if necessary as not all symbols will transfer well to Inkscape. It also allows you to unify your fonts and text sizes. I would also label the figure parts at this stage: a) b) c) etc.
Pick a font size (probably 12) and stick with it for all figures. This helps later.

You can click and drag the PDF file of your image into TeXstudio. There you will be presented with a dialogue box where you can enter details that just end up in the inserted code. Once you are familiar with it, it's easier just to write it yourself in my opinion. Let's break it down.

\begin{figure} starts a figure environment; it isn't strictly necessary for inserting an image, but it helps with formatting.

\centering is pretty obvious - it lines up the figure to the centre of the writing environment of the page (i.e. not including the margins).

\includegraphics[width=0.7\linewidth]{filelocation} sets the embedded image to 70% of the page width. Note that you don't need to include .pdf at the end of the file name. LaTeX realises this and deals with it for you. In the case of imported .eps files, they get converted to PDF here.

\caption is self-explanatory but can be combined with other markup, for example to embolden a few words at the beginning.

It is important to put a unique label on images that is also descriptive. In the text you can then reference it with \ref{fig:label}, which will print just the figure number, or \autoref{fig:label}, which will print "Figure 4.1" etc.

I would recommend adding a couple of notes using comments at the end of every figure. 1) %source: identify the source of your data, as it's not always easy years after acquisition to go back and trace where it came from, especially after removing some of the text to make the figure look cleaner. 2) %todo: this syntax adds a highlight by default in TeXstudio and makes it easier to spot where you want to make changes later.

\begin{figure}
\centering
\includegraphics[width=0.7\linewidth]{Figures/bonemarrowdevelopment}
\caption{Bone marrow development. As humans age, haematopoietic cells are gradually replaced with adipose tissue, changing the colour of bone marrow from red to yellow.\cite{Li2018}}
\label{fig:bonemarrowdevelopment}
\end{figure}

## Future posts

Other articles to come when I find time to write them:

Tables: easy ways to create them using Excel, including tables that cross multiple pages.

Referencing: using Mendeley to CWYW (cite while you write)!

Greek: your alpha beta gamma deltas.

Chemical names: I'll explore some packages for easily writing out chemical structures. Note I did it all manually, but I do not recommend this.
# I present a communication game - Could you please make comments on my assumptions, notation and properties that I may not have considered yet?

I consider the following communication game. Suppose that we have $$I$$ players and each one of them learns a private signal $$s_i=(s_{i,1},s_{i,2},...,s_{i,k})$$, where $$k$$ is finite; also, every player $$i$$ learns a private code name $$l_i$$, which serves as her "name" to refer to in the game. Imagine that $$l_i$$ is like a unique id (positive integer) for every player $$i$$ such that $$l=(l_1,l_2,\cdots,l_i)\in L$$ is the profile of ids and $$L$$ is of the same dimension as $$I$$. The game is played in three rounds that consist of two talk phases and one play phase.

1. In the first round the players talk, sending the following message $$(l_i,s_{i,1},s_{i,2},...,s_{i,k})=(l_i,m_i)\in \mathbb{Z}_p^{k+1}$$, which is interpreted in the sense that "I am player $$l_i$$ and I report that my private information is $$s_{i,1}, s_{i,2}, ..., s_{i,k}$$". The message space $$m_i\in \mathbb{Z}_p^k$$ is the space of truthful reporting signals. Note that $$\mathbb{Z}_p$$ is the set of integers modulo $$p$$, where $$p$$ is a large prime positive integer.

2. In the second round they talk again, and after gathering all the information they respond back to each other with a mixed action to play, modeled in the following sense. Each player $$i$$ will learn the whole profile $$m$$ of messages and will give back the message $$r_i=\pi_i\circ(1_{L}\times1_{M}\times g_i)$$ such that $$\pi_i$$ is a permutation, $$1_L$$ is the identity on $$L$$, $$1_M$$ the identity on $$M$$ and $$1_{L}\times1_{M}\times g_i:L\times M\to L\times M\times \Delta(A^i): (l,m)\to(l,m,g_i(l,m))$$. We denote by $$\Delta(A^i)$$ the profile of mixed actions sent by player $$i$$ to the rest of the players. The permutation $$\pi_i$$ serves as an encryption so that every $$j$$ will learn her own coordinate; we define this in the sequel. To be more precise,

$$\pi_i=\begin{pmatrix}(l_1,m_1) & (l_2,m_2) & \cdots & (l_j,m_j) & \cdots & (l_I,m_I)\\ g_1(l_1,m_1) & g_2(l_2,m_2) & \cdots & g_j(l_j,m_j) & \cdots & g_I(l_I,m_I)\end{pmatrix}$$

The above representation shows that every $$(l_j,m_j)$$ is associated with exactly one $$g_i(l_j,m_j)$$, which is a mixed action that is instructed by player $$i$$ as a recommendation to player $$j$$. Also, $$g_i(l,m)$$ is a vector of dimension $$|I|$$, assuming that the recommendation $$g_i(l_i,m_i)$$ is the message that she sends to herself.

3. In the third phase the players play their recommended strategies based on an honest majority, since they are truthful in the beginning of the game. Namely, every player $$j$$ will play the recommendation according to the decision mapping $$\tau_i:L\times M\times \Delta(A^j)\to \Delta(A_i)$$, where by $$\Delta(A_i)$$ we denote the space of mixed actions of player $$i$$: $$\tau_i(l,m,g_j)=pr_i\circ\pi^{-1}_j$$. In order to play the recommended strategy, player $$i$$ must receive the same recommendation from the majority (excluding herself, although the message she sends to herself helps as a verification scheme).

This is a proposed game that extends the game of Universal Mechanisms (1990). More precisely, I present a small extension of the proof. My worries are the following.
1. I define $$g_i$$ as a function that takes as input a vector $$(l,m)\in\mathbb{Z}_p^{I(k+1)}$$, where $$p$$ is a large prime positive integer, and $$g_i\in\Delta(A^i)$$ is a profile of mixed actions (a vector) of dimension $$I$$, where every player $$j$$ will learn her own coordinate from $$i$$ and player $$i$$ can also send a message to herself (namely, $$g_i$$ is of dimension $$|I|$$). What assumptions do I need so that $$g_i$$ is injective? I think I need to examine two things here: first, linearity of the profile $$g_i(l,m)\in\Delta(A^i)$$ of mixed strategies proposed by player $$i$$, and second, that it is 1-1.

2. I just want an opinion on whether any of you see something missing in my assumptions or in the formalization in general.

P.S. I know that it is not as detailed as it should be, but it is the best I can do right now.

P.S. For reasons of formalization, I assume that the id name of every player $$i$$ is $$l_i$$.

• Having checked only 1. at the top: What is $p$? What is $\mathbb{Z}_p$? The last entry of $l$ should be $l_I$. If $(l_i,m_i)\in\mathbb{Z}_p^{k+1}$, then $l_i\in\mathbb{Z}_p$ and $L=\mathbb{Z}_p^I$, so why introduce $L$ at all? Do you assume $m_i=s_i$ for all players? If yes, then why give them different names? If no, then you shouldn't write down the same profile for both. Mar 31 at 7:24

• $\underbrace{\mathbb{Z}_{p}^{k+1}\times\mathbb{Z}_p^{k+1}\times\cdots\mathbb{Z}_p^{k+1}}_{\text{$I$ times}} = (\mathbb{Z}_{p}^{k+1})^I = \mathbb{Z}_{p}^{I(k+1)}$, right? Mar 31 at 7:27

• @VARulle For any prime $p$, the set $\mathbb{Z}_p$ with addition mod p, multiplication mod p, and congruence mod p, is a field. Mar 31 at 7:47

• @VARulle for your second comment yes you are right. Mar 31 at 7:48

• I re-edited the $1$. question and I think that in order for my solution to be well defined I need to check two things: the linearity of $g_i \in \Delta(A^i)$, which is a profile (namely a vector) of mixed strategies, and that it is a $1-1$ mapping. In essence $g_i:\mathbb{Z}_p^{I(k+1)}\to \Delta(A^i)$ where $\Delta(A^i)$ is a profile of mixed strategies. Note again that the exponent i refers to the player who sends the recommendation, namely $A^i=A_1^i\times A_2^i\times\cdots\times A_I^i=\Pi_{j=1}^IA_j^i$, which differs from $A_i$. Mar 31 at 11:52
# Re: Encodings, Non-T1 symbols, and bitmapped fonts... • To: [email protected] • Subject: Re: Encodings, Non-T1 symbols, and bitmapped fonts... • From: [email protected] (Alan Jeffrey) • Date: Mon, 19 Jun 95 10:31 BST • CC: [email protected] Replying to some recent postings... Mellisa said: >It'd be nice if those on the PostScript >side could liaise with J"org Knappen (if they aren't already) so that >their miscellaneous symbol virtual font might have the same encoding as >his miscellaneous symbols font for the DC fonts This has (sort of) already happened. The sort of' is because I've been bogged down with the administrivia of teaching, so I haven't had time to look at the proposal. It is very very very very important that the MF and VF implementations of the TS fonts use the same encoding---otherwise we're back in the state we were 4 years ago when PS and MF fonts used different encodings and trying to combine them was almost impossible. >[Chris Rowley says that >LaTeX2e actually can't support more than one encoding, but this seems >to contradict my personal experience, where I've used T1 and OT1 in the >same document, and T1 and 8r in the same document. Perhaps he can clarify >what he means here.] This is a feature' of TeX. \begin{technobabble} When TeX is hyphenating a paragraph, it uses the uc/lc table to determine what constitutes a word'. It is therefore important that the uc/lc table matches the encodings used in the para. Unfortunately, the uc/lc table that counts is the one in force at the end of the para (unlike settings to \language, \lefthyphenmin and \righthyphenmin, which are stored in an appropriate whatsit). This means that if you have some Greek in a German paragraph, that the Greek will be hyphenated using the German uc/lc table. This is not good. The upshot of this is that it is very difficult to use more than one encoding in a document (eg 8r and T1) unless they share a uc/lc table. \end{technobabble} >Finally, some points of confusuion. Alan didn't think there was a .enc >file for 8r, yet there is one in the distribution. Sorry, if I said .enc, I meant *enc.def. >And to conclude, I was a little lost >by Sebastian's "you are on your own with using 8r as an encoding", since >which says that this is supported. That para should go then. The L3 team is *not* supporting 8r as anything other than a raw font encoding for VFs to point at. Drahoslav said: >One often hears the phrase despite all of its problems' about >T1 encoding; other than not having misc. symbols (which aren't >_letters_ anyway) *what* are the problems? Lots of little things... almost no fonts have a <perthousandzero> glyph; T1 has a slot for the Sami <ng> and <Ng> glyphs but not (so I am told) the other Sami glyphs; the Turkish letter \.\i doesn't quite come out as expected in small caps; the inclusion of <section> but not <paragraph>; but on the whole it's as good a compromise as we're likely to get, and it's too late to change it now. >If T1 is really going to be accepted >as a standard on par with ISOLatin-x, It isn't. Latin-1 and -2 have the ISO sitting behind them. T1 only has the TeX community. >Still, a few letters, particularly the ones which are >not in 8r directly nor have a PCC description, look rather out >of kilter compared to well typeset books, and could be fixed-up >without resorting to design sizes' or customizing for every >font by a few (more) tweeks to latin.mtx. 
Well, if you have any suggestions, mail them to [email protected], but there's some glyphs like <aogonek> or <Aogonek> where there's no way to automatically place the accent---it has to be done by hand (eg with a PCC description in the AFM file). Yannis said: >I would like to remind you that I'm working on UC1, UC2, UC3, UC4, UC5: >five encodings which cover Unicode/ISO10646-1 + the necessary glyphs for >TeX output. I would be glad if the "LaTeX2e folks" leave me the possibility >of integrating them into LaTeX2e on Omega. The support for encodings other than OT1 in LaTeX is getting better, and is one of the areas we're still working on. Our current plans are for the LaTeX team to support OT1 and T1 directly, and to encourage other people to look at supporting other encodings (eg Cyrillic / Greek / IPA etc). We want to encourage 3rd party developers to use the font encoding commands documented in fntguide.tex and ltoutenc.dtx---these have been pretty stable for over a year now, with any recent changes being upwardly compatible. As far as Omega is concerned, providing *enc.def files for the UC* fonts is eminently sensible (my only slight worry is about the encoding names, but that's fairly trivial). If you want to make them the default encoding, then you should change the format name (eg to OmegaLaTeX) since LaTeX is distributed under the same change the file, change the filename' conditions as TeX. Phew... Alan. -- Alan Jeffrey Tel: +44 1273 678526 [email protected] School of Cognitive and Computing Sciences, Sussex Univ., Brighton BN1 9QH, UK
Mosc. Math. J. (table of contents)

• Towers of function fields over non-prime finite fields (Alp Bassa, Peter Beelen, Arnaldo Garcia, Henning Stichtenoth), p. 1
• On projections of smooth and nodal plane curves (Yu. Burman, Serge Lvovski), p. 31
• New homogeneous ideals for current algebras: filtrations, fusion products and Pieri rules (Ghislain Fourier), p. 49
• Algebraic independence of multipliers of periodic orbits in the space of rational maps of the Riemann sphere (Igors Gorbovickis), p. 73
• A proof of a conjecture by Lötter on the roots of a supersingular polynomial and its application (Takehiro Hasegawa), p. 89
• Eigenfunctions for $2$-dimensional tori and for rectangles with Neumann boundary conditions (Thomas Hoffmann-Ostenhof), p. 101
• Hyperbolicity of renormalization of circle maps with a break-type singularity (K. Khanin, M. Yampolsky), p. 107
• Conformal spectrum and harmonic maps (Nikolai Nadirashvili, Yannick Sire), p. 123
• The complete formal normal form for the Bogdanov–Takens singularity (Ewa Stróżyna, Henryk Żołądek), p. 141
# Chapter 10 - Section 10.5 - Equations of Lines - Exercises: 1

$x+2y=6$

$y=-\frac{x}{2} + 3$

#### Work Step by Step

$8x + 16y = 48$

8, 16 and 48 are all multiples of 8, so we can divide the whole equation by 8.

$x + 2y = 6$

$2y = -x + 6$

$y=-\frac{x}{2} + 3$
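As a quick sanity check (not part of the original answer), the point where the rewritten line crosses the $y$-axis should also satisfy the original equation:

$$x=0 \;\Rightarrow\; y = -\tfrac{0}{2} + 3 = 3, \qquad 8(0) + 16(3) = 48. \checkmark$$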
# Sizes of kernels of homomorphisms

#### kalish

##### Member

I have a problem that I have been stuck on for two hours. I would like to check if I have made any progress or if I am just going in circles.

**Problem: Let $\alpha:G \rightarrow H, \beta:H \rightarrow K$ be group homomorphisms. Which is larger, $\ker(\beta\alpha)$ or $\ker(\alpha)$?**

**My work:**

$\ker(\alpha)<G, \ker(\beta\alpha)<G, |G|=|\ker\alpha|*|im\alpha|=|\ker\beta|*|im\beta|$

$|G|=|\ker\alpha|[G:\ker\alpha]=|\ker\beta|[G:\ker\beta]$

$|im\alpha|$ divides $|G|$ and $|H|$

$|im\beta\alpha|$ divides $|G|$ and $|K|$

$|ker(\beta\alpha)|=\frac{|G|}{|im(\beta\alpha)|}, |ker(\alpha)|=\frac{|G|}{|im(\alpha)|}$

$\frac{|\ker(\beta\alpha)|}{|\ker\alpha|} \leq \frac{|H|}{|im(\beta\alpha)|}$

With similar analysis, I get $|\ker(\beta\alpha)| \geq \frac{|G|}{|K|}$ and $|\ker(\alpha)| \geq \frac{|G|}{|H|}$. This seems like too much work with zero output.

#### Deveno

##### Well-known member MHB Math Scholar

If $g \in \text{ker}(\alpha)$ then it is immediate that:

$\beta\alpha(g) = \beta(\alpha(g)) = \beta(e_H) = e_K$,

so that: $g \in \text{ker}(\beta\alpha)$.

Hence it is obvious that all of $\text{ker}(\alpha)$ is contained within $\text{ker}(\beta\alpha)$, so the latter must be the larger set (it may be the same set if $\beta$ is an isomorphism).
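To see that the containment can be strict, here is a small worked example (mine, not from the thread):

$$\alpha=\mathrm{id}:\mathbb{Z}_2\to\mathbb{Z}_2, \qquad \beta:\mathbb{Z}_2\to\{e\},\ \beta(x)=e.$$

Then $\ker(\alpha)=\{0\}$ while $\ker(\beta\alpha)=\mathbb{Z}_2$, so $\ker(\alpha)\subsetneq\ker(\beta\alpha)$.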
# zbMATH — the first resource for mathematics Lifespan of solutions for a weakly coupled system of semilinear heat equations. (English) Zbl 1439.35181 Summary: We introduce a direct method to analyze the blow-up of solutions to systems of ordinary differential inequalities, and apply it to study the blow-up of solutions to a weakly coupled system of semilinear heat equations. In particular, we give upper and lower estimates of the lifespan of the solution in the subcritical case. ##### MSC: 35K05 Heat equation 35B44 Blow-up in context of PDEs Full Text: ##### References: [1] Y. Aoyagi, K. Tsutaya and Y. Yamauchi, Global existence of solutions for a reaction-diffusion system, Differential Integral Equations 20 (2007), 1321-1339. · Zbl 1212.35225 [2] J. M. Ball, Remarks on blow-up and nonexistence theorems for nonlinear evolution equations, Quart. J. Math. Oxford 28 (1977), 473-486. · Zbl 0377.35037 [3] P. Baras and M. Pierre, Critère d’existence de solutions positives pour des équations semi-linéaires non monotones, Ann. Inst. H. Poincaré Anal. Non Linéaire 2 (1985), 185-212. · Zbl 0599.35073 [4] M. Escobedo and M. A. Herrero, Boundedness and blow up for a semilinear reaction-diffusion system, J. Differential Equations 89 (1991), 176-202. · Zbl 0735.35013 [5] M. Escobedo and H. A. Levine, Critical blowup and global existence numbers for a weakly coupled system of reaction-diffusion equations, Arch. Rational Mech. Anal. 129 (1995), 47-100. · Zbl 0822.35068 [6] M. Fila and P. Quittner, The blow-up rate for a semilinear parabolic system, J. Math. Anal. Appl. 238 (1999), 468-476. · Zbl 0934.35062 [7] H. Fujita, On the blowing up of solutions of the Cauchy problem for $$u_t=\Delta u+u^{1+\alpha}$$, J. Fac. Sci. Univ. Tokyo Sec. I 13 (1966), 109-124. · Zbl 0163.34002 [8] K. Fujiwara, M. Ikeda and Y. Wakasugi, Blow-up of solutions for weakly coupled systems of complex Ginzburg-Landau equations, Electron. J. Differential Equations 2017 (2017), No. 196, 1-18. · Zbl 1370.35247 [9] K. Fujiwara, M. Ikeda and Y. Wakasugi, Estimates of lifespan and blow-up rates for the wave equation with a time-dependent damping and a power-type nonlinearity, to appear in Funkcial. Ekvac., arXiv:1609.01035v2. · Zbl 1426.35162 [10] K. Fujiwara and T. Ozawa, Finite time blowup of solutions to the nonlinear Schrödinger equation without gauge invariance, J. Math. Phys. 57 082103 (2016), 1-8. · Zbl 1348.35231 [11] K. Fujiwara and T. Ozawa, Lifespan of strong solutions to the periodic nonlinear Schrödinger equation without gauge invariance, J. Evol. Equ. 17 (2017), 1023-1030. · Zbl 1381.35160 [12] K. Hayakawa, On nonexistence of global solutions of some semilinear parabolic differential equations, Proc. Japan Acad. 49 (1973), 503-505. · Zbl 0281.35039 [13] K. Ishige, T. Kawakami and M. Sięrżzega, Supersolutions for a class of nonlinear parabolic systems, J. Differential Equations 260 (2016), 6084-6107. · Zbl 1338.35228 [14] E. Kamke, Zur Theorie der Systeme gewöhnlicher Differentialgleichungen. II, Acta Math. 58 (1932), 57-85. · JFM 58.0449.02 [15] K. Kobayashi, T. Sirao and H. Tanaka, On the growing up problem for semilinear heat equations, J. Math. Soc. Japan 29 (1977), 407-424. · Zbl 0353.35057 [16] E. Mitidieri and S. I. Pohozaev, Nonexistence of weak solutions for some degenerate elliptic and parabolic problems on $$\mathbb{R}^n$$, J. Evol. Equ., 1 (2001), 189-220. · Zbl 0988.35095 [17] K. Mochizuki, Blow-up, lifespan and large time behavior of solutions of a weakly coupled system of reaction diffusion equations, Adv. 
Math. Appl. Sci. 48, World Scientific (1998), 175-198. · Zbl 0932.35028 [18] K. Mochizuki and Q. Huang, Existence and behavior of solutions for a weakly coupled system of reaction-diffusion equations, Methods Appl. Anal. 5 (1998), 109-124. · Zbl 0913.35065 [19] K. Nishihara and Y. Wakasugi, Global existence of solutions for a weakly coupled system of semilinear damped wave equations, J. Differential Equations 259 (2015), 4172-4201. · Zbl 1327.35238 [20] T. Ogawa and H. Takeda, Non-existence of weak solutions to nonlinear damped wave equations in exterior domains, Nonlinear Anal. 70 (2009), 3696-3701. · Zbl 1196.35142 [21] J. Renclawowicz, Global existence and blow-up for a completely coupled Fujita type system, Appl. Math. 27 (2000), 203-218. · Zbl 0994.35055 [22] N. Umeda, Blow-up and large time behavior of solutions of a weakly coupled system of reaction-diffusion equations, Tsukuba J. Math. 27 (2003), 31-46. · Zbl 1035.35018 [23] M. Wang, Blow-up rate for a semilinear reaction diffusion system, Comput. Math. Appl. 44 (2002), 573-585. · Zbl 1030.35104 [24] F. B. Weissler, Existence and non-existence of global solutions for a semilinear heat equation, Israel J. Math. 38 (1981), 29-40. · Zbl 0476.35043 [25] Qi S. This reference list is based on information provided by the publisher or from digital mathematics libraries. Its items are heuristically matched to zbMATH identifiers and may contain data conversion errors. It attempts to reflect the references listed in the original paper as accurately as possible without claiming the completeness or perfect precision of the matching.
# Week 4 – Precalc 11

This week in precalc 11, we learned how to add and subtract radicals. The steps to do so are pretty easy. First, we simplify the radicals in the problem, and then we need to make sure the indices and the radicands are the same. If they are the same, you add or subtract the coefficients, and the radicands just stay the same. Just remember though, if the radicands are not the same, you can't add or subtract them together. An example that helped me understand this was:

$\sqrt {98}$ + $\sqrt {50}$ – $\sqrt {18}$

Simplify the terms: 7 $\sqrt {2}$ + 5 $\sqrt {2}$ – 3 $\sqrt {2}$

Add and subtract the coefficients, and keep the same radicand: 7 + 5 – 3 = 9, so the answer is 9 $\sqrt {2}$

All you have to remember is to always simplify the radicals, it will make your life way easier!

# Week 3 – Precalc 11

This week in precalc 11 we learned about the absolute value of a real number. The absolute value of a number a is the principal square root of its square; it is the distance from a to 0 on the number line. The absolute value is always positive. The sign used to represent it is | |. For example, let's say we're trying to find the absolute value of -39: | -39 | = 39, because the distance between -39 and 0 is 39 (the result has to be positive). Another example would be | 24 | = 24, which means that the distance between 24 and 0 is 24.

The absolute value bars do not work like parentheses or brackets. For example, say you want to simplify – | -25 |. Because the absolute value of negative 25 is positive 25, we end up with the negative of positive 25, which at the end gives us negative 25.

– | – 25 | = – ( +25 ) = -25

# Week 2 – Precalc 11

This week in precalc 11 we learned about geometric series. A geometric series is the sum of the terms of a geometric sequence, a series with a constant ratio between successive terms. For example, a geometric series would be 6 + 12 + 24 + 48 + . . .

The common ratio is the ratio between two consecutive numbers in a geometric sequence. To determine the common ratio, you can just divide each number by the number preceding it in the sequence (formula: r = a(n) / a(n – 1) ). We also learned the formula for determining the sum of the first n terms in any geometric series: $S_n = \frac{a(r^n - 1)}{r - 1}$

Here's an example on how to apply this

Find $S_{10}$ for the geometric series 80 + 60 + 45. . .

a = 80

r = 0.75

$S_{10}$ = 80 ( ( $0.75^{10}$ ) – 1) ÷ ( 0.75 – 1)

$S_{10}$ = 301.98

Another example       ↓

For the geometric series 3, 9, 27. . . 6561, determine how many terms it has and then calculate its sum.

r = 3

a = 3

$t_n$ = 6561

$t_n$ = a ( $r^{n-1}$ )

6561 = 3 ( $3^{n-1}$ )

2187 = $3^{n-1}$

$3^7$ = $3^{n-1}$

7 = n – 1

8 = n

$S_{8}$ = 3 ( $3^8$ – 1 ) ÷ ( 3 – 1)

$S_{8}$ = 9840

# Week 1 – My Arithmetic Sequence

5, 20, 35, 50, 65

To find the 50th term, use the formula: $t_n$ = $t_1$ + (n – 1) d.

20 – 5 = 15, 35 – 20 = 15, so d = +15

$t_{50}$ = 5 + (50 – 1) ⋅ 15

$t_{50}$ = 5 + (49) ⋅ 15

$t_{50}$ = 5 + 735

$t_{50}$ = 740

Find $S_{50}$ with the formula $S_n$ = $\frac{n}{2}$ ( $t_1$ + $t_n$ )

$S_{50}$ = $\frac{50}{2}$ ( 5 + 740)

$S_{50}$ = 25 (745)

$S_{50}$ = 18,625

# Week 1 – Precalc 11

One of the things I learned this week in precalc 11 was arithmetic sequences. An arithmetic sequence is when you have a list of numbers following a certain pattern, where each number increases (or decreases) by adding the same value each time. For example, 3, 6, 9, 12, 15… is an arithmetic sequence because each time it increases by 3. The common difference is the difference between two consecutive numbers in the arithmetic sequence.
The equation to find the common difference, also known as d, is d = t(n) – t(n – 1), where t means term, n is the position of a term in the sequence, and n – 1 is the position right before it. The common difference can be negative, which means that the sequence is decreasing, or it can be positive, which means that the sequence is increasing.

We also learned how to define the pattern in a sequence by using the formula: $a_n$ = $a_1$ + (n – 1) ⋅ d

Here's an example on how to apply this

Let's say we want to find the 15th term in the sequence 3, 6, 9, 12, 15… First, we start by identifying our clues to solve this problem. We know that the first term is 3, and we know that the common difference is 3 (because each time the number increases by 3; to prove it we simply subtract the first term from the second, and to verify we could also subtract the third term from the fourth). So now we just fill in the numbers in the formula and solve.

Here's another example. Let's say we want to find out how many terms are in this sequence: 4, 0, -4, -8, -12, …, -36. Like before, we start by identifying the clues we have. We know that our first term is 4, and we know that our last term is -36. We also know that our sequence is decreasing by 4 each time (d = -4). Now we fill in the numbers in the formula and solve.

I really enjoyed the first lesson in math because it was amazing learning how we can figure out the terms in any sequence by just using a formula instead of counting and adding until we get there.
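The filled-in calculations referred to above were originally shown as images; reconstructed from the formula $a_n = a_1 + (n-1)\cdot d$, they would run like this:

$$t_{15} = 3 + (15-1)\cdot 3 = 3 + 42 = 45$$

$$-36 = 4 + (n-1)\cdot(-4) \;\Rightarrow\; -40 = -4(n-1) \;\Rightarrow\; n-1 = 10 \;\Rightarrow\; n = 11$$

So the 15th term of the first sequence is 45, and the second sequence has 11 terms.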
Math Central Quandaries & Queries

Question from Sam, a student:

Kate is 16 yrs. younger than Tess. Nina is 8 yrs. older than Kate. The sum of their ages is 189. How old is each?

Hi Sam,

This is an algebra problem so I need a variable to stand for the age of one of these women. I want the variable to stand for the age of the youngest woman so that the ages of the other two will be the age of the youngest plus something. If I let the variable stand for the age of one of the other two then the age of the youngest will be that age minus something, and I would rather not have to deal with subtraction.

Read the question. Kate is younger than Tess and Nina is older than Kate, so Kate is the youngest. Thus I would start this way.

Let $K$ be Kate's age. Since Kate is 16 years younger than Tess, Tess' age is $K + 16.$ How old is Nina?

The sum of the three ages is $189.$ Solve for $K.$

Penny

Math Central is supported by the University of Regina and The Pacific Institute for the Mathematical Sciences.
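Penny's reply deliberately stops at the setup; for readers checking their own work, a minimal completion under her choice of variable looks like this:

$$K + (K+16) + (K+8) = 189 \;\Rightarrow\; 3K + 24 = 189 \;\Rightarrow\; K = 55,$$

so Kate is 55, Nina is $55+8=63$, and Tess is $55+16=71$; indeed $55+63+71=189$.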
# Math Help - Function Operations and Compositions

1. ## Function Operations and Compositions

The original expression is 11^(2/5) / 11^(4/5)

I actually worked it to the point I got: 1 / 121^(1/5)

Can you show me or tell me how to solve this to where there is no rational # on bottom?

2. Originally Posted by runner940

The original expression is 11^(2/5) / 11^(4/5)

I actually worked it to the point I got: 1 / 121^(1/5)

Can you show me or tell me how to solve this to where there is no rational # on bottom?

Multiply by $\frac{121^{\frac{4}{5}}}{121^{\frac{4}{5}}}$ (a cleverly disguised 1)

3. Originally Posted by runner940

The original [expression] is $\frac{11^{\frac{2}{5}}}{11^{\frac{4}{5}}}$. I actually worked it to the point I got: $\frac{1}{121^{\frac{1}{5}}}$. Can you show me or tell me how to solve this to where there is no rational # on bottom?

Try using exponent rules:

. . . . . $\frac{x^m}{x^n}\, =\, x^{m\, -\, n}$

In your case, this means that the original expression simplifies as $11^{-\frac{2}{5}}\, =\, \frac{1}{(11^2)^{\frac{1}{5}}}$, as you mention. Now you need to rationalize the radical denominator.

Hint: To be able to take the 121 out of the radical (the fifth root that is indicated by the one-fifth power), you need four more copies of 121 inside the root. So what should you multiply by?

Highlight the space below for a bigger hint.

$\huge\color{white}\frac{121^{\frac{4}{5}}}{121^{\frac{4}{5}}}$
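Carrying the hint through to the end (my own completion, not posted in the thread), the rationalised form works out to:

$$\frac{1}{121^{\frac{1}{5}}}\cdot\frac{121^{\frac{4}{5}}}{121^{\frac{4}{5}}}\;=\;\frac{121^{\frac{4}{5}}}{121^{\frac{5}{5}}}\;=\;\frac{121^{\frac{4}{5}}}{121}\;=\;\frac{11^{\frac{8}{5}}}{121}.$$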
# Math Help - Depths in different sized molds 1. ## Depths in different sized molds I am pouring liquid into molds. My original molds were 1.5"x1.5" and had a depth of .75" I poured 3mL into each mold, and the liquid did not come near the .75" depth--I don't know how deep the liquid was once it was poured in, maybe like 1/2 or 1/3 or something. I now have different sized molds, but I want to keep the depth of the liquid I pour the same as when I poured it into the smaller molds. How do I figure out the volume of liquid to add to the larger molds so that the depth of the liquid remains the same? Thanks! 2. Originally Posted by angicals I am pouring liquid into molds. My original molds were 1.5"x1.5" and had a depth of .75" I poured 3mL into each mold, and the liquid did not come near the .75" depth--I don't know how deep the liquid was once it was poured in, maybe like 1/2 or 1/3 or something. I now have different sized molds, but I want to keep the depth of the liquid I pour the same as when I poured it into the smaller molds. How do I figure out the volume of liquid to add to the larger molds so that the depth of the liquid remains the same? Thanks! you're mixing units ... use cm instead of inches because $1 \, cm^3 = 1 \, mL $ if all your molds have a square base, then the volume of liquid is $V = s^2h$ , where s = side length of the square and h is the height the liquid reaches.
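A worked instance of skeeter's formula, using the numbers from the question (the unit conversions are mine): the original mould has side $1.5\text{ in} \approx 3.81\text{ cm}$, so pouring $3\text{ mL}$ gives a depth of

$$h = \frac{V}{s^2} = \frac{3}{3.81^2} \approx 0.21\text{ cm} \;(\approx 0.08\text{ in}).$$

To match that depth in a larger square mould of side $S$ centimetres, pour roughly $V = 0.21 \cdot S^2$ millilitres.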
# Executive summary Here in Part 1, we will define a formula for the expected value (EV) of the EA Hotel, relative to donating to other EA organisations, called . We will then factor each of the elements of as much as we can, and present our best estimate for each of these factors, using data gathered from current residents. In Part 2, we will use these estimates as an anchor for defining a plausible range of values for each factor, and do a Monte Carlo simulation on them. We aim to show 1) that under a wide range of assumptions, the hotel is higher EV than the alternative and 2) a few showcased values for that will match different potential donors. # 1. What is the relative EV of the EA Hotel, rV? Since the hotel is a meta-charity that hosts people who are working on projects that range as broadly as EA itself, we will not try to calculate actual EV, as in QALYs or some other standardised measure. We will instead attempt to calculate relative EV, which is EV divided by counterfactual EV. Let us first establish the counterfactual. We will assume that most readers have a donation budget, which they already intend to spend on EA. We will also assume that the EA organisation they would otherwise fund is funding-constrained, and would spend these additional funds on hiring. We imagine a hypothetical philanthropist P, who has X to spend. This amount would allow either hires to be made ( being the cost of a marginal hire at the counterfactual EA org), or it would allow residents to live at the hotel (note that is low—it costs us <£6000/​year all in to host someone at the EA Hotel). Counterfactual EV would be the added value of these marginal hires, which is the impact of their work subtracted by the impact they would be making when not employed by an EA organisation. We call it . Hotel EV would be the added value of these additional residents, which we will call . We will assume that the amount of residents we can host scales linearly with the amount of funding we receive. Since economies of scale apply, and since our funding gap arguably wouldn’t be closed until well into the hundreds of thousands, this is a conservative assumption. Relative EV would be dividing the latter by the former. Simplifying, we get . In the rest of this post, we will be further elaborating and estimating all of these variables. ## 1.1 Making rV more intuitive In many of our estimates, we will defer to our intuition to estimate an answer. This is a necessary evil, for we are calculating the EV of a group of projects that is sufficiently heterogeneous to be beyond properly formalising. We found that the terms in our equation are easier to intuit if we define each one of them as relative to the counterfactual. For each of our variables, we define a relative version: # 2 Defining rVresident Remember that stands for the value of the work of one hotel resident, divided by the value of the work of one counterfactual EA hire. The value of the hotel comes from: • Allowing more people to do charity work • Allowing those people to do charity work more resourcefully, by giving them: 1. Time (because the hotel does their housekeeping, and because they don’t have to travel much for social activities) 2. Focus (by engineering an environment that boosts mental health and productivity) 3. 
A network (by living with ~20 residents and an array of visitors passing through) The question of EV of the hotel, per resident, can thus be factored as follows: 1) How resourceful does the hotel environment allow a resident to be, compared to a counterfactual EA hire? 2) If they were given the same resources as a counterfactual EA hire, how much value would the work of a hotel resident create? We specify as follows: • being the resources that a hotel resident could invest into their work, relative to a counterfactual EA hire • being the value generated by their work, after controlling for resources. ## 2.1 What is rW, the relative value of a project enabled by the EA Hotel? This variable has a very precise meaning, so allow us to describe it in detail. should capture: • The quality of the project that a hotel resident wants to carry out • The quality of the judgment of the resident in particular It should not capture the amount of resources that this particular hotel resident can spend on their work. Any difference in that doesn’t stem from their project, should stem from their general ability to carry out projects. To have a good intuition for , imagine that we would have a hotel resident, and put them in a “normalised” working environment. That is: they have their own house to keep, access to an office at a normal distance, a social life that requires travel, a professional network of an average amount of loose ties, etc. Then the impact of their particular project, divided by the impact of a marginal EA hire, would be . This is perhaps the crux of the whole EV calculation. We have found that formally estimating is nearly impossible to do properly, because it involves factoring it into all the complexity of the various projects that EA Hotel guests are doing. These projects change over time, are often highly explorative and/​or entrepreneurial and cover a broad range of cause areas including those for which no established metrics exist yet. Yet, we have an intuitive sense that most of these projects clearly meet EA standards, and aren’t necessarily less impactful than a marginal EA hire. Now EA prides itself for doing the measurements to determine whether a strategy for impact is actually worthwhile, but we feel that this risks neglecting the black box of intuition that often gives better answers. If you are considering the hotel as a recipient of your donation, we ask you to make your own intuitive estimate for . These are the steps: 1. Identify your counterfactual. Which charity would you donate to, if not the EA Hotel? 2. Imagine that your donation would be exactly enough to make the charity hire one additional employee. Intuit the amount of impact this employee would make. 3. Imagine that your donation would also allow exactly one additional guest to stay at the hotel. Look at our last post for reference. Intuit the amount of impact the work of this guest would make, if it were carried out by this guest with a normal level of motivation, in a regular office, with a network of normal size. 4. Now divide the former to the latter. This is your personal intuitive value for . You may be more comfortable using bounds instead of numbers. Bar negative outcomes, it is obvious that is more than 0.01. It is also obvious that is less than 100. Where would you put the bounds of , so that you are still 90% certain that is between them? You may be thinking: what about hotel guests who are studying, or skilling up in a new area? 
In these cases we could need to factor out a discount rate, , and a lead time in years, , to working on object level work of expected relative quality : Discount rates should be set at a level you think is reasonable given the Haste Consideration (note that the Haste Consideration also favours investing in more people wanting to transition to full time direct EA work, as the hotel is doing). Perhaps = 0.9, meaning a 10% yearly discount, is reasonable. Or maybe even =0.8 (20%) if you think the urgency of direct EA work is particularly high. might typically be between 0.5 and 2 years. ## 2.2 What is rR, the factor by which the productivity of EA Hotel residents is multiplied? Imagine a marginal EA hire would spend 30 focused hours on work. If a hotel resident finds the means to work 40 focused hours, due to less time spent on housekeeping and travel, and due to a motivating environment, that is an amount of added value equal to 33% of a marginal EA hire working full-time. With this example, would be 1.33. Since the hotel environment often causes a considerable increase in productivity, we want to add this element to our calculations. While residents report that they are able to do 2.2 times as much work at the hotel on average, this number does not map nicely to our analysis. For some residents, it compares their overall impact with the case in which they would self-fund. For others it takes into account that they’re doing different work. We are modelling as a factor on productivity that comes from the hotel as compared to a regular job, in which the employee has to organise their own life to a much larger extent. Some factors that go into are: • Freed up time because housework is largely done by the hotel, and because less time is spent on travel • Increased well-being, from an environment intentionally designed for it, and free access to counselling • An improved professional and social network, from living with 20 EA’s and frequent visitors We will generalise the first two into a variable we would like to call “personal productivity”, which is the amount of concentrated effort one can put into one’s day to day work. The latter will have an analysis on its own. We expect these factors to be roughly independent, meaning that they both feed into : being the amount of productive hours worked relative to an EA org hire, and being the value of one’s network relative to the counterfactual. 2.2.1 Estimating relative productivity, Note that . The numerator being the amount of productive hours a hotel resident can work, and the denominator being the amount of productive hours a marginal EA hire can work. Our hope is that a healthy and time-saving hotel environment causes an increase in the amount of focused hours worked. We noticed that our culture strongly inclines us towards productivity. Simply sitting in the living room compels one to work. We expect that access to an environment like this leads to an increase in hours worked. We asked residents to estimate their amounts of productive hours per week, and found an average of 40: We also asked how many productive hours they would have worked otherwise, and found an average of 29. While there is some reason to believe that the former will go up as we further develop the hotel, we will now use it as a conservative estimate for . While it may be tempting to use the latter as an estimate for , we don’t think this would be fair: 29 is the hours that our residents would otherwise work from home. 
Compared to a good office, working from home presumably lowers their productivity, so might be higher. It’s also worth considering that EA organisation employees are selected for conscientiousness, hard work and dedication at a high rate. So we conclude that , leaving the value of to our simulation in Part 2. 2.2.2 estimating network value, Having more like-minded people around gives you more opportunities. According to the social version of Metcalfe’s law, the value of a network of nodes scales as , or per node. We take this to mean that, for each person in a network each increment in size will increase their number of opportunities by one, but since their bandwidth is limited and interests don’t always align, they will only be able to capitalise on the equivalent value of opportunities. Observe that, in some abstract sense, all the value of the EA community can be stated as an accumulation of positive sum trades between people. Our law states that, if one’s network size changes from to , the value of their work is multiplied by . So we redefine as follows: being the professional network size of a hotel resident, and being the professional network size of a counterfactual EA hire. While for some, “professional network size” has an intuitive meaning, we found that this isn’t the case for everyone, so allow us to define that more precisely. What we mean is the set of people that you could easily interact with in a professional capacity, if incentives lined up that way. To approximate this number, take each person you know. Do the two of you have common knowledge about the rough outline of both of each other’s work? Then they’re in your professional network. Note that this means that people in a wildly different cause area, or outside of EA, can be in your professional network. We can’t estimate the value of , because that depends on the counterfactual of the donor, but we have asked our residents to estimate their professional network size: Note the logarithmic scale. ## 2.3 Conclusion for rVresident Putting it all together, we get . For each guest, we have gathered their answers for and and estimated their based on our intuition, given what we know about them. The data suggests that and are mostly uncorrelated (r = 0.15), and so are and (r = 0.04), but and are strongly correlated (r = 0.6). This may be owed to our possibly biased assumption that the most value is in building infrastructure, which naturally requires a larger network. We will assume that the same pattern of independencies will hold for hires. Simplifying further: We will calculate under a range of assumptions in Part 2. # 3. What are rVcresident and rVchire, the relative values of work done by people if they’re not housed or hired? Both of these can be given a similar structure to : , , … being the values of these variables in the counterfactual situation where the resident or hire is doing something else with their career. Properly estimating these variables will require gathering data from EA’s. This wasn’t an option for us given short runways, so we will have to make do with rough estimates instead. ## 3.1 Rough estimates for rVcresident 3.1.1 15 of the hotel residents are doing work that they would otherwise self-fund by working part time. For these, we can define . Of those that are doing something else at the hotel, three would be working a normal job outside EA, one would continue doing promising AI capabilities research, and one would continue to scout for opportunities in the EA landscape. 
We estimate that their values for would be , and respectively. Using the average of our current guests as an estimate, we will guess that is around 3.1.2 Since most of the hotel residents would self-fund, they would spend considerably less time working on their projects, with the rest of their time spent on working part-time instead. They report an average speedup of 2.2, which we will use to guess that . 3.1.3 We would expect that everyone meets approximately the same amount of people when they move to the hotel, so we will define to be a constant increase over . Our survey shows that this increase is an average of 20.6, which we will use to say . 3.1.4 Putting the above together, we get: ## 3.2 Rough estimates for rVchire 3.2.1 We don’t have any data whatsoever about likely values for this, so we will leave it to the simulation in Part 2. 3.2.2 Given the strong competition, we will assume that a marginal EA hire is highly conscientious. They might not have a project available to them that is as good as working at an EA org, but whatever project they will be working, they will work on it productively. Our best guess for is that it equals . 3.2.3 Again, we’re assuming that this number is some constant difference from , the size of which depends on the particular organisation that the counterfactual EA hire joins. If they’re doing remote work, it may be just a few, or only 1. If they get to work at an office of a major EA org, it may be as high as 50. Even more if they relocate to a large hub like the Bay or London. 3.2.4 Putting the above together, we get: ## 3.3 Conclusions for rVcresident and rVchire We will calculate and under a range of assumptions in Part 2. # 4. Putting it all together We get: In Part 2 of this post we will give some example calculations under a range of assumptions and scenarios, and attempt to draw some conclusions. We will also link to an online calculator for readers to calculate their own estimates. Thanks to Toon Alfrink for developing the methodology and drafting the post, Greg Colbourn for editing, and Sasha Cooper and Florent Berthet for comments. # Our Ask Do you like this initiative, and want to see it continue for longer, on a more stable footing? Do you want to cheaply buy time spent working full time on work relating to EA, whilst simultaneously facilitating a thriving EA community hub? Is your relative EV estimate for the project high? Then we would like to ask for your support. We are getting critically low on runway. Our current shortfall is ~£5k/​month from May onward. We will have to start giving current guests notice in 1 month. To donate, please see our GoFundMe or get in touch if you’d like to make a direct bank transfer. If you’d like to give regular support, we also have a Patreon. Previous posts in this series: • Speaking as someone with a undergrad degree in math, I would have found a non-technical summary for this post to be helpful. So I expect this would apply much more to many other forum readers. • Thanks for the feedback, we will incorporate a non-technical summary into Part 2. (Basically the whole thing is just an attempt to explicitly factorise the largely intuitive reasoning people might use in estimating the value of the project). • As I read the post, something which stood out to me was the idea of “counting productive hours”. Alongside that number, it seems like we’d also have to estimate something like “productivity per hour”. 
(I may be getting a bit lost in the math here; let me know if one of the model’s current factors is meant to represent this.) Not all productive hours are equally productive/​impactful; some of the hours I’ve spent at CEA have been life-changing, while others were spent editing Facebook posts; all were “productive” in the sense that I was finishing necessary work, but part of my job is trying to figure out how to generate more of the most productive hours, not just “more hours of productive time”. The same will be true for any project; some hours of “research” are much more valuable than others. I suspect that some dynamics of working at an organization make having “hours of productive time” easier (more chance of a sudden inspiration from a colleague that you’d have spent much more time coming up with on your own), while others could make it harder (“overhead” or operational tasks that a lone researcher might not have to worry about). • Good point. This could be factored into W, the value of the work. Although perhaps it would be better if the model was further elaborated to add an explicit “depth” factor. Cal Newport’s Deep Work comes to mind. We try to facilitate an optimal combination of Deep Work and serendipitous collaboration at the hotel by having both private work spaces in rooms, and co-working in communal areas, available. • BTW, I just tried to donate100 (not much but about what I feel comfortable impulse-donating), and the trivial inconvenience of finding and typing in a credit card through me off. A paypal moneypool link would probably have been lower friction for me (not arguing it’s lower friction overall, just that having a variety of easy payment types is probably useful to getting marginal donors) • I don’t understand the formula that appears after “For each of our variables, we define a relative version:”, could you clarify? Then its says “Remember that rVresident...” but I can’t “remember”, since there’s no definition of it earlier (only of rV). A definition of rVresident appears later [but it incorporates new concepts R and W that aren’t defined very clearly (what’s a “resource” and what exactly does “value after controlling for resources” mean, for those of us that are not statisticians? Well, you start to elaborate on W, but … by this point I’m confused enough to find the discussion harder to follow.)] • I have a thought about EA hotel which this analysis likely doesn’t capture: the general intuition that EAs should be taken care of—that we should “take care of our own”. Today I read that research by Holt-Lunstad “shows that being disconnected [lonely] poses comparable danger to smoking 15 cigarettes a day, and is more predictive of early death than the effects of air pollution or physical inactivity.” While I’m not exactly lonely*, I have no EA friends (my city is not an EA hub), and my productivity is extremely low as I’m (1) currently unemployed and (2) have virtually stopped working on altruistic projects due to a lack of emotional support and a loss of faith that I can succeed**. I may soon get a job and will then earn-at-least-partly-to-give (probable donations: \$15,000/​yr, perhaps more later), but this is not as fulfilling as a project would be, or as fulfilling as EA friendships would be. I’ve tried job hunting in the Bay Area where I might have been able to be near EAs, but was turned down by a few companies and gave up; besides, the idea of spending roughly 100% of the additional income I would earn in the Bay Area on rent… it’s repugnant. 
By extension, I believe that for some EAs, EA hotels could offer improvements to mental heath and future good-doing potential that aren’t otherwise available. Intuitively, it seems like the EA community ought to be able to take care of its own adherents. One simple justification of this is simply that poorer mental health limits the amount of good done by each EA; another one is that if the EA movement can’t take care of its non-central members, it will be more difficult to grow and spread the movement; e.g. a reputation for loneliness among EAs would suggest to others that they shouldn’t become an EA, and EAs who are lonely are less likely to encourage others to become EAs. Since creating more EAs presumably creates more good in the world—especially as we can anticipate exponential growth—the question of how to create more EAs is valuable to ponder. While EA hotels (and similar projects) are not a solution by themselves, they may be an important component of such growth. So the EA hotel is one of my favorite ideas and if I were in the UK I might be living there now. * I live with my best friend, who doesn’t think at all like a EA/​rationalist. I’m also married to a non-EA but the Canadian government keeps us separated (thanks IRCC). But this touches on a related issue—I plan to have a child, and as long as we don’t have a rationalist/​EA Sunday School system to teach our values, I’m curious whether growing up inside or close to an EA hotel would work as a substitute. Seems worth a try! ** as I’m writing software, the value of the project is a highly nonlinear function of the input effort, requiring much more manpower to become valuable, i.e. a minimum viable product. Working on it has become harder in turns of willpower requirement over the years. • stands for expected value. , or the expected value of the resident relative to a hire. In the first formula you refer to there is also the equivalent for the counterfactual resident () and the counterfactual hire ().
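The displayed formulas in the post above did not survive extraction, so as a rough aid here is a minimal calculator sketch in the spirit of the "online calculator" promised for Part 2. The exact formula is my reading of the prose, not the authors': relative EV is taken as (cost of a marginal hire / cost of hosting a resident) times the ratio of added value per resident to added value per hire, with the resident's relative value factored as rW times a productive-hours ratio times a network-size ratio, as described in section 2. All names and default numbers are illustrative assumptions.

```python
# Hypothetical reconstruction of the EA Hotel relative-EV model described in the post.
# The factorisation follows the prose of sections 1-3; the exact notation is assumed, not quoted.

def r_v_resident(rW, hours_hotel=40.0, hours_hire=40.0, n_hotel=40.0, n_hire=20.0):
    """rV_resident = rW * rP * rN, where rP is the ratio of productive hours and
    rN the ratio of professional network sizes (the social-Metcalfe assumption)."""
    rP = hours_hotel / hours_hire
    rN = n_hotel / n_hire
    return rW * rP * rN

def relative_ev(cost_hire, cost_resident, rV_res, rV_cres, rV_chire):
    """rV = (c_hire / c_resident) * (rV_resident - rV_cresident) / (1 - rV_chire).

    Assumed derivation: per pound donated, the hotel hosts 1/c_resident residents each adding
    (V_res - V_cres), while the counterfactual org hires 1/c_hire people each adding
    (V_hire - V_chire); dividing the two and normalising by V_hire gives the expression above."""
    return (cost_hire / cost_resident) * (rV_res - rV_cres) / (1.0 - rV_chire)

if __name__ == "__main__":
    rv_res = r_v_resident(rW=0.5)              # resident's project worth half a marginal hire
    rv = relative_ev(cost_hire=60000,          # illustrative cost of a marginal EA hire (GBP/yr)
                     cost_resident=6000,       # "<£6000/year all in" from the post
                     rV_res=rv_res,
                     rV_cres=0.1,              # residents' counterfactual output, relative to a hire
                     rV_chire=0.3)             # hires' counterfactual output, relative to a hire
    print(f"relative EV of the hotel: {rv:.2f}")
```

Readers who disagree with any factor can simply swap in their own estimates, which is the spirit of the Monte Carlo exercise promised in Part 2.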
# Epistemic Self-Doubt First published Fri Oct 27, 2017 And if I claim to be a wise man, Well, it surely means that I don't know. —Kansas It is possible to direct doubt at oneself over many things. One can doubt one’s own motives, or one’s competence to drive a car. One can doubt that one is up to the challenge of fighting a serious illness. Epistemic self-doubt is the special case where what we doubt is our ability to achieve an epistemically favorable state, for example, to achieve true beliefs. Given our obvious fallibility, epistemic self-doubt seems a natural thing to engage in, and there is definitely nothing logically problematic about doubting someone else’s competence to judge. However when we turn such doubt on ourselves, incoherence seems to threaten because one is using one’s judgment to make a negative assessment of one’s judgment. Even if this kind of self-doubt can be seen as coherent, there are philosophical challenges concerning how to resolve the inner conflict involved in such a judgment, whether one’s initial judgment or one’s doubt should win, and why. Not all epistemic self-doubt is so evidently constructive. Socrates could hope to find his answers in the future in part because his doubt was not directed at his faculties for gaining knowledge, and the matters on which he believed himself ignorant were specific and limited. This left him confident of his tools, and still in possession of a lot of knowledge to work with in seeking his answers. For example, it was possible for Socrates to be both sure he did not know what virtue was and yet confident that it was something beneficial to the soul. In contrast, Descartes in his Meditations set out to rid himself of all beliefs in order to rebuild his edifice of belief from scratch, so as to avoid all possibility of erroneous foundations. He did so by finding reason to doubt the soundness of his faculty of, for example, sense perception. Instead of casting doubt on his empirical beliefs one by one, he would doubt the reliability of their source and that would blanket all of them with suspicion, loosening the hold that even basic perceptual beliefs had on his mind. Descartes’ epistemic self-doubt was extreme in undermining trust in a belief-forming faculty, and in the wide scope of beliefs that were thereby called into question. As in the case of Socrates though, his belief states fit together sensibly; as he convinced himself he might be dreaming, thus undermining his trust that he was in a position to know he had hands, he was also shaken out of his belief that he had hands. The cases of Socrates and Descartes illustrate that judgments about one’s own epistemic state and capacity can provide reasons to adjust one’s beliefs about the way things are. Less dramatic cases abound in which rationality’s demand for some kind of fit between one’s beliefs (first-order beliefs) and one’s beliefs about one’s beliefs (second-order beliefs) can be seen in the breach. Suppose I am a medical doctor who has just settled on a diagnosis of embolism for a patient when someone points out to me that I haven’t slept in 36 hours (Christensen 2010a). On reflection I realize that she is right, and if I am rational then I will feel some pressure to believe that my judgment might be impaired, reduce somewhat my confidence in the embolism diagnosis, and re-check my work-up of the case or ask for a colleague’s opinion. 
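How much confidence should be shed is not settled by the example, but the flavor of the adjustment can be given with a minimal numerical sketch; every number below is invented for illustration and is not part of Christensen's case.

```python
# Toy sketch (all numbers invented): evidence of possible impairment
# lowers the expected reliability of the judgment behind the diagnosis q.
p_impaired = 0.5        # credence that 36 hours without sleep has impaired my judgment
rel_if_fine = 0.95      # chance my diagnostic judgment about q is right if unimpaired
rel_if_impaired = 0.60  # chance it is right if my judgment is impaired

# Expected reliability, weighting the two possibilities.
expected_reliability = (1 - p_impaired) * rel_if_fine + p_impaired * rel_if_impaired
print(expected_reliability)  # 0.775, noticeably below 0.95, so some confidence is shed
```

Nothing in the arithmetic itself says why, or by how much, the first-order confidence must defer to such a consideration.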
Though it seems clear in this case that some reconsideration of the first-order matter is required, it is not immediately clear how strong the authority of the second-order might be compared to the first order in coming to an updated belief about the diagnosis, and there are clear cases where the second order should not prevail. If someone tells me I have unwittingly ingested a hallucinogenic drug, then that imposes some prima facie demand for further thought on my part, but if I know the person is a practical joker and he has a smirk on his face, then it seems permissible not to reconsider my first-order beliefs. There are also cases where it is not obvious which order should prevail. Suppose I’m confident that the murderer is #3 in the line-up because I witnessed the murder at close range. Then I learn of the empirical literature saying that eyewitnesses are generally overconfident, especially when they witnessed the event in a state of stress (Roush 2009: 252–3). It seems I should doubt my identification, but how can it be justified to throw out my first-order evidence that came from directly seeing that person, in person and close up? An adjudication of some sort between the first order and the higher order is required, but it is not obvious what the general rules might be for determining the outcome of the conflict, or what exactly would justify them. Questions about epistemic self-doubt can be organized into five over-arching questions: 1) Can the doubting itself, a state of having a belief state and doubting that it is the right one to have, be rational? 2) What is the source of the authority of second-order beliefs? 3) Are there general rules for deciding which level should win the tug of war? If so, what is their justification? 4) What does the matching relation this adjudication is aiming at consist in? 5) If mismatch between the levels can be rational when one first acquires reason to doubt, is it also rationally permitted to remain in a level-splitting state—also known as epistemic akrasia (Owens 2002)—in which the self-doubting conflict is maintained? For convenience, approaches to modeling doubt about one’s own ability to judge and to the five questions above can be separated into four types, which are overlapping and complementary rather than inconsistent. One approach is through seeing the self-doubting subject as believing epistemically unflattering categorical statements about the relation of her beliefs to the world. Another is through conditional principles, asking what a subject’s credence in q should be given that she has a particular credence in q but thinks she may be epistemically inadequate or compromised. A third approach is to construe doubt about one’s judgment as a matter of respecting evidence about oneself and one’s evidence (higher-order evidence). A fourth approach ties together the first order and second order by using the idea that we should match our confidence in p to our expected reliability. That is, treating ourselves like measuring instruments we should aim to be calibrated. ## 1. How Far Can Consistency and Coherence Take Us? Categorical Self-Doubting Belief It might seem that consistency and coherence are not strong enough to tell us what the relation should be between the first order and the second order in cases of epistemic self-doubt, in the same way that they don’t seem sufficient for explaining what is wrong with Moore-paradoxical statements (see entry on epistemic paradoxes). 
In the latter I assert either “p and I don’t believe p”, or “p and I believe not-p”. There is a lack of fit between my belief and my belief about my belief in either case, but the beliefs I hold simultaneously are not inconsistent in content. What I say of myself would be consistent and quite sensible if said about me by someone else, thus: “p, but she doesn’t believe p”. Similarly, there is nothing inconsistent in the claim “There is a cat in the distance and she is severely near-sighted”, though there does seem to be a problem with the first-person claim “There is a cat in the distance and I am severely near-sighted” if my assertion about the cat is made on the basis of vision and I give no cue that I mean the second clause as a qualification of the first. My confidence about the cat should have been tempered by my awareness of the limitation of my vision. If there are general principles of rationality that govern self-doubt of our faculties or expertise it seems that they will have to go beyond consistency among beliefs. However, consistency and coherence do impose constraints on what a subject may believe about the reliability of her beliefs if combined with an assumption that the subject knows what her beliefs are. (This is also found for Moore’s paradox, and is used in Shoemaker’s approach to that problem; Shoemaker 1994.) One way of formulating an extreme case of believing that one’s epistemic system doesn’t function well is as attributing to oneself what Sorensen calls anti-expertise (Sorensen 1988: 392f.). In the simplest type of case, S is an anti-expert about p if and only if Anti-expertise (A) Either S believes p and p is false or S does not believe p and p is true. Sorensen pointed out that if S is consistent and knows perfectly what her beliefs are, then she cannot believe she is an anti-expert. For if S believes p then, by perfect self-knowledge, she believes that she believes p, but her beliefs that p and that she believes p are together inconsistent with both of the disjuncts of A. Similarly for the case where S does not believe p. This phenomenon generalizes from outright belief to degrees of belief, and from perfect knowledge of one’s beliefs to decent but imperfect knowledge of them (Egan & Elga 2005: 84ff.). Believing you are an anti-expert is not compatible with coherence and decent knowledge of your own beliefs. Denying that knowledge of our own beliefs is a requirement of rationality would not be helpful, since usefully doubting that one’s beliefs are soundly formed would seem to require a good idea of what they are. Egan and Elga favor the view that your response to this fact about anti-expertise should be to maintain coherence and decent self-knowledge of belief, and refrain from believing that you are an anti-expert. However, one can imagine examples where the evidence that you are incompetent is so overwhelming that one might think you should believe you are an anti-expert even if it makes you incoherent (Conee 1987; Sorensen 1987, 1988; Richter 1990; Christensen 2011). The problem with self-attributing unreliability coherently doesn’t go away if the degree of unreliability is more modest. Consider the following property: I’m not perfect (INP) $$P((P(q) \gt .99 \amp -q )\textrm{ or } (P(q) \lt .01 \amp q)) \gt .05$$ This says that you are at least 5% confident that you’re highly confident of q though it’s false or lack confidence in q though it’s true. 
It is a softened version of anti-expertise, and you can’t coherently fulfill it, have $$P(q) >.99$$, and have perfect knowledge of your beliefs. For in that case $$P(P(q) > .99) = 1$$, which means INP can only be true if $$P(-q) > .05$$. But $$P(-q) > .05$$ implies $$P(q) < .95$$ which contradicts $$P(q) > .99$$. The point survives if you have imperfect but good knowledge of what your beliefs are. The self-doubt expressed through INP is quite modest, but is no more consistent than attributing to oneself anti-expertise, and this will be so for any value of INP’s right hand side that is not equal to $$P(-q)$$. Egan and Elga think that the significance of evidence of anti-reliability is taken into account by seeing it as obligating a subject to revise her first-order belief. However, their view implies that to be rational such a revision must be done without attributing anti-expertise to oneself. One can revise whenever one wants of course, but any revision should have a reason or motivation. If one does not give any credence at all to the possibility one is an anti-expert, then what is one’s reason to revise one’s first-order belief? There would seem to be no other way to take on board and acknowledge the evidence of your anti-expertise than to give some credence to its possibility. Egan and Elga say that the belief the evidence should lead a subject to is that she has been an anti-expert and that should lead her to revise (Egan & Elga 2005: 86). But if she avoids incoherence by attributing anti-expertise only to a previous self, then that belief can’t be what leads her to revise her current view. If she does not attribute anti-expertise to her current self then she doesn’t give her current self any reason to revise. The same problem can be seen with Egan and Elga’s treatment of cases of self-attributing less extreme unreliability (such as INP) that they regard as unproblematic. Consider a person with growing evidence that his memory isn’t what it once was. What effect should this have on his beliefs about the names of students? They compare what happens to his confidence that a given student is named “Sarah” when he hears counterevidence—overhearing someone calling her “Kate” for example—in the case where he has and the case where he hasn’t taken the evidence of his decline in memory into account. Via a Bayesian calculation they conclude that when he hasn’t taken into account the evidence about his memory, the counterevidence to his particular belief that the student is named Sarah does reduce his belief that she is Sarah, but it does so much less than it would have if he had taken the evidence about his memory into account. But this analysis represents taking into account the evidence about one’s memory only implicitly, as an effect that that evidence already had on one’s prior probability that the student is Sarah. That effect is the difference between a .99 and a .90 prior probability, or degree of belief. The distinction that is then derived between the effects that counterevidence can have on the self-doubter and the non-self-doubter is just the familiar point that counterevidence will have a greater effect the lower one’s initial probability. This doesn’t tell us how to assimilate news about one’s decline in reliability to the first-order belief, but only how to treat other evidence about the first-order matter once one has done so. 
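The effect just described can be illustrated with a simple Bayesian sketch. This is not Egan and Elga's own calculation: the .99 and .90 priors are the figures mentioned above, but the likelihood ratio for overhearing "Kate" is invented.

```python
# Hedged sketch of the memory case: same counterevidence, different priors.
def posterior_sarah(prior, likelihood_ratio):
    """Posterior that the student is Sarah after overhearing her called 'Kate'.
    likelihood_ratio = P(overhear 'Kate' | Sarah) / P(overhear 'Kate' | not Sarah)."""
    odds = (prior / (1 - prior)) * likelihood_ratio
    return odds / (1 + odds)

lr = 0.1  # assumed: hearing 'Kate' is ten times likelier if she is not Sarah
print(posterior_sarah(0.99, lr))  # ~0.91: the counterevidence moves him only a little
print(posterior_sarah(0.90, lr))  # ~0.47: the same counterevidence moves him a lot
```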
The question was supposed to be how the evidence about memory should affect our beliefs, and to answer that requires saying how and why that evidence about his memory should make our subject have a .90 rather than .99 initial confidence that the student was Sarah. Surely one must attribute reduced reliability to oneself if one is to have any reason to revise one’s first-order belief that the student was Sarah on the basis of evidence of diminished reliability. Even a raw feel of redness of a flower must become an ascription of red to the flower in order for one’s experience of it to affect other beliefs such as that it would or wouldn’t be an appropriate gift. Even evidence suggesting a small amount of unreliability, as with INP above, presents us with a trilemma: either we incoherently attribute unreliability to ourselves but revise and have a justification for doing so, or we coherently fail to attribute unreliability and revise without justification for doing so, or we remain coherent by failing to attribute unreliability and don’t revise, ignoring evidence of our unreliability. It seems that it is not possible for a rational subject to acknowledge evidence of her own unreliability and update her first-order belief on the basis of it. This approach using consistency (or coherence) plus self-knowledge of belief gives a way of representing what a state of self-doubt is. It sensibly implies that it is irrational to stay in such a state but it also implies that it is irrational to be in it in the first place, making it unclear how self-doubt could be a reason to revise. The approach identifies a kind of matching that rationality demands: give no more credence to the possibility that one is a bad judge of q than one gives to not-q. However, it leaves other questions unanswered. If the rational subject finds herself doubting her judgment, should she defer to her first-order evidence, or her second-order evidence about the reliability of her first-order judgment? What are the rules by which she should decide, and how can they be justified? ## 2. Conditional Principles ### 2.1 Synchronic Reflection and Self-Respect We might do better at understanding the relations rationality requires between your beliefs and your beliefs about them by adding to the requirements of consistency and coherence a bridge principle between the two orders expressed via conditional (subjective) probability. Conditional probabilities say what your degree of belief in one proposition is (should be) given another proposition, here the relevant propositions being a first-order proposition q, and the proposition that one has degree of belief x in q, respectively. A first pass at how to represent a situation where my beliefs at the two levels don’t match comes from its apparent conflict with the synchronic instance of the Reflection Principle (van Fraassen 1984). Reflection $$P_0 (q\mid P_1 (q) = x) = x$$ Reflection says that my current self’s degree of belief in q given that my future self will believe it to degree x should be x. It is implied by the fact that her degrees of belief are represented as probabilities that my future self is coherent, but that alone does not rule out the possibility that her judgment is compromised in some other way—as for example when Ulysses anticipated that he would be entranced by the Sirens—and the principle can be questioned for such cases (Sobel 1987; Christensen 1991; van Fraassen 1995). 
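One consequence of Reflection is worth making explicit: if it holds for every value x that the future credence might take, and those values form a partition the current function assigns credences to, then by total probability the current credence in q is the expectation of the future credence. A small sketch, with an invented distribution over the future credence:

```python
# If P0(q | P1(q) = x) = x for each possible x, then by total probability
# P0(q) = sum over x of x * P0(P1(q) = x), i.e. the expectation of P1(q).
future_credence_dist = {0.2: 0.3, 0.6: 0.5, 0.9: 0.2}  # x: P0(P1(q) = x), invented
p0_q = sum(x * weight for x, weight in future_credence_dist.items())
print(p0_q)  # 0.54
```

This is the sense in which Reflection treats the future self as an expert to be deferred to, which is exactly what the Sirens-style cases put in question.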
However, the self-doubt we are imagining is one that the subject has about her current beliefs, and the synchronic version of Reflection Synchronic Reflection (SR) $$P_{0}(q\mid P_{0}(q)=x) = x$$ which says that my degree of belief in q now given that I now believe q to degree x should be x, seems less open to question. Christensen (2007b) also calls this principle Self-Respect (SR). This is not the tautology that if I believe q to degree x then I believe q to degree x, for in a logically equivalent form the principle is Synchronic Reflection/Self-Respect (SR) $$\frac{P_{0}(q \amp P_{0}(q)=x)}{P_{0}(P_{0}(q)=x)} = x$$ which does not follow from either deductive logic or the probability axioms alone. But SR has been widely endorsed as unobjectionable, and according to some even undeniable, as a requirement of rationality (van Fraassen 1984: 248; Vickers 2000: 160; Koons 1992: 23—Skyrms 1980 sees as useful a version of it he calls Miller’s Principle, though he also shows it is subject to counterexamples). While I can sensibly imagine my future self to be epistemically compromised, unworthy of my deference, violating SR would require regarding my current self as epistemically compromised, as having a degree of belief that should be other than it is. This appears to be something that doubt of my own judgment would call for, in which case whether self-doubt can be rational depends on whether SR is a requirement of rationality. SR can be defended as a rational ideal by Dutch Strategy arguments, though not by the strongest kind of Dutch Book argument (Sobel 1987; Christensen 1991, 2007b: 328–330, 2010b; Briggs 2010—Roush 2016 argues it can’t be defended as a requirement by a Dutch book argument at all). It has been argued to be questionable if not false on grounds that it conflicts with the Epistemic Impartiality Principle that says we should not in general take the mere fact that we have a belief as a reason to have that belief any more so than we do with the mere fact that others have that belief (Christensen 2000: 363–4; Evnine 2008: 139–143; Roush 2016).[1] Nevertheless the probabilist—one who thinks that rationality requires probabilistic coherence—will have a difficult time resisting SR since, analogously to what we saw above with anti-expertise, SR follows from coherence if it is supplemented with the further assumption that the subject has perfect knowledge of her own beliefs. Still, this does little to explain intuitively why SR should be binding; even someone who has perfect knowledge that he has a belief should be able to sensibly wonder whether it is a belief he ought to have. That perfect knowledge of our beliefs is a requirement of rationality can be doubted in a variety of ways (Williamson 2000; Christensen 2007b: 327–328; Roush 2016). However, as above so for the discussion here, denying that rationality requires knowledge of our own beliefs does not overcome the problem. Self-correction that will be of any use requires some degree of accuracy about one’s beliefs, and even if a subject doesn’t have perfect self-knowledge coherence still makes reflective demands; Christensen (2007b: 332) has noted that the closer a coherent subject comes to perfect knowledge of his beliefs the more nearly he will satisfy SR. SR has something to recommend it, but it seems to be a rule a self-doubter will violate. Consider our underslept doctor.
It looks as if once it is pointed out to her how long it has been since she slept, she should regard her current confidence in q, her diagnosis of embolism, as higher than it ought to be. I.e., she would instantiate a principle we could call Refraction: Refraction $$P_{0}(q\mid P_{0}(q)=x) < x$$ Apparently, her degree of belief that it’s an embolism given that she has degree of belief x that it’s an embolism should be less than x, contradicting SR. Or imagine that the person who tells me a hallucinogenic drug has been slipped into my coffee is a trusted friend who is neither in the habit of joking nor currently smirking. I seem to have an obligation to regard some of my current degrees of belief as higher than they should be. Refraction is a way of representing a state of self-doubt, one in which I don’t regard the degree of belief I (think I) have as the right one to have. But despite the fact that it doesn’t have the subject attributing unreliability to herself categorically as we had in the last section, Refraction is not compatible with the combination of coherence and knowledge of one’s beliefs, since the latter two together imply SR. In this representation of what self-doubt is, it is not rational according to the probabilistic standard. One might defend this verdict by saying that the exception proves the rule: if I really think that my degree of belief in q should be different than it is, say because I realize I am severely underslept, then surely I should change it accordingly until I come to a credence I do approve of, at which point I will satisfy SR. However even if it is ideal to be in the state of self-respect that SR describes, it seems wrong to say that a state of disapproving of one’s first-order belief when faced with evidence of one’s impaired judgment is irrational. In such a case it would seem to be irrational not to be in a state of self-doubt. Moreover, it is unclear how a revision from a state violating SR to one in conformity with SR can be rational. According to many probabilists the rational way to revise beliefs is via conditionalization, where one’s new degrees of belief come from what one’s previous function said they should be given the new belief that prompts the revision (see entries on interpretations of probability and Bayes’ Theorem). That is, all change of belief is determined by the conditional probabilities of the function one is changing from. Thus change of belief on the basis of a belief about what my belief is will depend on the value of $$P_i(q\mid P_i(q) = x)$$. If the value of $$P_i(q\mid P_i(q)=x)$$ isn’t already x then a conditionalization using that conditional probability will not necessarily make $$P_f(q\mid P_f(q)=x) = x$$, as required by SR, and it is hard to see how it could. In this approach, similarly to the previous, the whole cycle of epistemic self-doubt and resolution appears to be unavailable to a probabilistically rational subject. If we represent epistemic self-doubt as a violation of Synchronic Reflection (Self-Respect), then it is not rational for a coherent person who knows what her beliefs are. This is a general rule giving the same verdict in all cases, that the orders must match, and it gives the form of that matching in terms of a conditional probability. 
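The conditionalization worry can be made concrete with a toy sketch; every number in it is invented and it is only meant to display the shape of the problem.

```python
# Refraction, in toy form: the prior conditional credence violates SR.
#   P_i(q | P_i(q) = 0.9) = 0.7   (less than 0.9)
p_i_q_given_own_credence = {0.9: 0.7}

# Learning the fact "my credence in q is 0.9" and conditionalizing on it:
p_f_q = p_i_q_given_own_credence[0.9]
print(p_f_q)  # 0.7, so the first-order credence does drop

# But SR for the new function would require P_f(q | P_f(q) = 0.7) = 0.7,
# and nothing in the update guarantees that: the conditional credences doing
# the work are the very ones that violated SR to begin with.
```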
The second-order is in the driving seat since the condition in SR’s conditional probability that determines a value for the first-order proposition q is itself a statement of probability, but SR can’t lead to change of first-order belief unless one does not know what one’s belief in q is. As in the approach via categorical statements above, this simple approach via conditional probability does not represent the cycle of self-doubt and resolution as available to a rational subject. ### 2.2 What Would the Maximally Rational Subject Do? Another way of representing self-doubt using conditional probability follows the intuitive thought that I should tailor my first-order degree of belief to the confidence I think the maximally rational subject would have if she were in my situation (Christensen 2010b: 121). This would be a sensible explanation of the authority of higher order beliefs, of why taking them into account would be justified. It gives half of an answer to the question when the first-order should and shouldn’t defer to the second order by identifying a class of second-order statements to which the first order must always defer. However it pushes back the question of which individual statements those are to the question of which probability function is the maximally rational. A conditional principle that would capture the idea of deferring to the view of an ideal agent who was in one’s shoes is: $Cr(q\mid P_{M}(q) = x) = x$ (Christensen 2010b) which says that one’s credence in q given that the maximally rational subject in one’s situation has credence x in q, should be x. The maximally rational subject obeys the probability axioms and possibly has further rationality properties that one might not possess oneself, though she is assumed to be in your situation, having no more evidence than you do. If one obeys the probability axioms oneself, then this principle becomes: RatRef $$P(q\mid P_{M}(q) = x) = x$$ This says that your credence in q should be whatever you take the maximally rational credence to be for your situation, an idea that seems hard to argue with. It is a variant of a principle used by Haim Gaifman (1988) to construct a theory of higher-order probability. There the role of $$P_{M}$$ was given to what has become known as an expert function corresponding, in his use, to the probabilities of a subject who has maximal knowledge. RatRef gives a sensible account of cases like the underslept doctor. It would say the reason that the realization she is sleep-deprived should make her less confident of her diagnosis is that a maximally rational person in her situation would have a lower confidence. Moreover this provides us with a way of representing the state of self-doubt coherently, even with perfect knowledge of what one’s belief states are. One can have degree of belief y in q, and one can even believe that one has degree of belief y, that is, believe that $$P(q) = y$$, consistently with believing that the maximally rational agent has degree of belief x, i.e., $$P_{M}(q) =x$$, because these are two different probability functions. Because self-doubt is not defined as a violation of the conditional probability RatRef, as it was with SR, we can also see a revision that takes you from self-doubt to having your confidence match that of the maximally rational subject as rational according to conditionalization. 
You may have degree of belief y in q, discover that the maximally rational subject has degree of belief x, and because you have the conditional probability RatRef put yourself in line with that ideal subject. Note that it is not necessary to have an explicit belief about what your own degree of belief in q is for this revision to occur or to be rational. RatRef has problems that are easiest to see by considering a generalization of it: Rational Reflection (RR) $$P(q\mid P' \textrm{ is ideal}) = P'(q)$$ Rational Reflection (Elga 2013) maintains the idea that my degree of belief in q should be in line with that I think the maximally rational subject would have in my situation, but also highlights the fact that my determining what this value is depends on my identifying which probability function is the maximally rational one to have. I can be coherent while being uncertain about that, and there are cases where that seems like the most rational option. This by itself is not a problem because RR is consistent with using an expected value for the ideal subject, a weighted average of the values for q of the subjects I think might be the maximally rational subject. But it is not only I who may be uncertain about who the maximally rational subject is. Arguably the maximally rational subject herself may be uncertain that she is—after all, this is a contingent fact, and one might think that anyone’s confidence in it should depend on empirical evidence (Elga 2013). The possibility of this combination of things leads to a problem for RR, for if the subject who is actually the maximally rational one is unsure that she is, then if she follows RR she won’t fully trust her own first-order verdict on q but will as I do correct it to a weighted average of the verdicts of those subjects she thinks might be the maximally rational one. In this case my degree of belief in q given that she is the maximally rational subject should not be her degree of belief in q. It should be the one she would have in case she were certain that she were the maximally rational subject: New Rational Reflection (NRR) $$P(q\mid P' \textrm{ is ideal}) = P'(q\mid P' \textrm{ is ideal})$$ This principle (Elga 2013) also faces problems, which are developed below through the approach to self-doubt via higher-order evidence. The approach asking what the maximally rational subject would do gives a motivation for the idea that second-order evidence has authority with respect to first-order beliefs, and like the SR approach puts the second order in the driver’s seat by having a statement of probability in the condition of the conditional probability. This may appear to give unconditional authority to second-order evidence, but second-order evidence won’t change the subject’s first-order verdict if she believes the latter is already what the maximally rational subject would think. The approach represents self-doubt as a coherent state that one can also coherently revise via conditionalization. It identifies a state of matching between the orders—matching one’s confidence to one’s best guess of the maximally rational subject’s confidence. This gives a general rule, and demands that same matching for all cases, though gives no explicit guidance about how to determine which is the maximally rational subject or degree of belief. ## 3. 
Higher-Order Evidence Questions about the rationality (or reasonableness or justifiedness) and import of epistemic self-doubt can be developed as questions about whether and how to respect evidence about one’s evidence. Higher-order evidence is evidence about what evidence one possesses or what conclusions one’s evidence supports (see entry on evidence). This issue about the upshot of higher-order evidence does not in the first instance depend on whether we take such evidence as necessary for justification of first-order beliefs. The question is how our beliefs should relate to our beliefs about our beliefs when we do happen to have evidence about our evidence, as we often do (Feldman 2005; Christensen 2010a; Kelly 2005, 2010). Self-doubt is a special case of responding to higher-order evidence. Not all evidence about our evidence arises from self-doubt because not all such evidence is about oneself, as we will see below. Also, representing self-doubting situations as responding to evidence about my evidence takes information about my capacities to be important just insofar as it provides evidence that I have either incorrectly identified my evidence or incorrectly evaluated the support relation between my evidence and my conclusion. For example, in the case of the doctor above who receives evidence that she is severely underslept, the reason she should reconsider her diagnosis is because this is evidence that she might be wrong either in reading the lab tests or in thinking that the evidence of lab tests and symptoms supports her diagnosis. By contrast the fourth approach below via calibration does not see the implications of self-doubt as necessarily proceeding via evidence about our evidence or evidential support. Like the approach via the maximally rational agent, the evidential approach has the virtue of identifying a justification for responding to the second-order beliefs that self-doubt brings. Their authority comes from the facts that they are evidence relevant to whether one has good evidence for one’s first-order belief, and that one should respect one’s evidence. This raises the hope that what we already know about evidence can help settle when negative second-order evidence should override a first order belief and when not. Many authors have thought that in either kind of case rationality demands that the two orders eventually match, in some sense, but we will see below that more recent thinking about evidence has led some to defend the rationality of having the first- and second-order beliefs in tension, for some cases. Another virtue of the evidential approach is that merely knowing what your beliefs are doesn’t automatically imply that a state of self-doubt is inconsistent or incoherent as it did in first two approaches above, via categorical beliefs and conditional principles. There is no obvious contradiction in believing both q and that one’s evidence doesn’t support q, even if one also has a correct belief that one believes q, so what may be irrational about the state must be based on further considerations. The higher-order evidence approach can be usefully developed through the example of hypoxia, a condition of impaired judgment that is caused by lack of sufficient oxygen, and that is rarely recognized by the sufferer at its initial onset. Hypoxia is a risk at altitudes of 10,000 feet and higher (Christensen 2010b: 126–127). 
Suppose you are a pilot who does a recalculation while flying, to conclude that you have more than enough fuel to get to an airport fifty miles further than the one in your initial plan. Suppose you then glance at the altimeter to see that you’re at 10,500 feet and remember the phenomenon of hypoxia and its insidious onset. You now have evidence that you might have hypoxia and therefore might have misidentified the support relations between your evidence and your conclusion. Are you now justified in believing that you can get to the more distant airport? Are you justified in believing that your evidence supports that claim? Letting F be the proposition that you have sufficient fuel to get to the more distant airport, the following four answers are possible: 1. You are justified in believing F, but no longer justified in believing that your (1st-order) evidence supports F. 2. You are justified in believing F and justified in believing that your (1st-order) evidence supports F. 3. You are not justified in believing F, and not justified in believing that your (1st-order) evidence supports F. 4. You are not justified in believing F, but you are justified in believing that your (1st-order) evidence supports F. 4) doesn’t seem plausible; even if you can’t actually bring yourself to believe F, being justified in believing your evidence supports F prima facie justifies you in believing F. However none of the other answers seem to be entirely adequate either. It may seem, as in 1, that you could still be justified in believing F—in case your calculation was actually right—but no longer have strong enough reason to believe that the calculation was right. However, this would also mean that you could justifiably believe “F, but my overall evidence doesn’t support F”. Feldman (2005: 110–111) argues that it is impossible for this belief to be both true and reasonable since the second conjunct undermines the reasonableness of the first conjunct (cf. Bergmann 2005: 243; Gibbons 2006: 32; Adler 2002). And if you were aware of having this belief then you would be believing something that you know is unreasonable if true. You would be, in the view of Feldman and others, disrespecting the evidence. The state in which you believe “F and my evidence does not support F” is a case of “level-splitting”, also called epistemic akrasia, because you believe you ought not to have a particular belief state but you have it anyway. The second reply—you are justified in believing F and justified in believing that your evidence supports F—might seem reasonable in some cases, for example if the evidence about one’s evidence comes in the form of skeptical philosophical arguments, which one may think are too recherché to command revisions in our everyday beliefs. But this attitude hardly seems acceptable in general since it would mean never giving ground on a first-order belief when presented with evidence that you may be wrong about what your evidence implies. When flying airplanes this kind of rigidity could even be hazardous. However, Feldman counts the second reply as a possible way of respecting the evidence; it might be fitting not only when faced with radical skeptical arguments but also in cases where one’s initial view of what the first-order evidence supports is actually correct. 
The third reply, that after noting the altimeter evidence one isn’t justified in believing that one’s evidence supports F and also isn’t justified in believing F, has the virtue of caution but also the consequence that the altimeter evidence deprives you of justification for believing F even if you do not suffer from hypoxia, which Feldman takes to be problematic. However, this response, unlike the first answer, respects the higher-order evidence; the altimeter evidence gives you some reason to believe that you might suffer from hypoxia, which gives you some reason to believe your evidence does not support F. The misfortune of being deprived of your knowledge even if you don’t actually have hypoxia is an instance of the familiar misfortune of misleading evidence in general. However, as we will see shortly, misleading higher-order self-doubting evidence is distinct from other higher-order evidence, and some recent authors have been led by this to the view that option 1 above—akrasia—can be more rational than option 3 in some cases. Notably, in both of the replies that Feldman counts as possible ways of respecting the evidence, 2) and 3), the first-order and higher-order attitudes match; one either is justified in believing F and justified in believing one’s evidence supports F, or isn’t justified in believing F and also isn’t justified in believing one’s evidence supports F. On getting evidence suggesting one’s evidence does not support one’s conclusion, one should either maintain that it does so support and maintain the first-order belief—be “steadfast”—or grant that it might not support one’s first-order belief and give up the latter—be “conciliatory”. If one thinks that which of these attitudes is the right response varies with the case, then the “total evidence” view will be attractive. On this view whether the first order should concede to the second depends on the relative strength of the evidence at each level (Kelly 2010). In the conciliatory cases, self-doubting higher-order evidence acts as a defeater of justification for belief, which raises the question of its similarities to and differences from other defeaters. In John Pollock’s (1989) terminology, some defeaters of justification for a conclusion are rebutters, that is, are simply evidence against the conclusion, while other defeaters are undercutters; they undermine the relation between the evidence and the conclusion. (These are also referred to as Type I and Type II defeaters.) The pilot we imagined would be getting a rebutting defeater of her justification for believing that she had enough fuel for an extra 50 miles if she looked out her window and saw fuel leaking out of her tank. However, if the altimeter reading is a defeater, then as evidence about whether she drew the right conclusion from her evidence it is definitely of the undercutting type. All undercutters are evidence that has implications about the relation between evidence and conclusion, and to that extent are higher-order evidence. But the higher-order evidence that leads to self-doubt is distinct from other undercutting-type evidence. In the classic Type II defeater case one’s justification for believing a cloth is red is that it looks red, and then one learns that the cloth is illuminated with red light. This evidence undermines your justification for believing that the cloth’s looking red is sufficient evidence that it is red, by giving information about a feature of the lighting that gives an alternative explanation of the cloth’s looking red.
This is higher-order evidence because it is evidence about the cause of your evidence, and thereby evidence about the support relation between it and the conclusion, but higher-order evidence in the cases of the doctor and the pilot is not about how the evidence was caused, and not directly about how matters in the world relevant to one’s conclusion are related to each other. Self-doubting defeaters are about agents and they are in addition agent-specific (Christensen 2010a: 202). They are based on information about you, the person who came to the conclusion about that support relation, and have direct negative implications only for your conclusion. In the case of the cloth, anyone with the same evidence would have their justification undercut by the evidence of the red light. The evidence that the doctor is underslept, however, would not affect the justification possessed by some other doctor who had reasoned from the same evidence to the same conclusion using the same background knowledge. The evidence that the pilot is at risk of hypoxia would not be a reason for a person on the ground, who had reasoned from the same instrument readings to the same conclusion, to give up the belief that the plane had enough fuel for fifty more miles. Christensen argues that the agent-specificity of self-doubting higher-order evidence requires the subject to “bracket” her first-order evidence in a way that other defeating evidence does not. He thinks this means that in no longer using the evidence to draw the conclusion, she will not be able to give her evidence its due (Christensen 2010a: 194–196). In contrast, in the case of the red light and other cases not involving self-doubt, once the redness of the light is added to the evidence, discounting the appearance of the cloth doesn’t count as failing to respect that evidence because one is justified in believing it is no longer due respect as evidence of redness. However, arguably, the difference is not that the self-doubter must fail to give the evidence its due. In the higher-order self-doubt cases we have seen, the undercutting evidence does not give the subject reason to believe the evidential relation she supposed was there is not there. It gives reason to think that she doesn’t know whether the evidential relation is there, even if it is. If it isn’t, then in bracketing her first-order evidence she isn’t failing to give it its due; it isn’t due any respect. Because self-doubting defeating evidence concerns the subject’s knowledge of the evidential support relation and not the relation itself it appears weaker than typical defeating evidence. However it is potentially more corrosive because it doesn’t give the means to settle whether the evidential relation she endorsed is there and so whether the first-order evidence deserves respect. If the pilot gives up the belief in F and she had been right about the evidential relation, then she will have been the victim of a misleading defeater. Misleading defeaters present well-known difficulties for a theory of justification based on the idea of defeat because Type II defeaters may be subject to further defeaters indefinitely. For example, if one had learned that the light illuminating the cloth was red via testimony, the defeat of one’s justification for believing the cloth was red would be defeated by good evidence that one’s source was a pathological liar.
If we say that justified belief requires that there be no defeaters, then that leads us to disqualify any case where a misleading defeater exists, and a subject will lose justification she might have had, even if the misleading defeaters are distant facts that she isn’t aware of. But if we refine the view to say that only defeaters for which no defeater exists will undermine justification, then a subject will count as justified even if she ignores evidence that looks like a defeater for all she knows, because of the existence of a defeater defeater she doesn’t know about. In general we will face the question of how many and which of the existing defeater defeaters matter to whether we have a justified belief (Harman 1973; Lycan 1977). If despite being at an altitude of 10,500 feet our pilot did do the calculation correctly, then her first-order evidence deserved her belief and her evidence from her altitude and the phenomenon of hypoxia was a misleading defeater. It was good reason to worry that her blood oxygen was low, but it might not have been low, and it would be possible in principle for her to get further evidence that would support this view, such as from the reading of a finger pulse oximeter. Misleading defeaters are not new, but few would be tempted to say in the case where one gets evidence that the light is red that it would be rational for the subject to believe both “my evidence doesn’t support the claim that the cloth is red” and “the cloth is red”. However, for self-doubting Type II defeaters several authors have claimed that such level-splitting can be rational. For example, Williamson (2011) has argued that it is possible for the evidential probability of a proposition to be quite high, while it is also highly probable that the evidential probability is low. For instance one’s evidence about oneself might indicate that one has made a mistake evaluating one’s evidence, a kind of mistake that would lead one to believe an unsupported conclusion, F. One evaluates the evidential probability of F as high because of one’s view of its evidence, but thinks F might well be true without one’s belief in it being knowledge. Another way of arguing that it can be rationally required to respond to a real support relation—that is, for the pilot, to believe F—even when one has evidence it might not exist, and so, should also believe one’s evidence does not (or might not) support that belief, is with the thought that a rational norm does not cease to apply just because a subject has evidence she hasn’t followed it (Weatherson 2008, 2010 (Other Internet Resources); Coates 2012). This rationale would not sanction akrasia for one who learned the light was red, because the point is restricted to cases where the defeating evidence concerns the subject; we saw above that that makes the defeating evidence weaker, and it is weaker in just the right way to support this approach. Another way of arguing that akrasia can be rational is to take the existence of a support relation as sufficient for justification of belief in a proposition whether the subject has correct beliefs about that support relation or not (Wedgwood 2011). This is motivated by externalism about justification (see entry on internalist and externalist conceptions of epistemic justification), which might be more plausible for justifications subject to self-doubting higher order evidence because it is weaker than other undercutting evidence.
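These defenses all presuppose the point made at the start of this section, that the split state is at least probabilistically coherent. That much can be checked with a toy credence function; every number here is invented.

```python
# A coherent toy credence function that "splits levels".
# F = "I have enough fuel"; S = "my total evidence supports F".
joint = {
    ("F", "S"): 0.28,
    ("F", "not-S"): 0.67,
    ("not-F", "S"): 0.02,
    ("not-F", "not-S"): 0.03,
}
assert abs(sum(joint.values()) - 1.0) < 1e-9   # the four cells form a probability distribution

p_F = joint[("F", "S")] + joint[("F", "not-S")]   # 0.95: high confidence in F
p_S = joint[("F", "S")] + joint[("not-F", "S")]   # 0.30: low confidence that the evidence supports F
print(p_F, p_S)
# The axioms alone do not forbid this combination; the dispute above is over
# whether such a state can be rational, not whether it is coherent.
```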
In a different tack, it has been argued that a general rule that takes negative self-doubting higher-order evidence to always exert some defeating force on first-order beliefs will be very hard to come by. Because the subject is being asked to behave rationally in the face of evidence that she has not behaved rationally, she is subject to norms that give contradictory advice, and fully general rules for adjudicating between such rules are subject to paradoxes (Lasonen-Aarnio 2014). Possibly the only thing harder than defending a fully general rule requiring the first-order and second-order to match is accepting the intuitive consequences of level-splitting or akrasia. In this situation one believes a certain belief state is (or might be) irrational but persists in it anyway. Horowitz (2014) has defended the Non-Akrasia Constraint (also sometimes referred to as an Enkratic Principle) which forbids being highly confident in both “q” and “my evidence does not support q”, in part by arguing that allowing akrasia delivers highly counter-intuitive follow-on consequences in paradigm cases of higher-order evidence. For example, if our pilot maintains confidence that F, she has enough fuel, how should she explain how she got to a belief in F that she thinks is true when she also thinks her evidence does not support F? It would seem that she can only tell herself that she must have gotten lucky. She could further tell herself that if the reason she persisted in believing F despite the altimeter reading was that she did in fact have low blood oxygen, then it was really lucky she had that hypoxia! Otherwise in evaluating her total evidence correctly she would have come to a false belief that not-F. Reasoning so, the pilot would be using her confidence in F as a reason to believe the altimeter reading was a misleading defeater, which does not seem to be a good way of finding that out. Moreover, if she did this argument a number of times she could use the track record so formed to bootstrap her way to judging herself reliable after all (Christensen 2007a,b; White 2009; Horowitz 2014—for general discussion of what is wrong with bootstrapping, see Vogel 2000 and Cohen 2002). Akrasia also sanctions correspondingly odd betting behavior. To the extent that New Rational Reflection (of the previous section) calls for a match between first-order and second-order beliefs it counts as a Non-Akrasia principle. However, this particular matching requirement is subject to several problems about evidence that are brought out by Lasonen-Aarnio (2015). It requires substantive assumptions about evidence and updating our beliefs that are not obvious, and does not appear to respect the internalism about rationality that apparently motivates it, namely that one’s opinions about what states it is rational to be in match what states one is actually in. Moreover it does not seem that New Rational Reflection could embody the attractive idea that in general a subject may rationally always be uncertain whether she is rational, i.e., even the ideal agent can doubt that she is the ideal agent—an idea that RatRef failed to conform with and that led to the formulation of this new principle. This is because New Rational Reflection must assume that some things, such as conditionalization, cannot be doubted to be rational, i.e., to be what the ideal agent would do. It is not clear that we should have expected that everything can be doubted at once (Vickers 2000; Roush et al. 
2012), but this is an ongoing area of research (Sliwa & Horowitz 2015). Another problem some have seen with any version of Rational Reflection is that it ultimately doesn’t allow the subject to remain unsure what degree of belief it is rational for her to have. It forces her to match her first-order degree of belief to a specific value, namely, a weighted average of the degrees of belief that she thinks it might be rational to have. It collapses her uncertainty about what is rational to a certainty about the average of the possibilities and forces her to embrace that precise value. This doesn’t allow a kind of mismatch or akrasia that might be the right way to respond to some higher-order evidence, where one is confident that q and also thinks it likely that the evidence supports a lower confidence than one has but is unsure what that lower confidence should be. Maybe one should not defer to the higher-order evidence in this case, because one is not sure what its verdict is (Sliwa & Horowitz 2015). See the higher-order calibration approach below for a way of representing this uncertainty that can give a justification for thinking that matching to averages is rational. The evidential approach locates the authority that second-order information about our judgment has over us in the idea that it is evidence and we should respect our evidence. A state of self-doubt on this view is confidence both that q and that one’s evidence may well not support q. This state does not make the subject inconsistent, but is a state of level-splitting or akrasia. Matching on this view is constituted by agreement between one’s level of confidence in q and how far one thinks one’s evidence supports q, and doesn’t allow akrasia, but taking an evidential approach does not by itself settle whether rationality requires matching, or under what circumstances first- or second-order evidence should determine one’s first order confidence. General rules about how to adjudicate in self-doubting cases between the claims of the two orders of evidence may be hard to get due to paradoxes, and to the need in every instance to hold some features of rationality as undoubtable in order to initiate and resolve one’s doubt. ## 4. Calibration and Higher-Order Objective Probability Another approach to self-doubt explains the authority that second-order evidence sometimes has over one’s first-order beliefs by means of the idea that such evidence provides information about the relation of one’s first-order beliefs to the way the world is which one is obligated to take into account. That is, evidence like the altitude reading, sleep-deprivation, and empirical studies of the unreliability of eyewitness testimony, provides information about whether your beliefs are reliable indicators of the truth. We take the reading of a thermometer no more seriously than we regard the instrument as reliable. Our beliefs can be viewed as readings of the world and treated the same way (Roush 2009; White 2009; Sliwa & Horowitz 2015). ### 4.1 Guess Calibration One way of formulating a constraint that says we should be no more confident than we are reliable is by requiring guess calibration (GC): If I draw the conclusion that q on the basis of evidence e, my credence in q should equal my prior expected reliability with respect to q. (White 2009; Sliwa & Horowitz 2015) Your expected reliability with respect to q is understood as the probability—chance or propensity—that your guess of q is true. 
You may not be sure what your reliability is so you will use an expected value, a weighted average of the values you think are possible, and this should be a prior probability, evaluated independently of your current belief that p. Self-doubt on this picture would be a state in which you’ve drawn conclusion q, say because your confidence in q exceeded a given threshold, but also have reason to believe your reliability with respect to q is not as high as that confidence, and such a state would be a violation of GC. Whether this self-doubting state can be coherent when a subject knows her own beliefs depends very much on how the reliability aspect is formulated. If chances and propensities that your guesses are true are determined by frequencies of ordered pairs of guesses of q and truth or falsity of q then self-doubt will make one incoherent here in the way that representing oneself as an anti-expert did above, because coherence and expected reliability will be logically equivalent.[2] In any case, GC requires matching between the orders in all cases, and tells us that the match is between your confidence and your reliability. GC makes sense of the intuition in some cases of self-doubt that the subject should drop her confidence. The pilot on looking at the altimeter reading should cease to be so sure she has enough gas for fifty more miles because it gives her reason to think she is in a state where her calculations will not reliably turn out truths. Similarly, the doctor realizing she is severely underslept acquires reason to think she is in a state where her way of coming to beliefs does not reliably lead to true conclusions. The main dissatisfaction with GC has been that it apparently cedes all authority to second-order evidence. In fact in GC’s formulation the confidence you should finally have in q does not depend on how far the evidence e supports q or how far you think e supports q, but only on what you think is your propensity for or frequency of getting it right about q, whether you used evidence or not. There may be cases where second-order evidence is worrying enough that one’s first order conclusion should be mistrusted entirely, even if it was in fact soundly made—maybe the pilot and doctor are such cases since the stakes are high. But as we saw above it doesn’t seem right across the board to take first-order evidence to count for nothing when second-order evidence is around. Here we can see that by supposing that two people, Anton and Ana, reason to different conclusions, q and not-q, on the basis of the same evidence, Anton evaluating the evidence correctly, Ana not. Suppose both are given the same undermining evidence, say that people in their conditions only get it right 60% of the time. According to GC, rationality requires both of them to become 60% confident in their conclusions. Anton, who reasoned correctly from the evidence, is no more rational than Ana, and has no right to a higher confidence in his conclusion q than Ana who reasoned badly has for her conclusion not-q (Sliwa & Horowitz 2015). ### 4.2 Evidential Calibration It seems wrong that second-order evidence should always swamp the first-order verdict completely, so the calibration idea has been re-formulated so as to incorporate dependence on first-order evidence explicitly, in the evidential calibration constraint (EC): When one’s evidence favors q over not-q, one’s credence in q should equal the [prior] expected reliability of one’s educated guess that q. 
(Sliwa & Horowitz 2015) Your educated guess corresponds to the answer that you have the highest credence in. Reliability of such a guess is defined as the probability that you would assign the highest credence to the true answer if you had to choose, and as above this probability is understood as your propensity to guess correctly. What is used in EC, as in GC, is an expected rather than actual reliability so it is weighted by how likely you think each possible reliability level is. The difference between GC and EC is that in the latter the calibration requirement depends explicitly on which conclusion the first-order evidence actually supports. On this principle, Anton, who reasoned correctly with the first-order evidence, is rational to be .6 confident in q rather than not-q because q is the conclusion the first-order evidence actually supports. The contribution of the second-order evidence is to reduce his confidence in that conclusion from a high value to .6. According to Sliwa and Horowitz EC implies that Ana is not rational to have a .6 confidence in not-q because not-q is not the conclusion the evidence actually favors. It would be rational for her to have .6 confidence that q, the same as Anton. This claim highlights ambiguities in the phrase “one’s educated guess that q”. Expected reliability is the probability that you would assign the highest credence to the true answer, and the undermining evidence both Anton and Ana were given said that in the conditions they were in they had a 60% chance of their guess being the right answer.[3] If so, then neither of them have enough information to know the expected reliability of a guess that q. If the probabilities it appeared they were given, for one’s guess about q whatever it is, are to be usable, then the phrase in EC would need to be interpreted “one’s educated guess that q or not-q”. The fact that Ana did not actually guess that q makes for a difficulty on either interpretation of the higher-order evidence and EC. Suppose the 60%-probability evidence they were given was indeed only about guesses that q, and that “educated guess that q” in EC refers only to guesses that q. If the word “one’s” in the phrase “one’s educated guess that q” refers narrowly to the individual EC is being applied to, then EC does not imply anything for Ana, since she did not guess that q. If “one’s” refers broadly to anyone who guesses that q in the conditions Anton and Ana were in, then it does follow that what is rational for Ana is to believe q with 60% confidence. However, whether they are given general evidence about guesses that q or not-q, or separate statistics on the success of q-guesses and of not-q guesses, that higher-order evidence would have given Ana no means to correct herself. Because she got it wrong in the first step by incorrectly concluding the first-order evidence supported not-q she will also lack the means to correct herself, that is, to know whether she should be 60% confident in q or in not-q. What EC says it is rational for her to do in the situation is not something she has the ability to do. It might be possible to avoid these difficulties in a reformulation, but they are consequences of the move in EC to add deference to the actual first-order evidential support relation. EC rules out many cases of bootstrapping that level-splitting views allow. 
For example, a bootstrapping doctor with evidence that he is unreliable assembles a sterling track record of success in his first-order decisions by judging the correctness of his conclusions by his confidence in those conclusions. He thinks the evidence of his unreliability that he started out with has now been outweighed, so he concludes that he is reliable after all. EC doesn’t allow this to be rational because it doesn’t allow him to assemble the track record in the first place, since he is obligated in every instance to take into account the expected (un)reliability that he has evidence for. It is unclear, however, that EC similarly rules out bootstrapping for a subject who begins with no evidence at all about her reliability. The EC reformulation of GC takes a different view of whether rationality requires us to get it right about the first-order support relation or merely to get it right by our own lights, but this question is of course not specific to the topic of the relation of first-order and second-order evidence. For example, in a probabilistic account evidential support relations are dictated completely by conditional probabilities. In a subjective Bayesian version of this picture rationality requires one to have the confidence dictated by the subjective conditional probabilities that follow from one’s confidences in other propositions. In an objective version rationality would obligate one to have a confidence that is in line with the objective conditional probabilities. There are other ways of making out subjective vs. objective views of the relevant evidential support relations, and whether we should favor one or the other depends on more general considerations that could provide independent reason to favor one or the other view in the current debate about order relations. Though this distinction is not specific to the current context it appears to have played a role in some authors’ intuitions about level-splitting above. For example, when Weatherson and Coates say that the subject ought to believe what the first-order evidence actually supports because a norm does not cease to apply just because one has evidence one hasn’t followed it, they assume that the relevant norm and evidential support relation are objective. Wedgwood’s appeal to externalism about justification also takes its bearings from what the first-order evidence actually supports rather than what to one’s own point of view it seems to support. A challenge for these approaches that achieve some additional authority for first order evidence over the second order by requiring deference to the actual evidential relation at the first order is to explain why this is an obligation at the first order but a subject need only take into account the expected reliability at the second order. ### 4.3 Calibration in Higher-Order Probability Another approach that sees rationality constraints between the two orders as based on taking evidence of one’s expected reliability into account derives the constraints top-down from general, widely held, subjective Bayesian assumptions about evidential support, and explicit representation of second-order reliability claims in higher-order objective probability (Roush 2009). 
Like approach 2 above it uses subjective conditional probability to express the match that is required between the two orders, but it avoids the consequence we saw in most of those approaches—and in the categorical approach and the other calibration approaches just discussed—that a state of self-doubt combined with knowledge of one’s beliefs is incoherent. Unlike the first two calibration approaches it gives an explanation why calibration is part of rationality; it does this by deriving the constraint from another widely accepted assumption, the Principal Principle. We can write a description of the relation of the subject’s belief in q to the way the world is—her reliability—as an objective conditional probability: Calibration Curve $$\PR(q\mid P(q)=x) = y$$ The objective probability of q given that the subject believes q to degree x is y. This is a curve, a function that allows reliability y to vary with the independent variable of confidence, x, with different variables used in order to allow for the possibility that the subject’s degree of belief tends not to match the objective probability, and that the level and direction of mismatch can vary with the level of confidence. The curve is specific to proposition q and to the subject whose probability function is P. A subject is calibrated on q, on this definition, if his calibration curve is the line $$x = y$$.[4] Calibration curves are widely studied by empirical psychologists who find that human beings’ reliability tends on average to vary systematically and uniformly with confidence, with for example high confidence tending to overconfidence, as in eyewitness testimony. Despite the averages found when subjects take tests in controlled settings, the curves also vary with sub-group, individual traits, professional skills, and particular circumstances. All manner of higher-order evidence about a subject’s belief-forming processes, methods, circumstances, track-record, and competences are relevant to estimating this function. In real life no one could get enough evidence in one lifetime to warrant certainty about an individual’s calibration curve for q in a set of circumstances, but if one is a Bayesian one can form a confidence about what a person’s calibration curve is, or what value it has for some argument x, that is proportional to the strength of one’s evidence about this, and one can have such a confidence about one’s own calibration curve. On this approach epistemic self-doubt is a state where one is confident and more or less correct that one believes q to degree x, that is, $$P(q) = x$$, but also has an uncomfortably high level of confidence, say $$≥ .5$$, that one is unreliable about q at that confidence. That is, one has confidence $$≥ .5$$ that the objective probability of q when one has x-level of confidence in q is different from x, which we would write $$P(\PR(q\mid P(q)=x) \ne x) ≥ .5$$. Let us say that the different value is y, so $$P(\PR(q\mid P(q)=x) = y) ≥ .5$$, $$y \ne x$$. Whether or not the reason for this unreliability is that one tends to mistake evidential support relations and whether one does or does not think a given evidential support relation obtains, make no general difference to this evaluation which is simply about whether one tends to get things right when doing the sort of thing one did in coming to be confident in q to level x;[5] it is about the relation between one’s confidence and the way things are. 
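As a purely schematic illustration (the numbers here are invented), a subject's calibration curve for q might take values such as $$\PR(q\mid P(q)=.9) = .75, \qquad \PR(q\mid P(q)=.5) = .5, \qquad \PR(q\mid P(q)=.2) = .3,$$ overconfident at the high end, roughly calibrated in the middle, and underconfident at the low end, in line with the patterns psychologists report; evidence about track record, circumstances, or competence would then raise or lower one's confidence that some such curve is one's own.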
On this view, a state of self-doubt involves a combination of states of the following sort: \begin{align*} P(q) & = x\\ P(P(q) = x) & = .99 \qquad\textrm{(high)}\\ P(\PR(q\mid P(q) = x) = y) & ≥ .5, \qquad y \ne x\\ \end{align*} You actually believe q to degree x, you are confident (say at .99) that you so believe, and you have an uncomfortably high level of confidence that you are not calibrated for q at x, that the objective probability of q when you are x confident of q is y. This state escapes incoherence for two reasons. One is that one’s confidence either about one’s degree of belief or one’s reliability is not 1, and unlike some conditional probability formulations of self-doubt above, the slightest uncertainty is enough to make it coherent to attribute a large discrepancy between your believed confidence and your believed reliability. This is made possible by the second factor, that (un)reliability is expressed here as an objective conditional probability, and coherence alone does not dictate how subjective and objective probabilities must relate. This is analogous to the reason that the approach via the maximally rational subject above was able to represent a state of self-doubt as coherent, namely, that in evaluating my own P I compare it to a different probability function. However, in this case the second function is not an expert function that declares unconditionally what value the maximally rational subject’s value for q would be, but a calibration function, a conditional probability that tells one what objective probability is indicated by one’s subjective probability. One difference between the two approaches is that there are obvious ways to investigate calibration curves empirically, whereas it would be hard to recruit enough maximally rational subjects for a statistically significant study so we tend to be left appealing to intuitions about what seems rational. Once the defeating information about the relation of a subject’s credences to the world is expressed in objective probability it can be represented explicitly as a consideration the subject takes on board in assessing the quality of the degree of belief she takes herself to have in q and resolving the question what her degree of belief should be, thus: $P(q\mid P(q)=x \amp \PR(q\mid P(q)=x)=y) = ?$ This asks for the degree of belief the subject should have in q on condition that she actually has degree of belief x in q and the objective probability of q given that she has degree of belief x in q is y. This expression is the left-hand side of Self-Respect/Synchronic Reflection with a further conjunct added to its condition. SR doesn’t specify what to do when there is another conjunct and so is not suited to explicitly represent the question of self-doubt, which means that the self-doubting examples above are not counterexamples to it (Roush 2009). However, some in the past have endorsed variants on an unrestricted version of SR (Koons 1992; Gaifman 1988) where the value of this expression is x regardless of what other conjunct might be present: Unrestricted Self-Respect (USR)[6] $$P(q\mid P(q)=x \amp r) = x$$, for r any proposition Dutch book arguments that might give support to SR do not do the same for USR, leaving us with a need to find other ways of evaluating it when r is the statement of a calibration curve. 
It is not incoherent but it is baldly counterintuitive to suppose that the subject should have degree of belief x when she believes that her so believing is an indicator that the objective probability of q is not x, and a principled argument can also be made to this effect (Roush 2009). Unpacking the condition $$P(q)=x \amp \PR(q\mid P(q)=x) = y$$, it seems to say that my credence is x and when my credence is x the objective probability is y, inviting us to discharge and infer that the objective probability is y. If so,[7] then the expression would reduce to: $P(q\mid \PR(q)=y) = ?$ which is the left-hand side of a generalization of the Principal Principle (see entry on David Lewis) Principal Principle (PP)[8] $$P(q\mid Ch(q)=y) = y$$ from chance to any type of objective probability. PP says that your credences in propositions should conform to what you take to be their chances of being true, and, admissibility issues notwithstanding, it is hard to deny that there exists a domain in which the Principal Principle is compelling, and surely one where the generalization to any type of objective probability is too. If so then the answer to the question what the subject’s credence in q ought to be in light of her consideration of information about her reliability is: Cal $$P(q\mid (P(q)=x \amp \PR(q\mid P(q)=x)=y)) = y$$ Cal says that your credence in q given that your credence in q is x and the objective probability of q given that your credence in q is x is y, should be y. Cal is a synchronic constraint, but if we revise our credences by conditionalization then it implies a diachronic constraint: Re-Cal $$P_{n+1}(q) = P_{n}(q\mid (P_{n}(q)=x \amp \PR(q\mid P_{n}(q)=x)=y)) = y$$ This calibration approach tells the subject how to respond to information about her cognitive impairment in every case. It uses the information about herself to correct her belief about the world. Intuitively it is a graded generalization of the thought that if you knew of someone (or yourself) that he invariably had false beliefs, then you could gain a true belief by negating everything he said. Cal and Re-Cal give an explicit characterization of self-doubt and justification of a unique and determinate response to it on the basis of deeper principles that are compelling independently of the current context. Cal follows from only two assumptions, first that probabilistic coherence is a requirement of rationality, and second that rationality requires one’s credences to align with what according to one’s evidence are the objective probabilities. Re-Cal comes from further assuming that updating our beliefs should occur by conditionalization. Although self-doubt under the current definition of it is not an incoherent state, Cal implies that rationality always requires a resolution of the doubt that brings matching between the two levels, and tells us that the matching consists in the alignment of subjective and perceived objective probabilities. High confidences in “q”, “I have confidence x in q”, and “the objective probability of q when I have confidence x in q is low” are not incoherent, but they do violate the Principal Principle. Re-Cal tells us how to get back in line with PP. Though Re-Cal has us conditioning on second-order evidence, the adjustment it recommends depends on both first- and second-order evidence and does not always favor one level or the other. How much authority the second-order claim about the reliability/calibration curve has depends very much on the quality of one’s evidence about it. 
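For a schematic numerical illustration (again with invented values): suppose a subject is certain that $$P_{n}(q) = .9$$ and certain that her calibration curve gives $$\PR(q\mid P_{n}(q)=.9) = .7$$. Re-Cal then directs her to adopt $$P_{n+1}(q) = .7$$; had her evidence instead pointed to underconfidence, say $$\PR(q\mid P_{n}(q)=.9) = .95$$, Re-Cal would direct her to raise her credence to .95. Both cases assume the curve is known with certainty, and the way the verdict depends on the quality of one's second-order evidence emerges once that certainty is relaxed.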
This can be seen by imagining being uncertain about, e.g., one’s calibration curve, i.e., $$P(\PR(q\mid P(q) = x) = y) < 1$$, and doing a Jeffrey conditionalization version of Re-Cal (Roush 2017, Other Internet Resources). But even in case one has perfect knowledge of one’s calibration curve, the role of the first-order evidence in determining one’s first-order belief is ineliminable. The verdict, the level of confidence, that the first order gave you for q is the index for determining which point on the calibration curve is relevant to potentially correcting your degree of belief. To understand why this is far from trivial, recall that the curve can in principle and does often in fact have different magnitudes and directions of distortion at different confidences. The verdict’s dependence on the first-order evidential support relation is different from that of EC in another way, since it uses not the objective support relation at the first order but the consequences of the subject’s take on it. Thus, Ana above would not be left not knowing how to make herself rational. The fact that the update proceeds by conditionalization means that all of the kinds of evaluation of evidence that conditionalization imposes come along with that. Misleading self-doubting defeaters troubled some authors above and led them to level-splitting views, but they are handled by Re-Cal as conditionalization always handles them. Self-doubting defeaters are processed at face value as relevant to the calibration curve in proportion to their quality as evidence. Convergence theorems tell us that if the world isn’t systematically deceptive then the misleading defeaters will be washed out, that is, defeated by some other evidence, in the long run. In some cases that will happen only long after we’re all dead, but if one views that as inadequate then that is a dissatisfaction with subjective Bayesianism, and is not specific to its usage here. The approach to epistemic self-doubt in terms of higher-order probability allows the state of self-doubt to be rational (coherent), and to be rationally resolved. Cal expresses a requirement of matching between the two orders in all cases, though it does not imply that attributing a mismatch to oneself is incoherent. Neither order is always dominant; both orders always make a contribution to determining the resolution at the first order of conflicts between orders, and their relative contribution depends on the quality of the evidence at each order. Cal and Re-Cal explain why one should revise in light of higher-order evidence, when one should, by reference only to probabilistic coherence, the Principal Principle, and conditionalization. Cal and Re-Cal are general and make available all of the resources of the Bayesian framework for analysis of higher-order evidence. A further notable fact about the framework is that Re-Cal allows for cases where news about one’s reliability should increase one’s confidence, which would be appropriate for example in cases, easy to imagine, where one acquired evidence that one was systematically underconfident. Thus, it is possible for second-order evidence to make it rational to be not only steadfast or conciliatory, but even emboldened. ## Bibliography • Adler, Jonathan E., 1990, “Conservatism and Tacit Confirmation”, Mind, 99(396): 559–70. doi:10.1093/mind/XCIX.396.559 • Alston, William P., 1980, “Level Confusions in Epistemology”, Midwest Studies in Philosophy, 5: 135–150. 
doi:10.1111/j.1475-4975.1980.tb00401.x • Bergmann, Michael, 2005, “Defeaters and Higher-Level Requirements”, The Philosophical Quarterly, 55(220): 419–436. doi:10.1111/j.0031-8094.2005.00408.x • Briggs, Rachael, 2009, “Distorted Reflection”, Philosophical Review, 118(1): 59–85. doi:10.1215/00318108-2008-029 • Christensen, David, 1991, “Clever Bookies and Coherent Beliefs”, Philosophical Review, 100(2): 229–47. doi:10.2307/2185301 • –––, 1994, “Conservatism in Epistemology”, Noûs, 28(1): 69–89. doi:10.2307/2215920 • –––, 2000, “Diachronic Coherence versus Epistemic Impartiality”, Philosophical Review, 109(3): 349–71. doi:10.2307/2693694 • –––, 2007a, “Does Murphy’s Law Apply in Epistemology? Self-Doubts and Rational Ideals”, Oxford Studies in Epistemology, 2: 3–31. • –––, 2007b, “Epistemic Self-Respect”, Proceedings of the Aristotelian Society, 107(1[3]): 319–337. doi:10.1111/j.1467-9264.2007.00224.x • –––, 2010a, “Higher-Order Evidence”, Philosophy and Phenomenological Research, 81(1): 185–215. doi:10.1111/j.1933-1592.2010.00366.x • –––, 2010b, “Rational Reflection”, Philosophical Perspectives, 24: 121–140. doi:10.1111/j.1520-8583.2010.00187.x • –––, 2011, “Disagreement, Question-begging, and Epistemic Self-Criticism”, Philosopher’s Imprint, 11(6): 1–22. [Christensen 2011 available online] • Christensen, David and Jennifer Lackey (eds.), 2013, The Epistemology of Disagreement: New Essays, Oxford: Oxford University Press. doi:10.1093/acprof:oso/9780199698370.001.0001 • Coates, Allen, 2012, “Rational Epistemic Akrasia”, American Philosophical Quarterly, 49(2): 113–124. • Cohen, Stewart, 2002, “Basic Knowledge and the Problem of Easy Knowledge”. Philosophy and Phenomenological Research, 65(2): 309–39. doi:10.1111/j.1933-1592.2002.tb00204.x • Conee, Earl, 1987, “Evident, but Rationally Unacceptable”, Australasian Journal of Philosophy, 65(3): 316–326. doi:10.1080/00048408712342971 • Dawid, A.P., 1982, “The Well-Calibrated Bayesian”, Journal of the American Statistical Association, 77(379): 605–610. • Egan, Andy and Adam Elga, 2005, “I Can’t Believe I’m Stupid”, Philosophical Perspectives, 19: 77–93. doi:10.1111/j.1520-8583.2005.00054.x • Elga, Adam, 2007, “Reflection and Disagreement”, Noûs, 41(3): 478–502. doi:10.1111/j.1468-0068.2007.00656.x • –––, 2013 “The Puzzle of the Unmarked Clock and the New Rational Reflection Principle”, Philosophical Studies, 164(1): 127–139. doi:10.1007/s11098-013-0091-0 • Evnine, Simon J., 2008, Epistemic Dimensions of Personhood, Oxford: Oxford University Press. doi:10.1093/acprof:oso/9780199239948.001.0001 • Feldman, Richard, 2005 “Respecting the Evidence”, Philosophical Perspectives, 19: 95–119. doi:10.1111/j.1520-8583.2005.00055.x • Foley, Richard, 1982, “Epistemic Conservatism”, Philosophical Studies, 43(2): 165–82. doi:10.1007/BF00372381 • Gaifman, Haim, 1988, “A Theory of Higher-Order Probabilities”, in Causation, Chance and Credence: Proceedings of the Irvine Conference on Probability and Causation, July 15-19, 1985, vol. 1, (The University of Western Ontario Series in Philosophy of Science, 41), Brian Skyrms and William L. Harper (eds.), Dordrecht: Springer Netherlands, 191–219. doi:10.1007/978-94-009-2863-3_11 • Gibbons, John, 2006, “Access Externalism”, Mind, 115(457): 19–39. doi:10.1093/mind/fzl019 • Harman, Gilbert, 1973, “Evidence one does not Possess”, Ch. 9 of Thought, Princeton: Princeton University Press. • Horowitz, Sophie, 2014, “Epistemic Akrasia”, Noûs, 48(4): 718–744. 
doi:10.1111/nous.12026 • Kelly, Thomas, 2005, “The Epistemic Significance of Disagreement”, Oxford Studies in Epistemology, 1: 167–196. • –––, 2010, “Peer Disagreement and Higher-Order Evidence”, in Richard Feldman and Ted A. Warfield (eds.), Disagreement. New York: Oxford University Press, 111–174. doi:10.1093/acprof:oso/9780199226078.003.0007 • Koons, Robert C., 1992, Paradoxes of Belief and Strategic Rationality. Cambridge: Cambridge University Press. doi:10.1017/CBO9780511625381 • Lasonen-Aarnio, Maria, 2014, “Higher-Order Evidence and the Limits of Defeat”, Philosophy and Phenomenological Research, 88(2): 314–345. doi:10.1111/phpr.12090 • –––, 2015, “New Rational Reflection and Internalism about Rationality”, Oxford Studies in Epistemology, 5: 145–179. doi:10.1093/acprof:oso/9780198722762.003.0005 • Lewis, David, 1980, “A Subjectivist’s Guide to Objective Chance”, in Richard C. Jeffrey (ed.), Studies in Inductive Logic and Probability, University of California Press, pp. 83–132. Reprinted in his Philosophical Papers, Vol. 2, New York: Oxford University Press, 1986: 84–132. • Lycan, William G., 1977, “Evidence one does not Possess”, Australasian Journal of Philosophy, 55(2): 114–126. doi:10.1080/00048407712341141 • Owens, David, 2002, “Epistemic Akrasia”, The Monist, 85(3): 381–397. • Pollock, John L., 1989, Contemporary Theories of Knowledge. New York: Rowman and Littlefield. • Richter, Reed, 1990, “Ideal Rationality and Hand-Waving”, Australasian Journal of Philosophy, 68(2): 147–156. doi:10.1080/00048409012344171 • Roush, Sherrilyn, 2009, “Second-Guessing: A Self-Help Manual”, Episteme, 6(3): 251–68. doi:10.3366/E1742360009000690 • –––, 2016, “Knowledge of Our Own Beliefs”, Philosophy and Phenomenological Research, first online 4 February 2016. doi:10.1111/phpr.12274 • Roush, Sherrilyn, Kelty Allen, and Ian Herbert, 2012, “Skepticism about Reasoning”, in New Waves in Philosophical Logic, Greg Restall and Jillian Russell (eds.), Hampshire, England: Palgrave MacMillan, 112–141. • Seidenfeld, Teddy, 1985, “Calibration, Coherence, and Scoring Rules”, Philosophy of Science, 52(2): 274–294. doi:10.1086/289244 • Sklar, Lawrence, 1975, “Methodological Conservatism”, Philosophical Review, 84(3): 374–400. doi:10.2307/2184118 • Skyrms, Brian, 1980, “Higher Order Degrees of Belief”, in Prospects for Pragmatism: Essays in Memory of F.P. Ramsey, D.H. Mellor (ed.), Cambridge: Cambridge University Press, pp. 109–137. • Sliwa, Paulina and Sophie Horowitz, 2015, “Respecting All the Evidence”, Philosophical Studies, 172(11): 2835–2858. doi:10.1007/s11098-015-0446-9 • Sobel, Jordan Howard, 1987, “Self-Doubts and Dutch Strategies”, Australasian Journal of Philosophy, 65(1): 56–81. doi:10.1080/00048408712342771 • Shoemaker, Sydney, 1994, “Moore’s paradox and Self-Knowledge”, Philosophical Studies, 77(2–3): 211–28. doi:10.1007/BF00989570 • Sorensen, Roy A. 1987, “Anti-Expertise, Instability, and Rational Choice”, Australian Journal of Philosophy, 65(3): 301–315. doi:10.1080/00048408712342961 • –––, 1988, Blindspots. New York: Oxford University Press. • Talbott, W.J., 1991, “Two Principles of Bayesian Epistemology”, Philosophical Studies, 62(2): 135–50. doi:10.1007/BF00419049 • van Fraassen, Bas C., 1983, “Calibration: A Frequency Justification for Personal Probability”, Physics, Philosophy and Psychoanalysis: Essays in Honour of Adolf Grünbaum, (Boston Studies in the Philosophy of Science, 76), R.S. Cohen and L. Laudan (eds.), Dordrecht: Springer Netherlands, 295–320. 
doi:10.1007/978-94-009-7055-7_15 • –––, 1984, “Belief and the Will”, Journal of Philosophy, 81(5): 235–56. doi:10.2307/2026388 • –––, 1995, “Belief and the Problem of Ulysses and the Sirens”, Philosophical Studies, 77(1): 7–37. doi:10.1007/BF00996309 • Vickers, John M., 2000 “I Believe It, But Soon I’ll Not Believe It Any More: Scepticism, Empiricism, and Reflection”, Synthese, 124(2): 155–74. doi:10.1023/A:1005213608394 • Vogel, Jonathan, 2000, “Reliabilism Leveled”, Journal of Philosophy, 97(11): 602–623. doi:10.2307/2678454 • Vranas, Peter B.M., 2004, “Have your cake and eat it too: The Old Principal Principle Reconciled with the New”, Philosophy and Phenomenological Research, 69(2): 368–382. doi:10.1111/j.1933-1592.2004.tb00399.x • Weatherson, Brian, 2008 “Deontology and Descartes’ Demon”, Journal of Philosophy, 105(9): 540–569. doi:10.5840/jphil2008105932 • Wedgwood, Ralph, 2011, “Justified Inference”, Synthese, 189(2): 1–23. doi:10.1007/s11229-011-0012-8 • White, Roger, 2009, “On treating oneself and others as thermometers”, Episteme, 6(3): 233–250. doi:10.3366/E1742360009000689 • Williamson, Timothy, 2011, “Improbable Knowing”, in Evidentialism and Its Discontents, Trent Doherty (ed.), Oxford: Oxford University Press, pp. 147–164. doi:10.1093/acprof:oso/9780199563500.003.0010
# How to make the best dosa? And how to beat homesickness with food

In early 2019, I quit my job at NITI Aayog and moved to Tripura. Although we got married a year earlier, Chandni and I hadn't stayed together yet. This was our first attempt at recreating what we saw our parents were able to do - stay together under one roof, cook food, and go about our normal lives. This was important for us because we were staying more than 4000 kilometres away from our homes, and anything familiar to hold on to would make it easier on us. A big part of that is our own food. Dosa is definitely at the top of that list.

### How to make bad dosa?

1. Making dosa with the instant dosa powder available in the market is a sure-shot way to make some of the worst dosa you'll ever make. Don't do it. We did it for far too long because we didn't commit ourselves to making the dosa batter ourselves.
2. The second mistake that compounded our troubles was using a non-stick pan for making dosa. It's not worth the effort.

The recipe for bad dosa is instant dosa powder and non-stick cookware. Avoid at all costs.

### How to make good dosa?

1. Make the batter yourself
2. Invest in a good cast iron dosa tawa

On this, I'll entertain no debate. Nothing beats a cast iron tawa in the quality of dosa it produces. Now let's get into the details of each step.

#### How to make the best dosa batter?

The ingredients are easy to get almost anywhere in India - rice, urad dal, fenugreek seeds, and flattened rice. That's it.

• Rice: If you are in Kerala, no need to worry at all. Get doppi rice. However, if you are outside Kerala, you might need a few rounds of trial and error to figure out which variety works well. Usually, boiled rice works well. Ask the storekeeper if it's a sticky variety. If yes, then ask for a less sticky variety. To balance the stickiness, you can add a part of raw rice.
• Urad Dal: Again, if you are in Kerala, no need to worry at all. Get the usual uzhunnu. If you are outside Kerala, try to get whole urad dal and not split urad dal.

I've tried two different ratios - by weight and by volume. For me, measuring by volume has given the best results. (Here 1 cup is 200 ml.) The measurements given below are sufficient for approximately 25-30 medium-sized dosas.

• Rice - 2 cups
• Dal - $\frac{1}{2}$ cup
• Fenugreek seeds - 1 tablespoon
• Flattened rice - $\frac{1}{4}$ cup

Steps

1. Wash all ingredients very well.
2. Soak them for at least 6 hours.
3. Grind all ingredients together till a smooth, thick paste is formed.
4. Keep it aside for 8-10 hours to allow fermentation.
5. Before making dosa, add water as needed. (Okay, I give up. It's very difficult for me to put into words what consistency the batter should be! 😉)

Tips

• Don't add salt until after fermentation
• If you want the dosa to have a slight golden colour, add a bit of jaggery mixed with water

#### Cast Iron Pan

One of the most important ingredients of successful dosa is a good quality cast iron dosa tawa. For a long time I thought the tawa was made of stone because in Malayalam we call it dosa kallu or dosa stone. Until recently, Chandni and I continued to make dosa on non-stick tawas with below-average success. We finally decided to take the plunge and purchase one. Due to lockdown restrictions, it took more than 45 days for it to arrive here in Kanchanpur, Tripura. Once you get it, the first thing you have to do is to season it.

How to season a cast iron tawa

1. Wash off rust and dirt completely. This will take several attempts. Use a mixture of water, vinegar and soap to quicken the process. Make sure rust and dirt are completely gone.
2. Wipe it with a dry cloth and allow it to dry completely.
3. Apply oil (preferably gingelly oil, or any vegetable oil; don't use coconut oil) on the usable side of the tawa. Rest it for 15 minutes.
4. Put it on a medium flame. When it's reasonably hot, scrub the tawa with a sliced onion. You can see goo being formed. Switch off the gas.
5. Allow the tawa to cool down. Wash it properly.
6. Repeat Steps 2, 3, 4, and 5 at least 4 times.
7. Apply oil again and keep it overnight. It'll be ready for use in the morning.

Congrats! Your cast iron tawa is now seasoned well.

### Making Dosa

So now you have the best dosa batter and a cast iron pan. All you have to do is make crispy dosa.

1. Put the cast iron tawa on a medium flame
2. Apply gingelly oil (or any vegetable oil) and spread it with a sliced onion or a clean tissue
3. Sprinkle water to cool down the pan (this allows us to spread the batter evenly)
4. Pour a ladle of batter on the middle of the pan and carefully spread it in a circular motion
5. Allow the batter to partially cook
6. Pour a few drops of oil on the dosa (to make it crispier)
7. Once the dosa turns golden brown and crispy, flip it with a spatula to allow the other side to cook
8. Take it off the tawa and enjoy your dosa!

#### Bring your home closer with food

This is especially true for people who live far away from their hometowns. The sure-shot way to forget homesickness is to cook the food that you are familiar with. Since moving to Tripura, Chandni and I have been slowly upping our cooking game. Full disclosure: she does 70% of the cooking. I do 30% (mostly breakfast items, which are my specialty) and help out wherever I can. We are now confident that we can make most Kerala dishes here in Tripura. A part of Kerala lives inside us through our food. 😄

##### Arun Sudarsan
###### Economist and Policy Researcher

Arun is an Economist, passionate about Open Data and its potential to increase state transparency and accountability. Loves teaching. Previously worked at NITI Aayog.
# matplotlib set default font

January 18, 2021

Matplotlib is a library in Python; it is a numerical and mathematical extension for the NumPy library. It is an amazing visualization library in Python for 2D plots of arrays, used for working with the broader SciPy stack, and it has a module named pyplot which makes plotting easy.

How do you change the font style of text in matplotlib? It sounds like an easy problem, but it is not easy to find an effective solution that changes the font (not just the font size) in a plot made with matplotlib in Python. I found a couple of tutorials to change the default font of matplotlib by modifying some files in the folders where matplotlib stores its default fonts – see this blog post – but I am looking for a less radical solution, since I would like to use more than one font in my plot (text, label, axis label, etc.).

I have been trying to change the default font to Arial. I wanted to change it permanently, so I edited the matplotlibrc file, which holds all the default settings; if you want to change the font size you can edit the same file. However, that didn't work, so I changed the font in the plotting file instead:

#-*- coding: UTF-8 -*-
from matplotlib import rcParams
rcParams['font.family'] = 'sans-serif'
rcParams['font.sans-serif'] = ['Arial']

First, we set the font family for text; then we continue to set the font for the sans-serif family, and then we will create a plot using matplotlib. You can also specify a default font for everything on your graphs: to set the default matplotlib fonts through the rc parameters in a way that is portable to computers that might not have the same set of fonts available, prepend the font name to 'font.family' (or to the desired alias list), so that font.family holds a list of font styles to try to find, in order:

rcParams['font.sans-serif'] = ['Tahoma', 'DejaVu Sans', 'Lucida Grande', 'Verdana']

or, for example:

rcParams['font.sans-serif'] = ['Arial', 'Liberation Sans', 'Bitstream Vera Sans']

How to set Helvetica as the default sans-serif font in Matplotlib (November 15, 2012, matplotlib helvetica tutorial): if you're a typography junkie like me, you're probably sick of seeing Bitstream Vera Sans as the typesetting font for Python's plotting library, Matplotlib. The Helvetica font does not come included with Windows, so to use it you must download it as a .ttf file. Then you can refer matplotlib to it (replace "crm10.ttf" with your file); print(fpath) will show you where you should put the .ttf (see https://matplotlib.org/gallery/api/font_file.html). Note: although you can do this, unless you're practicing to make a house style I recommend specifying single-use fonts (the above section) instead of … Do you have any of the fonts in plt.rcParams['font.cursive']? I got a reasonable font for the "monospace", "fantasy", "sans-serif" and "serif" families, but "cursive" looked exactly the same as "sans-serif", which is the default font.family value. It's pretty easy to find someone online giving you a list of all of the fonts available in matplotlib, but they're always really ugly boring lists; listing them with samples gives you a list plus a sample of each font.

matplotlib.font_manager.FontProperties(family=None, style=None, variant=None, weight=None, stretch=None, size=None, fname=None) is a class for storing and manipulating font properties, and set_default_weight(self, weight) sets the default font weight; the initial value is 'normal'.

The font set used for normal text is necessarily different from the one for MathText, so it makes sense to set both types individually with the respective rcParams. Math text in matplotlib is rendered smaller than the regular text and, as an example, Comic Sans MS does not have an integral symbol, so it doesn't render right, fair enough. The font of the axis does change just by specifying 'font.sans-serif': "Arial"; trying to fix the rest by setting mathtext would not be expected to have much effect. A related warning, "Font 'default' does not have a glyph for '-' [U+2212], substituting with a dummy symbol", should be fixed in 2.0.1, but I've included the workaround in the 2nd part of the answer. Update: see the bottom of the answer for a slightly better way of doing it. Update #2: I've figured out changing legend title fonts too. Update #3: there is a bug in Matplotlib 2.0.0 that's causing tick labels for logarithmic axes to revert to the default font.

How to change font sizes on a Matplotlib plot: often you may want to change the font sizes of various elements on a matplotlib plot, for example Comic Sans for the title and Helvetica for the x label. Fortunately this is easy to do: plt.rc('font', size=10) controls the default text size and plt.rc('axes', titlesize=10) the fontsize of the title. If you are a control freak like me, you may want to explicitly set all your font sizes:

import matplotlib.pyplot as plt
SMALL_SIZE = 8
MEDIUM_SIZE = 10
BIGGER_SIZE = 12
plt.rc('font', size=SMALL_SIZE)          # controls default text sizes
plt.rc('axes', titlesize=SMALL_SIZE)     # fontsize of the axes title
plt.rc('axes', labelsize=MEDIUM_SIZE)    # fontsize of the x and y labels
plt.rc('xtick', labelsize=SMALL_SIZE)    # fontsize of the tick labels
plt.rc('ytick', labelsize=SMALL_SIZE)    # fontsize of the tick labels

This will affect every single plot you make. In this article, we are also going to change the legend font size in Matplotlib; the legend() method in matplotlib describes the elements in the plot.

plt.xticks(fontsize=16) sets the Matplotlib tick label font size; plt.xticks gets or sets the properties of tick locations and labels of the x-axis:

from matplotlib import pyplot as plt
xvalues = range(10)
yvalues = xvalues
fig, ax = plt.subplots()
plt.plot(xvalues, yvalues)
plt.xticks(fontsize=16)
plt.grid(True)
plt.show()

Ticks are the markers denoting data points on axes, and the position and labels of ticks are often explicitly mentioned to suit specific requirements. Matplotlib's default tick locators and formatters are designed to be generally sufficient in many common situations; while it is impossible to select the best default for all cases, they are designed to work well in the most common cases.

Setting labels for the axes is a way of implementing text in the matplotlib figures plotted. The label of the x-axis can be set to 'time [s]' by passing the text as an argument to the set_xlabel() method; similarly, the y-axis can be set to 'Damped oscillation [V]' by passing it directly to the set_ylabel method. You can also use mathtext in labels, e.g. ax.set_xlabel(r'Variable $\alpha$'). Bold text can be specified with .text or .annotate; use the weight or fontweight parameter. matplotlib.pyplot.annotate uses the same kwargs as .text. The fontdict argument (dictionary, optional, default: None) is a dictionary to override the default text properties; if fontdict is None, the defaults are determined by your rc parameters. The position to place the text is, by default, in data coordinates; the coordinate system can be changed using the transform parameter. Adding annotations / text also works in seaborn with the same methods.

For changing the height and width of a plot, set_figheight() and set_figwidth() are used; there are various ways to change the default plot size as per our required dimensions or to resize a given plot.

The most important changes in matplotlib 2.0 are the changes to the default style. A 'classic' style sheet is provided, so reverting to the 1.x default values is a single line of python.

The first way to use Chinese is to give a valid font name to Matplotlib. Before going on to the next step, make sure that there are Chinese fonts on your system. If that is not the case, or if you want to try a new Chinese font, consider for example Source Han Serif, recently released by Google and Adobe.
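Putting the pieces above together, here is a small self-contained sketch (the font names and the .ttf path are placeholders; substitute whatever is installed on your system) of the two approaches: a global default set through rcParams, and a single-use font loaded from a .ttf file with font_manager.FontProperties:

```python
import matplotlib.pyplot as plt
from matplotlib import font_manager, rcParams

# Global default: use the sans-serif family, with a list of fonts to try in order.
rcParams['font.family'] = 'sans-serif'
rcParams['font.sans-serif'] = ['Arial', 'Tahoma', 'DejaVu Sans', 'Lucida Grande', 'Verdana']

# Single-use font loaded from a .ttf file (replace the path with your own file,
# e.g. the downloaded Helvetica or the crm10.ttf mentioned above).
prop = font_manager.FontProperties(fname='/path/to/crm10.ttf')

fig, ax = plt.subplots()
ax.plot([1, 2, 3], label='test')
ax.set_title('Title set in the single-use font', fontproperties=prop)
ax.legend()
plt.show()
```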
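And a companion sketch of the font-size controls discussed above (the sizes and labels are arbitrary illustration values; the rc machinery, the prop keyword of legend(), and plt.xticks(fontsize=...) are standard matplotlib):

```python
import matplotlib.pyplot as plt

# rc machinery: set the sizes of the different kinds of text up front.
plt.rc('font', size=10)        # controls default text size
plt.rc('axes', titlesize=12)   # fontsize of the axes title
plt.rc('axes', labelsize=10)   # fontsize of the x and y labels
plt.rc('xtick', labelsize=8)   # fontsize of the x tick labels
plt.rc('ytick', labelsize=8)   # fontsize of the y tick labels

fig, ax = plt.subplots()
ax.plot(range(10), range(10), label='test')
ax.set_xlabel('time [s]')
ax.set_ylabel('Damped oscillation [V]')
ax.legend(prop={'size': 16})   # legend font size via the prop keyword
plt.xticks(fontsize=16)        # overrides the x tick label size on the current axes
plt.show()
```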
# Signal Convolution Logic

We introduce a new logic called Signal Convolution Logic (SCL) that combines temporal logic with convolutional filters from digital signal processing. SCL enables reasoning about the percentage of time a formula is satisfied in a bounded interval. We demonstrate that this new logic is a suitable formalism to effectively express non-functional requirements in Cyber-Physical Systems displaying noisy and irregular behaviours. We define both a qualitative and a quantitative semantics for it, providing an efficient monitoring procedure. Finally, we show SCL at work to monitor the artificial pancreas controllers that are employed to automate the delivery of insulin for patients with type-1 diabetes.

## 1 Introduction

Cyber-Physical Systems (CPS) are engineering, physical and biological systems tightly integrated with networked computational embedded systems monitoring and controlling the physical substratum. The behaviour of CPS is generally modelled as a hybrid system where the flow of continuous variables (representing the state of the physical components) is interleaved with the occurrence of discrete events (representing the switching from one mode to another, where each mode may model a different continuous dynamics). The noise generated by sensors measuring the data plays an important role in mode switching, and it can be captured using a stochastic extension of hybrid systems. Exhaustive verification of these systems is in general undecidable. The available tools for reachability analysis are based on over-approximation of the possible trajectories, and the final reachable set of states may turn out to be too coarse (especially for nonlinear dynamics) to be meaningful. A more practical approach is to simulate the system and to monitor both the evolution of the continuous and discrete state variables with respect to a formal requirement that specifies the expected temporal behaviour (see [4] for a comprehensive survey).
Temporal logics such as Metric Interval Temporal Logic (MITL) [13] and its signal variant, Signal Temporal Logic (STL) [7], are powerful formalisms suitable for specifying complex temporal properties in a concise way. In particular, STL enables reasoning about real-time properties of components that exhibit both discrete and continuous dynamics. The Boolean semantics of STL decides whether a signal is correct or not w.r.t. a given specification. However, since a CPS model only approximates the real system, the Boolean semantics is not always suitable for reasoning about its behaviour, because it is not tolerant to approximation errors or to uncertainty. More recently, several notions of quantitative semantics (also called robustness) [7, 9, 14] have been introduced to overcome this limitation. These semantics enrich the expressiveness of the Boolean semantics, passing from a Boolean concept of satisfaction (yes/no) to a (continuous) degree of satisfaction. This allows us to quantify "how much" (w.r.t. a given notion of distance) a specific trajectory of the simulated system satisfies a given requirement. A typical example is the notion of robustness introduced by Fainekos et al. in [9], where the binary satisfaction relation is replaced with a quantitative robustness degree function. The positive or negative sign of the robustness value indicates whether the formula is respectively satisfied or violated. This notion of quantitative semantics is typically exploited in falsification analysis [8, 16, 4, 1] to systematically generate counterexamples, for example by searching for the sequence of inputs that minimises the robustness, driving the system towards a violation of the requirement. On the other hand, the maximisation of the robustness can be employed to tune the parameters of the system [3, 2, 4, 6] to obtain better resilience. A more thorough discussion of other quantitative semantics is provided in Section 2.

Motivating Challenges. Although STL is a powerful specification language, it does not come without limitations. An important class of properties that STL cannot express are the non-functional requirements related to the percentage of time certain events happen. The globally and eventually operators of STL can only check whether a condition is true for all time instants or in at least one time instant, respectively. There are many real situations where these conditions are too strict and where it would be useful to describe a property that sits in the middle between eventually and always. Consider for instance a medical CPS, e.g., a device measuring the glucose level in the blood to release insulin in diabetic patients. In this scenario, we need to check whether the glucose level is above (or below) a given threshold for a certain amount of time, to detect critical settings. Short periods under hyperglycemia (high level of glucose) are not dangerous for the patient. An unhealthy scenario is when the patient remains under hyperglycemia for more than 3 hours during the day, i.e., for 12.5% of the 24 hours (see Fig. 1, left). This property cannot be specified in STL. A second issue is that such measurements are often noisy, and measurement errors or short random fluctuations due to environmental factors can easily violate (or induce the satisfaction of) a property. One way to approach this problem is to filter the signal to reduce the impact of noise. This, however, requires a signal pre-processing phase, which may alter the signal and introduce spurious behaviours.
Another possibility is instead to ask that the property is true for at least 95% of the operating time, rather than for 100% of it; this requirement can be seen as a relaxed globally condition (see Fig. 1, right). Finally, there are situations in which the relevance of events may change if they happen at different instants in a time window. For instance, while measuring the glucose level in blood, it is more dangerous if the glucose level is high just before a meal; that is, "the risk becomes greater as we move away from the previous meal and approach the next meal". To capture this, one could give different weights to the satisfaction of a formula at the end or in the middle of a time interval, i.e., consider an inhomogeneous temporal satisfaction of the formula. This, too, is not possible in STL.

Contributions. In this paper, we introduce a new logic based on a new temporal operator, $\langle k_T, p\rangle$, that we call the convolution operator, which overcomes these limitations. It depends on a (possibly non-linear) kernel function $k_T$, and requires that the convolution between the kernel and the signal (i.e., the Boolean satisfaction signal of the argument formula) is above a given threshold $p$. This operator allows us to specify queries about the fraction of time a certain property is satisfied, possibly weighting the satisfaction unevenly within a given time interval $T$, e.g., allowing us to distinguish traces that satisfy a property in specific parts of $T$. We provide a Boolean semantics, and then define a quantitative semantics, proving its soundness and correctness with respect to the former. Similarly to STL, our definition of quantitative semantics permits quantifying the maximum allowed uniform translation of the signals that preserves the truth value of the formula. We also show that SCL is strictly more expressive than the fragment of STL that considers only the eventually and globally operators, and we then provide monitoring algorithms for both semantics. Finally, we show SCL at work to monitor the behaviour of an artificial pancreas device releasing insulin in patients affected by type-1 diabetes.

Paper structure. The rest of the paper is organized as follows. In Section 2 we discuss the related work. Section 3 provides the necessary preliminaries. Section 4 presents the syntax and the semantics of SCL and discusses its expressiveness. In Section 5, we describe our monitoring algorithm, and in Section 6 we show an application of SCL for monitoring an insulin-releasing device in diabetic patients. Finally, we draw final remarks in Section 7.

2 Related Work

The first quantitative semantics, introduced by Fainekos et al. [9] and then used by Donzé et al. [7] for STL, is based on the notion of spatial robustness. Their approach replaces the binary satisfaction relation with a function returning a real value representing the distance from the unsatisfiability set in terms of the uniform norm. In [7] the authors also consider the displacement of a signal in the time domain (temporal robustness). These semantics, being related to the uniform norm, are very sensitive to glitches (i.e., sporadic peaks in the signals due to measurement errors). To overcome this limitation, Rodionova et al. [14] proposed a quantitative semantics based on filtering. More specifically, they provide a quantitative semantics for the positive normal form fragment of STL which measures the amount of time a formula is satisfied within an interval, associating it with different types of kernels.
However, restricting the quantitative semantics to the positive normal form gives up the duality property between the eventually and the globally operators, and the correctness property, which instead are both kept in our approach. Furthermore, their work is purely theoretical and there is no discussion of how to efficiently evaluate such properties. In [1], Akazaki et al. have extended the syntax of STL by introducing averaged temporal operators. Their quantitative semantics expresses the preference that a specific requirement occurs as early as possible, or for as long as possible, in a given time range. Such time inhomogeneity can be evaluated only in the quantitative semantics (i.e., the new operators, at the Boolean level, are equal to the classic STL temporal operators). Furthermore, the new operators force a separation into two robustness measures (positive and negative), and the correctness property is also lost in this case. An alternative way to tackle the noise of a signal is to consider its stochasticity explicitly. Recently, there has been a great effort to define several stochastic extensions of STL, such as Stochastic Signal Temporal Logic (StSTL) [12], Probabilistic Signal Temporal Logic (PrSTL) [15] and Chance Constrained Temporal Logic (C2TL) [11]. The type of quantification is intrinsically different: while the probabilistic operators quantify over the signal values, our convolutional operator quantifies over the time in which the nested formula is satisfied. Furthermore, all these approaches rely on the use of probabilistic atomic predicates that need to be quantified over the probability distribution of a model (usually a subset of samples). As such, they need computationally expensive procedures to be analyzed. Our logic, instead, operates directly on the single trace, without the need of any probabilistic operator, being in this respect closer to digital signal processing.

3 Background

In this section, we introduce the notions needed later in the paper: signals, kernels, and convolution.

Definition 1 (Signal) A signal is a function $s : T \to S$ from a time interval $T \subseteq \mathbb{R}$ to a value domain $S \subseteq \mathbb{R}$. Let us denote with $D(T;S)$ a generic set of signals. When $S = \{0,1\}$, we talk of Boolean signals.

In this paper, we consider piecewise constant signals, represented by a sequence of time-stamps and values. Different interpolation schemes (e.g. piecewise linear signals) can be treated similarly.

Definition 2 (Bounded Kernel) Let $T = [T_0, T_1]$ be a closed interval. We call bounded kernel a function $k_T : T \to \mathbb{R}$ such that

$$\int_T k_T(\tau)\, d\tau = 1 \quad \text{and} \quad \forall t \in T,\; k_T(t) > 0. \qquad (1)$$

Several examples of kernels are shown in Table 1. We call $T$ the time window of the bounded kernel $k_T$, which will be used as a convolution operator (this operation is in fact a cross-correlation, but here we use the same convention as the deep-learning community and call it convolution), defined as

$$(k_T * f)(t) = \int_{t+T} k_T(\tau - t)\, f(\tau)\, d\tau .$$

We also use the shorthand $k_T * f(t)$ for $(k_T * f)(t)$. In the rest of the paper, we assume that the function $f$ is always a Boolean function, so that $(k_T * f)(t) \in [0,1]$, i.e. the convolution with the kernel assumes a value in $[0,1]$. This value can be interpreted as a sort of measure of how long the function $f$ is true in $t+T$. In fact, the kernel induces a measure on the time line, giving different importance to the time instants contained in its time window $T$. As an example, suppose we are interested in designing a system that makes an output signal $f$ as true as possible in a time window $t+T$ (i.e., maximizing $(k_T * f)(t)$). Using a non-constant kernel will put more effort into making $f$ true in the temporal regions of $t+T$ where the value of the kernel is higher.
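To make the role of the kernel concrete, the following short Python sketch (not from the paper; the function names, sampling grid and example signal are illustrative) computes a discretised version of $(k_T * f)(t)$ for a Boolean signal and compares a flat kernel with an exponentially decreasing one.

```python
import numpy as np

def convolve_at(t, signal, kernel, window, dt=0.01):
    """Discretised (k_T * f)(t): integrate k_T(tau - t) * f(tau) over the window t + T."""
    taus = np.arange(t, t + window, dt)
    weights = kernel(taus - t)
    weights = weights / (weights.sum() * dt)   # normalise so the discretised kernel integrates to 1
    return float(np.sum(weights * signal(taus) * dt))

# A Boolean signal that is true only on [2, 4).
f = lambda taus: ((taus >= 2.0) & (taus < 4.0)).astype(float)

T = 8.0
flat = lambda u: np.ones_like(u)       # constant kernel: plain fraction of time where f is true
exp_dec = lambda u: np.exp(-u)         # exponentially decreasing: early instants weigh more

print(convolve_at(0.0, f, flat, T))     # ~0.25: f is true for 2 of the 8 time units
print(convolve_at(0.0, f, exp_dec, T))  # ~0.12: the true region [2, 4) receives little weight
```

With the flat kernel the result is exactly the fraction of the window where the signal is true, while the decreasing kernel discounts the same two time units because they occur late in the window, which is the "uneven weighting" described above.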
More formally, the analytical interpretation of the convolution is simply the expectation of $f$ in the interval $t+T$ w.r.t. the measure induced by the kernel. In Fig. 2 (a) we show some examples of different convolution operators applied to the same signal.

4 Signal Convolution Logic

In this section, we present the syntax and semantics of SCL, in particular of the new convolutional operator $\langle k_T, p\rangle$, discuss its soundness and correctness, and finally comment on the expressiveness of the logic.

Syntax and Semantics. The atomic predicates of SCL are inequalities on a set of real-valued variables, i.e. of the form $\mu(X) \equiv [g(X) \geq 0]$, where $g$ is a continuous function and $\mu$ consequently evaluates to a Boolean value. The well-formed formulas of SCL are defined by the following grammar:

$$\varphi := \bot \;|\; \top \;|\; \mu \;|\; \neg\varphi \;|\; \varphi \vee \varphi \;|\; \langle k_T, p\rangle \varphi, \qquad (2)$$

where $\mu$ ranges over atomic predicates as defined above, $k_T$ is a bounded kernel and $p \in [0,1]$ is a threshold. SCL introduces the novel convolutional operator $\langle k_T, p\rangle$ (more precisely, a family of them), defined parametrically w.r.t. a kernel $k_T$ and a threshold $p$. This operator specifies the probability of $\varphi$ being true in $t+T$, computed w.r.t. the probability measure induced by $k_T$; the choice of different types of kernels gives rise to different kinds of operators (e.g. a constant kernel measures the fraction of time $\varphi$ is true in $t+T$, while an exponentially decreasing kernel concentrates the focus on the initial part of $t+T$).

As usual, we interpret SCL formulas over signals. Before describing the semantics, we give a couple of examples of properties, considering again the glucose scenario presented in Section 1. The properties in Fig. 1 are specified in SCL using convolution operators over a 24-hour window; an exponentially increasing kernel can instead be used to describe the more dangerous situation of high glucose close to the next meal.

We now introduce the Boolean and quantitative semantics. As the temporal operators are time-bounded, time-bounded signals are sufficient to assess the truth of every formula. In the following, we denote with $T(\varphi)$ the minimal duration of a signal allowing a formula $\varphi$ to be always evaluated; $T(\varphi)$ is computed as customary by structural recursion.

Definition 3 (Boolean Semantics) Given a signal $s$, the Boolean semantics $\chi$ is defined recursively by:

$$\chi(s,t,\mu) = 1 \iff \mu(s(t)) = \top \ \text{ where } \mu(X) \equiv [g(X) \geq 0] \qquad (3a)$$
$$\chi(s,t,\neg\varphi) = 1 \iff \chi(s,t,\varphi) = 0 \qquad (3b)$$
$$\chi(s,t,\varphi_1 \vee \varphi_2) = \max(\chi(s,t,\varphi_1), \chi(s,t,\varphi_2)) \qquad (3c)$$
$$\chi(s,t,\langle k_T,p\rangle\varphi) = 1 \iff k_T * \chi(s,t,\varphi) \geq p \qquad (3d)$$

Moreover, we write $(s,t) \models \varphi$ if and only if $\chi(s,t,\varphi) = 1$.

The atomic propositions are inequalities over the signal's variables. The semantics of negation and disjunction is the same as in classical temporal logics. The semantics of $\langle k_T,p\rangle\varphi$ requires computing the convolution of $k_T$ with the truth value of the formula $\varphi$ as a function of time, seen as a Boolean signal, and comparing it with the threshold $p$. An example of the Boolean semantics can be found in Fig. 2 (bottom left), where four horizontal bars visually represent the validity of the convolution formula for 4 different kernels (one for each bar). We can see that the only kernel for which the formula holds is the exponentially increasing one.

Definition 4 (Quantitative semantics) The quantitative semantics $\rho$ is defined as follows:

$$\rho(s,t,\top) = +\infty \qquad (4a)$$
$$\rho(s,t,\mu) = g(s(t)) \ \text{ where } g \text{ is such that } \mu(X) \equiv [g(X) \geq 0] \qquad (4b)$$
$$\rho(s,t,\neg\varphi) = -\rho(s,t,\varphi) \qquad (4c)$$
$$\rho(s,t,\varphi_1 \vee \varphi_2) = \max(\rho(s,t,\varphi_1), \rho(s,t,\varphi_2)) \qquad (4d)$$
$$\rho(s,t,\langle k_T,p\rangle\varphi) = \max\{ r \in \mathbb{R} \mid k_T * [\rho(s,t,\varphi) > r] \geq p \} \qquad (4e)$$

where $[\rho(s,t,\varphi) > r]$ denotes the Boolean signal, as a function of $t$, that is equal to 1 if $\rho(s,t,\varphi) > r$ and to 0 otherwise.

Intuitively, the quantitative semantics of a formula $\varphi$ w.r.t. a signal $s$ describes the maximum allowed uniform translation of the secondary signals $g_i(s)$ appearing in $\varphi$ that preserves the truth value of $\varphi$.
Stated otherwise, a robustness value $r = \rho(s_1,t,\varphi)$ means that every signal $s_2$ with $\|s_1 - s_2\|_\varphi < r$ (see Definition 5 below) results in the same truth value for $\varphi$: $\chi(s_2,t,\varphi) = \chi(s_1,t,\varphi)$. Fig. 2(b) shows this geometric concept visually for a formula $\langle k_T,p\rangle(s > 0)$ with a flat kernel: the robustness value corresponds to how much we can translate the signal $s$ vertically so that the formula is still true, i.e. so that the translated signal still satisfies it. The formal justification of this is rooted in the correctness theorem (Theorem 4.2).

Soundness and Correctness. We now turn to discuss soundness and correctness of the quantitative semantics with respect to the Boolean one. The proofs of the theorems can be found in the on-line version of the paper on arXiv.

Theorem 4.1 (Soundness Property) The quantitative semantics is sound with respect to the Boolean semantics, that is:

$$\rho(s,t,\varphi) > 0 \implies (s,t) \models \varphi \quad \text{and} \quad \rho(s,t,\varphi) < 0 \implies (s,t) \not\models \varphi.$$

Definition 5 Consider an SCL formula $\varphi$ with atomic predicates $\mu_1,\dots,\mu_n$, $\mu_i(X) \equiv [g_i(X) \geq 0]$, and signals $s_1, s_2$. We define

$$\|s_1 - s_2\|_\varphi := \max_{i \leq n} \max_{t \in T(\varphi)} |g_i(s_1(t)) - g_i(s_2(t))|.$$

Theorem 4.2 (Correctness Property) The quantitative semantics satisfies the correctness property with respect to the Boolean semantics if and only if, for each formula $\varphi$, it holds:

$$\forall s_1, s_2 \in D(T;S), \quad \|s_1 - s_2\|_\varphi < \rho(s_1,t,\varphi) \Rightarrow \chi(s_1,t,\varphi) = \chi(s_2,t,\varphi).$$

Expressiveness. We show that SCL is more expressive than the fragment of STL composed of the logical connectives and the eventually and globally temporal operators. First of all, globally is easily definable in SCL: take any kernel $k_T$ and observe that $\mathcal{G}_T\varphi \equiv \langle k_T, 1\rangle\varphi$, as $\langle k_T, 1\rangle\varphi$ holds only if $\varphi$ is true in the whole interval $t+T$. This holds provided that we restrict ourselves to Boolean signals of finite variation, as in [13], which change truth value a finite number of times and are never true or false at isolated points; in this way we do not have to care about what happens on sets of measure zero. With a similar restriction in mind, we can define the eventually, provided we can check that $k_T * \chi(s,t,\varphi) > 0$. To see how this is possible, start from the fundamental equation $\mathcal{F}_T\varphi \equiv \neg\mathcal{G}_T\neg\varphi$. By applying (3d) and (3b), we easily get that $\neg\langle k_T, 1\rangle\neg\varphi$ holds exactly when $k_T * \chi(s,t,\varphi) > 0$, and we thus define the eventually modality as $\mathcal{F}_T\varphi \equiv \neg\langle k_T, 1\rangle\neg\varphi$; by definition, this is the dual operator of $\langle k_T, 1\rangle$. Furthermore, consider a uniform (flat) kernel: a property requesting $\varphi$ to hold for at least half of the time interval $t+T$ (i.e., with threshold $p = 0.5$) cannot be expressed in STL, showing that SCL is strictly more expressive than this fragment of STL.

Note that defining only a new quantitative semantics has an intrinsic limitation: even if the robustness can help system design or the falsification process by guiding the underlying optimization, it cannot be used at the syntactic level. This means that one cannot write logical formulas which predicate about such quantities; for example, one cannot specify behaviours such as "the property has to be satisfied in at least 50% of the interval I", but can only measure the percentage of time the property has been verified. Furthermore, lifting filtering and percentages to the syntactic level has two other important advantages. First, it preserves the duality of the eventually and globally operators, meaning that we are not forced to restrict our definition to positive formulae, as in [14], or to present two separate robustness measures, as in [1]. Second, it permits the introduction of a quantitative semantics which quantifies robustness with respect to signal values instead of percentage values and which satisfies the correctness property.
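To make Definitions 3 and 4 concrete, here is a brute-force sketch on a sampled trace (this is not the monitoring algorithm of the next section; the trace values, the flat kernel and the threshold p = 0.75 are made up). It checks $\chi(s,t,\langle k_T,p\rangle(s>0))$ and estimates $\rho(s,t,\langle k_T,p\rangle(s>0))$ as the largest $r$ such that the kernel-weighted time where $s > r$ still reaches $p$.

```python
import numpy as np

def boolean_semantics(values, weights, p):
    """chi(s, t, <k_T, p>(s > 0)): does the kernel-weighted time where s > 0 reach p?"""
    return float(np.sum(weights[values > 0.0])) >= p

def robustness(values, weights, p):
    """rho(s, t, <k_T, p>(s > 0)): largest r such that the weighted time where s > r reaches p."""
    order = np.argsort(values)[::-1]      # inspect sample values from largest to smallest
    cumw = np.cumsum(weights[order])      # weighted fraction of time above each candidate level
    idx = np.searchsorted(cumw, p)        # first position where the weighted fraction reaches p
    return float(values[order][idx])

# Toy trace sampled on a uniform grid over the window t + T, with a flat kernel.
values = np.array([3.0, 2.5, 1.0, 0.5, -1.0, -2.0, 0.2, 0.4, 1.5, 2.0])
weights = np.full(len(values), 1.0 / len(values))   # flat kernel: every instant weighs the same
p = 0.75

print(boolean_semantics(values, weights, p))  # True: 8 of 10 samples are positive (fraction 0.8 >= 0.75)
print(robustness(values, weights, p))         # 0.2: any uniform downward shift smaller than 0.2 keeps it true
```

The returned 0.2 matches the geometric reading above: shifting the whole trace down by anything less than 0.2 leaves at least 75% of the window positive, while a larger shift falsifies the formula.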
5 Monitoring Algorithm

In this section, we present the monitoring algorithm for the convolution operator $\langle k_T, p\rangle$. For all the other operators we can rely on established algorithms, such as [13] for Boolean monitoring and [7] for the quantitative one.

Boolean Monitoring. We provide an efficient monitoring algorithm for the Boolean semantics of SCL formulas. Consider an SCL formula $\langle k_T, p\rangle\varphi$ and a signal $s$. We are interested in computing the satisfaction of the formula as a function of $t$, which amounts to comparing the following convolution function with the threshold $p$:

$$H(t) = k_T * \chi(s,t,\varphi) = \int_{t+T} k_T(\tau - t)\, \chi(s,\tau,\varphi)\, d\tau. \qquad (5)$$

It follows that the efficient monitoring of the Boolean semantics of SCL is linked to the efficient evaluation of $H$, which is possible if $H(t+dt)$ can be computed by reusing the previously stored value $H(t)$. To see how to proceed, assume the Boolean signal $\chi(s,\cdot,\varphi)$ to be unitary, namely true in a single interval of time, say from time $u_0$ to time $u_1$, and false elsewhere. We remark that it is always possible to decompose a Boolean signal into unitary signals, see [13]. In this case, it easily follows that the convolution with the kernel is non-zero only if the interval $[u_0, u_1]$ intersects the convolution window $t+T$. Inspecting Figure 3, we can see that sliding the convolution window forward by a small time $dt$ corresponds to sliding the positive interval of the signal backwards by $dt$ time units with respect to the kernel window. In case $[u_0, u_1]$ is fully contained in $t+T$, by making $dt$ infinitesimal and invoking the fundamental theorem of calculus, we can compute the derivative of $H$ with respect to time as a difference of two kernel evaluations. By taking care of the cases in which the overlap is only partial, we can derive a general formula for the derivative:

$$\frac{d}{dt}H(t) = k_T(u_0 - (t+T_0))\,\mathbb{I}\{u_0 \in t+T\} - k_T(u_1 - (t+T_1))\,\mathbb{I}\{u_1 \in t+T\}, \qquad (6)$$

where $\mathbb{I}$ is the indicator function, i.e. $\mathbb{I}\{A\} = 1$ if $A$ holds and zero otherwise. This equation can be seen as a differential equation that can be integrated with respect to time by standard ODE solvers (taking care of discontinuities, e.g. by stopping and restarting the integration at the boundary times when the signal changes truth value), returning the value of the convolution for each time $t$. The initial value is $H(0)$, which has to be computed by explicitly integrating the kernel (or set to zero if the positive interval does not intersect the initial window). If the signal is not unitary, we have to add a term like the right-hand side of (6) to the ODE of $H$ for each unitary component (positive interval) of the signal. We also use a root-finding algorithm integrated in the ODE solver to detect when the property becomes true or false, i.e. when $H(t)$ crosses the threshold $p$. The time complexity of the algorithm for the convolution operator is proportional to the computational cost of numerically integrating the differential equation above: using a solver with constant step size, the complexity is proportional to the number of integration steps times the number of unitary components in the input signal. A more detailed description of the algorithm can be found in the appendix.
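The core of the Boolean monitor is the efficient evaluation of $H(t)$ by reusing previous work. The following sketch is a discrete-time analogue of that idea for a flat kernel on a uniformly sampled Boolean signal (it is not the ODE-based algorithm above; sampling step, window length and the example signal are illustrative): the window sum is updated in constant time as the window slides, and thresholding it against $p$ gives the Boolean verdict.

```python
import numpy as np

def monitor_flat(bool_sig, window_len, p):
    """Boolean monitor for <k_T, p> phi with a flat kernel on a uniformly sampled Boolean signal.
    H[t] is the fraction of the window [t, t + window_len) where phi holds; it is updated in O(1)
    per step by adding the sample entering the window and removing the sample leaving it."""
    n = len(bool_sig)
    out = np.zeros(n - window_len + 1, dtype=bool)
    running = int(np.sum(bool_sig[:window_len]))   # initial window, analogous to computing H(0) explicitly
    out[0] = running / window_len >= p
    for t in range(1, n - window_len + 1):
        running += int(bool_sig[t + window_len - 1]) - int(bool_sig[t - 1])
        out[t] = running / window_len >= p
    return out

# phi holds on two bursts of a 100-sample trace; require it on at least 60% of a 20-sample window.
sig = np.zeros(100, dtype=bool)
sig[10:30] = True
sig[45:55] = True
result = monitor_flat(sig, window_len=20, p=0.6)
print(np.where(result)[0])   # the times t at which <k_T, 0.6> phi holds
```

A non-constant kernel would replace the plain running count with a weighted sum, which is exactly where the incremental ODE formulation of equation (6) becomes useful.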
Quantitative Monitoring. In this paper, we follow a simple approach to monitor the quantitative semantics of the convolution operator: we run the Boolean monitor for different values of the threshold $r$ and of the time $t$ on a grid, using a coarse grid for $r$, and compute at each point of the grid the value $k_T * [\rho(s,t,\varphi) > r]$. Relying on the fact that this quantity is monotonically decreasing in $r$, we can find the correct value of the robustness, for each fixed $t$, by running a bisection search starting from the unique pair of consecutive grid values $r_i$ and $r_{i+1}$ at which $k_T * [\rho(s,t,\varphi) > r] - p$ changes sign. The bounds of the grid are set depending on the bounds of the signal, and may be expanded (or contracted) during the computation if needed. Note that the robustness can assume only a finite number of values, because of the finitely many values taken by the piecewise-constant input signals.

A more efficient procedure for quantitative monitoring is at the top of our future-work list. It can be obtained by exploring only a portion of such a grid, combining the method with the Boolean monitor based on ODEs and alternating two kinds of steps: steps in which we advance time from $t$ to $t+dt$ (fixing the candidate robustness $r$ to its exact value at time $t$) by integrating the ODEs and computing the convolution, and steps in which we adjust the value of $r$ at time $t+dt$ by locally increasing or decreasing it (depending on whether the convolution minus $p$ is negative or positive), finding the value of $r$ at which the convolution equals the threshold $p$.

6 Case Study: Artificial Pancreas

In this example, we show how SCL can be useful in the specification and monitoring of Artificial Pancreas (AP) systems. The AP is a closed-loop insulin-glucose system for the treatment of Type-1 diabetes (T1D), a chronic disease caused by the inability of the pancreas to secrete insulin, a hormone essential to regulate the blood glucose level. In the AP system, a Continuous Glucose Monitor (CGM) measures the blood glucose level and a pump delivers insulin through injection, regulated by a software-based controller. The efficient design of control systems to automate the delivery of insulin is still an open challenge for many reasons. Many activities are still under the control of the patient, e.g., increasing insulin delivery at meal times (meal bolus) and decreasing it during physical activity. A completely automatic control entails several risks for the patient: a high level of glucose (hyperglycemia) can lead to ketoacidosis, and a low level (hypoglycemia) can be fatal. The AP controller must tolerate many unpredictable events such as pump failures, sensor noise, meals and physical activity. AP controller falsification via SMT solvers [18] and via the robustness of STL [5] has recently been proposed. In particular, [5] formulates a series of STL properties for testing the insulin-glucose regulatory system. Here we show the advantages of using SCL for this task.

PID Controller. Consider a system/process which takes as input a function $u(t)$ and produces as output a function $y(t)$. A PID controller is a simple closed-loop controller aimed at keeping the output value as close as possible to a set point $r$. It continuously monitors the error function $e(t) = r - y(t)$ and defines the input of the system as a weighted sum of a proportional, an integral and a derivative term of the error, $u(t) = K_p\, e(t) + K_i \int_0^t e(\tau)\, d\tau + K_d\, \frac{d}{dt}e(t)$. The proportional ($K_p$), integral ($K_i$) and derivative ($K_d$) parameters uniquely define the PID controller and have to be calibrated in order to achieve a proper behaviour.

System. PID controllers have been successfully used to control the automatic infusion of insulin in the AP. In [18], for example, different PID controllers have been synthesized to control the glucose level of the well-studied Hovorka model [10]:

$$\frac{d}{dt}G(t) = F(G(t), u(t), \Theta), \qquad (7)$$

where the output $G(t)$ represents the glucose concentration in the blood and the input $u(t)$ is the infusion rate of bolus insulin, which has to be controlled. The vector $\Theta$ collects the control parameters, which define the quantity of carbohydrates ingested during the three daily meals and the inter-times between them. Clearly, a PID controller for Eq. (7) has to guarantee that, under different values of the control parameters, the glucose level remains in the safe region. In [18], four different PID controllers satisfying this safety requirement have been discovered by leveraging an SMT solver, under the assumption that the inter-times between meals are both fixed to 300 minutes (5 hrs) and that the quantities of carbohydrates are normally distributed around the average content of breakfast, lunch and dinner.
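As generic background for the controller used below, here is a minimal discrete-time PID loop on a toy first-order plant (this is neither the controller synthesized in [18] nor the Hovorka model; the gains, set point and plant are made up for illustration).

```python
def pid_step(error, state, kp, ki, kd, dt):
    """One PID update u = Kp*e + Ki*integral(e) + Kd*de/dt; `state` carries (integral, previous error)."""
    integral, prev_error = state
    integral += error * dt
    derivative = (error - prev_error) / dt
    u = kp * error + ki * integral + kd * derivative
    return u, (integral, error)

# Toy first-order plant dy/dt = -y + u, regulated towards the set point r = 1.0 with explicit Euler.
dt, r = 0.01, 1.0
y, state = 0.0, (0.0, 0.0)
for _ in range(2000):
    u, state = pid_step(r - y, state, kp=2.0, ki=1.0, kd=0.1, dt=dt)
    y += (-y + u) * dt
print(round(y, 3))   # close to the set point 1.0 after 20 simulated seconds
```

The integral term is what removes the steady-state error here; in the AP setting, calibrating the three gains is exactly the synthesis problem addressed in [18].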
Here, we consider the PID controller which has been synthesized by fixing the glucose set point and maximizing the probability of remaining in the safe region, given the distribution of the control parameters $\Theta$ explained before. We now consider some properties which can be useful to check expected or anomalous behaviours of an AP controller.

Hypoglycemia and Hyperglycemia. Consider the following informal specifications: never during the day does the level of glucose go below the hypoglycemia threshold, and never during the day does it go above the hyperglycemia threshold; technically, this means that the patient is never under hypoglycemia or hyperglycemia, respectively. These behaviours can be formalized with two STL formulas using the globally operator over the whole day. The problem with STL is that it does not distinguish whether these conditions are violated for a second, a few minutes or even hours; it only says that those events happen. Here we propose stricter requirements, described by two SCL formulas, one for the hypoglycemia regime and one for the hyperglycemia regime. We are imposing not that globally in a day the hypoglycemia and hyperglycemia events never occur, but that these conditions persist for at least 95% of the day (i.e., 110 minutes). We will show later in a small test case how this requirement can be useful.

Prolonged Conditions. As already mentioned in the motivating example, the most dangerous conditions arise when hypoglycemia or hyperglycemia last for a prolonged period of the day. In this context, a typical condition is Prolonged Hyperglycemia, which happens if the total time under hyperglycemia exceeds 70% of the day, or Prolonged Severe Hyperglycemia, when the level of glucose is above the severe-hyperglycemia threshold for at least 3 hrs in a day. The importance of these two conditions has been explained in [17]; however, the authors could not formalize them in STL. On the contrary, SCL is perfectly suited to describing them, using convolution operators with flat kernels over the 24-hour window. Here we use flat kernels because which period of the day the patient spends under hyperglycemia or severe hyperglycemia does not matter for the evaluation of the Boolean semantics: all time instants count equally. Clearly, a hyperglycemia regime at different times of the day can count differently; in order to capture this "preference" we can use non-constant kernels.

Inhomogeneous time conditions. Consider the case of monitoring hyperglycemia during the day. Even if avoiding that regime during the entire day is always best practice, there may be periods of the day where avoiding it is more important than in others. We imagine the case of avoiding hyperglycemia with a particular focus on the period close to the first meal. We can express this requirement with an SCL formula whose kernel decreases over the interval: thanks to the decreasing kernel, the same amount of time under hyperglycemia counts more when it occurs close to the beginning of the interval than when it occurs far from it.

Correctness of the insulin delivery. During the hypoglycemia regime, insulin should not be provided. We can use an SCL formula stating that if, during the next 10 minutes, the patient is in hypoglycemia for at least 95% of the time, then the insulin pump is shut off (i.e., no insulin is delivered) for at least 90% of the time. This is the "cumulative" version of the STL property which says that in the hypoglycemia regime no insulin should be delivered. During the hyperglycemia regime, insulin should instead be provided as soon as possible.
The corresponding SCL formula says that if we are in the severe hyperglycemia regime, then the delivered insulin should be higher than a minimum rate for at least 90% of the following 10 minutes. We use a negative exponential kernel to express (at the robustness level) the preference for having a higher value of delivered insulin as soon as possible.

Test Case: falsification. As a first example, we show how SCL can be effectively used for falsification. The AP control system has to guarantee that the level of glucose remains in a safe region, as explained before. The falsification approach consists in identifying the control parameters ($\Theta$) which force the system to violate the requirements, i.e., to escape from the safe region. The standard approach consists in minimizing the robustness of suitable temporal logic formulas which express the aforementioned requirements. In this case, the minimization of the STL robustness drives the search towards control parameters which generate trajectories with a maximum displacement below the hypoglycemia threshold or above the hyperglycemia threshold. To show the differences between the STL and SCL logics, we consider the PID + Hovorka model and perform a random-sampling exploration of its input parameters. At each sample we calculate the robustness of the STL formulas and of the SCL formulas, and separately store the minimum robustness value. For this minimum value, we estimate the maximum displacement with respect to the hypoglycemia and hyperglycemia thresholds and the maximum time spent violating the hypoglycemia and hyperglycemia thresholds. Fig. 4 (left, middle) shows the trajectories with minimum robustness. We can see that the trajectory which minimizes the robustness of the STL formulas has a larger displacement from the hypoglycemia and hyperglycemia thresholds than the SCL trajectory. On the contrary, the trajectory which minimizes the robustness of the SCL formulas remains under hypoglycemia and under hyperglycemia for longer than the STL trajectory. These results show how the convolution operator and its quantitative semantics can be useful in a falsification procedure. This is particularly evident in the hyperglycemia case (Fig. 4, middle), where the falsification of the SCL hyperglycemia formula produces two subintervals in which the level of glucose is above the threshold. In order to show the effect of a non-homogeneous kernel, we repeat the previous experiment, with the same setting, for the inhomogeneous-time properties. From the results (Fig. 4, right) it is evident how the Gaussian kernel forces the glucose to be higher than the hyperglycemia threshold just before the first meal and, for example, ignores the last meal.

Test Case: noise robustness. We now compare the sensitivity to noise of SCL and STL formulae. We consider three degrees of hypoglycemia thresholds and estimate the probability that the Hovorka model controlled by the usual PID (i.e., PID + Hovorka model) satisfies the corresponding STL and SCL formulas, under the usual distribution assumption for the control parameters $\Theta$. The results are reported in the "noise free" column of Table 2. Afterwards, we consider a noisy outcome of the same model by adding Gaussian noise to the generated glucose trajectory. We estimate the probability that this noisy system satisfies the STL and SCL formulas above; see the "with noise" column of Table 2.
The noise corresponds to a disturbance of the original signal which can occur, for example, during the measurement process. As shown in Table 2, the probability estimate for the STL formulas changes drastically with the addition of noise (the addition of noise forces all the trajectories to satisfy the STL formulas). On the contrary, the SCL formulas are more stable under noise and can even be used to approximate the probability of the STL formulas on the noise-free model. To better assess this, we checked how much the STL formula and the SCL formula, evaluated on the noisy model, agree with the STL formula evaluated on the noise-free model, by computing their truth values on 2000 samples, each time choosing a random threshold. The score for STL is 56%, while SCL agrees in 78% of the cases.

7 Conclusion

We have introduced SCL, a novel specification language that employs signal-processing operations to reason about temporal behavioural patterns. The key idea is the definition of a family of modal operators which compute the convolution of a kernel with the signal and check the obtained value against a threshold. Our case study on monitoring the glucose level in the artificial pancreas demonstrates how SCL empowers the classical temporal logic operators (such as finally and globally) with noise-filtering capabilities, and enables us to express temporal properties with soft time bounds and with a non-symmetric treatment of time instants in a unified way. The convolution operator of SCL can be seen as a syntactic bridge between temporal logic and digital signal processing, trying to combine the advantages of both worlds. This point of view can be explored further, bringing tools from the frequency analysis of signals into the monitoring algorithms of SCL. Future work includes the release of a Python library and the design of efficient monitoring algorithms for the quantitative semantics as well. Finally, we also plan to develop online monitoring algorithms for real-time systems using dedicated hardware architectures such as field-programmable gate arrays (FPGA) and digital signal processors (DSP).

References

[1] Akazaki, T., Hasuo, I.: Time robustness in MTL and expressivity in hybrid system falsification. In: Proc. of CAV. pp. 356–374. Springer (2015)
[2] Bartocci, E., Bortolussi, L., Nenzi, L.: A temporal logic approach to modular design of synthetic biological circuits. In: Proc. of CMSB. pp. 164–177. Springer (2013)
[3] Bartocci, E., Bortolussi, L., Nenzi, L., Sanguinetti, G.: System design of stochastic models using robustness of temporal properties. Theor. Comp. Sci. 587, 3–25 (2015)
[4] Bartocci, E., Deshmukh, J., Donzé, A., Fainekos, G., Maler, O., Nickovic, D., Sankaranarayanan, S.: Specification-based monitoring of cyber-physical systems: A survey on theory, tools and applications. In: Lectures on Runtime Verification, LNCS, vol. 10457, pp. 135–175. Springer (2018)
[5] Cameron, F., Fainekos, G., Maahs, D.M., Sankaranarayanan, S.: Towards a verified artificial pancreas: Challenges and solutions for runtime verification. In: Runtime Verification. pp. 3–17. Springer (2015)
[6] Donzé, A.: Breach, a toolbox for verification and parameter synthesis of hybrid systems. In: Proc. of CAV. pp. 167–170. Springer (2010)
[7] Donzé, A., Maler, O.: Robust satisfaction of temporal logic over real-valued signals. In: Proc. of FORMATS. pp. 92–106. Springer (2010)
[8] Fainekos, G.E., Sankaranarayanan, S., Ueda, K., Yazarel, H.: Verification of automotive control applications using S-TaLiRo. In: Proc. of ACC. IEEE (2012)
[9] Fainekos, G.E., Pappas, G.J.: Robustness of temporal logic specifications for continuous-time signals. Theor. Comput. Sci. 410(42), 4262–4291 (2009)
[10] Hovorka, R., Canonico, V., Chassin, L.J., Haueter, U., Massi-Benedetti, M., Federici, M.O., Pieber, T.R., Schaller, H.C., Schaupp, L., Vering, T., et al.: Nonlinear model predictive control of glucose concentration in subjects with type 1 diabetes. Physiological Measurement 25(4), 905 (2004)
[11] Jha, S., Raman, V., Sadigh, D., Seshia, S.A.: Safe autonomy under perception uncertainty using chance-constrained temporal logic. J. Autom. Reasoning 60(1), 43–62 (2018)
[12] Li, J., Nuzzo, P., Sangiovanni-Vincentelli, A., Xi, Y., Li, D.: Stochastic contracts for cyber-physical system design under probabilistic requirements. In: Proc. of MEMOCODE. pp. 5–14. ACM (2017)
[13] Maler, O., Ničković, D.: Monitoring temporal properties of continuous signals. In: Proc. of FORMATS/FTRTFT. pp. 152–166. Springer (2004)
[14] Rodionova, A., Bartocci, E., Ničković, D., Grosu, R.: Temporal logic as filtering. In: Proc. of HSCC 2016. pp. 11–20. ACM (2016)
[15] Sadigh, D., Kapoor, A.: Safe control under uncertainty with probabilistic signal temporal logic. In: Robotics: Science and Systems XII, University of Michigan, Ann Arbor, Michigan, USA, June 18–22, 2016 (2016)
[16] Sankaranarayanan, S., Fainekos, G.: Falsification of temporal properties of hybrid systems using the cross-entropy method. In: Proc. of HSCC. pp. 125–134 (2012)
[17] Sankaranarayanan, S., Kumar, S.A., Cameron, F., Bequette, B.W., Fainekos, G., Maahs, D.M.: Model-based falsification of an artificial pancreas control system. SIGBED Rev. 14(2), 24–33 (Mar 2017)
[18] Shmarov, F., Paoletti, N., Bartocci, E., Lin, S., Smolka, S.A., Zuliani, P.: SMT-based synthesis of safe and robust PID controllers for stochastic hybrid systems. In: Proc. of HVC. pp. 131–146 (2017)
# Correct gradient with custom weight update

I have a layer $$f_{(a,b)}$$, where $$(a,b)$$ are some parameters. During training, $$(a,b)$$ get updated using a custom update scheme $$g$$. The catch is that $$(a,b)$$ don't get updated during the forward pass, but during the backward pass, since otherwise the network could cheat on my task (by using knowledge about the batch it should otherwise generalize over). A simple example: let's say $$(a,b)$$ are mean and variance, which I update using an exponentially decayed estimator.

My problem now is that I encounter unstable training, which I assume is due to my gradient no longer being correct. Is there some general formula for how to correct my gradients when custom update schemes are present in the network?

To illustrate my problem, let's again assume I have the weird batchnorm variant mentioned above. If the gradients from the previous layers point towards a too-high bias value in my previous layers and I pass through my $$f$$, I would assume I have to correct the gradient for the updated mean and variance (through subtracting and scaling).
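Not an authoritative answer, but one way to keep the bookkeeping consistent is to treat $$(a,b)$$ as constants of the forward computation when deriving the backward pass, and apply the exponential-moving-average update only after the gradient w.r.t. the inputs has been formed. A minimal NumPy sketch of that structure (all names, the decay value and the layer shape are illustrative, not a reference implementation):

```python
import numpy as np

class NormWithDeferredStats:
    """y = (x - a) / sqrt(b + eps), where (a, b) are running mean/variance.
    (a, b) are treated as constants in backward(); they are refreshed by an
    exponentially decayed estimator only after dx has been computed."""
    def __init__(self, decay=0.99, eps=1e-5):
        self.a, self.b = 0.0, 1.0
        self.decay, self.eps = decay, eps

    def forward(self, x):
        self.x = x                                 # cache input for the backward pass
        return (x - self.a) / np.sqrt(self.b + self.eps)

    def backward(self, dy):
        # Gradient of y w.r.t. x with (a, b) held fixed: dy/dx = 1 / sqrt(b + eps).
        dx = dy / np.sqrt(self.b + self.eps)
        # Deferred custom update: refresh (a, b) from the cached batch only *after* dx is formed,
        # so this step's gradient still matches the values that produced the forward output.
        self.a = self.decay * self.a + (1 - self.decay) * self.x.mean()
        self.b = self.decay * self.b + (1 - self.decay) * self.x.var()
        return dx

layer = NormWithDeferredStats()
x = np.random.randn(32)
y = layer.forward(x)
dx = layer.backward(np.ones_like(y))   # gradient consistent with the forward values
```

The underlying point, as far as I can tell, is that there is no general closed-form "correction": if $$(a,b)$$ change between the forward pass and the moment the gradient is formed, the gradient no longer matches the function that produced the activations, which is one plausible source of the instability. Keeping the backward pass consistent with the forward values and applying the custom update outside the differentiated path (as sketched above) sidesteps the problem.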
# Homework Help: Potential Function

1. Jan 13, 2012

### bugatti79

1. The problem statement, all variables and given/known data

Show that ln(x^2+y^2) is a potential function for the following vector field

$\displaystyle \frac{2x \vec i}{(x^2+y^2)^{1/2}} +\frac{2y \vec j}{(x^2+y^2)^{1/2}}$

I calculate $\nabla \phi$ as $\displaystyle \frac{2y^2}{(x^2+y^2)^{3/2}} \vec i +\frac{2x^2}{(x^2+y^2)^{3/2}} \vec j$

I don't know how this connects to the log function...? Thanks

2. Jan 13, 2012

### LCKurtz

It doesn't, because you have calculated it incorrectly. $\nabla f = \langle f_x, f_y\rangle$. Is that what you did?

3. Jan 13, 2012

### bugatti79

$\displaystyle \nabla \phi =\frac{\partial}{\partial x} \frac{2x \vec i}{\sqrt {x^2+y^2}}+\frac{\partial}{\partial y} \frac{2y \vec j}{\sqrt{ x^2+y^2}}$..?

4. Jan 13, 2012

### LCKurtz

You were supposed to show that $\phi(x,y) = \ln{(x^2+y^2)}$ is a potential function for your vector field. $\phi$ is what you should be taking the gradient of.

5. Jan 13, 2012

### HACR

To find the potential function, set $$f_{x}=\frac{2x}{(x^2+y^2)^{1/2}}$$ Then take the integral w.r.t. x, which you can write as f(x,y) = ... + g(y). Then take the derivative of this w.r.t. y and equate it to the second component. Then f(x,y), the potential function, satisfies the vector field.

6. Jan 14, 2012

### bugatti79

Thanks, noted. When I calculate this, I don't know where the sqrt comes into it unless it's a typo in the question?

$\displaystyle \frac{2y}{x^2+y^2} \vec j +\frac{2x}{x^2+y^2} \vec i$

7. Jan 14, 2012

### SammyS

Staff Emeritus

This is what I get also. It looks like the correct potential for that vector field is $\displaystyle \phi(x,\ y)=-2\sqrt{x^2+y^2}+C$

8. Jan 14, 2012

### bugatti79

V. good. Thank you SammyS.
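For reference, a direct computation of the gradient, consistent with posts #6 and #7 (which suggest the square root in the problem statement is a typo):

$\displaystyle \nabla \ln(x^2+y^2) = \left\langle \frac{\partial}{\partial x}\ln(x^2+y^2),\ \frac{\partial}{\partial y}\ln(x^2+y^2)\right\rangle = \frac{2x}{x^2+y^2}\,\vec i + \frac{2y}{x^2+y^2}\,\vec j$

This matches the stated vector field only if the square roots are removed from the denominators; with the square roots present, a potential would instead be proportional to $\sqrt{x^2+y^2}$, as discussed in post #7.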
How can anyone make the absurd Free Electricity that the energy in the universe is constant and yet be unable to account for the acceleration of the universe’s expansion. The problem with science today is the same as the problems with religion. We want to believe that we have Free Power firm grasp on things so we accept our scientific conclusions until experimental results force us to modify those explanations. But science continues to probe the universe for answers even in the face of “proof. ” That is science. Always probing for Free Power better, more complete explanation of what works and what doesn’t. Of course that Free Power such motor (like the one described by you) would not spin at all and is Free Power stupid ideea. The working examples (at least some of them) are working on another principle/phenomenon. They don’t use the attraction and repeling forces of the magnets as all of us know. I repeat: that is Free Power stupid ideea. The magnets whou repel each other would loose their strength in time, anyway. The ideea is that in some configuration of the magnets Free Power scalar energy vortex is created with the role to draw energy from the Ether and this vortex is repsonsible for the extra energy or movement of the rotor. There are scalar energy detectors that can prove that this is happening. You can’t detect scalar energy with conventional tools. The vortex si an ubiquitos thing in nature. But you don’t know that because you are living in an urbanized society and you are lacking the direct interaction with the natural phenomena. Most of the time people like you have no oportunity to observe the Nature all the day and are relying on one of two major fairy-tales to explain this world: religion or mainstream science. The magnetism is more than the attraction and repelling forces. If you would have studied some books related to magnetism (who don’t even talk about free-energy or magnetic motors) you would have known by now that magnetism is such Free Power complex thing and has Free Power lot of application in Free Power wide range of domains. Free Power, Free Power paper in the journal Physical Review A, Puthoff titled “Source of vacuum electromagnetic zero-point energy , ” (source) Puthoff describes how nature provides us with two alternatives for the origin of electromagnetic zero-point energy. One of them is generation by the quantum fluctuation motion of charged particles that constitute matter. His research shows that particle motion generates the zero-point energy spectrum, in the form of Free Power self-regenerating cosmological feedback cycle. Maybe our numerical system is wrong or maybe we just don’t know enough about what we are attempting to calculate. Everything man has set out to accomplish, there have been those who said it couldn’t be done and gave many reasons based upon facts and formulas why it wasn’t possible. Needless to say, none of the ‘nay sayers’ accomplished any of them. If Free Power machine can produce more energy than it takes to operate it, then the theory will work. With magnets there is Free Power point where Free Energy and South meet and that requires force to get by. Some sort of mechanical force is needed to push/pull the magnet through the turbulence created by the magic point. Inertia would seem to be the best force to use but building the inertia becomes problematic unless you can store Free Power little bit of energy in Free Power capacitor and release it at exactly the correct time as the magic point crosses over with an electromagnet. 
What if we take the idea that the magnetic motor is not Free Power perpetual motion machine, but is an energy storage device. Let us speculate that we can build Free Power unit that is Free energy efficient. Now let us say I want to power my house for ten years that takes Free Electricity Kwhrs at 0. Free Energy /Kwhr. So it takes Free energy Kwhrs to make this machine. If we do this in Free Power place that produces electricity at 0. 03 per Kwhr, we save money. Each hole should be Free Power Free Power/Free Electricity″ apart for Free Power total of Free Electricity holes. Next will be setting the magnets in the holes. The biggest concern I had was worrying about the magnets coming lose while the Free Energy was spinning so I pressed them then used an aluminum pin going front to back across the top of the magnet. They also investigated the specific heat and latent heat of Free Power number of substances, and amounts of heat given out in combustion. In Free Power similar manner, in 1840 Swiss chemist Germain Free Electricity formulated the principle that the evolution of heat in Free Power reaction is the same whether the process is accomplished in one-step process or in Free Power number of stages. This is known as Free Electricity’ law. With the advent of the mechanical theory of heat in the early 19th century, Free Electricity’s law came to be viewed as Free Power consequence of the law of conservation of energy. Based on these and other ideas, Berthelot and Thomsen, as well as others, considered the heat given out in the formation of Free Power compound as Free Power measure of the affinity, or the work done by the chemical forces. This view, however, was not entirely correct. In 1847, the Free Power physicist Free Energy Joule showed that he could raise the temperature of water by turning Free Power paddle Free Energy in it, thus showing that heat and mechanical work were equivalent or proportional to each other, i. e. , approximately, dW ∝ dQ. The complex that results, i. e. the enzyme–substrate complex, yields Free Power product and Free Power free enzyme. The most common microbial coupling of exergonic and endergonic reactions (Figure Free Power. Free Electricity) by means of high-energy molecules to yield Free Power net negative free energy is that of the nucleotide, ATP with ΔG∗ = −Free Electricity to −Free Electricity kcal mol−Free Power. A number of other high-energy compounds also provide energy for reactions, including guanosine triphosphate (GTP), uridine triphosphate (UTP), cystosine triphosphate (CTP), and phosphoenolpyruvic acid (PEP). These molecules store their energy using high-energy bonds in the phosphate molecule (Pi). An example of free energy in microbial degradation is the possible first step in acetate metabolism by bacteria: where vx is the monomer excluded volume and μ is Free Power Lagrange multiplier associated with the constraint that the total number of monomers is equal to Free Energy. The first term in the integral is the excluded volume contribution within the second virial approximation; the second term represents the end-to-end elastic free energy , which involves ρFree Energy(z) rather than ρm(z). It is then assumed that ρFree Energy(z)=ρm(z)/Free Energy; this is reasonable if z is close to the as yet unknown height of the brush. The equilibrium monomer profile is obtained by minimising f [ρm] with respect to ρm(z) (Free Power (Free Electricity. Free Power. 
Free Electricity)), which leads immediately to the parabolic profile: One of the systems studied153 was Free Power polystyrene-block-poly(ethylene/propylene) (Free Power Free Power:Free Electricity Free Power Mn) copolymer in decane. Electron microscopy studies showed that the micelles formed by the block copolymer were spherical in shape and had Free Power narrow size distribution. Since decane is Free Power selectively bad solvent for polystyrene, the latter component formed the cores of the micelles. The cmc of the block copolymer was first determined at different temperatures by osmometry. Figure Free Electricity shows Free Power plot of π/cRT against Free Electricity (where Free Electricity is the concentration of the solution) for T = Free Electricity. Free Power °C. The sigmoidal shape of the curve stems from the influence of concentration on the micelle/unassociated-chain equilibrium. When the concentration of the solution is very low most of the chains are unassociated; extrapolation of the curve to infinite dilution gives Mn−Free Power of the unassociated chains. Remember the Free Power Free Power ? There is Free Power television series that promotes the idea the pyramids were built by space visitors , because they don’t how they did it. The atomic bomb was once thought impossible. The word “can’t” is the biggest impediment to progress. I’m not on either side of this issue. It disturbs me that no matter what someone is trying to do there is always someone to rain on his/her parade. Maybe that’s Free Power law of physics as well. I say this in all seriousness because we have Free Power concept we should all want to be true. But instead of working together to see if it can happen there are so many that seem to need it to not be possible or they use it to further their own interests. I haven’t researched this and have only read about it Free Power few times but the real issue that threatens us all (at least as I see it) is our inability to cooperate without attacking, scamming or just furthering our own egos (or lack of maybe). It reminds me of young children squabbling about nonsense. Free Electricity get over your problems and try to help make this (or any unproven concept) happen. Thank you for the stimulating conversations. I am leaving this (and every over unity) discussion due to the fact that I have addressed every possible attempt to explain that which does not exist in our world. Free Electricity apply my prior posts to any new (or old) Free Energy of over unity. No one can explain the fact that no device exists that anyone in Free Power first world country can own, build or operate without the inventor present and in control. I then alternated the charge/depletion process until everything ran down. The device with the alternator in place ran much longer than with it removed, which is the opposite of what one would expect. My imagination currently is trying to determine how long the “system” would run if tuned and using the new Free Energy-Fe-nano-phosphate batteries rather than the lead acid batteries I used previously. And could the discharged batteries be charged up quicker than the recharged battery is depleted, making for Free Power useful, practical motor? Free Energy are claiming to have invented perpetual motion MACHINES. That is my gripe. 
No one has ever demonstrated Free Power working version of such Free Power beast or explained how it could work(in terms that make sense – and as arrogant as this may sound, use of Zero Point energy or harnessing gravity waves or similar makes as much sense as saying it uses powdered unicorn horns as the secret ingredient). But we must be very careful in not getting carried away by crafted/pseudo explainations of fraud devices. Mr. Free Electricity, we agree. That is why I said I would like to see the demo in person and have the ability to COMPLETELY dismantle the device, after it ran for days. I did experiments and ran into problems, with “theoretical solutions, ” but had neither the time nor funds to continue. Mine too ran down. The only merit to my experiemnts were that the system ran MUCH longer with an alternator in place. Similar to what the Free Electricity Model S does. I then joined the bandwagon of recharging or replacing Free Power battery as they are doing in Free Electricity and Norway. Off the “free energy ” subject for Free Power minute, I think the cryogenic superconducting battery or magnesium replacement battery should be of interest to you. Why should I have to back up my Free Energy? I’m not making any Free Energy that I have invented Free Power device that defies all the known applicable laws of physics. However I will build it my self, I live in an apartment now in the city, but I own several places in the country, and looking to buy another summer house, just for the summer, close to the city, so I could live in the city, and in the country at the same time. So I will be able to work on different things, like you are doing now. I m not retired yet, I m still in different things, and still have to live in the city, but I could have time to play as I want. I hope you have success building the 48v PMA. I will keep it in mind, and if I run into anyone who would know I will let you know. Hey Gigamesh. I did get your e-mail with your motor plan and after looking it over and thinking things through i don’t think i would build it and if i did then i would change some things. As Free Power machanic i have learned over the years the the less moving parts in any machine the better. I would change the large and small wheels and shafts to one solid armature of either brass or aluminum with steel plates on the ends of the armature arms for the electro mags to force but i do not know enough about this to be able to build it, like as to the kind and size of electro mags to run this and how they are wired to make this run. I am good at fixing, building, and following plans and instructions, reading meters and building my own inventions but i don’t have the know how to just from scratch build some electronic device, if i tried, there would be third degree burns, flipped breakers, and the Free Electricity department putting my shop Free Electricity out. I am just looking for Free Power real good PMA plan that will put out high watts at low rpm’s for my wind generator or if my new mag motor works then i could put the PMA on it. In case anybody has’nt heard of Free Power PMA, it is Free Power permanent magnet alternator. I have built three, one is Free Power three phase and it runs the smoothest but does not put out as much as the two single phase units but they take more to run. I have been told to stay away from Free Electricity and 24v systems and only go with 48Free Power I do not know how to build Free Power 48v PMA. I need help. 
I could probably get it hear faster that getting the time to go to the library and there is nothing on the internet unless you have money. If anybody can help me it would be great. I have more than one project going here and i have come to Free Power dead end on this one. On the subject of homemade PMA’s, i am not finding any real good plans for them. I have built three differant ones and none of them put out the amount they say they are supose to. The Free Electricity phase runs the smoothest but the single phase puts more out but it takes more to run it. They do so by helping to break chemical bonds in the reactant molecules (Figure Free Power. Free Electricity). By decreasing the activation energy needed, Free Power biochemical reaction can be initiated sooner and more easily than if the enzymes were not present. Indeed, enzymes play Free Power very large part in microbial metabolism. They facilitate each step along the metabolic pathway. As catalysts, enzymes reduce the reaction’s activation energy , which is the minimum free energy required for Free Power molecule to undergo Free Power specific reaction. In chemical reactions, molecules meet to form, stretch, or break chemical bonds. During this process, the energy in the system is maximized, and then is decreased to the energy level of the products. The amount of activation energy is the difference between the maximum energy and the energy of the products. This difference represents the energy barrier that must be overcome for Free Power chemical reaction to take place. Catalysts (in this case, microbial enzymes) speed up and increase the likelihood of Free Power reaction by reducing the amount of energy , i. e. the activation energy , needed for the reaction. Enzymes are usually quite specific. An enzyme is limited in the kinds of substrate that it will catalyze. Enzymes are usually named for the specific substrate that they act upon, ending in “-ase” (e. g. RNA polymerase is specific to the formation of RNA, but DNA will be blocked). Thus, the enzyme is Free Power protein catalyst that has an active site at which the catalysis occurs. The enzyme can bind Free Power limited number of substrate molecules. The binding site is specific, i. e. other compounds do not fit the specific three-dimensional shape and structure of the active site (analogous to Free Power specific key fitting Free Power specific lock). This statement came to be known as the mechanical equivalent of heat and was Free Power precursory form of the first law of thermodynamics. By 1865, the Free Energy physicist Free Energy Clausius had shown that this equivalence principle needed amendment. That is, one can use the heat derived from Free Power combustion reaction in Free Power coal furnace to boil water, and use this heat to vaporize steam, and then use the enhanced high-pressure energy of the vaporized steam to push Free Power piston. Thus, we might naively reason that one can entirely convert the initial combustion heat of the chemical reaction into the work of pushing the piston. Clausius showed, however, that we must take into account the work that the molecules of the working body, i. e. , the water molecules in the cylinder, do on each other as they pass or transform from one step of or state of the engine cycle to the next, e. g. , from (P1, V1) to (P2, V2). Clausius originally called this the “transformation content” of the body, and then later changed the name to entropy. 
Thus, the heat used to transform the working body of molecules from one state to the next cannot be used to do external work, e. g. , to push the piston. Clausius defined this transformation heat as dQ = T dS. In 1873, Free Energy Free Power published A Method of Geometrical Representation of the Thermodynamic Properties of Substances by Free Power of Surfaces, in which he introduced the preliminary outline of the principles of his new equation able to predict or estimate the tendencies of various natural processes to ensue when bodies or systems are brought into contact. By studying the interactions of homogeneous substances in contact, i. e. , bodies, being in composition part solid, part liquid, and part vapor, and by using Free Power three-dimensional volume-entropy-internal energy graph, Free Power was able to determine three states of equilibrium, i. e. , “necessarily stable”, “neutral”, and “unstable”, and whether or not changes will ensue. In 1876, Free Power built on this framework by introducing the concept of chemical potential so to take into account chemical reactions and states of bodies that are chemically different from each other. I’ve told you about how not well understood is magnetism. There is Free Power book written by A. K. Bhattacharyya, A. R. Free Electricity, R. U. Free Energy. – “Magnet and Magnetic Free Power, or Healing by Magnets”. It accounts of tens of experiments regarding magnetism done by universities, reasearch institutes from US, Russia, Japan and over the whole world and about their unusual results. You might wanna take Free Power look. Or you may call them crackpots, too. 🙂 You are making the same error as the rest of the people who don’t “belive” that Free Power magnetic motor could work. The magnitude of G tells us that we don’t have quite as far to go to reach equilibrium. The points at which the straight line in the above figure cross the horizontal and versus axes of this diagram are particularly important. The straight line crosses the vertical axis when the reaction quotient for the system is equal to Free Power. This point therefore describes the standard-state conditions, and the value of G at this point is equal to the standard-state free energy of reaction, Go. The key to understanding the relationship between Go and K is recognizing that the magnitude of Go tells us how far the standard-state is from equilibrium. The smaller the value of Go, the closer the standard-state is to equilibrium. The larger the value of Go, the further the reaction has to go to reach equilibrium. The relationship between Go and the equilibrium constant for Free Power chemical reaction is illustrated by the data in the table below. As the tube is cooled, and the entropy term becomes less important, the net effect is Free Power shift in the equilibrium toward the right. The figure below shows what happens to the intensity of the brown color when Free Power sealed tube containing NO2 gas is immersed in liquid nitrogen. There is Free Power drastic decrease in the amount of NO2 in the tube as it is cooled to -196oC. Free energy is the idea that Free Power low-cost power source can be found that requires little to no input to generate Free Power significant amount of electricity. Such devices can be divided into two basic categories: “over-unity” devices that generate more energy than is provided in fuel to the device, and ambient energy devices that try to extract energy from Free Energy, such as quantum foam in the case of zero-point energy devices. 
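The statement above about how far the standard state is from equilibrium is the usual relation between the reaction quotient, the standard free energy change and the equilibrium constant; written out (standard thermodynamics, not something derived in the text): $\Delta G = \Delta G^{\circ} + RT\ln Q$, and at equilibrium $\Delta G = 0$ with $Q = K$, so $\Delta G^{\circ} = -RT\ln K$. A reaction with $\Delta G^{\circ} = -10$ kJ/mol at 298 K therefore has $K = e^{10000/(8.314\times 298)} \approx 57$, while $\Delta G^{\circ} = 0$ gives $K = 1$: the smaller the magnitude of $\Delta G^{\circ}$, the closer the standard state is to equilibrium, exactly as stated.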
Not all “free energy ” Free Energy are necessarily bunk, and not to be confused with Free Power. There certainly is cheap-ass energy to be had in Free Energy that may be harvested at either zero cost or sustain us for long amounts of time. Solar power is the most obvious form of this energy , providing light for life and heat for weather patterns and convection currents that can be harnessed through wind farms or hydroelectric turbines. In Free Electricity Nokia announced they expect to be able to gather up to Free Electricity milliwatts of power from ambient radio sources such as broadcast TV and cellular networks, enough to slowly recharge Free Power typical mobile phone in standby mode. [Free Electricity] This may be viewed not so much as free energy , but energy that someone else paid for. Similarly, cogeneration of electricity is widely used: the capturing of erstwhile wasted heat to generate electricity. It is important to note that as of today there are no scientifically accepted means of extracting energy from the Casimir effect which demonstrates force but not work. Most such devices are generally found to be unworkable. Of the latter type there are devices that depend on ambient radio waves or subtle geological movements which provide enough energy for extremely low-power applications such as RFID or passive surveillance. [Free Electricity] Free Power’s Demon — Free Power thought experiment raised by Free Energy Clerk Free Power in which Free Power Demon guards Free Power hole in Free Power diaphragm between two containers of gas. Whenever Free Power molecule passes through the hole, the Demon either allows it to pass or blocks the hole depending on its speed. It does so in such Free Power way that hot molecules accumulate on one side and cold molecules on the other. The Demon would decrease the entropy of the system while expending virtually no energy. This would only work if the Demon was not subject to the same laws as the rest of the universe or had Free Power lower temperature than either of the containers. Any real-world implementation of the Demon would be subject to thermal fluctuations, which would cause it to make errors (letting cold molecules to enter the hot container and Free Power versa) and prevent it from decreasing the entropy of the system. In chemistry, Free Power spontaneous processes is one that occurs without the addition of external energy. A spontaneous process may take place quickly or slowly, because spontaneity is not related to kinetics or reaction rate. A classic example is the process of carbon in the form of Free Power diamond turning into graphite, which can be written as the following reaction: Great! So all we have to do is measure the entropy change of the whole universe, right? Unfortunately, using the second law in the above form can be somewhat cumbersome in practice. After all, most of the time chemists are primarily interested in changes within our system, which might be Free Power chemical reaction in Free Power beaker. Free Power we really have to investigate the whole universe, too? (Not that chemists are lazy or anything, but how would we even do that?) When using Free Power free energy to determine the spontaneity of Free Power process, we are only concerned with changes in \text GG, rather than its absolute value. 
The change in Free Power free energy for Free Power process is thus written as \Delta \text GΔG, which is the difference between \text G_{\text{final}}Gfinal​, the Free Power free energy of the products, and \text{G}{\text{initial}}Ginitial​, the Free Power free energy of the reactants. I'm not very good at building things, but I think I will give it shot. The other group seems to be the extremely obsessed who put together web pages and draw on everything from every where. Common names and amazing “theories” keep popping up. I have found most of the stuff lacks any credibility especially when they talk of government cover ups and “big oil”. They throw around Free Energy stuff with every breath. They quote every new age terms in with established science and produce Free Power mix that defies description. The next group take it one step further. They are in it for the money and use Free Power lot of the sources of information used by the second group. Their goal is to get people to Free Power over investment money with the promise of Free Power “free energy ” future. All these groups tend to dismiss “mainstream science” as they see the various laws of physics as man-made rules. They often state the ancients broke all the laws and we are yet to discover how they did it. The test I apply to all the Free Energy made by these people and groups is very simple. Where is the independent evidence? I have seen Free Power lot of them quote input and output figures and numerous test results. Some even get supposedly independent testing done. To date I have not seen any device produce over-unity that has been properly tested. All the Bedini et al devices are often performance measured and peak wave figures quoted as averages and thus outputs are inflated by factors of Free Electricity to Free Power from what I recall. “Phase conjugation” – ah I love these terms. Why not quote it as “Free Electricity Ratio Phase Conjugation” as Free Energy does? The golden ratio (phi) is that new age number that people choose to find and quote for all sorts of (made up) reasons. Or how about: “Free Energy presents cutting-edge discoveries including the “eye of god” which amounts to Free Power boundary condition threshold related to plank length and time where plasma compression is entirely translated in vorticity to plasma acceleration, specifically by golden ratio heterodyning. ” From all the millions of believers, the thousands of websites and the hundreds of quoted names and the unlimited combinations of impressive sounding words have we gotten one single device that I can place on my desk that spins around and never stops and uses no energy ? Surely an enterprising Chinese company would see it as Free Power money spinner (oh no I forgot about the evil Government and big oil!) and produce Free Power cheap desk top model. Yeah, i decided to go big to get as much torque as possible. Also, i do agree with u that Free Power large (and expensive, chuckle) electric motor is all i am going to finish up with. However, its the net power margins that im most interested in. Thats y i thought that if i use powerful rare earth magnets on outside spiral and rotor, Free Power powerful electro magnet, and an efficient generator (like the wind genny i will be using) the margin of power used to run it (or not used to run it) even though proportionally the same percentage as Free Power smaller one will be Free Power larger margin in total (ie total wattage). Therefore more easy to measure if the margin is extremely smalll. 
Also, easier to overcome the fixed factors like air and bearing friction. Free Electricity had Free Power look at it. A lot bigger than I thought it would be for Free Power test model. Looks nicely engineered. I see there is Free Power comment there already. I agree with the comment. I’m suprised you can’t find some wrought (soft) iron. Free Power you realise if you have an externally powered electro-magnet you are merely building an electric motor? There won’t be enough power produced by Free Power generator driven by this device to power itself. I wish I had your patience. Enjoy the project. The Perendev motor has shielding and doesn’t work. Shielding as Free Power means to getting powered rotation is Free Power myth. Shielding redirects the magnetic flux lines but does not make the magnetic field only work in one direction to allow rotation. If you believe otherwise this is easily testable. Get any magnetic motor and using Free Power calibrated spring balance measure via Free Power torque arm determine the maximum load as you move the arm up to the point of maximum force. Free Power it in Free Power clockwise and counter clockwise direction. I want to use Free Power 3D printer to create the stator and rotors. This should allow Free Power high quality build with lower cost. Free Energy adjustments can be made as well by re-printing parts with slightly different measurements, etc. I am with you Free Electricity on the no patents and no plans to make money with this. I want to free the world from this oppression. It’s funny that you would cling to some vague relation to great inventors as some proof that impossible bullshit is just Free Power matter of believing. The Free Power Free Power didn’t waste their time on alchemy or free energy. They sought to understand the physical forces around them. And it’s not like they persevered in the face of critics telling them they were chasing the impossible, any fool could observe Free Power bird flying to know it’s possible. You will never achieve anything even close to what they did because you are seeking to defy the reality of our world. You’ve got to understand before you can invent. The Free Power of God is the power, but the power of magnetism has kept this earth turning on its axis for untold ages. Thanks, Free Power. One more comment. I doubt putting up Free Power video of the working unit would do any good. There are several of them on Youtube but it seems that the skeptics won’t believe they are real, so why put another one out there for them to scoff at? Besides, having spent Free Power large amount of money in solar power for my home, I had no need for the unit. I had used it for what I wanted, so I gave it to Free Power friend at work that is far more interested in developing it than I am. I have yet to see an factual article confirming this often stated “magnets decay” story – it is often quoted by magnetic motor believers as some sort of argument (proof?) that the motors get their energy from the magnets. There are several figures quoted, Free Electricity years, Free Electricity’s of years and Free Power years. All made up of course. Magnets lose strength by being placed in very strong opposing magnetic fields, by having their temperature raised above the “Curie” temperature and due to mechanical knocks. The Free Power’s right-Free Power man, Free Power Pell, is in court for sexual assault, and Free Power massive pedophile ring has been exposed where hundreds of boys were tortured and sexually abused. 
Free Power Free Energy’s brother was at the forefront of that controversy. You can read more about that here. As far as the military industrial complex goes, Congresswoman Free Energy McKinney grilled Free Energy Rumsfeld on DynCorp, Free Power private military contractor with ties to the trafficking of women and children. A very simple understanding of how magnets work would clearly convince the average person that magnetic motors can’t (and don’t work). Pray tell where does the energy come from? The classic response is magnetic energy from when they were made. Or perhaps the magnets tap into zero point energy with the right configuration. What about they harness the earth’s gravitational field. Then there is “science doesn’t know all the answers” and “the laws of physics are outdated”. The list goes on with equally implausible rubbish. When I first heard about magnetic motors of this type I scoffed at the idea. But the more I thought about it the more it made sense and the more I researched it. Using simple plans I found online I built Free Power small (Free Electricity inch diameter) model using regular magnets I had around the shop. Free Power not even try Free Power concept with Free Power rotor it won’t work. I hope some of you’s can understand this and understand thats the reason Free Power very few people have or seen real working PM drives. My answers are; No, no and sorry I can’t tell you yet. Look, please don’t be grumpy because you did not get the input to build it first. Gees I can’t even tell you what we call it yet. But you will soon know. Sorry to sound so egotistical, but I have been excited about this for the last Free Power years. Now don’t fret………. soon you will know what you need to know. “…the secret is in the “SHAPE” of the magnets” No it isn’t. The real secret is that magnetic motors can’t and don’t work. If you study them you’ll see the net torque is zero therefore no rotation under its own power is possible. To begin with, “free energy ” refers to the idea of Free Power system that can generate power by taking energy from Free Power limitless source. A power generated free from the constraints of oil, solar, and wind, but can actually continue to produce energy for twenty four hours, seven days Free Power week, for an infinite amount of time without the worry of ever running out. “Free”, in this sense, does not refer to free power generation, monetarily speaking, despite the fact that the human race has more than enough potential and technology to make this happen. The machine can then be returned and “recharged”. Another thought is short term storage of solar power. It would be way more efficient than battery storage. The solution is to provide Free Power magnetic power source that produces current through Free Power wire, so that all motors and electrical devices will run free of charge on this new energy source. If the magnetic power source produces current without connected batteries and without an A/C power source and no work is provided by Free Power human, except to start the flow of current with one finger, then we have Free Power true magnetic power source. I think that I have the solution and will begin building the prototype. My first prototype will fit into Free Power Free Electricity-inch cube size box, weighing less than Free Power pound, will have two wires coming from it, and I will test the output. 
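The "net torque is zero" claim in the passage above can be put on a standard footing (basic magnetostatics, not something the forum posters derived): the force on a rigid permanent magnet in a static field comes from a potential energy, $U = -\mathbf{m}\cdot\mathbf{B}$, so the work done around any closed cycle of the rotor is $W_{cycle} = \oint \mathbf{F}\cdot d\boldsymbol{\ell} = -\oint dU = 0$, because the rotor returns to its starting configuration after one revolution. Whatever energy is gained while approaching a magnet is paid back while leaving it, so a passive arrangement of permanent magnets cannot deliver net work per turn.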
Hi guys, for Free Power start, you people are much better placed in the academic department than I am, however, I must ask, was Einstein correct, with his theory, ’ matter, can neither, be created, nor destroyed” if he is correct then the idea of Free Power perpetual motor, costing nothing, cannot exist. Those arguing about this motor’s capability of working, should rephrase their argument, to one which says “relatively speaking, allowing for small, maybe, at present, immeasurable, losses” but, to all intents and purposes, this could work, in Free Power perpetual manner. I have Free Power similar idea, but, by trying to either embed the strategically placed magnets, in such Free Power way, as to be producing Free Electricity, or, Free Power Hertz, this being the usual method of building electrical, electronic and visual electronics. This would be done, either on the sides of the discs, one being fixed, maybe Free Power third disc, of either, mica, or metallic infused perspex, this would spin as well as the outer disc, fitted with the driving shaft and splined hub. Could anybody, build this? Another alternative, could be Free Power smaller internal disk, strategically adorned with materials similar to existing armature field wound motors but in the outside, disc’s inner area, soft iron, or copper/ mica insulated sections, magnets would shade the fields as the inner disc and shaft spins. Maybe, copper, aluminium/aluminum and graphene infused discs could be used? Please pull this apart, nay say it, or try to build it?Lets use Free Power slave to start it spinning, initially!! In some areas Eienstien was correct and in others he was wrong. His Theory of Special Realitivity used concepts taken from Lorentz. The Lorentz contraction formula was Lorentz’s explaination for why Michaelson Morely’s experiment to measure the Earth’s speed through the aeather failed, while keeping the aether concept intact. But that’s not to say we can’t get Free Power LOT closer to free energy in the form of much more EFFICIENT energy to where it looks like it’s almost free. Take LED technology as Free Power prime example. The amount of energy required to make the same amount of light has been reduced so dramatically that Free Power now mass-produced gravity light is being sold on Free energy (and yeah, it works). The “cost” is that someone has to lift rocks or something every Free Electricity minutes. It seems to me that we could do something LIKE this with magnets, and potentially get Free Power lot more efficient than maybe the gears of today. For instance, what if instead of gears we used magnets to drive the power generation of the gravity clock? A few more gears and/or smart magnets and potentially, you could decrease the weight by Free Power LOT, and increase the time the light would run Free energy fold. Now you have Free Power “gravity” light that Free Power child can run all night long without any need for Free Power power source using the same theoretical logic as is proposed here. Free energy ? Ridiculous. “Conservation of energy ” is one of the most fundamental laws of physics. Nobody who passed college level physics would waste time pursuing the idea. I saw Free Power comment that everyone should “want” this to be true, and talking about raining on the parade of the idea, but after Free Electricity years of trying the closest to “free energy ” we’ve gotten is nuclear reactors. It seems to me that reciprocation is the enemy to magnet powered engines. Remember the old Mazda Wankel advertisements? 
Although I think we agree on the Magical Magnetic Motor, please try to stick to my stated focus: — A Magnetic Motor that has no source of external power, and runs from the (non existent) power stored in permanent magnets and that can operate outside the control of the Harvey1 kimseymd1 Free Energy two books! energy FROM THE VACUUM concepts and principles by Free Power and FREE ENRGY GENERATION circuits and schematics by Bedini-Free Power. Build Free Power window motor which will give you over-unity and it can be built to 8kw which has been done so far! NOTHING IS IMPOSSIBLE! Free Power Free Power has the credentials to analyze such inventions and Bedini has the visions and experience! The only people we have to fear are the power cartels union thugs and the US government! @Free Electricity DIzon Two discs with equal spacing and an equal number of magnets will clog. Free Electricity place two magnets on your discs and try it. Obviously you haven’t. That’s simple understanding. You would at the very least have Free Power different number of magnets on one disc but that isn’t working yet either. Why? Because I didn’t have the correct angle or distance. It did, however, start to move on its own. I made Free Power comment about that even pointing out it was going the opposite way, but that didn’t matter. This is Free Power video somebody made of Free Power completed unit. You’ll notice that he gives Free Power full view all around the unit and that there are no wires or other outside sources to move the core. Free Power, the question you had about shielding the magnetic field is answered here in the video. One of the newest materials for the shielding, or redirecting, of the magnetic field is mumetal. You can get neodymium magnets via eBay really cheaply. That way you won’t feel so bad when it doesn’t work. Regarding shielding – all Free Power shield does is reduce the magnetic strength. Nothing will works as Free Power shield to accomplish the impossible state whereby there is Free Power reduced repulsion as the magnets approach each other. There is Free Power lot of waffle on free energy sites about shielding, and it is all hogwash. Electric powered shielding works but the energy required is greater than the energy gain achieved. It is Free Power pointless exercise. Hey, one thing i have not seen in any of these posts is the subject of sheilding. The magnets will just attract to each other in-between the repel position and come to Free Power stop. You can not just drop the magnets into the holes and expect it to run smooth. Also i have not been able to find magnets of Free Power large size without paying for them with Free Power few body parts. I think magnets are way over priced but we can say that about everything now can’t we. If you can get them at Free Power good price let me know. Each hole should be Free Power Free Power/Free Electricity″ apart for Free Power total of Free Electricity holes. Next will be setting the magnets in the holes. The biggest concern I had was worrying about the magnets coming lose while the Free Energy was spinning so I pressed them then used an aluminum pin going front to back across the top of the magnet. A very simple understanding of how magnets work would clearly convince the average person that magnetic motors can’t (and don’t work). Pray tell where does the energy come from? The classic response is magnetic energy from when they were made. Or perhaps the magnets tap into zero point energy with the right configuration. 
What about they harness the earth’s gravitational field. Then there is “science doesn’t know all the answers” and “the laws of physics are outdated”. The list goes on with equally implausible rubbish. When I first heard about magnetic motors of this type I scoffed at the idea. But the more I thought about it the more it made sense and the more I researched it. Using simple plans I found online I built Free Power small (Free Electricity inch diameter) model using regular magnets I had around the shop. The results of this research have been used by numerous scientists all over the world. One of the many examples is Free Power paper written by Theodor C. Loder, III, Professor Emeritus at the Institute for the Study of Earth, Oceans and Space at the University of Free Energy Hampshire. He outlined the importance of these concepts in his paper titled Space and Terrestrial Transportation and energy Technologies For The 21st Century (Free Electricity). Ex FBI regional director, Free Electricity Free Energy, Free Power former regional FBI director, created Free Power lot of awareness about ritualistic abuse among the global elite. It goes into satanism, pedophilia, and child sex trafficking. Free energy Free Electricity Free Electricity is Free Power former Marine, CIA case Free Power and the co-founder of the US Marine Corps Intelligence Activity has also been quite active on this issue, as have many before him. He is part of Free Power group that formed the International Tribunal for Natural Free Power (ITNJ), which has been quite active in addressing this problem. Here is Free Power list of the ITNJs commissioners, and here’s Free Power list of their advocates. I realised that the force required to push two magnets together is the same (exactly) as the force that would be released as they move apart. Therefore there is no net gain. I’ll discuss shielding later. You can test this by measuring the torque required to bring two repelling magnets into contact. The torque you measure is what will be released when they do repel. The same applies for attracting magnets. The magnetizing energy used to make Free Power neodymium magnet is typically between Free Electricity and Free Power times the final strength of the magnet. Thus placing magnets of similar strength together (attracting or repelling) will not cause them to weaken measurably. Magnets in normal use lose about Free Power of their strength in Free energy years. Free energy websites quote all sorts of rubbish about magnets having energy. They don’t. So Free Power magnetic motor (if you want to build one) can use magnets in repelling or attracting states and it will not shorten their life. Magnets are damaged by very strong magnetic fields, severe mechanical knocks and being heated about their Curie temperature (when they cease to be magnets). Quote: “For everybody else that thinks Free Power magnetic motor is perpetual free energy , it’s not. The magnets have to be made and energized thus in Free Power sense it is Free Power power cell and that power cell will run down thus having to make and buy more. Not free energy. ” This is one of the great magnet misconceptions. Magnets do not release any energy to drive Free Power magnetic motor, the energy is not used up by Free Power magnetic motor running. Thinks about how long it takes to magnetise Free Power magnet. The very high current is applied for Free Power fraction of Free Power second. 
Yet inventors of magnetic motors then claim they draw tens of kilowatts for years out of a set of magnets. The energy input and output figures differ by factors of millions! A magnetic motor is not a perpetual motion machine, because it would have to get energy from somewhere, and it certainly doesn't come from the magnetisation process. And since no one has gotten one to run, I think that confirms the various reasons I have outlined. Shielding: all a shield does is reduce and redirect the field. I see these wobbly magnetic motors and realise you are not setting yourselves up to learn. Historically, the term "free energy" has been used for either quantity. In physics, free energy most often refers to the Helmholtz free energy, denoted by A or F, while in chemistry, free energy most often refers to the Gibbs free energy. The values of the two free energies are usually quite similar, and the intended free energy function is often implicit in manuscripts and presentations.
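For reference, the two quantities named in the last sentence are, by their standard definitions, $A = U - TS$ (Helmholtz) and $G = H - TS = U + PV - TS$ (Gibbs). They differ only by the $PV$ term, which is small for condensed phases, which is why their values are often close, as noted; $A$ bounds the work obtainable at constant temperature and volume, while $G$ bounds the non-expansion work obtainable at constant temperature and pressure.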
# Douglas Quadling Mechanics 1 worked solutions

The following Douglas Quadling Mechanics 1 worked solutions manual, organised by chapter, is available for purchase and viewing.

Douglas Quadling Mechanics 1 worked solutions are available for:

Ch1. Velocity and acceleration (49 questions), free preview: View Here, USD$ 22
Ch2. Force and motion (45 questions), free preview: View Here, USD$ 20
Ch3. Vertical motion (39 questions): View Here, USD$ 20
Ch4. Resolving forces (49 questions): View Here, USD$ 22
Ch5. Friction (48 questions): View Here, USD$ 22
Ch6. Motion due to gravity (38 questions): View Here, USD$ 20
Revision Exercise 1 (25 questions): View Here, USD$ 20
Ch7. Newton's third law (43 questions): View Here, USD$ 22
Ch8. Work, energy and power (47 questions): View Here, USD$ 22
Ch9. Potential energy (33 questions): View Here, USD$ 15
Practice Examinations (14 questions): View Here, USD$ 10
Revision Exercise 2 (22 questions): View Here, USD$ 20
Ch10. Forces as a vector quantity (62 questions), free preview: View Here, USD$ 25
Ch11. General motion in a straight line (58 questions): View Here, USD$ 24
Full Package (Total 572 questions) (Limited Time Offer): USD$ 200 (1 year subscription)

** Prices are subject to change without prior notice. **

** Payment Procedure: After you have made the payment, a receipt will be sent to you for your confirmation, to ensure you pay for the right content. We usually give access within an hour of your payment. All access will be granted within the same day of payment.

### *Important Notes*

#### Cambridge AS and A Level Mechanics 1 by Douglas Quadling and Julian Gilbey, Course book, Revised Second Edition (2016)

A note about our solutions to the above book. Our Douglas Quadling Mechanics 1 solutions are written completely for the 1st edition, published in 2002. A 2016 edition has recently come out. A comparison of both books revealed that the questions are the same except for some questions in the miscellaneous sections, and some of those also remain the same in both editions. We believe that our Quadling solutions for Mechanics 1 are still valid for the new 2016 book, and that they cover the newer edition substantially. The detailed comparison is as follows:

1 Velocity and acceleration: Ex 1A, 1B, 1C and 1D are the same for both editions
2 Force and motion: Ex 2A and 2B are the same
3 Vertical motion: Ex 3A and 3B are the same
4 Resolving forces: Ex 4A and 4B are the same
5 Friction: Ex 5A and 5B are the same
6 Motion due to gravity: Ex 6A, 6B and 6C are the same
Revision Exercise 1: the same
7 Newton's Third Law: Ex 7A, 7B and 7C are the same
8 Work, energy and power: Ex 8A and 8B are the same
9 Potential energy: Ex 9A and 9B are the same
10 Forces as a vector quantity: Ex 10A, 10B and 10C are the same
11 General motion in a straight line: Ex 11A and 11B are the same
Revision Exercise 2: Questions 1 to 18 are the same; questions 20 and 22 are the same; only questions 19 and 21 are different
Practice exams 1 and 2 are the same
I have to build a document structure with the different sectioning-levels (and therefor also the table of contents) having a certain, custom "numbering" instead of the default mathematical. E.g. a section should alsways be starting like with roman numbering (what you do with "\Alph*.") in the list-environment, a subsection always "(\arabic*)", a paragraph with "\alph*\alph*)" and so on. I've searched plenty of times for that, but either nobody was ever in need of this before me, or I'm missing the correct tags to search for. Any way I can do that? This is, what the toc should look like: A. Section I. Subsection II. Subsection 1) Subsubsection a) Paragraph 2) Subsubsection a) Paragraph aa) Subparagraph B. Section I. Subsection 1) Subsubsection • Well there is no real way I tried it because I've got no clue how to do it at all. I'll add an concrete example, how it shall look like in the toc... – user43961 Jan 30 '14 at 8:47 • You can take a look at the titletoc (included in titlesec) or the tocloft package. – Bernard Jan 30 '14 at 9:32 • Do you want this behavior only in the ToC or throughout the document, as well? – karlkoeller Jan 30 '14 at 9:50 • document as well – user43961 Jan 30 '14 at 18:00 To obtain this behavior throughout the document, we use the titlesec package: \usepackage{titlesec} \renewcommand{\thesection}{\Alph{section}} \titleformat{\section}{\normalfont\Large\bfseries}{\thesection.}{1em}{} \renewcommand{\thesubsection}{\Roman{subsection}} \titleformat{\subsection}{\normalfont\large\bfseries}{\thesubsection.}{1em}{} \renewcommand{\thesubsubsection}{\arabic{subsubsection}} \titleformat{\subsubsection}{\normalfont\normalsize\bfseries}{\thesubsubsection)}{1em}{} \renewcommand{\theparagraph}{\alph{paragraph}} \titleformat{\paragraph}[runin]{\normalfont\normalsize\bfseries}{\theparagraph)}{1em}{} \renewcommand{\thesubparagraph}{\alph{subparagraph}\alph{subparagraph}} \titleformat{\subparagraph}[runin]{\normalfont\normalsize\bfseries}{\thesubparagraph)}{1em}{} In regards of the ToC, we use the tocloft package for the customization: \usepackage{tocloft} \renewcommand{\cftsecaftersnum}{.} \renewcommand{\cftsubsecaftersnum}{.} \renewcommand{\cftsubsubsecaftersnum}{)} \renewcommand{\cftparaaftersnum}{)} \renewcommand{\cftsubparaaftersnum}{)} \setlength{\cftsecnumwidth}{1.8em} \setlength{\cftsubsecnumwidth}{2.3em} \setlength{\cftsubsubsecnumwidth}{1.8em} \setlength{\cftparanumwidth}{1.8em} \setlength{\cftsubparanumwidth}{2.2em} \setlength{\cftsecindent}{0em} \setlength{\cftsubsecindent}{1.8em} \setlength{\cftsubsubsecindent}{4.1em} \setlength{\cftparaindent}{5.9em} \setlength{\cftsubparaindent}{7.7em} So, this MWE \documentclass{article} \setcounter{secnumdepth}{5} \setcounter{tocdepth}{5} \usepackage{titlesec} \renewcommand{\thesection}{\Alph{section}} \titleformat{\section}{\normalfont\Large\bfseries}{\thesection.}{1em}{} \renewcommand{\thesubsection}{\Roman{subsection}} \titleformat{\subsection}{\normalfont\large\bfseries}{\thesubsection.}{1em}{} \renewcommand{\thesubsubsection}{\arabic{subsubsection}} \titleformat{\subsubsection}{\normalfont\normalsize\bfseries}{\thesubsubsection)}{1em}{} \renewcommand{\theparagraph}{\alph{paragraph}} \titleformat{\paragraph}[runin]{\normalfont\normalsize\bfseries}{\theparagraph)}{1em}{} \renewcommand{\thesubparagraph}{\alph{subparagraph}\alph{subparagraph}} \titleformat{\subparagraph}[runin]{\normalfont\normalsize\bfseries}{\thesubparagraph)}{1em}{} \usepackage{tocloft} \renewcommand{\cftsecaftersnum}{.} \renewcommand{\cftsubsecaftersnum}{.} 
\renewcommand{\cftsubsubsecaftersnum}{)} \renewcommand{\cftparaaftersnum}{)} \renewcommand{\cftsubparaaftersnum}{)} \setlength{\cftsecnumwidth}{1.8em} \setlength{\cftsubsecnumwidth}{2.3em} \setlength{\cftsubsubsecnumwidth}{1.8em} \setlength{\cftparanumwidth}{1.8em} \setlength{\cftsubparanumwidth}{2.2em} \setlength{\cftsecindent}{0em} \setlength{\cftsubsecindent}{1.8em} \setlength{\cftsubsubsecindent}{4.1em} \setlength{\cftparaindent}{5.9em} \setlength{\cftsubparaindent}{7.7em} \begin{document} \tableofcontents \bigskip\bigskip \section{A section} \subsection{A subsection} \subsection{A subsection} \subsubsection{A subsubsection} \paragraph{A paragraph} \subsubsection{A subsubsection} \paragraph{A paragraph} \subparagraph{A subparagraph} \section{A section} \subsection{A subsection} \subsubsection{A subsubsection} \end{document} yields this result • wow, that was amazing... I thought it must be possible with a little less effort... but it works perfectly! – user43961 Jan 30 '14 at 18:01
# Parsing input and output currency names

The input for price() would be STRINGS, as you see below. If a string starts with "-", I would like what follows to be stored in fiat_name, and the symbol then retrieved from _KNOWN_FIAT_SYMBOLS. "--rub" (for example) could be at any position in the list passed to price(). If the optional fiat_name does not exist in _KNOWN_FIAT_SYMBOLS, I would like it to default to USD/$. I would like the code optimized and cleaned/minimized. I am trying to practice clean and readable code.

import re

_KNOWN_FIAT_SYMBOLS = {"USD":"$", "RUB":"₽"}  # this will be populated with more symbols/pairs later

def price(*arguments):
    # default to USD/$
    fiat_name = "USD"
    arguments = list(arguments)
    cryptos = []
    for arg in arguments:
        arg = arg.strip()
        if not arg:
            continue
        for part in arg.split(","):
            if part.startswith("-"):
                fiat_name = part.upper().lstrip("-")
                continue
            crypto = re.sub("[^a-z0-9]", "", part.lower())
            if crypto not in cryptos:
                cryptos.append(crypto)
    if not cryptos:
        cryptos.append("btc")
    fiat_symbol = _KNOWN_FIAT_SYMBOLS.get(fiat_name)
    if not fiat_symbol:
        fiat_name = "USD"
        fiat_symbol = "$"
    print(f"{cryptos} to: {fiat_name}{fiat_symbol}")

price("usd", "usdc", "--rub")         # ['usd', 'usdc'] to: RUB₽ (because of the optional --rub)
price("usd,usdc,eth", "btc", "-usd")  # ['usd', 'usdc', 'eth', 'btc'] to: USD$ (because of the optional -usd)
price("usd", "usdc,btc", "-xxx")      # ['usd', 'usdc', 'btc'] to: USD$ (because xxx does not exist in _KNOWN_FIAT_SYMBOLS)
price("usd,usdc,eth", "btc")          # ['usd', 'usdc', 'eth', 'btc'] to: USD$ (because no optional fiat_name was given)
price("usd,--rub,eth", "btc")         # ['usd', 'eth', 'btc'] to: RUB₽ (because of the optional --rub)
price("--rub")                        # ['btc'] to: RUB₽ (because of the optional --rub and cryptos is empty)
price("")                             # ['btc'] to: USD$ (because of the default USD and cryptos is empty)

• Welcome to Code Review! Please edit your question so that the title describes the purpose of the code, rather than its mechanism. We really need to understand the motivational context to give good reviews. Thanks! Jul 9 at 13:29
import re from collections import namedtuple _KNOWN_FIAT_SYMBOLS = {"USD":"$", "RUB":"₽"} FiatCryptos = namedtuple('FiatCryptos', 'name symbol cryptos') Currency = namedtuple('Currency', 'name is_fiat') def price(*arguments): currencies = list(parse_currencies(arguments)) cryptos = [c.name for c in currencies if not c.is_fiat] or ['btc'] fiats = [c.name.upper() for c in currencies if c.is_fiat] or ['USD'] f = fiats[0] syms = _KNOWN_FIAT_SYMBOLS name, sym = (f, syms[f]) if f in syms else ('USD', '$') return FiatCryptos(name, sym, cryptos) def parse_currencies(arguments): for arg in arguments: for part in arg.strip().split(','): if part: is_fiat = part.startswith('-') name = re.sub('[^a-z0-9]', '', part.lower()) yield Currency(name, is_fiat)
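A short usage sketch of the refactored version, keeping the printing at the program's outer edge as suggested above (the example inputs mirror the ones in the question; the __main__ guard and the loop are additions for illustration, not part of the original answer):

if __name__ == "__main__":
    for args in (("usd", "usdc", "--rub"),
                 ("usd,usdc,eth", "btc", "-usd"),
                 ("",)):
        result = price(*args)  # FiatCryptos(name, symbol, cryptos)
        print(f"{result.cryptos} to: {result.name}{result.symbol}")

With the definitions above this should print "['usd', 'usdc'] to: RUB₽", then "['usd', 'usdc', 'eth', 'btc'] to: USD$", then "['btc'] to: USD$", matching the behaviour described in the question while leaving price() itself free of I/O.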
# Philip Michael Weiser Phone +47 45373050 Mobile phone +47-45373050 Postal address Postboks 1048 Blindern 0316 Oslo ## Summary • Experienced optical spectroscopist specializing in Fourier Transform Infrared (FT-IR), Raman, and ultraviolet-visible (UV-Vis) spectroscopies, especially at cryogenic temperatures. • Broad interdisciplinary background in physics, chemistry, and electrical engineering. • Enthusiastic and experienced lecturer ORCID: 0000-0003-1009-9940 Publications (updated 2020.des.10): 15 peer-reviewed articles and 1 book chapter ## Education 2017  Ph.D., Physics (Disputation date: 03.May.2017) Lehigh University, Pennsylvania, USA Dissertation Supervisor: Prof. Michael J. Stavola 2015  M.S., Electrical Engineering Lehigh University, Pennsylvania, USA 2012  B.S., Physics and Chemistry (ACS-certified degree) Summa cum laude, with Honors in Chemistry Moravian College, Pennsylvania, USA Thesis Supervisor: Prof. Carl Salter 2020.october.21 - Applied Physics Letters recently highlighted my article "Structure and vibrational properties of the dominant O-H center in $$\beta-$$Ga2O3" as one of the most "Highly Cited, Highly Read Research in Semiconductors." for this year. 2020.june.16 - A brief synopsis on my work to investigate hydrogen-related defects in multicrystalline silicon wafers is now available on the FME SuSolTech website: https://susoltech.no/detecting-hydrogen-impurities-in-multicrystalline-silicon-wafers/ ## Research Interests Point defects exist in all materials and play a profound role on the mechanical, optical, and electrical properties of many materials, especially semiconductors. A point defect can be beneficial, detrimental, or, in some cases, both depending on a material's applications. An excellent example is the oxygen impurity in crystalline silicon.  A substantial concentration of oxygen, mainly in the form of interstitial oxygen atoms,  is incorporated into crystalline silicon during its growth. On one hand, the presence of oxygen confers a number of benefits to the silicon material. Oxygen clusters aid in trapping transition metal impurities, e.g. Cu and Fe, which would otherwise significantly degrade the carrier lifetime and therefore the efficiency of the device. Oxygen also provides the wafer with increased mechanical strength, making it less susceptible to fracturing.  Interstitial oxygen atoms themselves are electrically neutral, and therefore do not alter to the electrical behavior of the device. On the other hand, oxygen is very reactive and forms complexes with other defects introduced during the crystal growth or subsequent processing, which degrade the electrical properties of the material. Oxygen can form various complexes with vacancy defects, resulting in a multitude of electrically active defects. The formation of oxygen precipitates has been linked with the appearance of thermal donors. Both of these oxygen-related defects make it difficult to reliably predict the electrical conductivity of the final device relative to the starting material. Light-induced degradation, in which the solar cell's efficiency drops by 3-6 % (on an absolute scale) after exposure to sunlight, is thought to be related to the formation of boron-oxygen related complexes. Thus, reliable control over a material's properties is, to a large extent, determined by the ability to manipulate and control its point defects. 
Over the past 60 years, a number of spectroscopic techniques have been developed to study and improve our understanding of the effect of defects on the properties of different materials.  These include (but are not limited to) electron paramagnetic resonance (EPR), aka electron spin resonance; deep-level transient spectroscopy (DLTS); Fourier Transform infrared (FT-IR) spectroscopy; and Raman spectroscopy.  Many of these techniques have been used to characterize light element impurities, e.g., hydrogen, lithium, carbon, oxygen and nitrogen, in semiconductors since these elements are often incorporated unintentionally during growth and processing.  An excellent example of this are transparent conducting oxides (TCOs) and transparent semiconducting oxides (TSOs), which are classes of metal-oxide-semiconductors that are both transparent to visible light and electrically conductive.  Although they are currently used as transparent contacts for liquid crystal displays (LCDs) and photovoltaic cells, TCOs and TSOs are excellent candidates for a number of applications, such as high performance transparent electronics. Many of these materials are easily grown as bulk single crystals (scalable), contain earth-abundant elements (cheap) and are non-toxic (environmentally-friendly). All of these factors have driven a wave of research to identify and control defects in these materials. Hydrogen-related defects have been found to play a crucial role in the optical and electrical properties of many semiconductors.  The behavior of hydrogen varies considerably among different materials depending on the crystal structure, the growth technique, the type of doping, the presence of background impurities.  In silicon, interstitial hydrogen (Hi) exhibits an amphoteric behavior and counteracts the prevailing conductivity of the host material. In n-type Si, Hi will act as an acceptor, whereas in p-type Si, Hi will act as a donor. Hydrogen can passivate the electrical behavior of this material by forming complexes with other donors and/or acceptors, thereby providing control over the conductivity.  However, in many TCOs, hydrogen behaves exclusively as a donor regardless of the background conductivity of the material.  Therefore, knowledge of the physical and chemical properties of hydrogen-related defects in these materials is critical for understanding and controlling the conductivity of these materials. ## Teaching During my undergraduate and graduate studies, I was extremely fortunate to take classes from excellent teachers and to be exposed to different teaching techniques.  Several of my undergraduate chemistry courses were taught using active learning techniques.  During my Ph.D. studies, I had the opportunity to teach several types of classes, including a laboratory and recitations for a freshman-level "Introduction to Classical Mechanics" course and lectures for a sophomore-level "Introduction to Quantum Mechanics" course. As a postdoctoral fellow I am more focused on my research, but I still seek out ways to remain active in my teaching. During Fall 2017, I served as an instructor for FT-IR spectroscopy for the MENA 9510 course offered here at the University of Oslo.  I have also given two seminars on FT-IR spectroscopy to both the Department of Physics and the LENS group. Tags: Published Sep. 7, 2017 10:55 AM - Last modified Dec. 15, 2020 3:41 PM
Sum of elements

Given a sequence of n real numbers, find the sum of all its elements.

Input
The first line contains the number n (n ≤ 100). The next line contains n real numbers, each no more than 100 in absolute value.

Output
Print the sum of all the sequence elements with 1 digit after the decimal point.

Time limit: 1 second
Memory limit: 128 MiB

Input example #1
5
1.2 1.3 5.7 1.8 12.4

Output example #1
22.4
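A minimal solution sketch in Python, assuming the judge accepts the whole of standard input read at once (whitespace-separated tokens):

import sys

data = sys.stdin.read().split()
n = int(data[0])                      # number of elements
values = map(float, data[1:1 + n])    # the n real numbers
print(f"{sum(values):.1f}")           # one digit after the decimal point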
# Convex Polyhedra

Publisher: Springer Verlag
Number of Pages: 539
Price: 129.00
ISBN: 3-540-23158-7

The text under review is a translation and expansion of a classic work on convex polyhedra, first published in Russian in 1950, then translated into German in 1958. It has been updated by V. A. Zalgaller, and fitted with supplements by Yu. A. Volkov and L. A. Shor. It is a testimony to Alexandrov's depth of insight and care in exposition that this monograph is still the basic text in the subject, and that it finally has an English translation (by Dairbekov, Kutateladze, and Sossinsky) more than fifty years after its writing. What is most compelling about this book is the choice of method — a polyhedron is a geometric object of some simplicity, accessible by ideas from synthetic geometry and elementary topology, and demonstrating anew the power of these methods.

The basic question of the book is to establish which data associated to a polyhedron determine it up to Euclidean equivalence. In two dimensions, for example, the conditions a + b > c, a + c > b and b + c > a determine the lengths of the sides of a triangle, which is unique up to isometry of the plane. The first such uniqueness theorem for polyhedra was given by Cauchy in 1813: two closed convex polyhedra composed of the same number of equal similarly situated faces are congruent (via a Euclidean motion of three-space). The combinatorial data that describe a polyhedron are given by a development. A development is a collection of polygons, together with a prescription for gluing them along their sides. The rules for gluing include the geometric assumptions that sides can be glued only if they have the same lengths, and that a side is glued to at most one other side. Such data also carry topological implications and conditions for convexity follow. To restore spatial geometry Alexandrov introduced the intrinsic metric of a development. Distances are measured along the faces of the polyhedron and so notions such as geodesics are defined. Viewing a closed, convex polyhedron as a sort of surface, the analogue of the Gauss map is a prescription of face directions. In this setting, Alexandrov develops and proves Minkowski's theorem, which states that two polyhedra with pairwise parallel faces of equal area are, in fact, parallel translates of each other. The failure of uniqueness for polyhedra with boundary and a given development is the notion of a flexible polyhedron. Alexandrov proved conditions equivalent to the existence of flexible polyhedra from which his rigidity theorems follow.

The original editions of this book included conjectures and problems, many of which were solved in the ensuing decades. Before his death, Alexandrov, with the assistance of Zalgaller, prepared footnotes for this edition bringing it up to date and including an expanded bibliography. The richness and beauty of the mathematics of polyhedra is the main gift of this book. It is bound to influence another fifty years of research on this subject.

John McCleary is Professor of Mathematics at Vassar College.

Friday, April 1, 2005

A.D. Alexandrov
Series: Springer Monographs in Mathematics
Publication Date: 2005
Format: Hardcover
Category: Monograph

John McCleary, 11/10/2005

Table of Contents

Introduction (p. 1)
  Content and Purpose of the Book (p. 1)
  Order and Character of the Exposition (p. 3)
  Remarks for the Professional (p. 5)
1 Basic Concepts and Simplest Properties of Convex Polyhedra (p. 7)
  1.1 Definition of a Convex Polyhedron (p. 7)
  1.2 Determining a Polyhedron from the Planes of Its Faces (p. 16)
  1.3 Determining a Closed Polyhedron from Its Vertices (p. 21)
  1.4 Determining an Unbounded Polyhedron from Its Vertices and the Limit Angle (p. 26)
  1.5 The Spherical Image (p. 39)
  1.6 Development (p. 49)
  1.7 Topological Properties of Polyhedra and Developments (p. 56)
  1.8 Some Theorems of the Intrinsic Geometry of Developments (p. 72)
  1.9 Generalizations (p. 82)
2 Methods and Results (p. 87)
  2.1 The Cauchy Lemma (p. 87)
  2.2 The Mapping Lemma (p. 93)
  2.3 Determining a Polyhedron from a Development (Survey of Chapters 3, 4, and 5) (p. 99)
  2.4 Polyhedra with Prescribed Face Directions (Survey of Chapters 6, 7, and 8) (p. 106)
  2.5 Polyhedra with Vertices on Prescribed Rays (Survey of Chapter 9) (p. 117)
  2.6 Infinitesimal Rigidity Theorems (Survey of Chapters 10 and 11) (p. 124)
  2.7 Passage from Polyhedra to Curved Surfaces (p. 136)
  2.8 Basic Topological Notions (p. 141)
  2.9 The Domain Invariance Theorem (p. 147)
3 Uniqueness of Polyhedra with Prescribed Development (p. 155)
  3.1 Several Lemmas on Polyhedral Angles (p. 155)
  3.2 Equality of Dihedral Angles in Polyhedra with Equal Planar Angles (p. 163)
  3.3 Uniqueness of Polyhedra with Prescribed Development (p. 169)
  3.4 Unbounded Polyhedra of Curvature Less Than 2π (p. 173)
  3.5 Polyhedra with Boundary (p. 182)
  3.6 Generalizations (p. 186)
4 Existence of Polyhedra with Prescribed Development (p. 193)
  4.1 The Manifold of Developments (p. 193)
  4.2 The Manifold of Polyhedra (p. 202)
  4.3 Existence of Closed Convex Polyhedra with Prescribed Development (p. 210)
  4.4 Existence of Unbounded Convex Polyhedra with Prescribed Development (p. 212)
  4.5 Existence of Unbounded Polyhedra Given the Development and the Limit Angle (p. 218)
5 Gluing and Flexing Polyhedra with Boundary (p. 229)
  5.1 Gluing Polyhedra with Boundary (p. 229)
  5.2 Flexes of Convex Polyhedra (p. 246)
  5.3 Generalizations of Chapters 4 and 5 (p. 261)
6 Conditions for Congruence of Polyhedra with Parallel Faces (p. 271)
  6.1 Lemmas on Convex Polyhedra (p. 271)
  6.2 On Linear Combination of Polyhedra (p. 281)
  6.3 Congruence Conditions for Closed Polyhedra (p. 287)
  6.4 Congruence Conditions for Unbounded Polyhedra (p. 291)
  6.5 Another Proof and Generalization of the Theorem on Unbounded Polyhedra. Polyhedra with Boundary (p. 295)
  6.6 Generalizations (p. 304)
7 Existence Theorems for Polyhedra with Prescribed Face Directions (p. 311)
  7.1 Existence of Polyhedra with Prescribed Face Areas (p. 311)
  7.2 Minkowski's Proof of the Existence of Polyhedra with Prescribed Face Areas (p. 317)
  7.3 Existence of Unbounded Polyhedra with Prescribed Face Areas (p. 321)
  7.4 The General Existence Theorem for Unbounded Polyhedra (p. 327)
  7.5 Existence of Convex Polyhedra with Prescribed Support Numbers (p. 332)
  7.6 Generalizations (p. 342)
8 Relationship Between the Congruence Condition for Polyhedra with Parallel Faces and Other Problems (p. 349)
  8.1 Parallelohedra (p. 349)
  8.2 A Polyhedron of Least Area with Fixed Volume (p. 359)
  8.3 Mixed Volumes and the Brunn Inequality (p. 366)
9 Polyhedra with Vertices on Prescribed Rays (p. 377)
  9.1 Closed Polyhedra (p. 377)
  9.2 Unbounded Polyhedra (p. 388)
  9.3 Generalizations (p. 395)
10 Infinitesimal Rigidity of Convex Polyhedra with Stationary Development (p. 403)
  10.1 Deformation of Polyhedral Angles (p. 404)
  10.2 The Strong Cauchy Lemma (p. 410)
  10.3 Stationary Dihedral Angles for Stationary Planar Angles (p. 415)
  10.4 Infinitesimal Rigidity of Polyhedra and Equilibrium of Hinge Mechanisms (p. 421)
  10.5 On the Deformation of Developments (p. 425)
  10.6 Rigidity of Polyhedra with Stationary Development (p. 429)
  10.7 Generalizations (p. 435)
11 Infinitesimal Rigidity Conditions for Polyhedra with Prescribed Face Directions (p. 439)
  11.1 On Deformations of Polygons (p. 439)
  11.2 Infinitesimal Rigidity Theorems for Polyhedra (p. 445)
  11.3 Relationship of Infinitesimal Rigidity Theorems with One Another and with the Theory of Mixed Volumes (p. 453)
  11.4 Generalizations (p. 459)
12 Supplements (p. 463)
  12.1 Supplement to Chapter 3: Yu. A. Volkov. An Estimate for the Deformation of a Convex Surface in Dependence on the Variation of Its Intrinsic Metric (p. 463)
  12.2 Supplement to Chapter 4: Yu. A. Volkov. Existence of Convex Polyhedra with Prescribed Development. I (p. 492)
  12.3 Supplement to Chapter 5: L. A. Shor. On Flexibility of Convex Polyhedra with Boundary (p. 506)
References (p. 525)
Subject Index (p. 537)
Getting A Vcf File From A Fasta Alignment 2 4 Entering edit mode 9.1 years ago Bioch'Ti ★ 1.1k Hi All, I'm wondering if anybody is aware of a tool that converts a fasta alignment (that includes fasta sequences of different individuals) into a vcf file? I know that the opposite is feasible (vcf to fasta) using the GATK dedicated tool, but I want to perform the opposite ! Thanks for your help, C. vcf fasta conversion alignment • 29k views 0 Entering edit mode what's your input ? a CLUSTALW aln file ? 0 Entering edit mode The input file is a txt file (.fas for example) that contains only the aligned sequences (two lines per individual : one of comment, the second one for the sequence itself). Here is an example: >Ind1 ACGTGGCTAGATCA >Ind2 ACGTGGCTAGATCA >Ind3 ACGTGCCTAGATCA 0 Entering edit mode how are managed the indels ? 0 Entering edit mode It will be nice to manage the indels too to make the script more general. But in my case, I do not need to consider these markers. 0 Entering edit mode I added a support to MSA/Fasta. See below. 9 Entering edit mode 9.1 years ago I quickly wrote something for a CLUSTALW alignment (not fasta). See: http://lindenb.github.io/jvarkit/MsaToVcf.html It seems to work with the following clustalw input: CLUSTAL W (1.81) multiple sequence alignment gi|6273285|gb|AF191659.1|AF191 TATACATTAAAGAAGGGGGATGCGGATAAATGGAAAGGCGAAAGAAAGAA gi|6273284|gb|AF191658.1|AF191 TATACATTAAAGAAGGGGGATGCGGATAAATGGAAAGGCGAAAGAAAGAA gi|6273287|gb|AF191661.1|AF191 TATACATTAAAGAAGGGGGATGCGGATAAATGGAAAGGCGAAAGAAAGAA gi|6273286|gb|AF191660.1|AF191 TATACATAAAAGAAGGGGGATGCGGATAAATGGAAAGGCGAAAGAAAGAA gi|6273290|gb|AF191664.1|AF191 TATACATTAAAGGAGGGGGATGCGGATAAATGGAAAGGCGAAAGAAAGAA gi|6273289|gb|AF191663.1|AF191 TATACATTAAAGGAGGGGGATGCGGATAAATGGAAAGGCGAAAGAAAGAA gi|6273291|gb|AF191665.1|AF191 TATACATTAAAGGAGGGGGATGCGGATAAATGGAAAGGCGAAAGAAAGAA ******* **** ************************************* gi|6273285|gb|AF191659.1|AF191 TATATA----------ATATATTTCAAATTTCCTTATATACCCAAATATA gi|6273284|gb|AF191658.1|AF191 TATATATA--------ATATATTTCAAATTTCCTTATATACCCAAATATA gi|6273287|gb|AF191661.1|AF191 TATATA----------ATATATTTCAAATTTCCTTATATATCCAAATATA gi|6273286|gb|AF191660.1|AF191 TATATA----------ATATATTTATAATTTCCTTATATATCCAAATATA gi|6273290|gb|AF191664.1|AF191 TATATATATA------ATATATTTCAAATTCCCTTATATATCCAAATATA gi|6273289|gb|AF191663.1|AF191 TATATATATA------ATATATTTCAAATTCCCTTATATATCCAAATATA gi|6273291|gb|AF191665.1|AF191 TATATATATATATATAATATATTTCAAATTCCCTTATATATCCAAATATA ****** ******** **** ********* ********* gi|6273285|gb|AF191659.1|AF191 AAAATATCTAATAAATTAGATGAATATCAAAGAATCCATTGATTTAGTGT gi|6273284|gb|AF191658.1|AF191 AAAATATCTAATAAATTAGATGAATATCAAAGAATCTATTGATTTAGTGT gi|6273287|gb|AF191661.1|AF191 AAAATATCTAATAAATTAGATGAATATCAAAGAATCTATTGATTTAGTGT gi|6273286|gb|AF191660.1|AF191 AAAATATCTAATAAATTAGATGAATATCAAAGAATCTATTGATTTAGTGT gi|6273290|gb|AF191664.1|AF191 AAAATATCTAATAAATTAGATGAATATCAAAGAATCTATTGATTTAGTGT gi|6273289|gb|AF191663.1|AF191 AAAATATCTAATAAATTAGATGAATATCAAAGAATCTATTGATTTAGTAT gi|6273291|gb|AF191665.1|AF191 AAAATATCTAATAAATTAGATGAATATCAAAGAATCTATTGATTTAGTGT ************************************ *********** * gi|6273285|gb|AF191659.1|AF191 ACCAGA gi|6273284|gb|AF191658.1|AF191 ACCAGA gi|6273287|gb|AF191661.1|AF191 ACCAGA gi|6273286|gb|AF191660.1|AF191 ACCAGA gi|6273290|gb|AF191664.1|AF191 ACCAGA gi|6273289|gb|AF191663.1|AF191 ACCAGA gi|6273291|gb|AF191665.1|AF191 ACCAGA ****** $curl https://raw.github.com/biopython/biopython/master/Tests/Clustalw/opuntia.aln" |\ java -jar dist/biostar94573.jar 
##fileformat=VCFv4.1 ##Biostar94573CmdLine= ##Biostar94573Version=ca765415946f3ed0827af0773128178bc6aa2f62 ##FORMAT=<ID=DP,Number=1,Type=Integer,Description="Approximate read depth"> ##FORMAT=<ID=GT,Number=1,Type=String,Description="Genotype"> ##INFO=<ID=DP,Number=1,Type=Integer,Description="Approximate read depth."> ##contig=<ID=chrUn,length=156> #CHROM POS ID REF ALT QUAL FILTER INFO FORMAT gi|6273284|gb|AF191658.1|AF191 gi|6273285|gb|AF191659.1|AF191 gi|6273286|gb|AF191660.1|AF191 gi|6273287|gb|AF191661.1|AF191 gi|6273289|gb|AF191663.1|AF191 gi|6273290|gb|AF191664.1|AF191 gi|6273291|gb|AF191665.1|AF191 chrUn 8 . T A . . DP=7 GT:DP 0:1 0:1 1:1 0:1 0:1 0:1 0:1 chrUn 13 . A G . . DP=7 GT:DP 0:1 0:1 0:1 0:1 1:1 1:1 1:1 chrUn 56 . ATATATATATA ATA,A,ATATA . . DP=7 GT:DP 1:1 2:1 2:1 2:1 3:1 3:1 0:1 chrUn 74 . TCA TAT . . DP=7 GT:DP 0:1 0:1 1:1 0:1 0:1 0:1 0:1 chrUn 81 . T C . . DP=7 GT:DP 0:1 0:1 0:1 0:1 1:1 1:1 1:1 chrUn 91 . T C . . DP=7 GT:DP 1:1 1:1 0:1 0:1 0:1 0:1 0:1 chrUn 137 . T C . . DP=7 GT:DP 0:1 1:1 0:1 0:1 0:1 0:1 0:1 chrUn 149 . G A . . DP=7 GT:DP 0:1 0:1 0:1 0:1 1:1 0:1 0:1 ## Edit I added a support for fasta/MSA $ cat input.msa >Ind1 ACGTGGCTAGATCA >Ind2 ACGTGGCTAGATCA >Ind3 ACGTGCCTAGATCA get the output: \$ cat input.msa | java -jar dist/biostar94573.jar ##fileformat=VCFv4.1 ##Biostar94573CmdLine= ##FORMAT=<ID=GT,Number=1,Type=String,Description="Genotype"> ##contig=<ID=chrUn,length=14> #CHROM POS ID REF ALT QUAL FILTER INFO FORMAT Ind1 Ind2 Ind3 chrUn 6 . G C . . DP=3 GT:DP 0:1 0:1 1 0 Entering edit mode Hi Pierre, I am looking for a program for converting fasta -> vcf and I found this discussion and your code very helpful. However, I have a couple of questions regarding this output - #CHROM POS ID REF ALT QUAL FILTER INFO FORMAT Ind1 Ind2 Ind3 chrUn 6 . G C . . DP=3 GT:DP 0:1 0:1 1 Shouldn't the genotype for Ind1 = 0/0 and Ind2 = 0/0 and Ind3 = 1/1? Also does this program allow input as a consensus sequence - for instance, there could be sites such Y and R in the sequence and the vcf could translate these as a A/T or C/G? 0 Entering edit mode you're right:I've fixed the bug! thank you #CHROM POS ID REF ALT QUAL FILTER INFO FORMAT Ind1 Ind2 Ind3 chrUn 6 . G C . . DP=3 GT:DP 0/0:1 0/0:1 1/1:1 consensus sequence: no(I don't have time for this) 0 Entering edit mode Thanks! This is very helpful. 0 Entering edit mode Hi Pierre, I have many vcf files now, generated courtesy of your script. I only need SNP counts for the purposes of my analyses. Is there a way to get SNP counts (only) for all my vcf files? I thought to use grep, but I noticed I was getting reports with indels, too. 1 Entering edit mode Use bcftools filter with TYPE="snp" http://samtools.github.io/bcftools/bcftools.html#expressions 0 Entering edit mode [filter.c:1298 filters_init1] Error: the tag "INFO/snp" is not defined in the VCF header error in vcf file generated by your script, when using bcftools filter with TYPE="snp". 0 Entering edit mode https://vcftools.github.io/man_latest.html vcftools --vcf input_file.vcf --remove-indels --recode --recode-INFO-all --out SNPs_only I finally fix it! 0 Entering edit mode hrUn 14005 . TCG TCA,TAG ##actually 2 snp chrUn 28819 . TTCA TGAA,TTCC ##actually 3snp Could you separate them, not together? 0 Entering edit mode This can be done with other tools - you want to decompose complex variants into simple variants. I've never tried, but I think e.g. bcftools norm could be used for this job. I think also vcflib should have some tool for this. 
0 Entering edit mode Pierre, many thanks for providing this script, works very well. I used this script for one fasta alignment, but I would like to use it for thousands of fasta alignments. Is there a way to do this separately for each locus all at once? 0 Entering edit mode just concatenate • the files with cat? • or merge the vcf later? 0 Entering edit mode Right, but I think that would be violating homology, no? Desired workflow: fasta > vcf > snp count (for each locus/alignment). Many thanks in advance. 0 Entering edit mode Apologies if I am not making any sense. Either way, thank you for providing script. 0 Entering edit mode Should I use a for loop? 0 Entering edit mode Hi Pierre, Thanks for this! Would it be possible to output EVERY position on the reference as VCF format? (no coverage = ./., and if same as reference = 0/0)? Otherwise I can't merge VCF files of different samples together.... 0 Entering edit mode ask this as an issue at https://github.com/lindenb/jvarkit/issues . Anyway, I away from my sources for a few days. 0 Entering edit mode I think this should work fine with bcftools merge. as long as the VCFs are all called against the same reference. 0 Entering edit mode I try to run this: cat merged_fasta.aln | java -jar dist/biostar94573.jar but it gives me this: Unable to access jarfile dist/biostar94573.jar 0 Entering edit mode Do you have the program from github in the path you specified (i.e. dist/biostar94573.jar)? Github link is in the first post.. 0 Entering edit mode Do you have the program from github in the path you specified the updated doc is http://lindenb.github.io/jvarkit/MsaToVcf.html 0 Entering edit mode And it says deprecated in favour of snp-sites from Sanger-pathogens ;) Btw, I have a slightly different problem that doesn't work well with the two tools - I have one fasta genome (not alignment) that I need to convert to VCF for merging with my other VCF dataset. Do you know any tool that can convert fasta of one sample to VCF? (It's a chimp mapped & called against hg19, I need it as outgroup in my hg19-based dataset.) 1 Entering edit mode hard to say without see+understand the input data. You should ask this as a new question with a link to the current question explaining why those tools don't fulfill your needs. 2 Entering edit mode 9.1 years ago Bioch'Ti ★ 1.1k Hi, I also found that PGDSpider (http://www.cmpg.unibe.ch/software/PGDSpider/) is able to convert fasta alignment into a vcf file. The problem is that it does not handle concatened alignements in a single fasta file (eg sequences of different sizes). hope this helps Best, C. 0 Entering edit mode I was looking into this tool in the past and found I would have to probably merge the fasta into a single long sequence. I could do that, but it sounded insane, so I've found a different solution to my problem at the time (don't remember now what it was). I guess the same holds for my current problem :D :/
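For readers who just want to see what such a conversion does, here is a toy Python sketch, not a replacement for msa2vcf or snp-sites: it reads an aligned multi-FASTA, takes the first sequence as the reference, and prints the substitution-only columns in a minimal VCF-like layout. Indel columns are skipped (the original poster did not need them), and the file name "input.msa" is simply the small example shown earlier in this thread.

def read_fasta(path):
    """Return a list of (name, sequence) tuples from an aligned FASTA file."""
    records, name, chunks = [], None, []
    with open(path) as handle:
        for line in handle:
            line = line.strip()
            if line.startswith(">"):
                if name is not None:
                    records.append((name, "".join(chunks).upper()))
                name, chunks = line[1:], []
            elif line:
                chunks.append(line)
    if name is not None:
        records.append((name, "".join(chunks).upper()))
    return records

def variant_columns(records):
    """Yield (pos, ref, alt, names, genotypes) for substitution columns only."""
    names = [name for name, _ in records]
    seqs = [seq for _, seq in records]
    ref_seq = seqs[0]                      # first record is used as the reference
    for pos, ref_base in enumerate(ref_seq, start=1):
        column = [seq[pos - 1] for seq in seqs]
        if "-" in column:                  # skip indel columns in this toy version
            continue
        alts = sorted(set(column) - {ref_base})
        if not alts:
            continue
        genotypes = [0 if base == ref_base else 1 + alts.index(base) for base in column]
        yield pos, ref_base, ",".join(alts), names, genotypes

for pos, ref, alt, names, gts in variant_columns(read_fasta("input.msa")):
    print("chrUn", pos, ".", ref, alt, ".", ".", ".", "GT", *gts, sep="\t")

For many per-locus alignments, the same two functions can simply be called in a loop over the files, which also answers the "should I use a for loop?" question above.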
### 3415 An image-based method for undistorted image estimation from distorted brain EPI image with field inhomogeneity Seiji Kumazawa1, Takashi Yoshiura2, Takumi Tanikawa1, and Yuji Yaegashi1 1Hokkaido University of Science, Sapporo, Japan, 2Kagoshima University, Kagoshima, Japan ### Synopsis Our purpose was to develop an image-based method for undistorted image estimation from the distorted EPI image using T1 weighted image. Our basic idea to estimate the field inhomogeneity map is to reproduce the distorted EPI image, and estimates the undistorted image using the estimated field inhomogeneity map based on the signal equation in a single-shot EPI k-space trajectory. The value of the NRMSE between the measured EPI and synthesized EPI was 0.017, and both images were in good agreement. Results demonstrate that our proposed method was able to perform a reasonable estimation of the field inhomogeneity map and undistorted EPI image. ### INTRODUCTION Magnetic field inhomogeneities caused by susceptibility differences result in significant geometric distortion in single-shot echo planar imaging (EPI) images. Many methods have been proposed for correcting the distortion in EPI. Among the distortion correction methods, the most commonly used method is the field-map-based approach, which requires additional acquisition to obtain the field map1 and has the problem of registering the field map to the EPI image.2 As an alternative approach, non-rigid registration methods have been proposed, which register the distorted EPI image into an undistorted reference image. However, it is known that the deformations are described by many parameters to control the geometric transformation, and the accuracy of such methods depends on how the deformations are constrained. In this study, our purpose was to develop an image-based method for undistorted image estimation from the distorted EPI image using T1 weighted image (T1WI) based on MR imaging physics. ### METHODS We propose the iterative-method to estimate an undistorted image and its associated field inhomogeneity map using conjugate gradient algorithm. Our basic idea to estimate the field inhomogeneity map in EPI is to reproduce the distorted EPI image based on the signal equation in a single-shot EPI k-space trajectory. To synthesize the distorted EPI image, we make use of the image-based method3 which can reduce the computation time compared to the k-space texture method4. The MR signal in synthesized EPI was calculated on a voxel-by-voxel basis using the estimated T2 value and field inhomogeneity ΔB by the following equation: $$I(x,y,T2, \Delta B)=\sum_{u=0}^{M-1} \sum_{v=0}^{N-1} A(u,v)\cdot exp(-\frac{TEeff}{T2(u,v)})\cdot sinc (\pi (x-u-\frac{\Delta B(u,v)}{\Delta x \cdot Gx})) \cdot sinc (\pi (y-v-\frac{\Delta B(u,v)}{\Delta y \cdot Gy}))$$ where Δx and Δy are pixel spacing in x and y direction, respectively. Gx is the gradient in the x direction, and Gy = Gbτ /Δty (Gb is the average blip gradient in the y direction during the duration τ, and Δty is the time intervals between adjacent points in the phase-encoding directions in k-space). To estimate T2 value and ΔB at each point, we defined the cost function using the synthesized EPI image and the measured EPI image with geometric distortion as follows. $$J(T2, \Delta B) = \sum_{x=0}^{M-1}\sum_{y=0}^{N-1}|Y(x,y)-I(x,y,T2, \Delta B) |^2 +\beta_1 R_1(\Delta B)+\beta_2 R_2 (T2)$$ where Y(x,y) is the measured EPI image, R1(ΔB(x,y)) and R2(T2(x,y)) are regularization terms. 
To result in a relatively smooth field inhomogeneity map, R1(ΔB(x,y)) penalizes the roughness of the estimated field inhomogeneity map. To obtain the structural information in the distorted area in the measured EPI image, we used the same slice of the T1WI as the measured EPI image. We generated the initial estimated T2map by applying the gray-scale inversion and histogram specification5 to the T1WI. We used Tikhonov regularization for R2(T2(x,y)), the initial estimated T2map was used as the reference image in Tikhonov regularization. The estimation of T2 and ΔB maps was performed to minimize the cost function using an iterative conjugate gradient algorithm. The spin echo (SE) EPI and T1WI data of a healthy volunteer were acquired using a 1.5-tesla clinical Siemens scanner. The SE EPI data was obtained by a single-shot EPI pulse sequence (FOV: 230 mm, TR=8600ms, TE=119ms, 128×128 in-plane resolution, 3 mm thickness). Three dimensional T1WI covering the same area in EPI was obtained by MPRAGE sequence (FOV: 230 mm, TR=2090ms, TE=3.93ms, TI=1100ms, FA=15°, 256×256 in-plane resolution, 1 mm thickness). To evaluate the performance of our methods, we used the normalized root mean square error (NRMSE) between the measured EPI and synthesized EPI. ### RESULTS and DISCUSSION Figure 1 shows the estimation process in our method and illustrates the convergence of the cost function over iteration. The computation time was 18938.7 sec. Figure 2(a), (b) and (c) shows the measured EPI image, the synthesized EPI image and the absolute difference image between them. The value of the NRMSE was 0.017. These results demonstrate that both images were in good agreement. Figure 3(a), (b), (c) and (d) shows T1WI, initial T2map, estimated undistorted EPI image, and estimated ΔB map. As shown in Fig.3(c), geometric distortion in the frontal lobe of the brain, where the field inhomogeneity was high in Fig.3(d), was reduced significantly by proposed method. ### CONCLUSION We have presented an image-based method for undistorted image estimation from the distorted EPI image using T1WI based on MR imaging physics. Our results demonstrate that our proposed method was able to perform a reasonable estimation of the field inhomogeneity map and undistorted EPI image. ### Acknowledgements This work was supported by Japan Society for the Promotion of Science (JSPS) KAKENHI Grant Number 15K08695 and 26293278. ### References 1. Jezzard P, Balaban RS. 1995. Correction for geometric distortion in echo planar images from B0 field variations. Magn Reson Med 34:65-73. 2. Sutton BP, Noll DC, Fessler JA, 2004. Dynamic field map estimation using a spiral-in/spiral-out acquisition. Magn Reson Med. 51:1194-1204. 3. Kumazawa S, Yoshiura T, Kikuchi A, et al. 2017. An improved image-based method for field inhomogeneity map in distorted brain EPI image. Proc ISMRM, 1526. 4. Kumazawa S, Yoshiura T, Honda H. 2015. Image-based estimation method for field inhomogeneity in brain echo-planar images with geometric distortion using k-space textures. Concept Magn Reson B. 45:142-152. 5. Coltuc D, Bolon P, Chassery JM. 2006. Exact histogram specification. IEEE Trans Image Process. 15:1143-1152. ### Figures Figure 1: The convergence of the cost function over iteration in our estimation process. Figure 2: (a) Measured EPI image from a healthy volunteer, (b) the synthesized EPI image by proposed method, and (c) the absolute difference image between them. Figure 3: (a) The same slice of the T1 weighted image as the measured EPI image. 
(b) the initial estimated T2map by applying the gray-scale inversion and histogram specification to the T1WI, (c) estimated undistorted EPI image, and (d) the estimated field inhomogeneity map by proposed method. Proc. Intl. Soc. Mag. Reson. Med. 26 (2018) 3415
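As a reading aid for the signal equation in the METHODS section, here is a minimal NumPy sketch of the forward model only, i.e. synthesizing a distorted EPI image from given A, T2 and ΔB maps. The toy maps and the gradient constants Gx and Gy below are placeholders rather than values from the abstract, the pixel spacing is simply FOV/matrix from the acquisition parameters, and the conjugate-gradient estimation of T2 and ΔB is not shown. Note that np.sinc(t) = sin(πt)/(πt), which is used here to match the sinc(π(...)) terms.

import numpy as np

def synthesize_epi(A, T2, dB, TEeff, dx, dy, Gx, Gy):
    """Voxel-wise synthesized EPI magnitude image I(x, y) from the signal equation."""
    M, N = A.shape
    u = np.arange(M)[:, None]            # source-voxel row index
    v = np.arange(N)[None, :]            # source-voxel column index
    weight = A * np.exp(-TEeff / T2)     # A(u,v) * exp(-TEeff / T2(u,v))
    shift_x = dB / (dx * Gx)             # displacement along x
    shift_y = dB / (dy * Gy)             # displacement along y (phase encode)

    I = np.zeros((M, N))
    for x in range(M):
        for y in range(N):
            kernel = np.sinc(x - u - shift_x) * np.sinc(y - v - shift_y)
            I[x, y] = np.sum(weight * kernel)
    return I

# Toy example: uniform object with a small field offset in one corner
M = N = 16
A = np.ones((M, N))
T2 = np.full((M, N), 0.08)               # seconds, placeholder
dB = np.zeros((M, N)); dB[:4, :4] = 2e-6 # placeholder field offset
I = synthesize_epi(A, T2, dB, TEeff=0.119, dx=1.8e-3, dy=1.8e-3, Gx=1.0, Gy=1.0)
print(I.shape)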
# Contract Semantics¶ ## Assume-guarantee contracts¶ This section discusses the semantics of contracts, and in particular modes, in Kind 2. For details regarding the syntax, please see the Contracts section. An assume-guarantee contract (A,G) for a node n is a set of assumptions A and a set of guarantees G. Assumptions describe how n must be used, while guarantees specify how n behaves. More formally, n respects its contract (A,G) if all of its executions satisfy the temporal LTL formula $\square A \Rightarrow \square G$ That is, if the assumptions always hold then the guarantees hold. Contracts are interesting when a node top calls a node sub, where sub has a contract (A,G). From the point of view of sub, a contract ({a_1, ..., a_n}, {g_1, ..., g_m}) represents the same verification challenge as if sub had been written node sub (...) returns (...) ; let ... assert a_1 ; ... assert a_n ; --%PROPERTY g_1 ; ... --%PROPERTY g_m ; tel The guarantees must be invariant of sub when the assumptions are forced. For the caller however, the call sub(<params>) is legal if and only if the assumptions of sub are invariants of top at call-site. The verification challenge for top is therefore the same as node top (...) returns (...) ; let ... sub(<params>) ... --%PROPERTY a_1(<call_site>) ; ... --%PROPERTY a_n(<call_site>) ; tel ## Modes¶ Kind 2 augments traditional assume-guarantee contracts with the notion of mode. A mode (R,E) is a set R or requires and a set E of ensures. A Kind 2 contract is therefore a triplet (A,G,M) where M is a set of modes. If M is empty then the semantics of the contract is exactly that of an assume-guarantee contract. ### Semantics¶ A mode represents a situation / reaction implication. A contract (A,G,M) can be re-written as an assume-guarantee contract (A,G') where $G' = G\ \cup\ \{\ \bigwedge_i r_i \Rightarrow \bigwedge_i e_i \mid (\{r_i\}, \{e_i\}) \in M \}$ For instance, a (linear) contract for non-linear multiplication could be node abs (in: real) returns (res: real) ; let res = if in < 0.0 then - in else in ; tel node times (lhs, rhs: real) returns (res: real) ; (*@contract mode absorbing ( require lhs = 0.0 or rhs = 0.0 ; ensure res = 0.0 ; ) ; mode lhs_neutral ( require not absorbing ; require abs(lhs) = 1.0 ; ensure abs(res) = abs(rhs) ; ) ; mode rhs_neutral ( require not absorbing ; require abs(rhs) = 1.0 ; ensure abs(res) = abs(lhs) ; ) ; mode positive ( require ( rhs > 0.0 and lhs > 0.0 ) or ( rhs < 0.0 and lhs < 0.0 ) ; ensure res > 0.0 ; ) ; mode pos_neg ( require ( rhs > 0.0 and lhs < 0.0 ) or ( rhs < 0.0 and lhs > 0.0 ) ; ensure res < 0.0 ; ) ; *) let res = lhs * rhs ; tel Motivation: modes were introduced in the contract language of Kind 2 to account for the fact that most requirements found in specification documents are actually implications between a situation and a behavior. In a traditional assume-guarantee contract, such requirements have to be written as situation => behavior guarantees. We find this cumbersome, error-prone, but most importantly we think some information is lost in this encoding. Modes make writing specification more straightforward and user-friendly, and allow Kind 2 to keep the mode information around to • improve feedback for counterexamples, • generate mode-based test-cases, and • adopt a defensive approach to guard against typos and specification oversights to a certain extent. This defensive approach is discussed in the next section. 
### Defensive check¶ Conceptually modes correspond to different situations triggering different behaviors for a node. Kind 2 is defensive in the sense that when a contract has at least one mode, it will check that the modes account for all situations the assumptions allow before trying to prove the node respects its contract. More formally, consider a node n with contract $(A, G, \{(R_i, E_i)\})$ The defensive check consists in checking that the disjunction of the requires of each mode $\mathsf{one\_mode\_active} = \bigvee_i (\bigwedge_j r_{ij})$ is an invariant for the system $A \wedge G \wedge (\bigwedge r_i \Rightarrow \bigwedge e_i)$ If one_mode_active is indeed invariant, it means that as long as • the assumptions are respected, and • the node is correct w.r.t. its contract then at least one mode is active at all time. Kind 2 follows this defensive approach. If a mode is missing, or a requirement is more restrictive than it should be then Kind 2 will detect the modes that are not exhaustive and provide a counterexample. This defensive approach is not as constraining as it first appears. If one wants to leave some situation unspecified on purpose, it is enough to add to the current set of (non-exhaustive) modes a mode like mode base_case ( require true ; ) ; which explicitly accounts for, and hence documents, the missing cases.
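To make the defensive check concrete, here is a purely illustrative Python spot-check of the times contract above. The mode predicates are transcribed by hand, and the check only samples a few input values; Kind 2 itself performs this check symbolically over all reachable states, not by sampling.

from itertools import product

def modes(lhs, rhs, res):
    """(require, ensure) pairs for each mode of the times contract."""
    absorbing = lhs == 0.0 or rhs == 0.0
    return {
        "absorbing":   (absorbing, res == 0.0),
        "lhs_neutral": (not absorbing and abs(lhs) == 1.0, abs(res) == abs(rhs)),
        "rhs_neutral": (not absorbing and abs(rhs) == 1.0, abs(res) == abs(lhs)),
        "positive":    ((rhs > 0.0 and lhs > 0.0) or (rhs < 0.0 and lhs < 0.0), res > 0.0),
        "pos_neg":     ((rhs > 0.0 and lhs < 0.0) or (rhs < 0.0 and lhs > 0.0), res < 0.0),
    }

samples = [-2.0, -1.0, -0.5, 0.0, 0.5, 1.0, 2.0]
for lhs, rhs in product(samples, repeat=2):
    res = lhs * rhs                                   # the body of node times
    checked = modes(lhs, rhs, res)
    # each mode: requires => ensures
    assert all(ensure for require, ensure in checked.values() if require)
    # defensive check: at least one mode is active ("one_mode_active")
    assert any(require for require, _ in checked.values())
print("every sampled input satisfies its active modes, and some mode is always active")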
# $X \sim U(0, \theta^*]$, how to find the MLE? [closed]

Given $$X \sim U(0, \theta^*]$$, how can I show that $$\frac{1}{12}\max_{1 \leq i \leq n}X_i^2$$ is an MLE of $$Var(X)$$?

If $$X \sim \mathcal{U}(0,\theta_0]$$ then $$\text{Var}(X)=\frac{\theta_0^2}{12}$$, so by the invariance property of maximum likelihood all you need is the MLE of $$\theta_0$$; plugging it into $$\theta \mapsto \theta^2/12$$ then gives the MLE of the variance.

So let $$X_1,\dots,X_n \sim \mathcal{U}(0,\theta_0]$$ iid. The likelihood of this sample is \begin{align*} L(\theta) &= \prod_{i=1}^n f(X_i ; \theta) \\ &= \prod_{i=1}^n \frac{1}{\theta} I( X_i \leq \theta) \\ &= \frac{1}{\theta^n} \prod_{i=1}^n I( X_i \leq \theta) \end{align*}

The MLE is the value $$\hat \theta$$ that maximizes $$L$$. If $$\theta < \max X_i$$, then there is at least one $$i$$ such that $$I(X_i \leq \theta) = 0$$ and thus $$L(\theta) = 0$$. Now if $$\theta \geq \max X_i$$, then $$\prod_{i=1}^n I( X_i \leq \theta) = 1$$ and $$L(\theta) = \frac{1}{\theta^n}$$, which is decreasing in $$\theta$$. Thus $$L$$ reaches its maximum at $$\hat \theta = \max X_i$$.

So the MLE for the variance of $$X$$ is $$\frac{ \hat \theta^2}{12} = \frac{ (\max X_i)^2}{12} = \frac{ \max X_i^2}{12} \quad (\text{since} \ X_i \geq 0)$$
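The result is easy to sanity-check numerically. The sketch below (sample size, θ and the number of replications are arbitrary choices) draws repeated samples from U(0, θ], computes max(X_i)²/12 in each, and compares the average to θ²/12; the small downward bias it shows is expected, since max X_i ≤ θ always.

import numpy as np

rng = np.random.default_rng(0)
theta = 3.0                       # true (unknown) upper endpoint
true_var = theta**2 / 12          # Var(X) for X ~ U(0, theta]
n, reps = 50, 20_000              # sample size and number of replications

# MLE of Var(X) in each replication: (max X_i)^2 / 12
estimates = np.array([
    rng.uniform(0.0, theta, size=n).max()**2 / 12
    for _ in range(reps)
])

print(f"true Var(X)    = {true_var:.4f}")
print(f"mean of MLE    = {estimates.mean():.4f}")   # slightly below true_var: the MLE is biased
print(f"std dev of MLE = {estimates.std():.4f}")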
Is this RNA-seq data reliable? 2 0 Entering edit mode 3.7 years ago ashish ▴ 640 I have wheat transcriptome data generated by illumina and I wish to know if this data is reliable or not. Wheat genome size is 17Gb, number of reads in each fastq file is ~32 million. Its a paired end sequencing. The problem is when I identify SNPs using this data, read depth is very less. many have 0,1 or 2 as read depth. Average of all the DP values is around 3 which seems very less. I also mapped the reads on to the wheat genome. These are the mapping statistics generated by STAR. Number of input reads | 28480666 Average input read length | 200 Uniquely mapped reads number | 20282536 Uniquely mapped reads % | 71.22% Average mapped length | 194.59 Number of splices: Total | 998719 Number of splices: Annotated (sjdb) | 604635 Number of splices: GT/AG | 759478 Number of splices: GC/AG | 25924 Number of splices: AT/AC | 3437 Number of splices: Non-canonical | 209880 Mismatch rate per base, % | 0.69% Deletion rate per base | 0.01% Deletion average length | 1.72 Insertion rate per base | 0.02% Insertion average length | 1.92 Number of reads mapped to multiple loci | 5096669 % of reads mapped to multiple loci | 17.90% Number of reads mapped to too many loci | 445314 % of reads mapped to too many loci | 1.56% % of reads unmapped: too many mismatches | 0.00% % of reads unmapped: too short | 9.21% % of reads unmapped: other | 0.12% Thanks rna-seq SNP • 1.9k views 3 Entering edit mode I have few suggestions. If you are interrogating transcriptome data via RNA-Seq then it is designed specifically to quantify transcriptomes and not genomes where you will target variants a.k.a SNPs as you are calling them. • You should better look for expression estimates and make a study of gene expression and downstream functional studies. • If you are interested in doing variant analysis then better to go for whole genome or whole exome. Depth in WGS/WES matters for the variant analysis. What have you employed for variant calling with this data, like the tools and the workflow? What parameters? • Any particular reason you want to use RNA-Seq for variant analysis and not WGS/WES? • If it's a budget constraint and you want to find both from RNA-Seq then you have to make a trade off. Your depth is not very high to fish out very reliable SNPs from your data however you can try with GATK variant calling from RNASeq. Pretty much detailed and informative and will be a good start. • Transcript depth and single nucleotide estimates are not the same. So understand that difference. If you read carefully and understand your motivation of what you want to do probably you will have better way to deal with this data. 0 Entering edit mode 1. My aim is just to identify some candidate differentially expressed genes which I will use for wet lab experiments after validation by real time PCR. 1. I did SNP analysis because I did not know any other way to check read depth at every locus. SNP or variant identification is not my motive with this data. 2. If this data is not good for identifying DEGs, I cannot get it sequenced again. So yes its a budget constraint also. How can I make the most of this data? 3. I will try the tool you suggested on another data set. Thank you suggesting. 0 Entering edit mode I have given a reply to the below answer, take a look there. Having said if data is not good for identifying DEGs at this stage is pretty strong apriori decision. 
Yes you need the ploidy status for your data if you have them with you and that can be pretty informative for designing the RNA-Seq analysis for DE analysis. Just to make sure from the beginning, when you have the counts file from your samples and make a PCA, what do you observe? do they get distributed in space according to your condition or strong by another variability. This gives you a biological inference as well. Alternatively, perform the same with normalized expression data. Pretty amazing workflows are suggested by limma, edgeR and DESeq2. There is a crude approach to still get it done but not very accurate but gives you a flavor of ploidy status based on read coverage. Check this. Good luck 0 Entering edit mode I've not created counts table yet. Will do PCA afterward and see. Also, thank you for this link, its a really interesting point I learned from this discussion. 2 Entering edit mode What do you mean with reliable? Per definition, RNA-seq is not well suited for variant calling. 0 Entering edit mode By reliable I want to know the quality of this data or know some way of doing it. 0 Entering edit mode 4 Entering edit mode 3.7 years ago I think the main problem is with the polyploid nature of the wheat genome if I am not wrong. So better to cross check with variant caller, how it handles the polyploid genomes. GATK well suited for diploid genomes. You can do variant calling using RNA-Seq but it depends on what you want to get out of the data. Are you looking at coding variants or something else ? If you are looking at DEG, you can quantify the genes using featureCounts and do a differential analysis. Reliability of the differential genes depends on how many replicates you have per condition. As you have plans to validate the results by PCR, you can take top most DE genes and do a validation to make sure the data is enough. But given that 17GB genome and around 90K genes, it may not be enough. I did SNP analysis because I did not know any other way to check read depth at every locus You have bedtools genomeCov for that. I would suggest you to first understand the complexity of plant genomes and consider the ploidy and repetitive nature etc to think about analyzing the data. 0 Entering edit mode This is a very relevant point about the ploidy status that has been suggested. • Read depth can be measured by what is suggested. • Yes, I agree GATK is very much well suited for diploid genomes. • Are you aware of the ploidy status probably this link might shed some more information? Check this biostars link and see that RNA-Seq is not the correct way for estimating the ploidy status based on coverage. 0 Entering edit mode I got my answer and with 4 replicates I am starting to feel that this data will be good enough. 0 Entering edit mode Thank you geek_y this genomeCov has solved my problem. and just because you mentioned it , wheat is hexaploid with around 114k genes. 2 Entering edit mode 3.7 years ago prior alignment you could also use fastqc : https://www.bioinformatics.babraham.ac.uk/projects/fastqc/ to assess the quality of your reads 0 Entering edit mode yes, I used that but it wasn't telling me read depth which I initially wanted to know. 0 Entering edit mode Intrinsic to RNA-seq is the very variable read depth - because of variable expression. Read depth is not an important metric in RNA-seq, but the total number of reads (32M) is more relevant. 0 Entering edit mode Is there any criteria to predict how many reads means good quality. 
Number of reads must be dependent on transcriptome size. Say my aim is not to find any novel transcripts, how many reads do you think are enough for wheat RNA-seq. 0 Entering edit mode Our paper will tell you ;) search RNAonthebecnh NAR paper. Sorry for short promotion ;) . However it also depends fairly on the species. For humans it's 19M paired end data in RNAseq. 1 Entering edit mode I haven't read the paper - but that probably also depends on what you want to quantify and how lowly abundant some transcripts are. I don't remember exactly but another paper stated that "new discoveries" in RNA-seq (particularly low expressed genes) only saturate at ~400M reads. But that's of course nonsense for any typical application. 1 Entering edit mode wow thats a lot, yes that is why I wanted to say, what we did was quantification on human and we wanted to understand read length, coverage, SE vs PE impact on transcript abundance and DE analysis and that was pretty interesting find. There we found 19M as a plateau (PE) good enough for DE analysis at the gene level. If one is interested in novel transcripts and alternative splicing sites, it is not. However, it is also dependent to what extent this will be applicable in the wheat transcriptome. We are aware of rough protein coding genes that we can obtain for humans but is it the case of the wheat as well? Sorry, my knowledge is limited to humans and mice as of now. Probably the OP needs to check some papers of wheat transcriptome analysis. Also probably the OP might ask the wet lab person and the sequencing facility how they zeroed on this coverage. These will be plausible things to take into account. 0 Entering edit mode I read your paper and also checked it on github. It's a good one and cleared many questions I previously had.
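Picking up the earlier suggestion to run a PCA on the counts once the table exists, here is a minimal sketch. The file name and layout (genes in rows, samples in columns, tab-separated, e.g. a table saved from featureCounts) are assumptions to adjust; CPM plus log2 is only a quick normalization, and a dedicated workflow (DESeq2, edgeR or limma) should be preferred for the real analysis.

import numpy as np
import pandas as pd

counts = pd.read_csv("counts.tsv", sep="\t", index_col=0)   # genes x samples (assumed layout)
cpm = counts / counts.sum(axis=0) * 1e6                     # counts per million, per sample
logcpm = np.log2(cpm + 1)

# PCA via SVD, with samples as observations
X = logcpm.T.values
X = X - X.mean(axis=0)
U, S, Vt = np.linalg.svd(X, full_matrices=False)
pcs = U[:, :2] * S[:2]                                      # sample coordinates on PC1/PC2
explained = S**2 / np.sum(S**2)

for sample, (pc1, pc2) in zip(logcpm.columns, pcs):
    print(f"{sample}\t{pc1:.2f}\t{pc2:.2f}")
print("variance explained by PC1, PC2:", explained[:2].round(3))

If the replicates separate by condition on PC1/PC2 rather than by some technical variable, that is a good sign that the 32M-read libraries carry enough signal for gene-level DE analysis.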
# How to disable leading space from hyperref?

I defined the following command:

\newcommand{\reqWVSDataSource}[0]{
  \hyperref[req:wvsDataSource]{\textit{Req01}}}

If I use it like this: (\reqWVSDataSource), a leading space is added, so the result is "( Req01)" instead of "(Req01)". How can I disable this leading space? Thanks!

-

The space does not come from hyperref; it comes from the line break (or space) after the opening brace of the definition, which TeX reads as a space token inside the command. End that line with a comment character:

\newcommand{\reqWVSDataSource}[0]{%
  \hyperref[req:wvsDataSource]{\textit{Req01}}}

EDIT2: As your new command takes no argument, the optional count argument [0] in the definition may be omitted:

\newcommand{\reqWVSDataSource}{%
  \hyperref[req:wvsDataSource]{\textit{Req01}}}
# Altered marginpar conflicts with biblatex citation Consider the MWE below: \documentclass[10pt,twoside,onecolumn,openright,draft]{memoir} \usepackage{fontspec} \usepackage{filecontents} \usepackage{calc} \usepackage[french]{babel} \usepackage{mparhack} \usepackage{csquotes} \usepackage[authordate,strict,backend=biber,babel=other]{biblatex-chicago} %\let\oldmarginpar\marginpar %\renewcommand\marginpar[1]{\-\oldmarginpar[\raggedleft\footnotesize\hspace{0pt}#1]% %{\raggedright\footnotesize\hspace{0pt}#1}} \begin{filecontents}{references.bib} @book{Thomsen:SL, author = {Thomsen, Marie-Louise}, title = {The Sumerian Language}, subtitle = {An Introduction to its History and Grammatical Structure}, shorttitle = {Sumerian Language}, shorthand = {SL}, edition = {2}, series = {Mesopotamia Copehagen Studies in Assyriology}, number = {10}, location = {Copenhagen}, date = {1987}, year = {1987} } \end{filecontents} \bibliography{references.bib} \begin{document} Texte\marginpar{\cite[§228]{Thomsen:SL}} et encore. \end{document} This compiles fine. However, when I uncomment the redefinition of marginpar (which I've used successfully for a few years), I get ./minimal:33: Argument of \blx@citeargs@i has an extra } Either the redefinition is problematic or else this causes a conflict of some kind with biblatex-chicago. Any thoughts? - You hit a problem if there is ever a ] in the argument as then parsing for the optional argument goes wrong, so you need a {} group to hide the inner ]: \let\oldmarginpar\marginpar
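% Spelling out the fix described above: the only change to the redefinition from
% the question is the extra {} group around the optional argument, so that a ]
% inside #1 (e.g. from \cite[§228]{...}) no longer ends the optional argument early.
\renewcommand\marginpar[1]{\-\oldmarginpar[{\raggedleft\footnotesize\hspace{0pt}#1}]%
{\raggedright\footnotesize\hspace{0pt}#1}}

With the grouped optional argument, the biblatex citation in the MWE no longer breaks the optional-argument parsing when the redefinition is uncommented.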
# Arrange the following bonds according to their average bond energies

Question: Arrange the following bonds according to their average bond energies in descending order: $\mathrm{C}-\mathrm{Cl}, \mathrm{C}-\mathrm{Br}, \mathrm{C}-\mathrm{F}, \mathrm{C}-\mathrm{I}$

1. $\mathrm{C}-\mathrm{I}>\mathrm{C}-\mathrm{Br}>\mathrm{C}-\mathrm{Cl}>\mathrm{C}-\mathrm{F}$

2. $\mathrm{C}-\mathrm{Br}>\mathrm{C}-\mathrm{I}>\mathrm{C}-\mathrm{Cl}>\mathrm{C}-\mathrm{F}$

3. $\mathrm{C}-\mathrm{F}>\mathrm{C}-\mathrm{Cl}>\mathrm{C}-\mathrm{Br}>\mathrm{C}-\mathrm{I}$

4. $\mathrm{C}-\mathrm{Cl}>\mathrm{C}-\mathrm{Br}>\mathrm{C}-\mathrm{I}>\mathrm{C}-\mathrm{F}$

Correct Option: 3

Solution: Bond lengths of carbon-halogen bonds increase in the order $\mathrm{C}-\mathrm{F}<\mathrm{C}-\mathrm{Cl}<\mathrm{C}-\mathrm{Br}<\mathrm{C}-\mathrm{I}$, and shorter bonds are stronger. Hence the bond energy order is $\mathrm{C}-\mathrm{F}>\mathrm{C}-\mathrm{Cl}>\mathrm{C}-\mathrm{Br}>\mathrm{C}-\mathrm{I}$.
(a) Why is the ozone layer required in the stratosphere? How does it get degraded? Explain. (b) Why is ozone depletion a threat to mankind? (5 marks)

Explanation:

(a) (i) Ozone is a form of oxygen. The molecule of ozone contains three oxygen atoms $\left({O}_{3}\right)$. In the stratosphere, ozone is continuously formed by the action of UV rays on molecular oxygen.

(ii) This 'good ozone', found in the upper part of the atmosphere, absorbs ultraviolet radiation from the Sun.

(iii) Ozone gas is continuously formed by the action of UV rays on molecular oxygen and is also degraded into molecular oxygen in the stratosphere. Of late, this balance has been disrupted by enhanced ozone degradation due to chlorofluorocarbons (CFCs), which are used as refrigerants. CFCs discharged in the lower atmosphere move upward and reach the stratosphere, where UV rays act on them and release Cl atoms. Cl degrades ozone, releasing molecular oxygen; the Cl atoms act merely as catalysts and are not consumed in the reaction. Hence whatever CFCs are added to the stratosphere have permanent and continuing effects on ozone levels. Although ozone depletion is occurring widely, it is particularly marked over the Antarctic region, where it has resulted in the formation of a large area of thinned ozone layer, commonly called the ozone hole.

(b) Threats of ozone depletion to mankind: The thin layer of ozone around the atmosphere that prevents the entry of harmful ultraviolet (UV) rays is called the ozone shield. Methane and chlorofluorocarbons are two ozone-depleting substances. Chlorofluorocarbons release active chlorine when UV rays act on them in the stratosphere, and the Cl atoms degrade ozone, releasing molecular oxygen. Depletion of ozone allows more UV radiation to reach the earth. UV-B rays damage DNA and may also cause mutation. UV radiation causes aging of the skin, damages skin cells and causes various types of cancer. It also leads to inflammation of the cornea (snow-blindness), cataract, etc.
The graph of six different polar equations are shown. Use the drop-down boxes to match each equation with its polar graph.
# Violation of the proportional hazard assumption, interaction with time. Am I taking the correct steps? I will try to keep my question as short as possible. For my thesis I am researching if a risk score can predict graft failure in a cohort of $$596$$ patients over the course of $$10$$ years. (The variable is not time-varying) I want to do a Cox regression, however the Shoenfeld residuals test is significant ($$0.038$$). Which means that the proportional hazard assumption has been violated. I have tried to solve this by adding an interaction term with the log of time as shown below: .stcox t_risk10_perc risk10_perc t_risk10_perc = log(time variable of follow-up) * risk10_perc My questions are: • Am I doing this correctly? • Should I look at the t_risk10_perc or at the risk10_perc hazard ratio? Unless the syntax of your software differs substantially from that of the coxph function in the R survival package, then your approach is not correct. You are, however, in very extensive company in trying to fix a proportional-hazard issue this way. A simple modification can correctly accomplish what you desire, at least with coxph. t_risk10_perc = log(time variable of follow-up) * risk10_perc simply multiplies, for each case, the value of the covariate risk10_perc by the survival/censoring time for that case. As the vignette on "Using Time Dependent Covariates" in the R survival package puts it: This mistake has been made often enough th[at] the coxph routine has been updated to print an error message for such attempts. The issue is that the above code does not actually create a time dependent covariate, rather it creates a time-static value for each subject based on their value for the covariate time; no differently than if we had constructed the variable outside of a coxph call. This variable most definitely breaks the rule about not looking into the future, and one would quickly find the circularity: large values of time appear to predict long survival because long survival leads to large values for time. As explained in the vignette, the survival package allows for a time-transform functionality, with which you can define an arbitrary function of continuous time (not just of the single observed event/censoring time) and covariate values to accomplish this type of analysis. This will provide estimates of coefficient values both for the covariate and for your function of time, both of which you will need to interpret appropriately. You will have to check your software to see if it provides a similar time-transform functionality.
Math Help - can anyone help me with this?????

1. can anyone help me with this?????

find the values of "a" and "b" that make this function continuous

F(X) = 5b - 3aX   for -4 ≤ X < -2
       4X + 1     for -2 ≤ X ≤ 2
       aX² + 17b  for 2 < X ≤ 4

my exam is tomorrow plz wish me good luck

2. Hello, Mohamed Safy

Originally Posted by mohamedsafy
find the values of "a" and "b" that make this function continuous ...

for the function to be continuous the graph should have no breaks, so

$5b - 3a \cdot (-2) = 4 \cdot (-2) + 1$

and

$4 \cdot 2 + 1 = a \cdot 2^2 + 17b$

AND ALL THE BEST for tomorrow

3. thanks so much, really thank u, but... i didnt get it clearly. i know if we want to find 2 unknowns "a" and "b" we have to find two equations and solve them together. what i wanted to get is how can i get these two equations??? and then everything will be easy for me

4. oh, now i know what u wanted to say, i didnt get it because of the symbols, but i still didnt get how u found those equations

5. Originally Posted by mohamedsafy
oh now i know what u wanted to say ... i still didnt get how u found those equations

First equation: You require the value of 5b - 3aX at X = -2 to be the same as the value of 4X + 1 at X = -2.

Second equation: You require the value of 4X + 1 at X = 2 to be the same as the value of aX² + 17b at X = 2.

6. thx thank u mr fantastic u r really fantastic
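For reference, here is how the two equations from post 2 pin down a and b:

$5b - 3a(-2) = 4(-2) + 1 \implies 6a + 5b = -7$

$4(2) + 1 = a(2^2) + 17b \implies 4a + 17b = 9$

Multiplying the first by 17 and the second by 5 and subtracting gives $82a = -164$, so $a = -2$ and then $b = 1$. With these values both one-sided limits agree at $X = -2$ (both equal $-7$) and at $X = 2$ (both equal $9$), so F is continuous.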
# This code is part of Qiskit.
#
# (C) Copyright IBM 2019, 2020.
#
# This code is licensed under the Apache License, Version 2.0. You may
# obtain a copy of this license in the LICENSE.txt file in the root directory
# of this source tree or at http://www.apache.org/licenses/LICENSE-2.0.
#
# Any modifications or derivative works of this code must retain this
# copyright notice, and modified files need to carry a notice indicating
# that they have been altered from the originals.

"""Interface for all objects that have a parent QuadraticProgram."""

from typing import Any


class QuadraticProgramElement:
    """Interface class for all objects that have a parent QuadraticProgram."""

    def __init__(self, quadratic_program: Any) -> None:
        """Initialize object with parent QuadraticProgram.

        Args:
            quadratic_program: The parent QuadraticProgram.
        """
        self._quadratic_program = quadratic_program

    @property
    def quadratic_program(self) -> Any:
        """Return the parent QuadraticProgram.

        Returns:
            The parent QuadraticProgram.
        """
        return self._quadratic_program

    @quadratic_program.setter
    def quadratic_program(self, quadratic_program: Any) -> None:
        """Set the parent QuadraticProgram.

        Args:
            quadratic_program: The parent QuadraticProgram.
        """
        self._quadratic_program = quadratic_program
# Leap Years: When and Why

Last week we looked at some questions that arose leading up to the year 2000, triggered by the 20th anniversary of that event. Now we'll start from a different question about that year, to look at the story of leap years.

## Was 2000 a leap year, or not?

Here is the question from 1999:

Year 2000

Why isn't the year 2000 a leap year? Since every year is 365.26 days long, wouldn't every 100 years be a double leap year? Does it have to do with some weird mathematical process? Thanks.

There are a couple of misunderstandings here to be untangled!

Hi, Ben.

I don't know where you got your information, but 2000 _will_ be a leap year. The years 1900 and 1800 and 1700 were not leap years, but 1600 was, and 2000 will be. That is, 2000 was special, not because it was not a leap year, but because, unlike other century years, it was!

Here is an interesting Web site with information about the Gregorian calendar (which we use now) and its leap-year rule, which I quote:

The Julian and the Gregorian Calendars, by Peter Meyer
http://www.magnet.ch/serendipity/hermetic/cal_stud/cal_art.htm

"In the Gregorian Calendar a year is a leap year if either (i) it is divisible by 4 but not by 100 or (ii) it is divisible by 400. In other words, a year which is divisible by 4 is a leap year unless it is divisible by 100 but not by 400 (in which case it is not a leap year). Thus the years 1600 and 2000 are leap years, but 1700, 1800, 1900 and 2100 are not."

(The link in the original answer is dead, but I have updated it here to its current location; the quote below is no longer present, probably replaced by more detailed data.)

You can think of the rule this way:

1. Start with a basic calendar in which every year has 365 days: no leap years.
2. Change every year divisible by 4 to a leap year: leap years 1600, 1604, 1608, 1612, 1616, …, 1696, 1700, 1704, …
3. Now change every year divisible by 100 back to a normal year: leap years 1604, 1608, 1612, 1616, …, 1696, 1704, … (1600 and 1700 drop off the list).
4. Finally, change every year divisible by 400 to a leap year again: leap years 1600, 1604, 1608, 1612, 1616, …, 1696, 1704, … (1700 stays off the list).

So 2000 was the first application of this last rule since the Gregorian calendar began; but that means that my generation missed its only chance to see a multiple-of-4 year that wasn't a leap year. It's sort of an "Oops, nothing special here, folks!" sort of rule.

This Web site also differs with you about the length of a year, stating:

"The mean solar year during the last 2000 years is 365.242 days (to three decimal places)."

It is because this figure is slightly _less_ than 365 1/4 days (not greater, as you stated) that it is necessary to _omit_ occasional leap days (rather than add any). To be precise, 3 days are omitted every 400 years. The average length of a calendar year is thus 365 1/4 - 3/400 = 365.25 - 0.0075 = 365.2425, which matches the astronomical figure given above pretty well.

So rule 2 above would raise the length of a year to 365.25 (by adding 1/4 = 0.25 day per year), and rule 3 reduces that by 1/100 = 0.01 day, to 365.24. Then rule 4 raises it by 1/400 = 0.0025 day, to 365.2425 days. To match the current length of a year, it should be 365.2424, so this is good enough for now. (See below for more!)

If the year were 365.26 days long, your calculation would be correct: we would need to insert an extra leap day every 100 years.
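The four rules above translate directly into a one-function test. Here is a short sketch (any language would do), together with a check that the average calendar year over a full 400-year cycle really is 365.2425 days:

def is_leap_year(year: int) -> bool:
    """Gregorian rule: divisible by 4, except century years not divisible by 400."""
    if year % 400 == 0:      # rule 4: 1600, 2000, 2400, ... are leap years
        return True
    if year % 100 == 0:      # rule 3: 1700, 1800, 1900, 2100, ... are not
        return False
    return year % 4 == 0     # rule 2: every other multiple of 4 is

# Average calendar-year length over one full 400-year cycle:
days = sum(366 if is_leap_year(y) else 365 for y in range(2000, 2400))
print(days / 400)            # 365.2425, matching the arithmetic above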
The calculations I've described don't seem too weird to me, but the matter of defining exactly what a year is turns out to be pretty complicated, as you will see from this Web site. Be sure to read the whole linked page if you are interested! ### Different ways to measure a year Ben replied: Thanks, I am skilled in Math and I didn't really mean weird, sorry 'bout that. I always thought years were 365.26 so it's good to know that they are 365.242. Thanks again. Doctor Rick explained: Hi again, Ben. I took no offense at the word "weird"; in fact I wanted to acknowledge that the definitions of year and day do get pretty complex and even weird - though they make sense when you get to know them. I did a quick Web search to verify my hunch about your figure for the length of a year. One site I found has a long list of definitions of various astronomical periods: PREDICTABLE PERIODIC EVENTS (Jan Curtis, Alaska Climate Research Ctr.) http://climate.gi.alaska.edu/Curtis/astro1.html This time the link is still good. I will quote the relevant sections: "Earth's Tropical year 365.24219 Days "Interval for Earth to return to same equinox. This explains why leap years exist. Leap years also occur only in years when centuries are evenly divisible by four (e.g., 1600, 2000, 2400, etc.). The Gregorian calendar therefore is equal to 365 days 5 hours 49 minutes 12 seconds. "Earth's Sidereal year 365.25636 Days "Earth's Anomalistic year 365.25964 Days "Interval for Earth to orbit the Sun as measured from its closest point (perihelion) to its return back. This period is slightly less than five minutes longer than the sidereal year because the position of the perihelion point moves along the Earth's orbit by about 1.1 minutes of arc yearly. During this current epoch, the Earth is closest the Sun just after the new year. It will take about 12,500 years for this date to advance six months." In other words, the length of a year depends on whether you are measuring the time for the earth to return to the same place in orbit relative to the stars (sidereal year), or relative to the direction of the earth's tilt (tropical year), or relative to the perihelion of the earth's orbit (anomalistic year). The figure relevant to the calendar is the tropical year, because it relates to the seasons. The figure you know is correct, but it's one of the other kinds of "year." The tropical year is relevant because the purpose of the calendar is to keep the equinoxes (and therefore the seasons) aligned with the calendar. (The distance from the earth to the sun has only a small effect on the seasons; so at most the change in the perihelion will only make the northern hemisphere summer a little warmer. The bigger effect is that, because the earth moves faster in orbit near perihelion, northern hemisphere winter is currently shorter than southern hemisphere winter, and that will reverse in 12,500 years! For a nice explanation from an Australian perspective, see here! For a Maine perspective, see here.) ## The reason for leap years Now let’s back up to a 1998 question about leap years in general, for a good summary: Why Do We Have Leap Year? Dear Dr. Math, Why do we need to have one extra day each 4 years? Thanks, Pat This is an astronomical question, but I think I know the answer. In short, the reason is to preserve the alignment of dates on the calendar with the seasons of the year. As the Earth revolves around the Sun, it rotates on its axis. 
When it has made exactly one orbit around the Sun, it has made 366.2422 rotations on its axis. One of those rotations is accounted for by its revolving about the Sun. (Think of a planet like Mercury for which one side always faces the Sun. After one revolution, it has made one rotation, but the Sun has never set on one side of Mercury, and never risen on the other.) That means that 365.2422 days have elapsed.

An ordinary year contains 365 days, not 365.2422 days. Since .2422 is about 1/4, every four years we have fallen behind by almost a full day. If we didn't do anything about this, after 700 years we would have Summer in January and Winter in July! As a result, we insert an extra day, 29 February, to make a Leap Year. This arrangement results in what is called the Julian Calendar, supposedly invented by Julius Caesar (more likely just decreed by him). The average year is 365.25 days under this calendar.

This covers my rules 1 and 2. If you thought the mention of 366.2422 was wrong, and still don’t get it after it was explained, see here: One Circle Revolving Around Another

Now we need rules 3 and 4:

Of course .2422 is not exactly 1/4, so we will be drifting a little, even with Leap Years. As a result, every year divisible by 100 is declared *not* to be a leap year. 1900 was not a leap year under this calendar. That means that the average year is 365.24 days, still a little off. To be even more accurate, every year divisible by 400 is declared to be a leap year, after all! Thus 2000 will be a leap year.

This system is called the Gregorian calendar, since it was established by order of Pope Gregory in 1582. This was only adopted in English-speaking countries in 1752, however, and was not made retroactive. In the Gregorian calendar, the average year is 365.2425 days, which is off only 3 days every 10000 years. No doubt someone will make more rules to fix even that slight deviation sometime in the future. If you think this is complicated, you should see how the date of Easter is calculated!

Now, how about a Rule 5 to handle that extra little drift?

## Will the Gregorian calendar last forever?

Here’s a question about the future of leap years, from 2004:

Will Zeller's Rule Work Indefinitely?

I showed an equation involving Zeller's Rule to a college teacher and he told me the equation may not work for very distant years. He said that it may be impossible to write an equation relating the day of the week to a given year; month; and day of month because the exact value relating the two may have irrational numbers involved. His statement came as a shock to me; I thought that since the use of the rounding down function is used throughout "Zeller's Rule," the equation would work indefinitely. Who's right? Is the equation sure to work in 4561? Indefinitely?

Zeller’s Rule is a formula for finding the day of the week, which encapsulates the rules of the Gregorian calendar. We’ll be examining Zeller’s Rule next week; the question here is really about the calendar itself, not the formula.

I replied to this intriguing question:

Hi, Hunter.

Zeller's formula exactly corresponds to the Gregorian calendar, and will work as long as that calendar is used. It is true that, over a very long time, that calendar would need further adjustment, just as the Julian calendar did; but until a new calendar is defined, you can't say the formula is wrong!
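
For reference, since the formula itself is what the question is about, here is the Gregorian form of Zeller's congruence, written out as a small MATLAB sketch for readers who want to experiment before next week's post (this sketch is not part of the original exchange; the function name is arbitrary):

```matlab
% Zeller's congruence, Gregorian form.
% q = day of the month; m = month with March = 3 ... December = 12;
% January and February are treated as months 13 and 14 of the previous year.
% The result h is the day of the week: 0 = Saturday, 1 = Sunday, ..., 6 = Friday.
function h = zeller(y, m, q)
    if m < 3
        m = m + 12;
        y = y - 1;
    end
    K = mod(y, 100);      % year within its century
    J = floor(y / 100);   % the century itself
    h = mod(q + floor(13*(m + 1)/5) + K + floor(K/4) + floor(J/4) + 5*J, 7);
end
```

For example, zeller(2000, 2, 29) returns 3, a Tuesday, which is the day of the week on which the much-discussed leap day of 2000 fell. The floor functions are the "rounding down" the questioner mentions, and the constants encode exactly the 4/100/400 rules described above.
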
The point is that a calendar is not a measurement of reality, but a legal concept established by law, and therefore it remains valid as long as, but ONLY as long as, the law is in effect. So the formula you have does not refer to the astronomically measured length of a year, and does not depend on physical reality for its accuracy; on the other hand, it depends on the whim of governments, so no one can really say how long it will remain valid!

There are several layers of reality here. The rules for the calendar, as we’ve seen, are intended to keep the calendar in sync with the solar system; and the formula is just an embodiment of those rules. The solar system gives us certain parameters that we can’t change, which accounts for the need for an approximation that, in its several rules, works something like a decimal approximation to an irrational number, each digit (or rule) getting us closer to what we need, and keeping it accurate to within a day for longer and longer time periods. But beneath this we have the fact that those parameters gradually change, so that we might eventually need to change the calendar because the year is no longer 365.2422 days long. That’s a separate issue I didn’t touch, because it’s about reality, not math.

A calendar is only a way to fit a whole number of days into each year, while staying as close as possible to what is, as you were told, a theoretically irrational number of days per year astronomically (and also somewhat variable, and subject to errors in measurement). Therefore no completely regular calendar can be exactly correct forever; but then, in a sense, no calendar is really exact anyway, since the whole point is to approximate the year with whole numbers. It's just that an extra day will have to be added or dropped eventually.

What is surprising is that such a simple set of rules (and therefore a simple calculation) happens to be able to do such a good job of approximating the physical length of a year. It’s easy to imagine a world in which we’d need a leap day, say, after 3 years, and then again after 4 years, and then an extra leap day after 17 years, or whatever! All the 4’s and 100’s make it impressively simple.

The second link below has been taken down (ironically, because “astronomy and astrophysics knowledge evolves”), so it’s good that I copied the part that mattered:

Our Calendar FAQ has links to several sites about calendars; here is one that explains the Gregorian calendar with links to other details, historical and astronomical:

Gregorian Calendar
http://scienceworld.wolfram.com/astronomy/GregorianCalendar.html

This site summarizes how the rules were changed from the Julian to the Gregorian calendar, and mentions a proposed additional rule that would keep it accurate for 20,000 years:

Calendars
http://csep10.phys.utk.edu/astr161/lect/time/calendars.html

However, the Julian year still differs from the true year of 365.242199 days by 11 minutes and 14 seconds each year, and over a period of 128 years even the Julian Calendar was in error by one day with respect to the seasons. By 1582 this error had accumulated to 10 days and Pope Gregory XIII ordered another reform: 10 days were dropped from the year 1582, so that October 4, 1582, was followed by October 15, 1582. In addition, to guard against further accumulation of error, in the new Gregorian Calendar it was decreed that century years not divisible by 400 were not to be considered leap years. Thus, 1600 was a leap year but 1700 was not.
This made the average length of the year sufficiently close to the actual year that it would take 3322 years for the error to accumulate to 1 day.

The Julian calendar, with its average year of 365.25 days, was off by 0.007801 = 1/128 year, so it gained a day every 128 years. The Gregorian calendar averages 365.2425 days, and is therefore off by 0.000301 = 1/3322 years.

### Making it last

A further modification to the Gregorian Calendar has been suggested: years evenly divisible by 4000 are not leap years. This would reduce the error between the Gregorian Calendar Year and the true year to 1 day in 20,000 years. However, this last proposed change has not been officially adopted; there is plenty of time to consider it, since it would not have an effect until the year 4000.

That is, the length of a year in the Julian calendar was

365 + 1/4 = 365.25 (off by 0.007801 days, or 1 day in 128 years),

with the century rule alone it would be

365 + 1/4 - 1/100 = 365.24 (off by 0.002199 days, or 1 day in 454 years),

in the Gregorian calendar it is

365 + 1/4 - 1/100 + 1/400 = 365.2425 (off by 0.000301, or 1 day in 3322 years),

while with the 4000 year rule it will be

365 + 1/4 - 1/100 + 1/400 - 1/4000 = 365.24225 (off by 0.000051, or 1 day in 19,607 years).

So there’s our Rule 5 (again, struck-out years are no longer leap years):

5. Change every year divisible by 4000 back to a normal year: leap years ~~4000~~, 4004, 4008, 4012, 4016, …, 4096, ~~4100~~, 4104, …

For a reference to this “Herschel proposal”, see Wikipedia.

Given that such a simple addition (which has not been made only because it is not needed yet) would fix the Gregorian calendar so effectively, we can safely say that you can use Zeller's formula up to the year 4000. After that, if the change is actually made in law, you can just add a term to the formula and keep it correct.

But this assumes the earth’s orbit has not changed much in that time; that’s for astrophysicists, not mathematicians, to discuss.

## A puzzle

We can close with this little puzzle, whose answer relates to leap years:

When Were They Born?

Two people celebrate their birthdays on the same day this June. One of them is exactly 2555 days older than the other. In what years were they born?

Doctor Ian provided a hint:

Hi Tina,

Note that 2555 = 7 * 365. What this means is that there are no leap years between their birthdays. When is it possible to go seven years without a leap year?

You should be able to answer that now…
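
If you want to check your answer afterwards, here is a small MATLAB sketch (not part of the original hint; the range of years searched is an arbitrary choice):

```matlab
% Find years y for which June 1 of year y+7 falls exactly 7*365 = 2555 days
% after June 1 of year y -- that is, with no February 29 anywhere in between.
for y = 1800:2100
    if datenum(y + 7, 6, 1) - datenum(y, 6, 1) == 2555
        fprintf('June %d and June %d are exactly 2555 days apart\n', y, y + 7);
    end
end
```

The only pairs that turn up are ones that straddle a century year that is not a leap year, which is exactly what the hint is pointing at.
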
Question about velocity-dependent Lagrangian involving magnetic fields

1. Feb 9, 2013

anton01

1. The problem statement, all variables and given/known data

The Lagrange method does work for some velocity-dependent Lagrangians. A very important case is a charged particle moving in a magnetic field. The magnetic field can be represented as a "curl" of a vector potential, B = ∇ x A. A uniform magnetic field B0 corresponds to a vector potential A = 1/2 B0 x r.

(a) Check that B0 = ∇ x A

(b) From the Lagrangian $\frac{1}{2}mv^{2}+e\,\overline{v}\cdot\overline{A}$ show that the EOM derived are identical to the classical Newton's law with the Lorentz force F = ev x B.

2. Relevant equations

Euler-Lagrange equations: $\frac{d}{dt}\left(\frac{\partial L}{\partial \dot{s}_{j}}\right)=\frac{\partial L}{\partial s_{j}}$

Triple products:

a.(b x c) = b.(c x a)

a x (b x c) = (a.c)b - (a.b)c

3. The attempt at a solution

For (a), I have tried using the triple product:

∇ x A = 1/2 ∇ x (B0 x r) = 1/2 [B0(∇.r) - (∇.B0)r]

Since B0 is uniform, its divergence is zero and so:

∇ x A = 1/2 B0(∇.r)

From here, I guess ∇.r = 2 for the proof to work, but I really don't see it.

For part (b), I am not even sure where to start it. As in how can I apply the E-L equations to it? What scares me the most is how exactly do I take the derivatives on a Lagrangian involving vectors.

Any help is welcome, thank you!

2. Feb 10, 2013

Oxvillian

Re: Question about velocity-dependent Lagrangian involving magnetic fi

(a) Looks like you have ∇.B where you should have B.∇ in your vector identity.

(b) The Lagrangian is still a scalar - it's just that it's been written with one piece that's a scalar product of two vectors. So just expand out that product and do Euler-Lagrange stuff as usual.
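
For anyone reading the thread later, here is a sketch of the two computations the hints above point toward, assuming Cartesian coordinates and a constant, uniform B0 (this sketch is not a post from the original thread):

```latex
% (a) The attempt above kept only two of the four terms in the identity for
%     the curl of a cross product.  With a constant B_0 the full identity gives
$$\nabla\times(\mathbf{B}_0\times\mathbf{r})
   = \mathbf{B}_0(\nabla\cdot\mathbf{r}) - \mathbf{r}(\nabla\cdot\mathbf{B}_0)
   + (\mathbf{r}\cdot\nabla)\mathbf{B}_0 - (\mathbf{B}_0\cdot\nabla)\mathbf{r}
   = 3\mathbf{B}_0 - 0 + 0 - \mathbf{B}_0 = 2\mathbf{B}_0,
   \qquad\text{so}\qquad
   \nabla\times\mathbf{A} = \tfrac{1}{2}(2\mathbf{B}_0) = \mathbf{B}_0.$$

% (b) Writing L = (1/2) m v^2 + e v.A(r) in components and using
%     dA_i/dt = \dot{x}_j \partial_j A_i for a time-independent A:
$$\frac{d}{dt}\bigl(m\dot{x}_i + eA_i\bigr) = e\,\dot{x}_j\,\partial_i A_j
   \quad\Longrightarrow\quad
   m\ddot{x}_i = e\,\dot{x}_j\bigl(\partial_i A_j - \partial_j A_i\bigr)
               = e\,\bigl[\mathbf{v}\times(\nabla\times\mathbf{A})\bigr]_i
               = e\,\bigl[\mathbf{v}\times\mathbf{B}_0\bigr]_i.$$
```

In particular, ∇.r = 3 (not 2), and the missing (B0.∇)r term contributes the -B0 that makes the factor of 1/2 come out right.
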
# Help w/ 1 more limit

1. Dec 16, 2004

### SomeRandomGuy

lim as x approaches 0 from the right of x^(tan(x))

I took the ln and got tan(x)ln(x), then made it ln(x)/(1/tan(x)) which = ln(x)/(cot(x)) and I could use L'Hospital's rule. I got (1/x)/(csc^2(x)) and made that (1/x)/(1/sin^2(x)). I then made that sin^2(x)/x and used L'Hospital's rule again to get the lim as x approaches 0 from the right of 2cos(x) which = 2. Am I right or wrong? :/

2. Dec 16, 2004

### Hurkyl

Staff Emeritus

You've made at least one mistake.

3. Dec 16, 2004

### Hurkyl

Staff Emeritus

0^0 is not 1. :tongue2: It's an indeterminate form.

4. Dec 16, 2004

### Euclid

Write $$L=\lim_{x\rightarrow 0} x^{\tan x}$$. Then

$$\log L =\lim_{x\rightarrow 0} \tan x\log x = \lim_{x\rightarrow 0} \frac{\tan x}{\log x}= 0$$

So L = e^0 = 1. There is no use for L'Hopitals rule here.

5. Dec 16, 2004

### Cantari

0/0 is another indeterminate form. When you took the derivative of sin^2 x, you messed up (as well as forgot a negative sign which doesnt effect it). So instead of your top going to 2 it should go to 0.

Last edited: Dec 16, 2004

6. Dec 16, 2004

### dextercioby

U've written something wrong. Limit and logaritm commute. Use this property to show that:

$$\lim_{x\rightarrow 0} \ln(x^{\tan x}) = \lim_{x\rightarrow 0}\tan x\ln x$$

I hope u can show that the logaritm of the limit is zero, and hence the initial limit is 1.

Daniel.

7. Dec 16, 2004

### dextercioby

Not to perform any differentiations on the sine squared use this trick

$$\lim_{x \rightarrow 0} \frac{\sin x}{x} =1$$.

Daniel.

8. Dec 16, 2004

### Cantari

Also I am not sure that you know that when you take the ln of this, you need to take the ln of the other side of the equals sign. So just assume that it equals y, then the answer is ln y = 0, which would be 1.

9. Dec 16, 2004

### dextercioby

Which other side?? Of course u apply logarithm on the both sides of the equation:

$$\lim_{x\rightarrow 0} x^{\tan x} = L$$

,where L is the limit and is the unknown. When u take logarithm, u should be stating:

$$\ln L=...=0$$

,from where u get your result...

Daniel.

10. Dec 16, 2004

### Cantari

... my comment was directed to the original poster as he said he got an answer of 2 and assumed that was the final answer. Which would not be the case even if 2 was what the right side came out to be due to the fact he took the log of the problem in the first step.
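
As a quick numerical sanity check of the answers above (not a proof, and not part of the original thread), the expression can simply be evaluated close to 0 from the right; a short MATLAB sketch:

```matlab
% Evaluate x^tan(x) as x approaches 0 from the right.
x = 10.^(-(1:6));          % 0.1, 0.01, ..., 1e-6
disp([x; x.^tan(x)])       % the second row climbs toward 1
disp(tan(x).*log(x))       % the exponent's logarithm, tan(x)*ln(x), shrinks toward 0
```

Both rows behave as the posts above predict: tan(x)*ln(x) tends to 0, so the original limit is e^0 = 1 rather than 2.
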