## Vector Help -- How do you find this? Let $v = [1\ 1\ 0]^T$ and $w = [2\ 0\ 2]^T$, both elements of $\mathbb{R}^3$. Consider the set $E$ of all column vectors $u$ in $\mathbb{R}^3$ that can be written as $u = av + bw$ for some $a, b \in \mathbb{R}$: $E = \{u \in \mathbb{R}^3 \mid u = av + bw \text{ for some } a, b \in \mathbb{R}\}$. ($E$ is sometimes referred to as the plane spanned by $v$ and $w$.) Now, given a $1 \times 3$ matrix $A$ (that is, a row vector) and an element $u \in E$, we can form $Au$, which is a $1 \times 1$ matrix (and hence a real number). Find all row vectors $A$ such that $Au = 0$ for all $u \in E$. (Remark: In other words, find all $1 \times 3$ matrices $A$ such that $E$ is a subset of the set of solutions of $AX = 0$.) I'm completely lost on this one, so any help would be great.
Discussions: yash ajmera · started a discussion · 1 month ago. The effective focal spot is always smaller than the actual focal spot; the smaller the effective focal spot, the better the image, and the actual focal spot is larger to allow for heat dissipation. Question: The effective focal spot is: a) Larger than the actual focal spot. b) Smaller than the actual focal spot. c) In the shape of a square. d) In the shape of a rectangle. Options: A) (a) only B) (a) and (c) C) (a) and (d) D) (b) and (c) Solution:
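The solution field above is blank, but the commenter's reasoning matches the standard line-focus principle, which is worth stating explicitly. With $\theta$ the anode target angle (typically around 7°–20°), the effective focal spot is the foreshortened projection of the actual focal spot along the central ray:

$$a_{\text{effective}} = a_{\text{actual}} \sin\theta$$

Since $\sin\theta < 1$ for these angles, the effective focal spot is always smaller than the actual one, which is why a large actual spot (good for heat dissipation) can still give a small effective spot (good for image sharpness).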
1. ## Precise Elliptical Perimeter Hello, ellipse perimeter formulas are easily found on Google, but I can't find any that are extremely accurate and easily usable. I found several calculators (and formulas) that came to the same or similar figure, but they are not precise enough. I measured a 53 x 25 ellipse (very close accuracy) and calculated the perimeter (approx. 130). I intended to space 60 holes on the line, so I spaced my caliper accordingly, went around, and ended up with 62 holes. Granted, even misplacing the caliper a 1/16 of an inch will throw it off, maybe even by two holes, but I'm quite confident the formula simply wasn't accurate enough. Formulas? 2. Try 126.5021066. This should be the exact perimeter. The formula I used: $4a\int_{0}^{\pi/2}\sqrt{1-\varepsilon^2\sin^2\theta}\,d\theta$ where $\varepsilon$ is the eccentricity ($\sqrt{1-\left(\frac{25}{53}\right)^2}$ in your case), and $a$ is the semi-major axis (53/2 in your case). I double-checked by plugging your semi-major and semi-minor axes (a = 53/2 and b = 25/2 respectively) into Ramanujan's formula: $C\approx\pi\left(a+b\right)\left(1+\frac{3\left(\frac{a-b}{a+b}\right)^2}{10+\sqrt{4-3\left(\frac{a-b}{a+b}\right)^2}}\right)$ which gives 126.502111621. A very close approximation. When in doubt, and when you don't want to deal with calculus, go with Ramanujan's formula. Except for very flat ellipses it will give you a circumference that is exact to four or five or more decimal places. I had to look up "caliper" on Wikipedia. Are you plotting out a garden? 3. Thank you. I'll try it on my next project; hopefully it will be usable. I needed the perimeter for cutting gaskets. I needed 60 bolt holes through that pattern. Customers wouldn't be happy if they had 62 LOL Maybe someday I'll plant a garden though Thanks again 4. Hmmm, after building an Excel calculator with that formula, I realized I had 62 holes based on 130 (2.169 spacing based on a 60 hole count), and with that formula/figure at 60 holes I got a 2.108 spacing. My original spacing needed to be bigger (thus reducing hole count), so in this scenario I'd be reducing the spacing even more. This formula seems the most precise, but I'm not sure. I'm going to attempt to verify the quality of my tools to ensure they aren't inaccurate. Edit: It'll be a while before I can get precision tools to compare to mine, but according to my tools & math, the perimeter is approximately 138.75 inches. 5. Caleb- Go with your instinct. I am a student and am not 100% sure my answer is correct. Hopefully a more authoritative mathematician will stop by and comment on this thread.
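For anyone wanting to reproduce the numbers in this thread, here is a minimal Python sketch (not from the thread itself) evaluating both the exact elliptic-integral form and Ramanujan's approximation. It uses scipy.special.ellipe, the complete elliptic integral of the second kind, which takes the parameter m = ε².

```python
import math
from scipy.special import ellipe  # complete elliptic integral of the second kind, E(m)

def perimeter_exact(a, b):
    """Exact perimeter: 4a * E(m), with m = eccentricity squared."""
    m = 1 - (b / a) ** 2  # m = epsilon^2
    return 4 * a * ellipe(m)

def perimeter_ramanujan(a, b):
    """Ramanujan's approximation, as quoted in the thread."""
    h = ((a - b) / (a + b)) ** 2
    return math.pi * (a + b) * (1 + 3 * h / (10 + math.sqrt(4 - 3 * h)))

a, b = 53 / 2, 25 / 2  # semi-axes of the 53 x 25 ellipse
print(perimeter_exact(a, b))       # ~126.5021
print(perimeter_ramanujan(a, b))   # ~126.5021
print(perimeter_exact(a, b) / 60)  # hole spacing for 60 holes, ~2.108
```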
# Audio player in Angular 2

I just built an audio player in Angular 2 using a player component and a player service. It's all working fine, I just feel like there is a much better way to do this. Should the audio object be in the service or the component? I'm skeptical because I'm using three different observables, and I don't think that is the best way to do it.

player.component.ts:

export class PlayerComponent implements OnInit, OnDestroy {
  // General variables
  private song: Song;
  private currentTime: string;
  private fullTime: string;
  private isPlaying: boolean;
  // Subscription variables
  private songSubscription: any;
  private currentTimeSubscription: any;
  private fullTimeSubscription: any;
  constructor(private _playerService: PlayerService) { }
  ngOnInit() {
    this.songSubscription = this._playerService.song.subscribe(data => this.song = data);
    this.currentTimeSubscription = this._playerService.currentTime.subscribe(data => this.currentTime = data);
    this.fullTimeSubscription = this._playerService.fullTime.subscribe(data => this.fullTime = data);
    console.log("Player subscription initialized");
  }
  toggleAudio() {
    this.isPlaying = this._playerService.toggleAudio();
  }
  ngOnDestroy() {
    this.songSubscription.unsubscribe();
    this.currentTimeSubscription.unsubscribe();
    this.fullTimeSubscription.unsubscribe();
    console.log("Player subscription destroyed");
  }
}

player.service.ts:

export class PlayerService {
  private audio: any;
  public song: Subject<Song> = new Subject<Song>();
  public currentTime: Subject<string> = new Subject<string>();
  public fullTime: Subject<string> = new Subject<string>();
  constructor(private _utilityService: UtilityService) {
    this.audio = new Audio();
  }
  setPlayer(song: Song) {
    this.song.next(song);
    this.audio.src = song.audio;
    this.audio.oncanplaythrough = () => {
      this.audio.play();
      this.fullTime.next(
        this._utilityService.getFormatedTime(this.audio.duration)
      );
    };
    this.audio.ontimeupdate = () => {
      this.currentTime.next(
        this._utilityService.getFormatedTime(this.audio.currentTime)
      );
    };
  }
  toggleAudio() {
    if (this.audio.paused) {
      this.audio.play();
    } else {
      this.audio.pause();
    }
    return this.audio.paused;
  }
}

player.component.html:

<ul *ngIf="song" class="player">
  <li class="player-item">
    <a (click)="toggleAudio()">
      <i *ngIf="isPlaying" class="fa fa-play" aria-hidden="true"></i>
      <i *ngIf="!isPlaying" class="fa fa-pause" aria-hidden="true"></i>
    </a>
  </li>
  <li class="player-item"></li>
  <li class="player-item"></li>
  <li class="player-item"></li>
  <li class="player-desc"></li>
  <li class="player-item"></li>
  <li class="player-item"></li>
  <li class="player-item"></li>
  <li class="player-item"></li>
</ul>

It's a pretty primitive player right now. I want to make sure I'm implementing it correctly before I add more features.

• I think it is correct. I'm doing the same thing, but I'm using an EventEmitter to send PlayerService info to other components. – Paulo Coutinho Nov 11 '16 at 20:44

There is one way I can think of improving this. In Angular 2, components should only handle view logic. The problem isn't there at the moment, but it could appear when you add more functionality. Let's say you want to add a recorder for the song that saves the song from time 0 to time 1. You'd have to add that to this component and keep cluttering it. I would treat this component as a "master" songPlayer. It operates on a song (i.e. your Song model), so that is okay. It could also operate on a "Player" model. A Player has a state: it knows whether it is playing and what the current time is.
Perhaps the best way to structure this would be to start with models: PlayerModel(time, isPlaying, currentTime, song). Song stays as it is. What this does is leave only one subscription in your PlayerComponent: to the Player. I hope I'm not keeping it too abstract with this. It was just an idea.
# Random walk with 3 possible steps I have i.i.d. random variables with the following distribution: $$P(\xi_i =1) = p_1, \ P(\xi_i = 0) = p_0, \ P(\xi_i = -1) = p_{-1}; \quad S_n = \sum^n_{i=1}\xi_i.$$ I am interested in the probability of $( S_n ) _{n=1}^{\infty}$ reaching some level $-L$ before reaching $L$. I know the results for the standard random walk with only two possible steps: $-1$ and $1$. So my idea is to find the corresponding probability for this adjusted random walk: $$P(\xi'_i =1) = \frac{p_1}{p_1 + p_{-1}}, \ P(\xi'_i =-1) = \frac{p_{-1}}{p_1 + p_{-1}}; \quad S'_n = \sum^n_{i=1}\xi'_i,$$ and prove that it is the same as the probability for my original walk. I have found an answer for the "adjusted" walk but I am stuck with this proof. Any suggestions would be much appreciated. - I think you should say something about the level of rigour you're aiming for, since at a normal level of rigour this is obvious enough not to require a proof; you've just eliminated the zero steps. – joriki Nov 30 '12 at 10:59 I agree that it hardly requires a proof, but once I decided it was obvious I tried to sketch a proof in my head and realised that none of my ideas could be turned into a rigorous argument. It seems to me that a statement on the correspondence between random walks of the first and second types should do the job, but I can't think of it. So I am aiming for a sketch of a rigorous proof or a general idea of such a proof. – grozhd Nov 30 '12 at 11:58 There are at least two ways to make this rigorous. Either one builds a path transform, as follows. Define $\tau_0=0$ and, for every $n\geqslant0$, $\tau_{n+1}=\inf\{k\geqslant\tau_n+1\mid\xi_k\ne0\}$; then $(S_{\tau_n})_{n\geqslant0}$ performs a random walk whose steps are distributed like your $\xi'$ random variables, that is, $(S_{\tau_n})_{n\geqslant0}$ and $(S'_{n})_{n\geqslant0}$ are equidistributed. In particular, $(S_n)_n$ hits $\pm L$ at the same place as $(S_{\tau_n})_n$, and you are done. Or, one can turn to discrete harmonic analysis and consider, for every $-L\leqslant x\leqslant L$, the probability $h_x$ that $(S_n)_n$ starting from $x$ hits $L$ before $-L$. Then you are after $h_0$, and $(h_x)_{-L\leqslant x\leqslant L}$ is entirely determined by the boundary conditions $h_L=1$, $h_{-L}=0$, and, by the Markov property after one step, for every $-L+1\leqslant x\leqslant L-1$, $$h_x=p_1h_{x+1}+p_0h_x+p_{-1}h_{x-1}.$$ Likewise, the probabilities $h'_x$ that $(S'_n)_n$ starting from $-L\leqslant x\leqslant L$ hits $L$ before $-L$ are entirely determined by the boundary conditions $h'_L=1$, $h'_{-L}=0$, and, by the Markov property after one step, for every $-L+1\leqslant x\leqslant L-1$, $$h'_x=p'_1h'_{x+1}+p'_{-1}h'_{x-1}.$$ These two linear systems coincide, hence $h'_x=h_x$ for every $-L\leqslant x\leqslant L$, and in particular, $$\mathbb P((S_n)_n\ \text{hits}\ L\ \text{before}\ -L)=h_0=h'_0=\mathbb P((S'_n)_n\ \text{hits}\ L\ \text{before}\ -L).$$
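Not part of the thread, but a quick Monte Carlo sketch makes the equality easy to sanity-check numerically. The probabilities and the level L below are illustrative choices:

```python
import random

def hits_minus_L_first(probs, L, steps=(1, 0, -1)):
    """Simulate S_n from 0 until it hits -L or +L; True if -L comes first."""
    s = 0
    while abs(s) < L:
        s += random.choices(steps, weights=probs)[0]
    return s == -L

def estimate(probs, L, trials=100_000):
    return sum(hits_minus_L_first(probs, L) for _ in range(trials)) / trials

p1, p0, pm1 = 0.35, 0.40, 0.25   # original walk, zero steps allowed
q = p1 + pm1
print(estimate([p1, p0, pm1], L=5))         # original walk
print(estimate([p1 / q, 0, pm1 / q], L=5))  # adjusted walk, zero steps removed
```

Both estimates agree up to Monte Carlo noise, as the path-transform argument predicts.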
# Including .svg images error Why doesn't this code work?

\documentclass{article}
\usepackage[T1]{fontenc}
\usepackage[utf8]{inputenc}
\usepackage[italian]{babel}
\usepackage[
a4paper,
margin=15mm,
bindingoffset=2mm,
heightrounded,
]{geometry}
\usepackage{amsmath}
\usepackage{microtype}
\usepackage{xcolor}
\usepackage{graphicx}
\usepackage{svg}
\begin{document}
\begin{flushleft}
\begin{center}
\begin{figure}[htbp]
\includesvg{example.svg}
\caption{Image created with Inkscape}\label{figura:2}
\vspace{7pt}
\end{figure}
\end{center}
\end{flushleft}
\end{document}

TeXworks 0.61 gives this error:

("C:\Users\Marco\AppData\Local\Programs\MiKTeX 2.9\tex\latex\microtype\mt-cmr.cfg") ! Undefined control sequence. \@includesvg ...extracttrue \fi \ifnum \pdfstrcmp {\pdffilemoddate {\SVG@in@... l.23 \includesvg{example.svg} ?

I'm compiling with XeLaTeX+MakeIndex+BibTeX. • Hi, welcome. The reason for my edit is that the linebreaks in an error message have meaning, so preserving them is sometimes essential for properly understanding the error. For undefined control sequence errors, the last control sequence before the linebreak is the one that triggered the error. Just to double-check, is it correct that \pdfstrcmp is the last one before the linebreak? – Torbjørn T. Mar 17 '17 at 19:54 • i.imgur.com/OoN6Nuq.png – Salvo Matteini Mar 17 '17 at 19:59 • So, "yes", in other words. It works if you use pdflatex with --shell-escape enabled (see tex.stackexchange.com/questions/82699/…). I think some primitives like \pdfstrcmp are not defined by XeLaTeX (they have different names), which is why it doesn't work, but someone else needs to confirm that. – Torbjørn T. Mar 17 '17 at 20:06 • Basically the same thing as tex.stackexchange.com/a/126706/586. Doesn't seem like there's any solution for XeLaTeX, though I may be wrong. – Torbjørn T. Mar 17 '17 at 20:20
# When does a free body moving on a smooth circular path make a complete revolution? If we have a body like the one below, what will be the minimum initial velocity $$V_0$$ to complete one revolution? My assumption was that it has to reach $$\theta=180°$$, but how do I describe this mathematically, and why? • A free body follows a straight line, Newton's first law: en.wikipedia.org/wiki/Newton%27s_laws_of_motion. A force is needed to make a circular path. Jun 7 at 9:29 • @annav By free here I mean that it can leave the path; it's not guided (like a ball in a closed circular tube). Jun 7 at 9:45 • So you mean that a radial force will act to keep it on the circle inwardly, but not outwardly? And you want the minimum velocity for it to not leave the path at the top? – sqek Jun 7 at 10:59 • @annav How come? Please specify: for example, if at $\theta=180°$ the normal reaction is $N=0$, why won't the body instantly become a projectile? Jun 7 at 11:47 • @sqek No, there is no radial force; we want to launch the ball with an initial velocity so that it completes a revolution without returning or falling off. Jun 7 at 11:51 From what I think you mean by "like a ball in a closed circular tube", the radial or normal force from the tube, $$N$$ in the diagram above, can only be positive. If $$N$$ is negative, the ball will fall away from the wall of the tube. Radially, the ball needs a centripetal acceleration of $$\frac{V^2}{R}$$. So using $$F=ma$$ in the radial direction, $$N-mg\cos\theta=m\frac{V^2}{R}$$. Assuming no friction, conservation of energy (kinetic + gravitational potential) gives $$\frac{1}{2}mV^2-mgR\cos\theta=\text{constant}=\frac{1}{2}mV_0^2-mgR$$ So $$mV^2=mV_0^2+2mgR(\cos\theta-1)$$ So $$N=\frac{1}{R}mV_0^2+3mg\cos\theta-2mg$$ $$N$$ is at its minimum when $$\theta=180^\circ$$, when $$\cos\theta=-1$$. The limiting speed is when $$N=0$$ at this instant; $$N$$ can't be negative because the tube can't pull the ball outward, so if $$N<0$$ the ball falls away. So $$N_{min}=\frac{1}{R}mV_{0,min}^2-5mg\ge0$$ (one of the $$mg$$ terms is from gravity pulling the ball down; the rest is from the change in velocity from $$V_0$$ as the ball has risen $$2R$$). So $$V_{0,min}=\sqrt{5gR}$$ To do a loop over the top, the velocity at the top ($$v_{top}$$) has to be such that the centripetal acceleration is at least $$g$$ (otherwise the ball will fall down). This is: $$a= v^2 / r$$ So: $$g = {v_{top}}^2 / r$$ $${v_{top}}^2 = gr$$ In turning the half circle, the ball will gain potential energy $$2mgr$$. That must translate to a loss of kinetic energy, so: $$\frac 1 2 m{v_{0}}^2 - \frac 1 2 m{v_{top}}^2 = 2mgr$$ $${v_{0}}^2 - {v_{top}}^2 = 4gr$$ but $${v_{top}}^2 = gr$$, so $${v_0}^2 - gr = 4gr$$ $${v_0} = \sqrt{5gr}$$
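As a quick numeric illustration of both derivations (the radius here is an arbitrary choice, not from the question):

```python
import math

g = 9.8   # m/s^2
R = 2.0   # loop radius in metres, illustrative value

v0_min = math.sqrt(5 * g * R)  # minimum launch speed at the bottom
v_top = math.sqrt(g * R)       # corresponding speed at the top, where N = 0

print(f"v0_min = {v0_min:.2f} m/s, v_top = {v_top:.2f} m/s")
```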
# Degree of Financial Leverage Degree of financial leverage is a measure that assesses how sensitive a company's net income is to a change in the company's operating income. It is calculated by dividing the percentage change in earnings per share by the percentage change in earnings before interest and taxes (EBIT). If a company has high debt and preferred stock, and hence high fixed financing costs such as interest and preferred dividends, a change in its earnings before interest and taxes (EBIT) will result in a more pronounced change in the company's net income. This is because interest expense and preferred dividends are fixed no matter the level of EBIT. If EBIT increases, interest and preferred dividends do not increase proportionately, and hence net income increases by a different percentage. A high degree of financial leverage indicates high risk. ## Formula Degree of financial leverage is defined as the percentage change in earnings per share divided by the percentage change in EBIT. This can be written as follows: $$\text{Degree of Financial Leverage}=\frac{\text{Percentage Change in EPS}}{\text{Percentage Change in EBIT}}$$ $$\text{Degree of Financial Leverage}=\frac{\text{New EPS}-\text{Old EPS}}{\text{Old EPS}}\div\frac{\text{New EBIT}-\text{Old EBIT}}{\text{Old EBIT}}$$ After some mathematical manipulation, we can find the following formula for the degree of financial leverage: $$\text{Degree of Financial Leverage}=\frac{\text{EBIT}}{\text{EBIT}-\text{Interest Expense}}$$
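A short numerical sketch of that last formula (the figures here are made up for illustration):

```python
def degree_of_financial_leverage(ebit, interest_expense):
    """DFL = EBIT / (EBIT - interest expense)."""
    return ebit / (ebit - interest_expense)

ebit, interest = 500_000, 100_000
dfl = degree_of_financial_leverage(ebit, interest)
print(dfl)  # 1.25: a 10% rise in EBIT implies roughly a 12.5% rise in EPS
```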
# Search for the pair production of third-generation squarks with two-body decays to a bottom or charm quark and a neutralino in proton–proton collisions at √s = 13 TeV CMS Collaboration; Sirunyan, A.M.; Tumasyan, A.; Adam, W.; Ambrogi, F.; Asilar, E.; Bergauer, T.; Brandstetter, J.; Brondolin, E.; Dragicevic, M.; Erö, J.; Flechl, M.; Friedl, M.; Frühwirth, R.; Ghete, V.M.; Grossmann, J.; Hrubec, J.; Jeitler, M.; König, A.; Krammer, N.; Krätschmer, I.; et al.

Abstract: Results are presented from a search for the pair production of third-generation squarks in proton–proton collision events with two-body decays to bottom or charm quarks and a neutralino, which produces a significant imbalance in the transverse momentum. The search is performed using a sample of proton–proton collision data at √s = 13 TeV recorded by the CMS experiment at the LHC, corresponding to an integrated luminosity of 35.9 fb⁻¹. No statistically significant excess of events is observed beyond the expected contribution from standard model processes. Exclusion limits are set in the context of simplified models of bottom or top squark pair production. Models with bottom squark masses up to 1220 GeV are excluded at 95% confidence level for light neutralinos, and models with top squark masses of 510 GeV are excluded assuming that the mass splitting between the top squark and the neutralino is small.

Affiliated institution(s) at KIT: Institut für Experimentelle Kernphysik (IEKP)
Publication type: Journal article
Year: 2018
Language: English
Identifiers: ISSN 0370-2693, 1873-2445; URN urn:nbn:de:swb:90-792867; KITopen ID 1000079286
Published in: Physics Letters B, Vol. 778, pp. 263-291
Publication note: Funded by SCOAP3
Keywords: CMS; Physics; Supersymmetry
## Physics for Scientists and Engineers: A Strategic Approach with Modern Physics (4th Edition) $F_1 = 120~N$ $F_2 = 80~N$ Let's consider the torque about an axis located at the position of the force $F_2$. $\sum \tau = 0$ $-(3.0~m)(40~N)+(1.0~m)~F_1 = 0$ $F_1 = \frac{(3.0~m)(40~N)}{1.0~m}$ $F_1 = 120~N$ We can use the vertical forces to find $F_2$. $\sum F_y = 0$ $40~N-F_1+F_2 = 0$ $F_2 = F_1-40~N$ $F_2 = 120~N-40~N$ $F_2 = 80~N$
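The arithmetic is easy to double-check in a couple of lines (a sketch; the 3.0 m and 1.0 m lever arms and the 40 N load come from the textbook's figure, which is not reproduced here):

```python
# Torque balance about the point where F2 acts: -(3.0 m)(40 N) + (1.0 m) * F1 = 0
F1 = (3.0 * 40) / 1.0
# Vertical force balance: 40 N - F1 + F2 = 0
F2 = F1 - 40
print(F1, F2)  # 120.0 80.0
```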
The first thing to understand is that the xelatex engine is much more capable of employing modern fonts. TeX was built in the late 1970s, when computer resources were at a premium, and the idea of mixing mathematics with non-Western languages and scripts may have been fanciful. The pdflatex engine is rooted in this history. We now have the Unicode standard, thoroughly integrated into web browsers, and companion scalable OpenType fonts. In contrast to TeX, XeTeX was designed to work better with a multitude of fonts. So we organize this section by this distinction.
Examveda # If $${\log _2}\left[ {{{\log }_3}\left( {{{\log }_2}x} \right)} \right] = 1,$$ then x is equal to? A. 0 B. 12 C. 128 D. 512 ### Solution (By Examveda Team) \eqalign{ & {\log _2}\left[ {{{\log }_3}\left( {{{\log }_2}x} \right)} \right] = 1 \cr & \Rightarrow {\log _3}\left( {{{\log }_2}x} \right) = {2^1} = 2 \cr & \Rightarrow {\log _2}x = {3^2} = 9 \cr & \Rightarrow x = {2^9} = 512 \cr} Related Questions on Logarithm If $a^x = b^y$, then: A. $$\log \frac{a}{b} = \frac{x}{y}$$ B. $$\frac{{\log a}}{{\log b}} = \frac{x}{y}$$ C. $$\frac{{\log a}}{{\log b}} = \frac{y}{x}$$ D. None of these
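A quick numeric check of the main answer above (x = 512), using Python's math module; this is illustrative and not from the original page:

```python
import math

x = 512
inner = math.log2(x)         # log2(512) = 9
middle = math.log(inner, 3)  # log3(9)   = 2
outer = math.log2(middle)    # log2(2)   = 1
print(inner, middle, outer)  # 9.0 2.0 1.0, confirming the nested equation
```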
Complex analysis exercise (Mittag-Leffler related) I'm trying to work an exercise in a complex analysis textbook, but I'm stuck, so I hope someone can help me out. The exercise is assigned in a chapter about the Mittag-Leffler theorem. 1) If $f$ is a holomorphic function on $\mathbb{C}$, with the exception of a finite number of poles, and $f$ is bounded by a polynomial for $\vert z \vert \ge R$, then show that $f$ is a rational function. I think I succeeded in proving this part, by subtracting the singularities from $f$ and using (a modified version of) Liouville's theorem. I mention the question because it may be related to the following questions. 2) If $f\colon \mathbb{C}\setminus\{z_1,\dots,z_n\} \rightarrow \mathbb{C}$ is injective, continuous and not bounded, then show that $\lim_{z \rightarrow \infty}f^{-1}(z) \in \{z_1,\dots,z_n,\infty\}$. (Hint: Weierstrass) 3) If $f\colon \mathbb{C}\setminus\{z_1,\dots,z_n\} \rightarrow \mathbb{C}\setminus\{w_1,\dots,w_n\}$ is a holomorphic bijection, with $z_1,\dots,z_n$ poles of $f$, then show that $f$ is a rational function. I don't really see how to solve 2) and 3). For (2) as stated, if $\lim_{n \to \infty} |x_n| = \infty$, then any convergent subsequence of $(f^{-1}(x_n))_{n \in \mathbb{N}}$ could not converge to anything in $\mathbb{C}\setminus\{z_1,\dots,z_n\}$, else $f$ would be discontinuous. Were there more conditions on $f$?
What kind of headers are hiding amongst our favorite websites? Since we had a day off for Memorial Day, I started off by finishing editing a podcast episode, but for the rest of the day wanted to do something entirely useless and fun. Inspired by this post, I decided to do a quick project to explore the space of url headers. This was messy and quick, but I wanted to do the following:

• Come up with some long list of urls
• Save data to file to parse into an interactive web interface

And then of course once I had a reasonable start on the above, this escalated quickly to include:

• A Dockerized version of the application
• A GitHub action to generate the data
• A parser script to export the entire flask site as static

And so this was my adventure for Memorial Day! If you want to just check out the (now static) results pages that were generated with the GitHub action, see here. If you want to use the GitHub action to study your own list of urls, check out instructions on the repository. Otherwise keep reading to learn how I went about this project, and a few interesting things I learned from doing it.

## 1. Parsing Data

I first started writing a simple script to get headers for 45 sites that are fairly popular, but ultimately found a list of 500 sites to use instead. I had to edit the list extensively, as many urls no longer existed, or needed to have a www. prefix to work, period. I also removed urls that didn't have a secure connection. This gave me a total of 500 urls (I added a few to get a nice even number!) represented in a text file, urls.txt. From those sites (when I parsed from my local machine), I found 615 unique headers, ranging in frequency from being present in all sites (Content-Type, N=500) to only being present for one site (N=1). The most frequent was "Content-Type," followed by Date. I did this with a run.py script. I also separately parsed cookies, keeping the names but removing values in the case that they had any kind of personal information or could otherwise be used maliciously. I found a total of 457 unique cookies across the 500 sites. I decided to use Flask and ChartJS because they are both relatively easy, and although I've recently tried other charting Python libraries, for the most part they have been annoying and error-prone. From this I was able to create a main view to show all counts, and table views to show details. Why are you wasting your time doing this? I suppose the data exports could be enough, but I think it's useful to sometimes package analysis scripts with an interactive interface to explore them. Yes, it's a lot of work, but if you do it a few times, it isn't hugely challenging and it's fun to see the final result.

### The Interface

Here is the basic interface! The "home" page has a plot of counts: the y-axis is the header, and the length of the bar represents the number of sites that have it. When you click on a header, you are taken to a page that shows values for each header. Yeah, the Date header isn't hugely interesting, other than capturing the date when I made the request! More interesting is the X-Recruiting header, which I only found present for etsy and booking.com: this was the header that originally caught my interest, because it seemed so unexpected. If you browse from a header and click on a site, you are taken to its summary view.
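Before moving on to the screenshots, here is roughly what the parsing step could look like. run.py itself isn't shown in this post, so this is a minimal sketch of the idea using requests; the headers.json output filename is made up:

```python
import json
import requests

def parse_headers(url_file="urls.txt", out_file="headers.json"):
    """Fetch the response headers for every url in url_file and save as JSON."""
    results = {}
    with open(url_file) as handle:
        urls = [line.strip() for line in handle if line.strip()]
    for url in urls:
        try:
            response = requests.get(url, timeout=10)
        except requests.RequestException:
            continue  # urls that no longer resolve get skipped
        results[url] = dict(response.headers)
    with open(out_file, "w") as handle:
        json.dump(results, handle, indent=2)

if __name__ == "__main__":
    parse_headers()
```

Back to the interface.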
Here is facebook: And finally, there is an equivalent counts page for cookies. To run this locally you can do:

$ python app/__init__.py
 * Serving Flask app "__init__" (lazy loading)
 * Environment: production
   WARNING: This is a development server. Do not use it in a production deployment. Use a production WSGI server instead.
 * Debug mode: on
 * Running on http://127.0.0.1:5000/ (Press CTRL+C to quit)
 * Restarting with stat
 * Debugger is active!
 * Debugger PIN: 261-026-821

But you might be better off with Docker, discussed next.

## 3. Dockerize

To be fair, I did develop this locally, but my preference is to create a Dockerized version so if/when someone finds it in a few years, they can hopefully reproduce it slightly better than if the Dockerfile wasn't provided. To build the image:

$ docker build -t vanessa/headers .

And then run it, exposing port 5000:

$ docker run --rm -it -p 5000:5000 vanessa/headers

And you can open the same http://localhost:5000 to see the site. Although I didn't run the interface for the GitHub action, we can use the GitHub action to generate data, download it as an artifact, and then export a static interface. These steps are discussed in the last section.

## 4. Parse Pages

My final goal was to generate completely static files for all views of the app. Why do we want to do this? To share on GitHub pages, of course! I wrote a rather ugly, spaghetti-esque script, parse.py, that (given a running server) will extract pages for the cookies, headers, and base sites, and then save them to a static folder "docs" along with a README.md. Once you view the index on GitHub pages, however, you can navigate to pages as you normally would, and this is possible because we added the prefix of the repository (url-headers) to the application.

## 5. GitHub Action

I then took this a step further and made a custom GitHub action, represented in action.yml and run via entrypoint.sh, that handles taking in a user-specified urls file and running the script to generate data. It then saves it as an artifact, and you can download it to your computer to generate the interface. The user of the action can, of course, do anything else they desire with the data outputs. This was the first time I attempted to start a service that used a port on GitHub actions, so you can imagine I ran into some pitfalls, and at the end I realized that I needed to start the flask application as a service container. I decided that 7 hours was long enough to have worked on this, and although it would be great to run the interface and generate static content during the action run, it's best to try again next time. But I can still share some quick things that I learned! Firstly, you can't start a sub-shell and then leave it running in the background. This won't work: $(gunicorn -b "127.0.0.1:${INPUT_PORT}" app:app --pythonpath .) & but something like this might be okay, albeit I'm still not sure we could then access this port: gunicorn -b "127.0.0.1:${INPUT_PORT}" app:app --pythonpath . & For the first, it threw me off a bit because it exited cleanly without a clear error message, and so I thought that it was the previous script that had run. And actually, there was another issue. See that "INPUT_PORT" variable? The container was exposing 5000, and I was running the server on that port, and logically this is what I'd want to do with Docker to map the port to my local machine. But it seems like with GitHub actions, since you would then be connected to the runner's port 5000, this leads to an exit.
Here is the error that resulted:

Starting server...
Traceback (most recent call last):
  File "/usr/local/lib/python3.7/site-packages/urllib3/connection.py", line 160, in _new_conn
    (self._dns_host, self.port), self.timeout, **extra_kw
  File "/usr/local/lib/python3.7/site-packages/urllib3/util/connection.py", line 84, in create_connection
    raise err
  File "/usr/local/lib/python3.7/site-packages/urllib3/util/connection.py", line 74, in create_connection
    sock.connect(sa)
ConnectionRefusedError: [Errno 111] Connection refused

Doh! Anyway, I never got it fully working, and decided to simplify the action to just produce the data, save it as an artifact, and then you can download it to your local machine, start the web server, and generate the interface. There is a bit of a catch-22, because we would need a service container built after the generation of data, and then interacted with from the next step, but the service container has to be defined and started before the step. I'm not sure that GitHub Actions supports this, because most of their examples for services are databases or other tools that don't require customization or even binding to the host. Anyway, this is something to look at again for some next project.

## 6. Investigation

Finally, I wanted to spend a few minutes looking into some of the things that I noticed or otherwise learned. This is probably the most interesting bit! A lot of sites had a p3p header (N=67), but for that set, many of them were invalid! P3p refers to the Platform for Privacy Preferences (note three Ps). It seems that it was developed in 2001, recommended in 2002, but it's since become obsolete. I suspect many sites still provide it. I think the idea was that the site could define this header to say something about their privacy policy, but it seemed to have some issues. So we are seeing a fragment of internet-past!

### X-recruiter

As I mentioned above, I found two sites, booking.com and etsy.com, that had the recruiting header. So, if you are looking for a job, reach out to them via that! I think the issue with this kind of header is that if word gets out that it exists, it becomes a lot less meaningful, because it's not so hard to find. Facebook seems to have a debug id x-fb-debug that I suspect their support team uses to some degree. It's a long hash of lord knows what. x-fb-debug EyXfGc3wcZMW8OHdKDbaweUZB1ih9.....................SHUCHayvdvSC2Gxrg== Their content-security-policy (ref) gives a hint at what resources the site is allowed to load for any particular page. Do any of these listings bother you (I added newlines for readability)? default-src * data: blob: 'self'; *.spotilocal.com:* 'unsafe-inline' 'unsafe-eval' blob: data: 'self';style-src data: blob: 'unsafe-inline' attachment.fbsbx.com ws://localhost:* blob: *.cdninstagram.com 'self' chrome-extension://dliochdbjfkdb... The first chrome extension seems to be this one to do some kind of webcast, and the second I'm not sure about, but it's the last one listed here. What in the world? Yeah, there are chrome extensions in there along with… my localhost? So I suspect Facebook has permission to scan my localhost for something? Even more troubling is "ws://localhost:*".

### Server

I was curious to see what kind of servers the sites (at least reportedly) use. I fwopped off the version strings to get a sense. The winner seems to be nginx, which I'm fairly happy about, because it's definitely my favorite! Apache comes in at a close second, and then providers like cloudflare and google web services (gws?).
{'nginx': 116, 'apache': 67, 'cloudflare': 38, 'gws': 19, 'ats': 13, 'server': 12, 'openresty': 12, 'gse': 11, 'sffe': 10, 'microsoft-iis': 7, 'esf': 5, 'amazons3': 3, 'github.com': 2, 'vk': 2, 'tengine': 2, 'ebay-proxy-server': 2, 'tsa_a': 2, 'akamainetstorage': 2, 'envoy': 2, 'qrator': 2, 'ecd (aga': 2, 'support-content-ui': 1, 'europa': 1, 'bbc-gtm': 1, 'marrakesh 1.16.6': 1, 'dms': 1, 'rhino-core-shield': 1, 'ofe': 1, 'apache tomcat': 1, 'http server (unknown)': 1, 'litespeed': 1, 'am': 1, 'cloudflare-nginx': 1, 'squid': 1, 'httpd': 1, 'dtk10': 1, 'pepyaka': 1, 'oscar platform 0.366.0': 1, 'myracloud': 1, '566': 1, 'nq_website_core-prod-release e1fc279e-1c88-4735-bc26-d1e65243676d': 1, 'nws': 1, 'gunicorn': 1, 'apache-coyote': 1, 'cat factory 1.0': 1, 'ecs (dna': 1, 'ia web server': 1, 'ecacc (dna': 1, 'kestrel': 1, 'api-gateway': 1, 'istio-envoy': 1, 'smart': 1, 'zoom': 1, 'artisanal bits': 1, 'rocket': 1 } But of course this is only a subset of the sites reporting their servers, 386 to be exact: sum(server_counts.values()). What about the "powered by" header? It was only present for 55 of the sites, but I figured I wanted to take a look: { "PHP": 21, "Express": 13, "WordPress": 5, "ASP.NET": 4, "ARR": 2, "Fenrir": 1, "Brightspot": 1, "Element": 1, "Victors": 1, "WP Engine": 1, "shci_v1.13": 1, "Lovestack Edition": 1, "HubSpot": 1, "Nessie": 1 } Wow, that many PHP? And WordPress? What in the world is Lovestack Edition? I was browsing the sites, and realized that a China-based site only had 8 headers! This seemed small compared to the over 25 that I had seen for some. I was then curious to know: which sites have the most headers? I'll show you the top and bottom here. I was a bit surprised that history.com was at the top, because I was expecting some advertising company. :) { "https://history.com": 32, "https://princeton.edu": 31, "https://gizmodo.com": 31, "https://inc.com": 31, "https://wired.com": 30, "https://forbes.com": 29, "https://nature.com": 29, "https://newyorker.com": 29, "https://www.docusign.com": 29, "https://www.fastly.com": 29, "https://istockphoto.com": 28, "https://vox.com": 28, "https://theverge.com": 28, "https://utexas.edu": 28, "https://vimeo.com": 27, "https://nytimes.com": 27, "https://slideshare.net": 27, "https://yelp.com": 27, "https://psychologytoday.com": 27, "https://nokia.com": 27, "https://airbnb.com": 27, "https://upenn.edu": 27, "https://gitlab.com": 27, "https://bbc.com": 26, "https://nih.gov": 26, "https://harvard.edu": 26, "https://yale.edu": 26, "https://oracle.com": 26, "https://unicef.org": 26, "https://usgs.gov": 26, "https://www.docker.com": 26, ... "https://europa.eu": 9, "https://line.me": 9, "https://issuu.com": 9, "https://qq.com": 9, "https://detik.com": 9, "https://washington.edu": 9, "https://rt.com": 9, "https://t.co": 9, "https://nginx.org": 9, "https://4shared.com": 9, "https://iso.org": 9, "https://ucoz.ru": 9, "https://www.discourse.org": 9, "https://archive.org": 8, "https://hatena.ne.jp": 8, "https://amzn.to": 8, "https://rediff.com": 8, "https://sputniknews.com": 8, "https://rakuten.co.jp": 7 } I'd be interested to know if different countries have different rules regarding these headers, and if we could see that in the data. For fun, let's inspect history.com and see what the heck all those headers are. I wasn't super happy to see "aetn-" prefixed headers that seemed to capture my location information.
aetn_backend fastlyshield--shield_cache_bwi5125_BWI
aetn-web-platform webcenter
aetn-watch-platform false
aetn-state-code XXX
aetn-postal-code XXX
aetn-longitude XXX
aetn-latitude XXX
aetn-eu N
aetn-device DESKTOP
aetn-country-name united states
aetn-country-code US
aetn-continent-code NA
aetn-city XXX
aetn-area-code XXX

You see, I didn't explicitly provide anything. I'm not sure what these headers are, because my Google searches failed me. Does anyone know?

#### etag

The etag header tracks a specific version of a resource. This gives us a crazy level of detail if we wanted to reproduce some web scraping thing exactly. Of course, we would run into trouble if the e-tag turned out to mismatch. Even the Wayback Machine can't help us now! docker.com/>; rel='shortlink', docker.com/>; rel='canonical', docker.com/index.html>; rel='revision'

### Overall

I had so much fun with this small exercise! I think it's important for research software engineers to think beyond the code, and consider:

• Have I made this tool easy to reproduce and customize?
• Have I made it easy to automate?
• Can the researcher visualize a result?

For any kind of tool that involves data, although I don't do a ton of work in the space, I'm a strong proponent of the idea that the software should make it easy not only to "run the thing" but also to share and interact with outputs. But I understand, maybe not everyone wants to spend their free time writing Javascript. I appropriately stumbled on a Tweet this weekend that summarizes my 4-day weekend well: I've always hated it when people ask me "What do you do for fun?" because I do the exact same things that I would do when I'm working, albeit the "useless factor" is hugely amped. So that was my weekend. This was a fun exercise, but it made me more anxious about the kind of information being collected in my browser. I hope that there is more discussion around these issues - How can this data be more transparent? What kind of control do we have for these headers anyway? Heck, this is only for a single GET request for a static file. I don't even want to imagine what kind of javascript is being run when I navigate to a site. I suspect all my local ports are being scanned, for servers or otherwise. Oy vey. What can we do?
Journal of the Korean Mathematical Society, Vol. 54, No. 4 (2017), pp. 1063–1356

Bessel multipliers and approximate duals in Hilbert $C^\ast$-modules. Morteza Mirzaee Azandaryani. MSC: 42C15, 46H25, 47A05. pp. 1063–1079.
Global existence and uniform decay of coupled wave equation of Kirchhoff type in a noncylindrical domain. Tae Gab Ha. MSC: 35B40, 35L05. pp. 1081–1097.
On $\phi$-semiprime submodules. Mahdieh Ebrahimpour and Fatemeh Mirzaee. MSC: Primary 13C05; Secondary 13C13. pp. 1099–1108.
Finite $p$-groups whose non-abelian subgroups have the same center. Lifang Wang. MSC: 20D15. pp. 1109–1120.
The probabilistic method meets Go. Graham Farr. MSC: 05C80, 05C57, 05C15, 05C30, 60C05, 91A46. pp. 1121–1148.
Colored permutations with no monochromatic cycles. Dongsu Kim, Jang Soo Kim, and Seunghyun Seo. MSC: 05A05, 05A15, 05A19. pp. 1149–1161.
Hypersurfaces of infinite type with null tangential holomorphic vector fields. Ninh Van Thu. MSC: Primary 32M05; Secondary 32H02, 32H50, 32T25. pp. 1163–1173.
On the uniform convergence of spectral expansions for a spectral problem with a boundary condition rationally depending on the eigenparameter. Sertac Goktas, Nazim B. Kerimov, and Emir A. Maris. MSC: 34B05, 34B24, 34L10, 34L20. pp. 1175–1187.
$L^2$ harmonic forms on gradient shrinking Ricci solitons. Gabjin Yun. MSC: Primary 53C20; 53C25. pp. 1189–1208.
Analytical techniques for system of time fractional nonlinear differential equations. Junesang Choi, Devendra Kumar, Jagdev Singh, and Ram Swroop. MSC: 34A08, 35A20, 35A22. pp. 1209–1229.
Factorization of certain self-maps of product spaces. Sangwoo Jun and Kee Young Lee. MSC: Primary 55Q05; Secondary 55P10. pp. 1231–1242.
Triple symmetric identities for $w$-Catalan polynomials. Dae San Kim and Taekyun Kim. MSC: 11B83, 11S80, 05A19. pp. 1243–1264.
On split Leibniz triple systems. Yan Cao and Liangyun Chen. MSC: 17B75, 17A60, 17B22, 17B65. pp. 1265–1279.
On $\mathcal{S}$-closed submodules. Yılmaz Durğun and Salahattin Özdemir. MSC: 16D40, 18G25. pp. 1281–1299.
Hausdorff dimension of the set concerning with Borel-Bernstein theory in Lüroth expansions. Luming Shen. MSC: Primary 41A45, 11K16, 28A80. pp. 1301–1316.
Linear preservers of symmetric arctic rank over the binary Boolean semiring. LeRoy B. Beasley and Seok-Zun Song. MSC: Primary 15A86, 15A04, 15B34. pp. 1317–1329.
Slant helices in the three-dimensional sphere. Pascual Lucas and José Antonio Ortega-Yagües. MSC: 53B25, 53B20. pp. 1331–1343.
Some arithmetic properties on nonstandard number fields. Junguk Lee. MSC: Primary 03H15; Secondary 11G05, 11U10. pp. 1345–1356.
# fundamental theorem of calculus worksheet with answers

If $\int_2^5 3f(x)\,dx = 17$, find $\int_2^5 f(x)\,dx$. Christine Heitsch, David Kohel, and Julie Mitchell wrote worksheets used for Math 1AM and 1AW during the Fall 1996 semester. Find the value of $G(3)$. Multiple Choice. Do not leave negative exponents or complex fractions in your answers. Fundamental theorem of calculus practice problems. L8 - The Second Fundamental Theorem of Calculus WORKSHEET KEY.notebook 1, April 25, 2018. The following is a list of worksheets and other materials related to Math 122B and 125 at the UA. A biologist uses a model which predicts the population will increase $2t+5$ rabbits per year, where $t$ represents the number of years from today. View HW - 2nd FTC.pdf from MATH 27.04300 at North Gwinnett High School. 1.1 The Fundamental Theorem of Calculus, Part 1: If $f$ is continuous on $[a,b]$ then $F(x) = \int_a^x f(t)\,dt$ … all we need to do is apply the fundamental theorem to each piece. Find $\int s^4\,ds$. If the ball travels 25 meters during the first 2 seconds after it is thrown, what was the initial speed of the ball? $\int_a^b f(x)\,dx = F(b) - F(a)$ is the total change in $F$ from $a$ to $b$. CALCULUS AB WORKSHEET ON SECOND FUNDAMENTAL THEOREM AND REVIEW: Work the following on notebook paper. Your instructor might use some of these in class. Solution: We start by running partial fraction decomposition on the integrand. Answer. Link to worksheets used in this section: ftc notes day 1 - formula (87 KB); FTC NOTES (58 KB); Quiz - Integration Review Sheet Updated (37 KB); quiz-integration review sheet answer key (56 KB). The definition of the definite integral $\int_a^b f(x)\,dx$ is $$\int_a^b f(x)\,dx = \lim_{n\to\infty}\sum_{i=1}^{n} f(x_i)\,\Delta x.$$ Answer the following questions about this definition. EK 3.3A1, EK 3.3A2, EK 3.3B1, EK 3.5A4. *AP® is a trademark registered and owned by the College Board, which was not involved in the production of, and does not endorse, this site. MTH 207 Review: Fundamental Theorem of Calculus, Worksheet: KEY Exercise. The Fundamental Theorem of Calculus, Part 1 shows the relationship between the derivative and the integral. Definite Integrals: We can use the Fundamental Theorem of Calculus, Part 1 to evaluate definite integrals. In the last section we defined the definite integral, $\int_a^b f(t)\,dt$, the signed area under the curve $y = f(t)$ from $t=a$ to $t=b$, as the limit of the area found by approximating the region with thinner and thinner rectangles. Explain your answer. See Note. Some of the worksheets displayed are: Fundamental theorem of calculus date period; Math 101 work 4 the fundamental theorem of calculus; Work 29 the fundamental of calculus; Work the fundamental theorem of calculus multiple; The fundamental theorem of calculus… CALCULUS WORKSHEET 2 ON FUNDAMENTAL THEOREM OF CALCULUS: Use your calculator on problems 3, 8, and 13. The total area under a curve can be found using this formula. CALCULUS. Name: ___ Per: ___. WORKSHEET ON 1st FUNDAMENTAL THEOREM OF CALCULUS: Work the following on notebook paper. The 2006–2007 AP Calculus Course Description includes the following item: Fundamental Theorem of Calculus • Use of the Fundamental Theorem to evaluate definite integrals.

Students work 12 Fundamental Theorem of Calculus problems, sum their answers, and then check their sum by scanning a QR code (there is a low-tech option that does not require a QR code). This works with distance learning, as you can send the pdf to the students and they can do it on their own and check. Find the derivative. The Fundamental Theorem of Calculus shows that differentiation and integration are inverse processes. The speed of the ball in meters per second is $v(t) = 9.8t + v_0$, where $t$ denotes the number of seconds since the ball has been thrown and $v_0$ is the initial speed of the ball (also in meters per second). An antiderivative of $f$ is $F(x) = x^3$, so the theorem says $$\int_1^5 3x^2\,dx = x^3\Big|_1^5 = 5^3 - 1^3 = 124.$$ We now have an easier way to work Examples 36.2.1 and 36.2.2. We suggest that the presenter not spend time going over the reference sheet, but point it out to students so that they may refer to it if needed. A ball is thrown at the ground from the top of a tall building. Solution: We use part (ii) of the fundamental theorem of calculus with $f(x) = 3x^2$. Print "Using the Fundamental Theorem of Calculus to Show Antiderivatives", Worksheet 1. Consider the function $f(t) = t$. For any value of $x > 0$, I can calculate the definite integral $\int_0^x f(t)\,dt = \int_0^x t\,dt$ by finding the area under the curve. The chapter headings refer to Calculus, Sixth Edition by Hughes-Hallett et al. There are currently 400 rabbits living on the island. Fundamental Theorem of Calculus Student Session, Presenter Notes: this session includes a reference sheet at the back of the packet. Questions and Answers on Derivatives in Calculus. Published by Wiley. The Fundamental Theorem of Calculus, Part 2 is a formula for evaluating a definite integral in terms of an antiderivative of its integrand. FTC Part 3, Worksheet 16: Guessing Anti-Derivatives involving Constants, Definite Integrals. The single most important tool used to evaluate integrals is called "The Fundamental Theorem of Calculus". Evaluate without using a calculator. Answers for preview activities are not included. basic_integratin_and_review_for_reimann_test.pdf (File Size: 66 kb, File Type: pdf). You may also use any of these materials for practice. Thus, the two parts of the fundamental theorem of calculus say that differentiation and integration are inverse processes. No calculator unless otherwise stated. Find the value of $G''(-5)$. Understand and use the Mean Value Theorem for Integrals. The topics include elementary functions, limits, differential calculus, and integral calculus. 4.4 The Fundamental Theorem of Calculus: Evaluate a definite integral using the Fundamental Theorem of Calculus. Graph of $f$. No calculator. Worksheet #25: The Fundamental Theorem of Calculus, Part 1. Questions on the two fundamental theorems of calculus are presented. Fundamental Theorems of Calculus. No calculator. Answer. Subsection 4.4.2, Basic antiderivatives, Activity 4.4.3. Proof of the Fundamental Theorem: We will now give a complete proof of the fundamental theorem of calculus.

EK 3.1A1, EK 3.3B2. The area under the graph of the function $f(x)$ between the vertical lines $x = a$ and $x = b$ (Figure 2) is given by the formula $\int_a^b f(x)\,dx$. This appendix contains answers to all activities in the text. 37.2.3 Example. (a) Find $\int_0^6 (x^2 + 1)\,dx$. L8 - The Second Fundamental Theorem of Calculus WORKSHEET KEY.notebook 2, April 25. Use the graph to answer question 12. Section 7.2 The Fundamental Theorem of Calculus. Find the average value of a function over a closed interval. Worksheet 4.3, The Fundamental Theorem of Calculus: Show all work. Worksheet 6 - Fundamental Theorem of Calculus, Definite Integral, Indefinite Integral. This lesson contains the following Essential Knowledge (EK) concepts for the AP Calculus course. Computation and Properties of the Derivative in Calculus. • Use of the Fundamental Theorem to represent a particular antiderivative, and the analytical and graphical analysis of … If you're seeing this message, it means we're having trouble loading external resources on our website. Theorem: The second fundamental theorem of calculus states that if $f$ is a continuous function on an interval $I$ containing $a$ and $F(x) = \int_a^x f(t)\,dt$, then $F'(x) = f(x)$ for each value of $x$ in the interval $I$. This worksheet does not cover improper integration. The material was further updated by Zeph Grunschlag. CALCULUS. Name: ___ Per: ___. WORKSHEET ON SECOND FUNDAMENTAL THEOREM: Work the following on notebook paper. View HW - 1st FTC.pdf from MATH 27.04300 at North Gwinnett High School. (Calculator Permitted) What is the average value of $f(x) = x\cos x$ on the interval $[1,5]$? (A) 0.990 (B) 0.450 (C) 0.128 (D) 0.412 (E) 0.998. Find an equation of the tangent line to the curve $y = F(x)$ at the point with x-coordinate 2. If $f'$ is continuous, $\int_1^4 f'(x)\,dx = 12$, and $f(1) = 17$, what is the value of $f(4)$? Question 1: Approximate $F'(\pi/2)$ to 3 decimal places if $F(x) = \int_3^x \sin(t^2)\,dt$. Solution to Question 1: Exercises 1. Chapter 1, Understanding the Derivative … Subsection 4.4.1, The Fundamental Theorem of Calculus, Activity 4.4.2. Example: Compute $\int_0^2 \frac{16}{(x-3)^2(x+1)}\,dx$.
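None of the scraped worksheets above include code, but a short sympy sketch (my addition, not part of any of the quoted worksheets) can verify a few of the integrals they work out:

```python
import sympy as sp

x = sp.symbols("x")

# From the worked example: ∫_1^5 3x² dx = 5³ − 1³ = 124
print(sp.integrate(3 * x**2, (x, 1, 5)))   # 124

# From Example 37.2.3(a): ∫_0^6 (x² + 1) dx
print(sp.integrate(x**2 + 1, (x, 0, 6)))   # 78

# The partial-fractions example: ∫_0^2 16/((x−3)²(x+1)) dx
print(sp.integrate(16 / ((x - 3)**2 * (x + 1)), (x, 0, 2)))  # 8/3 + 2*log(3) ≈ 4.864
```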
Using the Fundamental Theorem of Calculus, interpret the integral $\int \vec v\,dt = \int f(t)\,dt$.

INTRODUCTION. The Fundamental Theorem of Calculus converts any table of derivatives into a table of integrals and vice versa. Its two parts say that differentiation and integration are inverse processes: Part 1 shows the relationship between the derivative and the integral, and Part 2 is a formula for evaluating a definite integral in terms of an antiderivative of its integrand, so the total area under a curve (and between two curves) can be found this way. The single most important tool used to evaluate integrals is called "the Fundamental Theorem of Calculus"; where an antiderivative is not obvious, the Mean Value Theorem for Integrals also applies to functions continuous over a closed interval.

The material below is drawn from several worksheets on definite integrals and the Fundamental Theorem of Calculus: Math 221 Worksheet 21 (Wednesday, November 4th, 2020); the booklet of worksheets for Math 1A, U.C. Berkeley (Christine Heitsch, David Kohel, and Julie Mitchell wrote worksheets used for Math 1AM and 1AW during the Fall 1996 and Fall 1997 semesters, and David Jones revised the material for the Fall 1996 semester; the chapter headings refer to Calculus, Sixth Edition, by Hughes-Hallett et al.); an AP Calculus AB worksheet on the Second Fundamental Theorem of Calculus (HW - 1st FTC.pdf, Gwinnett High School; lesson L8 covers the Second Fundamental Theorem); Worksheet 16 (Guessing Anti-Derivatives involving Constants, Definite Integrals); a Calculus Worksheet KEY notebook dated April 25, 2018; and a list of worksheets and other materials related to Math 122B and 125 at the UA. A calculus study guide provides an organized list of important topics and a few examples with answers. A course log notes: today we completed the FTC notes (Fundamental Theorem of Calculus), took the Midterm Redo MC, and gave out the review sheet for the Integration Quiz. An instructor might use some of these materials for practice in class. Instructions common to these sheets: work on notebook paper, use your calculator only on problems 3, 8, and 13, and do not leave negative exponents or complex fractions in your answers.

Sample problems:

1. Assume that $f$ is a continuous function. Find the derivative of $g(x) = \int_{\log_3 x}^{x^6} \sqrt{1+\cos t}\,dt$ with respect to $x$.
2. Find $\int_0^6 (x^2 + 1)\,dx$.
3. Find $\int (t^4 + t^{917})\,dt$.
4. (Calculator permitted) What is the average value of $f(x) = x\cos x$ on the interval $[1,5]$? (A) 0.990 (B) 0.450 (C) 0.128 (D) 0.412 (E) 0.998
5. Evaluate $\int_0^2 \frac{16}{(x-3)^2(x+1)}\,dx$. We start by running partial fraction decomposition on the integrand.
6. Given $\int_2^5 3f(x)\,dx = 17$, find $\int_2^5 f(x)\,dx$.
7. The ball travels 25 meters during the first 2 seconds after it is thrown. What was the initial speed of the ball, in meters per second?
8. A biologist uses a model which predicts that the population will increase by $2t + 5$ rabbits per year, where $t$ represents the number of years from today.
9. Find an equation of the tangent line to the curve $y = f(x)$ at the point with $x$-coordinate 2.
10. Find the value of $g''(-5)$.
11. Use the Fundamental Theorem of Calculus, Part 1, to evaluate each definite integral.
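Problem 1 above is a direct application of FTC Part 1 combined with the chain rule at both limits of integration; a worked sketch:

$$g(x)=\int_{\log_3 x}^{x^6}\sqrt{1+\cos t}\,dt
\quad\Longrightarrow\quad
g'(x)=\sqrt{1+\cos\!\left(x^{6}\right)}\cdot 6x^{5}-\sqrt{1+\cos\!\left(\log_{3}x\right)}\cdot\frac{1}{x\ln 3},$$

since $\frac{d}{dx}x^6 = 6x^5$ and $\frac{d}{dx}\log_3 x = \frac{1}{x\ln 3}$.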
MINKOWSKI DISTANCE

The Minkowski distance is a metric in a normed vector space — a space where each point has been run through a norm function — and is the general form of the Euclidean and Manhattan distances. For two points $X = (x_1,\dots,x_n)$ and $Y = (y_1,\dots,y_n)$ it is defined as

$$D(X,Y) = \left( \sum_{i=1}^{n} |x_i - y_i|^p \right)^{1/p}.$$

Although the distance is defined for any order $p > 0$, it is rarely used for values other than 1, 2, and $\infty$; for $p < 1$ the formula above does not define a valid distance metric, since the triangle inequality is not satisfied. Although $p$ can be any real value, it is typically set to a value between 1 and 2. Some implementations compute the distance between two numeric series, in which case the two series must have the same length and $p$ must be a positive integer. Different names arise from the order (also written $\lambda$):

- $p = 1$ is the Manhattan distance (synonym $L_1$): the sum of the absolute differences of all coordinates, i.e. the distance between two points measured along a grid-like path. Instead of the hypotenuse of the right-angled triangle that is computed for the straight-line distance, this formula simply adds the two sides that form the right angle.
- $p = 2$ is the Euclidean distance ($L_2$): you take the sum of the squared coordinate differences, then the square root.
- $p = \infty$ is the Chebyshev distance ($L_\infty$), a special case of the Minkowski distance obtained by taking a limit. Since infinity cannot be represented in computer arithmetic, the metric is transformed for $\lambda = \infty$ into $D(X,Y) = \max_i |x_i - y_i|$; in easier words, it returns the distance along that axis on which the two objects show the greatest absolute difference.

(Figure: an unfolded cube showing the way the different orders of the Minkowski metric measure the distance between two objects with three variables, displayed in a coordinate system with x-, y-, and z-axes.)

Characteristic for the Minkowski distance is that it represents the absolute distance between objects independently of their distance to the origin. This is contrary to several other distance or similarity/dissimilarity measurements; the cosine index, for example, determines the cosine of the angle between two vectors. The Minkowski distance is often used when variables are measured on ratio scales with an absolute zero value, and it can be used for both ordinal and quantitative variables. Variables with a wider range can overpower the result, and even a few outliers with high values bias the result and disregard the alikeness given by a couple of variables with a lower upper bound. A typical application is the machine-learning k-means algorithm, where a distance is required before a candidate clustering point is moved to the "central" point.

Worked examples from the source data: the Minkowski distance between vector b and vector d is 6.54, and between vector c and vector d it is 10.61. The routine

Function dist_Minkowski(InputMatrix: t2dVariantArrayDouble; MinkowskiOrder: Double; var OutputMatrix: t2dVariantArrayDouble): Boolean;

computes a matrix of pairwise distance values. It first checks whether the input data matrix is rectangular; if not, it returns FALSE and a defined but empty output matrix. Otherwise it sets the dimensions of the respective output arrays and the titles for the rows and columns, then calculates the Minkowski distance of the respective order pairwise between the rows. As the result is a square matrix mirrored along the diagonal, only the values for one triangular half and the diagonal are computed (only the lower triangle of the matrix is used; the rest is ignored). For a data matrix aInputMatrix of type t2dVariantArrayDouble, dist_Minkowski(aInputMatrix, 1, aOutputMatrix) returns the Minkowski matrix of the first order (Manhattan) in aOutputMatrix, and dist_Minkowski(aInputMatrix, 2, aOutputMatrix) returns the matrix of the second order (Euclidean). Thus, for the example data, the distance between the objects Case1 and Case3 is the same as between Case4 and Case5 when investigated by the Minkowski metric. (In the NIST Dataplot MINKOWSKI DISTANCE command, the order p is specified by a command entered beforehand; if p is not specified, a default value of p = 1 is used.)

In mathematical analysis, the Minkowski inequality establishes that the $L^p$ spaces are normed vector spaces: let $S$ be a measure space, let $1 \le p < \infty$, and let $f$ and $g$ be elements of $L^p(S)$. Then $f + g$ is in $L^p(S)$, and we have the triangle inequality $\|f+g\|_p \le \|f\|_p + \|g\|_p$, with equality for $1 < p < \infty$ if and only if $f$ and $g$ are positively linearly dependent.

The name also appears elsewhere. In special relativity, Minkowski spacetime is a four-dimensional manifold created by Hermann Minkowski, with three dimensions of space (x, y, z) and one dimension of time; it has metric signature (−+++) and describes a flat surface when no mass is present (compare the curved Schwarzschild spacetime). In differential geometry, formula (1.4) of the quoted source can be viewed as a spacetime version of the Minkowski formula (1.1) with k = 1. And statistical Minkowski distances between Gaussian mixture models admit closed-form formulas when parameterized by integer exponents: these distances between mixtures are obtained from multinomial expansions and written by means of weighted sums of inverse exponentials of generalized Jensen divergences.

Reference: Kruskal, J.B. (1964): Multidimensional scaling by optimizing goodness of fit to a nonmetric hypothesis. Psychometrika 29(1):1–27. (The Minkowski distance in Kruskal's sense is a generalised metric that includes others as special cases of the generalised form.)
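A minimal sketch of the computation described above, written here in Haskell (this is an illustration of the same idea, not the Pascal-style dist_Minkowski routine itself): minkowski p computes the order-p distance between two coordinate lists, and chebyshev implements the λ = ∞ limit as the greatest absolute coordinate difference.

```haskell
-- Minkowski distance of order p between two points given as coordinate lists.
minkowski :: Double -> [Double] -> [Double] -> Double
minkowski p xs ys = sum [abs (x - y) ** p | (x, y) <- zip xs ys] ** (1 / p)

-- The p -> infinity limit: the greatest absolute difference along any axis.
chebyshev :: [Double] -> [Double] -> Double
chebyshev xs ys = maximum [abs (x - y) | (x, y) <- zip xs ys]

main :: IO ()
main = do
  let b = [1, 2, 3]; d = [4, 0, 8]
  print (minkowski 1 b d)  -- Manhattan: 3 + 2 + 5 = 10.0
  print (minkowski 2 b d)  -- Euclidean: sqrt 38, about 6.16
  print (chebyshev b d)    -- Chebyshev: 5.0
```

Note how the order-1 and order-2 results bracket the Chebyshev value: increasing p shifts weight toward the single largest coordinate difference.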
# unliftio: The MonadUnliftIO typeclass for unlifting monads to IO (batteries included)

# unliftio

Provides the core MonadUnliftIO typeclass, a number of common instances, and a collection of common functions working with it. Not sure what the MonadUnliftIO typeclass is all about? Read on!

NOTE This library is young, and will likely undergo some serious changes over time. It's also very lightly tested. That said: the core concept of MonadUnliftIO has been refined for years and is pretty solid, and even though the code here is lightly tested, the vast majority of it is simply applying withUnliftIO to existing functionality. Caveat emptor and all that.

NOTE The UnliftIO.Exception module in this library changes the semantics of asynchronous exceptions to be in the style of the safe-exceptions package, which is orthogonal to the "unlifting" concept. While this change is an improvement in most cases, it means that UnliftIO.Exception is not always a drop-in replacement for Control.Exception in advanced exception handling code. See Async exception safety for details.

## Quickstart

• Replace imports like Control.Exception with UnliftIO.Exception. Yay, your catch and finally are more powerful and safer (see Async exception safety)!
• Similarly, replace Control.Concurrent.Async with UnliftIO.Async
• Or go all in and import UnliftIO
• Naming conflicts: let unliftio win
• Drop the deps on monad-control, lifted-base, and exceptions
• Compilation failures? You may have just avoided subtle runtime bugs

Sound like magic? It's not. Keep reading!

## Unlifting in 2 minutes

Let's say I have a function:

```haskell
readFile :: FilePath -> IO ByteString
```

But I'm writing code inside a function that uses ReaderT Env IO, not just plain IO. How can I call my readFile function in that context? One way is to manually unwrap the ReaderT data constructor:

```haskell
myReadFile :: FilePath -> ReaderT Env IO ByteString
myReadFile fp = ReaderT $ \_env -> readFile fp
```

But having to do this regularly is tedious, and ties our code to a specific monad transformer stack. Instead, many of us would use MonadIO:

```haskell
myReadFile :: MonadIO m => FilePath -> m ByteString
myReadFile = liftIO . readFile
```
But now let's play with a different function:

```haskell
withBinaryFile :: FilePath -> IOMode -> (Handle -> IO a) -> IO a
```

We want a function with signature:

```haskell
myWithBinaryFile :: FilePath -> IOMode -> (Handle -> ReaderT Env IO a) -> ReaderT Env IO a
```

If I squint hard enough, I can accomplish this directly with the ReaderT constructor via:

```haskell
myWithBinaryFile fp mode inner =
  ReaderT $ \env -> withBinaryFile fp mode (\h -> runReaderT (inner h) env)
```

I dare you to try and accomplish this with MonadIO and liftIO. It simply can't be done. (If you're looking for the technical reason, it's because IO appears in negative/argument position in withBinaryFile.)

However, with MonadUnliftIO, this is possible:

```haskell
import Control.Monad.IO.Unlift

myWithBinaryFile
  :: MonadUnliftIO m
  => FilePath -> IOMode -> (Handle -> m a) -> m a
myWithBinaryFile fp mode inner =
  withRunInIO $ \runInIO -> withBinaryFile fp mode (\h -> runInIO (inner h))
```

That's it, you now know the entire basis of this library.

## How common is this problem?

This pops up in a number of places. Some examples:

• Proper exception handling, with functions like bracket, catch, and finally
• Working with MVars via modifyMVar and similar
• Using the timeout function
• Installing callback handlers (e.g., do you want to do logging in a signal handler?).

This also pops up when working with libraries which are monomorphic on IO, even if they could be written more extensibly.

## Examples

Reading through the codebase here is likely the best example to see how to use MonadUnliftIO in practice. And for many cases, you can simply add the MonadUnliftIO constraint and then use the pre-unlifted versions of functions (like UnliftIO.Exception.catch). But ultimately, you'll probably want to use the typeclass directly. The type class has only one method -- withRunInIO:

```haskell
class MonadIO m => MonadUnliftIO m where
  withRunInIO :: ((forall a. m a -> IO a) -> IO b) -> m b
```

withRunInIO provides a function to run arbitrary computations in m in IO. Thus the "unlift": it's like liftIO, but the other way around.

Here are some sample typeclass instances:

```haskell
instance MonadUnliftIO IO where
  withRunInIO inner = inner id

instance MonadUnliftIO m => MonadUnliftIO (ReaderT r m) where
  withRunInIO inner =
    ReaderT $ \r ->
    withRunInIO $ \run ->
    inner (run . flip runReaderT r)

instance MonadUnliftIO m => MonadUnliftIO (IdentityT m) where
  withRunInIO inner =
    IdentityT $
    withRunInIO $ \run ->
    inner (run . runIdentityT)
```

Note that:

• The IO instance does not actually do any lifting or unlifting, and therefore it can use id
• IdentityT is essentially just wrapping/unwrapping its data constructor, and then recursively calling withRunInIO on the underlying monad.
• ReaderT is just like IdentityT, but it captures the reader environment when starting.

We can use withRunInIO to unlift a function:

```haskell
timeout :: MonadUnliftIO m => Int -> m a -> m (Maybe a)
timeout x y = withRunInIO $ \run -> System.Timeout.timeout x $ run y
```

This is a common pattern: use withRunInIO to capture a run function, and then call the original function with the user-supplied arguments, applying run as necessary. withRunInIO takes care of invoking unliftIO for us.

We can also use the run function with different types due to withRunInIO being higher-rank polymorphic:

```haskell
race :: MonadUnliftIO m => m a -> m b -> m (Either a b)
race a b = withRunInIO $ \run -> A.race (run a) (run b)
```

And finally, a more complex usage, when unlifting the mask function. This function needs to unlift values to be passed into the restore function, and then liftIO the result of the restore function.
```haskell
mask :: MonadUnliftIO m => ((forall a. m a -> m a) -> m b) -> m b
mask f = withRunInIO $ \run ->
  Control.Exception.mask $ \restore ->
    run $ f $ liftIO . restore . run
```

## Limitations

Not all monads which can be an instance of MonadIO can be instances of MonadUnliftIO, due to the MonadUnliftIO laws (described in the Haddocks for the typeclass). This prevents instances for a number of classes of transformers:

• Transformers using continuations (e.g., ContT, ConduitM, Pipe)
• Transformers with some monadic state (e.g., StateT, WriterT)
• Transformers with multiple exit points (e.g., ExceptT and its ilk)

In fact, there are two specific classes of transformers that this approach does work for:

• Transformers with no context at all (e.g., IdentityT, NoLoggingT)
• Transformers with a context but no state (e.g., ReaderT, LoggingT)

This may sound restrictive, but this restriction is fully intentional. Trying to unlift actions in stateful monads leads to unpredictable behavior. For a long and exhaustive example of this, see A Tale of Two Brackets, which was a large motivation for writing this library.

## Comparison to other approaches

You may be thinking "Haven't I seen a way to do catch in StateT?" You almost certainly have. Let's compare this approach with alternatives. (For an older but more thorough rundown of the options, see Exceptions and monad transformers.)

There are really two approaches to this problem:

• Use a set of typeclasses for the specific functionality we care about. This is the approach taken by the exceptions package with MonadThrow, MonadCatch, and MonadMask. (Earlier approaches include MonadCatchIO-mtl and MonadCatchIO-transformers.)
• Define a generic typeclass that allows any control structure to be unlifted. This is the approach taken by the monad-control package. (Earlier approaches include monad-peel and neither.)

The first style gives extra functionality in allowing instances that have nothing to do with runtime exceptions (e.g., a MonadCatch instance for Either). This is arguably a good thing. The second style gives extra functionality in allowing more operations to be unlifted (like threading primitives, not supported by the exceptions package).

Another distinction within the generic typeclass family is whether we unlift to just IO, or to arbitrary base monads. For those familiar, this is the distinction between the MonadIO and MonadBase typeclasses.

This package's main objection to all of the above approaches is that they work for too many monads, and provide difficult-to-predict behavior for a number of them (arguably: plain wrong behavior). For example, in lifted-base (built on top of monad-control), the finally operation will discard mutated state coming from the cleanup action, which is usually not what people expect. exceptions has different behavior here, which is arguably better. But we're arguing here that we should disallow all such ambiguity at the type level.

So comparing to other approaches:

### monad-unlift

Throwing this one out there now: the monad-unlift library is built on top of monad-control, and uses fairly sophisticated type level features to restrict it to only the safe subset of monads. The same approach is taken by Control.Concurrent.Async.Lifted.Safe in the lifted-async package. Two problems with this:

• The complicated type level functionality can confuse GHC in some cases, making it difficult to get code to compile.
• We don't have an ecosystem of functions like lifted-base built on top of it, making it likely people will revert to the less safe cousin functions.
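The state-discard behavior called out above is easy to demonstrate by hand. Below is a sketch (not lifted-base itself) of what any naive unlifting of finally for StateT must do: run the cleanup against some snapshot of the state and throw the state it produces away.

```haskell
import Control.Monad.Trans.State
import qualified Control.Exception as E

-- A hand-rolled "finally" for StateT over IO, the way a naive unlifting
-- would write it: the cleanup runs against the original state snapshot,
-- and whatever state it produces is discarded.
finallyStateT :: StateT s IO a -> StateT s IO b -> StateT s IO a
finallyStateT action cleanup = StateT $ \s0 ->
  E.finally
    (runStateT action s0)
    (runStateT cleanup s0)  -- this (+10) below never survives

main :: IO ()
main = do
  (_, s) <- runStateT (finallyStateT (modify (+ 1)) (modify (+ 10))) (0 :: Int)
  print s  -- prints 1, not 11: the cleanup's state change was thrown away
```

There is no non-arbitrary choice here: the cleanup could equally run against the action's final state, and different libraries picked differently, which is exactly the ambiguity MonadUnliftIO rules out at the type level.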
### monad-control

The main contention until now is that unlifting in a transformer like StateT is unsafe. This is not universally true: if only one action is being unlifted, no ambiguity exists. So, for example, try :: IO a -> IO (Either e a) can safely be unlifted in StateT, while finally :: IO a -> IO b -> IO a cannot. monad-control allows us to unlift both styles. In theory, we could write a variant of lifted-base that never does state discards, and let try be more general than finally. In other words, this is an advantage of monad-control over MonadUnliftIO. We've avoided providing any such extra typeclass in this package though, for two reasons:

• MonadUnliftIO is a simple typeclass, easy to explain. We don't want to complicate matters (MonadBaseControl is a notoriously difficult-to-understand typeclass). This simplicity is captured by the laws for MonadUnliftIO, which make the behavior of the run functions close to that of the already familiar lift and liftIO.
• Having this kind of split would be confusing in user code, when suddenly finally is not available to us. We would rather encourage good practices from the beginning.

Another distinction is that monad-control uses the MonadBase style, allowing unlifting to arbitrary base monads. In this package, we've elected to go with MonadIO style. This limits what we can do (e.g., no unlifting to STM), but we went this way because:

• In practice, we've found that the vast majority of cases are dealing with IO
• The split in the ecosystem between constraints like MonadBase IO and MonadIO leads to significant confusion, and MonadIO is by far the more common constraint (with the typeclass existing in base)

### exceptions

One thing we lose by leaving the exceptions approach is the ability to model both pure and side-effecting (via IO) monads with a single paradigm. For example, it can be pretty convenient to have MonadThrow constraints for parsing functions, which will either return an Either value or throw a runtime exception. That said, there are detractors of that approach:

• You lose type information about which exception was thrown
• There is ambiguity about how the exception was returned in a constraint like (MonadIO m, MonadThrow m)

The latter could be addressed by defining a law such as throwM = liftIO . throwIO. However, we've decided in this library to go the route of encouraging Either return values for pure functions, and using runtime exceptions in IO otherwise. (You're of course free to also return IO (Either e a).)

By losing MonadCatch, we lose the ability to define a generic way to catch exceptions in continuation based monads (such as ConduitM). Our argument here is that those monads can freely provide their own catching functions. And in practice, long before the MonadCatch typeclass existed, conduit provided a catchC function.

In exchange for the MonadThrow typeclass, we provide helper functions to convert Either values to runtime exceptions in this package. And the MonadMask typeclass is now replaced fully by MonadUnliftIO, which like the monad-control case limits which monads we can be working with.
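Usage of those Either-to-exception helpers looks roughly like this sketch (it assumes the helper is named fromEither, as in current versions of UnliftIO.Exception; ParseError and parsePort are made-up names for illustration):

```haskell
import UnliftIO.Exception

-- A made-up error type for the example; any Exception instance works.
data ParseError = ParseError String
  deriving Show
instance Exception ParseError

-- A pure function signalling failure with Either, as the text recommends.
parsePort :: String -> Either ParseError Int
parsePort s = case reads s of
  [(n, "")] -> Right n
  _         -> Left (ParseError ("bad port: " ++ s))

main :: IO ()
main = do
  port <- fromEither (parsePort "8080")  -- Left becomes a thrown ParseError
  print port
```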
## Async exception safety

The safe-exceptions package builds on top of the exceptions package and provides intelligent behavior for dealing with asynchronous exceptions, a common pitfall. This library provides a set of exception handling functions with the same async exception behavior as that library. You can consider this library a drop-in replacement for safe-exceptions. In the future, we may reimplement safe-exceptions to use MonadUnliftIO instead of MonadCatch and MonadMask.

## Package split

The unliftio-core package provides just the typeclass with minimal dependencies (just base and transformers). If you're writing a library, we recommend depending on that package to provide your instances (a minimal example instance appears at the end of this README). The unliftio package is a "batteries loaded" library providing a plethora of pre-unlifted helper functions. It's a good choice for importing, or even for use in a custom prelude.

## Orphans

The unliftio package currently provides orphan instances for types from the resourcet and monad-logger packages. This is not intended as a long-term solution; once unliftio is deemed more stable, the plan is to move those instances into the respective libraries and remove the dependency on them here.

If there are other temporary orphans that should be added, please bring it up in the issue tracker or send a PR, but we'll need to be selective about adding dependencies.

## Future questions

• Should we extend the set of functions exposed in UnliftIO.IO to include things like hSeek?
• Are there other libraries that deserve to be unlifted here?
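Following the package-split advice, a library-side instance on top of unliftio-core might look like this sketch (Env and AppT are hypothetical names for illustration):

```haskell
{-# LANGUAGE GeneralizedNewtypeDeriving #-}
import Control.Monad.IO.Class (MonadIO)
import Control.Monad.IO.Unlift (MonadUnliftIO (..))
import Control.Monad.Trans.Reader (ReaderT)

data Env = Env  -- hypothetical application environment

newtype AppT m a = AppT { unAppT :: ReaderT Env m a }
  deriving (Functor, Applicative, Monad, MonadIO)

-- Delegate to the underlying ReaderT instance, wrapping and unwrapping AppT.
instance MonadUnliftIO m => MonadUnliftIO (AppT m) where
  withRunInIO inner =
    AppT $ withRunInIO $ \run -> inner (\m -> run (unAppT m))
```

Because AppT is ReaderT-shaped (context but no state), the instance satisfies the MonadUnliftIO laws for free; the same recipe fails for a StateT-shaped newtype, by design.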
# Math Help - Linear Programming Constraints

1. ## Linear Programming Constraints

Hey guys,

Just wondering if someone could give me a hand with this. I'm unable to work out the constraints in the problem. Anyone have an idea how to proceed? Cheers

The transport authority in a major city has promoted the use of buses and trains to those who will attend sporting events at a new stadium complex built for a world championship. While this is desirable to avoid traffic problems, it also offers the transport authority an opportunity to gain revenue from public transport. A maximum of 170,000 people are expected to arrive at the stadium complex in any one day, but at least 36,000 will use private transport. The number of buses that can arrive in one hour is 60, each with a capacity of 70 people. The number of trains that can arrive per hour is 20, each carrying up to 500 people. Buses and trains arrive at the stadium complex for 10 hours per day. The number of people travelling by bus is at least 25% of the number coming by train. Bus tickets cost $6 and train tickets cost $4 per person. The transport authority wants to maximise revenue.

(i) Set up the objective function and all constraints for this problem, clearly identifying the variables used.

2. ## Re: Linear Programming Constraints

Have you at least given a symbol to each variable, stated what the variables represent, and gotten the objective function?

3. ## Re: Linear Programming Constraints

No, for some reason I'm struggling to decipher the required info from all this text. That's just what I'm after, the objective function and the constraints, but I cannot seem to figure it out. Any help would be greatly appreciated.

4. ## Re: Linear Programming Constraints

So you are unable even to assign letters to each quantity? Why is that? Have you ever taken a basic algebra course?

5. ## Re: Linear Programming Constraints

I'm obviously able to assign letters to each variable.. It's mainly the constraints throwing me..

6. ## Re: Linear Programming Constraints

Hey guys, any help would really be appreciated. I'm unable to understand exactly what they are after. This is from a previous exam which I do not have the answers to. Thanks.

8. ## Re: Linear Programming Constraints

So..
x = number of trains, y = number of buses
36000 < 70y + 500x < 170000 (< or =)
y < 600
x < 200
y > 0.25x

Don't think these are right to be honest.. bump
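For comparison, one reasonable formulation, sketched with $x$ = number of people arriving by train per day and $y$ = number of people arriving by bus per day (the capacity numbers come straight from the problem statement; double-check the reading of the 25% condition against your course's conventions):

$$\begin{aligned}
\text{maximise } & R = 4x + 6y \\
\text{subject to } & x + y \le 170000 - 36000 = 134000 \\
& x \le 20 \times 500 \times 10 = 100000 \\
& y \le 60 \times 70 \times 10 = 42000 \\
& y \ge 0.25x, \qquad x, y \ge 0.
\end{aligned}$$

The key step is working in people per day rather than in vehicles: the hourly vehicle counts only matter through the daily passenger capacities they imply.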
# OpenOffice formula into math.stackexchange post

I have a question to ask and can't cut and paste the many formulas into this post. Is there any way to do it without having to learn MathJax?

• I don't quite understand what you want to happen. Do you want to copy formulae from here into OO, or do you want to write formulae here and you just don't know how to $\TeX$? – J. M. is a poor mathematician Feb 11 '12 at 2:50
• @J.M. My assumption would be that OOffice has some kind of GUI for entering formulas, and he would like to copy formulas from there instead of just typing them directly into the question box here. – Henning Makholm Feb 11 '12 at 4:06
• I have the formula in OO and want to get it here. OO has a formula GUI built into it, and Writer places it in the document as an object. I don't know TeX. – Travis Feb 14 '12 at 10:59
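For anyone landing here: OpenOffice Math markup and MathJax/LaTeX are close cousins, and the translation is largely mechanical. A small illustration (the OO markup is written from memory — verify against your installation):

OpenOffice Math markup:  int from {0} to {1} {x^2} dx
MathJax/LaTeX:           $\int_{0}^{1} x^{2} \, dx$

So one low-effort route is to open each formula object in the Math editor, read off its plain-text markup, and hand-translate operator by operator.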
Which complexity classes are $\mathsf{RE}$? Given some complexity class $\mathsf{C}$, I want to know if there exists a function (Turing machine) $F:\mathbb{N} \to \mathsf{C}$ such that, if $S$ is any set for which the problem $x \mapsto x \in S$ is in $\mathsf{C}$, then $F(n)$ is a Turing machine which (partially, if $\mathsf C = \mathsf{RE}$) decides $x \in S$ for some $n$. Note that if $g$ and $h$ are two Turing machines that decide $x \in S$, then $F$ only needs to enumerate one of them (hopefully this makes the problem easier). Thus the title is a bit wrong, but kept as it is for lack of a better one. Trivially, if $\mathsf{C}$ equals something in the Chomsky hierarchy, such a machine $F$ exists (enumerating all grammars of the required type). I'm interested in the less obvious classes. - I have doubts about the validity of my notation (I'm implicitly converting functions to Turing machines to integers, it seems) and the clarity of my statements (the first paragraph is all one sentence). I'd appreciate suggestions on how to improve either of these. –  Karolis Juodelė Jan 4 '13 at 18:11 It feels like you should be slightly doubtful about that validity - in particular, your $S$ and $F$ feel a bit underspecified. If $F:\mathbb{N}\to C$, then $F(n)$ is a set $S$, not a Turing machine; it sounds like what you want is a function $F:\mathbb{N}\times C\to \mathbb{N}$, where $F(n,S)$ is the code for a machine deciding $n\in S$, but such a function runs into issues with its domain, particularly the specification of $S$/$C$... –  Steven Stadnicki Jan 4 '13 at 18:35 @StevenStadnicki, I assumed a complexity class to be a set of Turing machines. If it is in fact a set of sets, I don't see why it would be a problem. In that case $F_C$ is a function such that if $S \in C$ then $F_C(n)$ decides $S$ for some $n$. –  Karolis Juodelė Jan 4 '13 at 18:42 Ahhh, I think I see what you mean more clearly now; that makes quite a bit of sense. I thought it was trying to decide a question of membership, not enumerating. Will chew on this a bit... –  Steven Stadnicki Jan 4 '13 at 19:08 Time-bounded classes can be enumerated. Suppose we want to enumerate all languages in P. Every language in P is accepted by some Turing machine $T$ running in time $f(n)$ for some polynomial $f$. We can construct a Turing machine $T'$ that emulates $T$ but also keeps track of the time, and stops after $f(n)$ steps. Every Turing machine of the form $T'$ accepts a language in P, and vice versa, for every language in P, we can construct such a machine $T'$. The machines $T'$ themselves can be enumerated by enumerating the simulated machines $T$ and the "timers" $f(n)$. For a space-bounded class, you do something similar, keeping track of the space usage of the Turing machine. You also need to prevent infinite loops somehow, either using a timer or by keeping track of all configurations encountered so far during the simulation. Non-deterministic and randomized versions can also be handled this way, so for example, PH can be enumerated. - Well, that turned out to be oddly trivial. In that case, could you also throw in a word about the case when $F$ has to enumerate all machines that decide any set? –  Karolis Juodelė Jan 4 '13 at 19:10 The set of all Turing machines that stop on all inputs is not recursively enumerable, so that's impossible. –  Yuval Filmus Jan 4 '13 at 19:57 Could you explain what it means formally to enumerate all languages in P?
–  usul Jan 4 '13 at 20:59 @YuvalFilmus, I felt that would be the problem, but I don't think this argument works. I only want a small subset of all Turing machines that stop on all inputs. –  Karolis Juodelė Jan 4 '13 at 21:01 @Karolis You can probably show that the set of all Turing machines that stop on all inputs and output either $0$ or $1$ is also $\Pi_2$-complete. –  Yuval Filmus Jan 4 '13 at 21:18 The codomain of $F$ should be Turing machines (or machines in some other particular model of computation), not the complexity class itself. You can't identify the machines that decide sets in a complexity class with the sets in the complexity class, because it is not a one-one correspondence. Moreover, a complexity class has various representations; you have to fix the representations of the sets in the complexity class (by fixing a model of computation) before discussing how to enumerate them. Let $\tilde{C}$ be a representation of sets in the complexity class $C$, i.e. the sets decided by machines in $\tilde{C}$ belong to $C$ ($\tilde{C}$ doesn't need to be an r.e. set, e.g. $C = \mathsf{BPP}$, for which we don't know any r.e. representation, but it should be enumerable so we can consider it as a subset of $\mathbb{N}$). You want a function $F:\mathbb{N}\to \mathbb{N}$ s.t. • for all $n\in\mathbb{N}$, $F(n) \in \tilde{C}$, • for all $A \in C$, there is $n\in\mathbb{N}$ s.t. $L(F(n)) = A$, • $F$ is computable. So we have at least one representative for each set in $C$ and we can computably enumerate these representatives. This is related to the question of being a syntactic complexity class. Most famous complexity classes turn out to be syntactic, as is implied by Yuval's answer (it is open for other famous complexity classes like $\mathsf{BPP}$). If we have an efficient universal simulator for the complexity class then the class is syntactic. It is important to remember that we want machines that decide sets in the complexity class to decide them with reasonable resources. ps: this is also known in the literature as recursive representability, also sometimes referred to as recursive indexing of the complexity class (look also for computable in place of recursive). -
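As a concrete rendering of the machine-plus-timer enumeration from the first answer, here is a sketch in Haskell. The step-bounded simulator sim is an assumed primitive (any encoding of machines as integers with a step-counting universal simulation would do); everything around it is honest, runnable code.

```haskell
type Input = Integer

-- The j-th polynomial timer, f_j(n) = n^j + j.
timer :: Integer -> Integer -> Integer
timer j n = n ^ j + j

-- Dovetailed enumeration of all (machine index, timer index) pairs,
-- each pair appearing exactly once.
pairs :: [(Integer, Integer)]
pairs = [ (m, k - m) | k <- [0 ..], m <- [0 .. k] ]

-- Given a step-bounded simulator 'sim m x t' (run machine m on input x for
-- at most t steps; False if it rejects or runs out of time), every entry of
-- 'deciders sim' is a total decider, and every language in P is decided by
-- some entry -- this is the F from the question, for C = P.
deciders :: (Integer -> Input -> Integer -> Bool) -> [Input -> Bool]
deciders sim =
  [ \x -> sim m x (timer j (size x)) | (m, j) <- pairs ]
  where size = toInteger . length . show  -- crude input-length measure
```

The clock is what makes this work: cutting every run off at $n^j + j$ steps forces totality, at the price of listing each language infinitely often under different (machine, timer) encodings.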
# Recruiters are calling, but...

#### IlyaKEightSix

Okay, I called this recruiter since she wants to set me up for an interview as a structured finance developer while I'm working on this (rather mundane) M.S. Stat here at Rutgers. She says it's $100k + bonus, which is very nice for someone like me whose last paying work was $18 an hour and who has a negative net worth. However, I told her that my end goal is RenTec or DESCo, which need PhDs or bust (and if I'm going to do something for my career, I want to be among the best at it, not just another cog in another unknown shop). And she told me that she was working with a PhD aspiring quant, early 30s, who was only getting offered 70-80k due to no experience, but wanted twice as much because of his age/education.

I'm just wondering, from a recruiter's standpoint, if I want to make it to the top tier quant funds that define the industry, do I get work experience in the financial industry as a sort of developer/modeler and try to PhD part time, or do I just go straight through and blow off job-hunting until I have my doctorate?

BTW, I'm planning to pursue my PhD in OR when I do go for it, and not a lot of schools have good programs. Columbia has one, Stanford has one (Management Sci/Engr), Stony Brook has one (headed up by Bob Frey of RenTec in the quant finance route), and MIT has one. However, with my undergrad GPA 3.3 and the deadlines early December (won't get grades in time), what is it that a recruiter (ahem, Sir Connor) would suggest I do? Try to get a couple of years of experience under my belt, or take my chances hoping my recommendations are good enough to make up for an 87% quant general GRE score and a 3.3 cumulative GPA (3.5 major)?

Also, I'm studying for the math subject GREs, but it seems that I'm basically relearning everything from the ground up, since multivar calc/linear algebra+diffeqs were in my freshman undergrad year (and my regular calc was in high school, which I AP'd out of in college).

#### doug reich

##### Some guy

Ilya, you're asking for the same advice over and over again and hoping for a different answer each time. There's a saying about that...

#### tobias elbert

I don't know much about the US employment market, but why do you think that gaining experience from firm A disqualifies you automatically from joining firm B at a later stage? Where did you get this idea from?

100k + bonus is a good salary package for someone entering the industry with zero experience. I just wonder - why was that PhD guy not offered that position, but you, who hasn't completed the Master's yet? (No offence, but why would a recruiter do this unless the PhD doesn't want that role or is for whatever reason not suitable for it?)

This is my personal opinion re PhD (I don't have one, by the way): finish your Masters, go out in the industry, identify which areas you and your (future) employers benefit from, and then do a PhD in such areas in a few years' time - if you still want to, that is.

#### doug reich

##### Some guy

I don't know much about the US employment market, but why do you think that gaining experience from firm A disqualifies you automatically from joining firm B at a later stage? Where did you get this idea from?

There are very fancy places that think this way.

100k + bonus is a good salary package for someone entering the industry with zero experience.
I just wonder - why was that PhD guy not offered that position, but you, who hasn't completed the Master's yet?

A. The PhD is over-qualified.
B. The PhD is asking for too much money.
C. The PhD's ego is getting in the way.
D. Firms like to mold their candidates, and a highly-educated or highly-experienced person may be too set in his ways. See prior section.
E. They are different positions and the PhD is not qualified.

This is my personal opinion re PhD: finish your Masters, go out in the industry, identify which areas you and your (future) employers benefit from, and then do a PhD in such areas in a few years' time.

Common advice is that a PhD won't really get you ahead in the pay scale given the time invested in it. You may be doing different work, but you won't be making more money when you balance out the head start a masters or undergrad has. Also, a PhD is tough, and if you're just looking for a pot of gold, you will either be miserable or drop out, or both.

#### tobias elbert

There are very fancy places that think this way.

A. The PhD is over-qualified.
B. The PhD is asking for too much money.
C. The PhD's ego is getting in the way.
D. Firms like to mold their candidates, and a highly-educated or highly-experienced person may be too set in his ways. See prior section.
E. They are different positions and the PhD is not qualified.

Also, a PhD is tough and if you're just looking for a pot of gold, you will either be miserable or drop out or both.

Interesting. I just had a look at D. E. Shaw's website, and what they say is that they value diversity and achievements. That kinda contradicts the idea that they rule out people just because they had worked at a different firm - are you serious? I can't believe that; if you are a good, competitive candidate, they will interview you, regardless of where you worked in the past. If people name that as a reason for getting rejected, then it's probably more due to denial than anything else.

According to what the OP said, my conclusion was that A can't be the case. B is likely, but then I still wonder why a PhD in quant finance gets offered 70-80k for every role (according to the OP), but not the 100k the OP was offered for that role. C or D could be. D must be a US thing, though (the second part, assuming that you're no longer moldable at a certain age or level of education).

As for your last part - most people (90%) would do a PhD out of interest in an area, not necessarily for money, because of the reasons you mentioned. You need to be really passionate about the area you study. I am just saying: see what's out there first, before doing your PhD in something you think is relevant but that turns out to be a waste of time, where your contribution is too marginal to be of value. Plus, if you don't want to go for an extremely abstract, theoretical PhD, I'd say learn what's out there first and then make an informed decision. Personally, I'd be more interested in doing a PhD in pricing swing or storage options than in nonlinear optimization, for instance. The former are things you don't necessarily hear or know about if you are not in, or have never been exposed to, a work environment.

#### IlyaKEightSix

In terms of PhDs, the only path I'm really considering is operations research, or perhaps statistics. Whatever it will be in, the entire point will be to apply it to something very tangible (e.g. the market).
I'm not out to do theoretical mumbo jumbo. E.g.: "If we lived on Mars, and pigs could fly, my beautiful closed form equation would be true." My interests in the PhD path are these: 1) Show that I am a "World Class Researcher" and can do "Good Science" 2) Build strong connections with professors with a strong connection to prevalent industries through doing research for them. 3) Obtain abilities that would allow me to come up with a solution for almost any practical problem. I have no intention of pursuing abstract, ivory-tower theories. Everything I do has to have a point in terms of solving real world problems. As for D.E. Shaw, they have many, many different strategies and divisions now. However, I'm interested in their quant division, not so much their distressed investments or private equity or whatnot. #### bluechimp Just... make sure you perform a stellar job at Rutgers. To be honest, I have never seen anyone get accepted to top graduate programs in anything with your undergraduate marks. And please please please, don't write in your statement of purpose that the bad grades are attributed to the 'boring' courses; whining is a sign of the weak. Trust me, you will do more 'boring' courses in any PhD program. Congratulations on being contacted by a recruiter! Don't drop the ball. #### IlyaKEightSix In terms of Rutgers, so long as I do the work, I'll be fine. It's nothing spectacular or jaw-dropping, so it's sort of a let-down. But it'll give me a new piece of paper in a quantitative discipline. I guess I won't write in my statement that I had boring courses, but that most of the pain came from two bad semesters early on. All 3 of my recommendations will be from MIT grads at some point (BS, PhD, postdoc), so if that doesn't fly, then I put my best foot forward. And also, I'm studying for the math subject GREs to try and help matters. If it's all a no-go, then I did my best. #### doug reich ##### Some guy I guess I won't write in my statement that I had boring courses, but that most of the pain came from two bad semesters early on. Luckily there's hundreds of posts on quantnet that say that. #### satyag 1) Show that I am a "World Class Researcher" and can do "Good Science" Don't do this to "show". Do it if you are passionate about research, irrespective of how others receive it. #### joel_b You're just talking to a recruiter. If you really want to make a decision, do the interview, get an offer, then decide. You may not even get an offer. It's just speculation, and either way, interview practice is not a bad thing. #### jay.berg However, I told her that my end goal is RenTec or DESCo, which need PhDs or bust (and if I'm going to do something for my career, I want to be among the best at it, not just another cog in another unknown shop). C'mon man.
# How big can the Hausdorff dimension of a function graph get?

This question is inspired by "How kinky can a Jordan curve get?" What is the least upper bound for the Hausdorff dimension of the graph of a real-valued, continuous function on an interval? Is the least upper bound attained by some function?

It may be noted that the area (2-dimensional Hausdorff measure) of a function graph is zero. However, this does not rule out the possible existence of a function graph of dimension two.

Besicovitch and Ursell, Sets of fractional dimensions (V): On dimensional numbers of some continuous curves. J. London Math. Soc. 12 (1937) 18–25. doi:10.1112/jlms/s1-12.45.18

Once you get arbitrarily close to 2, just use disjoint intervals and put graphs on them with dimensions $\gt 2-1/n$. – Gerald Edgar May 9 '10 at 0:31
# monics in Sets with unary operation

I want to find a monic in the category OSet, defined as "sets with a unary operation, $(A,x)$, where $x:A\rightarrow A$, and morphisms preserving that operation; that is, a morphism from $(A,x)$ to $(B,y)$ is $f:A\rightarrow B$ with $f\circ x=y\circ f$."

I am doing it exactly as we prove injective functions are monic in the category Set, i.e., I prove that any injective function $m:(A,x)\rightarrow (B,y)$ in OSet is monic by proving that for each parallel pair of arrows $f,g:(S,z)\rightarrow (A,x)$ we have $m\circ f=m\circ g\implies f=g$. My confusion is whether I need to do anything more, since this time $m$ is not just a function but an operation-preserving function. Thank you.

Basically, for a given morphism $m:(A,x)\rightarrow (B,y)$ in $\mathbf{OSet}$ you want to prove that $m$ is monic $\iff$ the underlying map $m:A\rightarrow B$ is injective. The proof of the $\Leftarrow$ direction works as you described above, just like for $\mathbf{Set}$. But for the proof of the $\Rightarrow$ direction you need to do a little bit more. What makes the proof work in the case of ordinary sets is the isomorphism $A\cong \mathbf{Set}(1,A)$ (where $1$ is some one-element set), which allows you to view an element $a\in A$ as a map $a:1\rightarrow A$ so that e.g. $m(a) = m(a')$ "means" the same as $m\circ a= m\circ a'$. So you need to identify a suitable object $(N,s)$ in $\mathbf{OSet}$ such that there is an isomorphism $A\cong \mathbf{OSet}((N,s),(A,x))$. A one-element set (with the identity map) will not work here because maps in $\mathbf{OSet}((1,id),(A,x))$ only correspond to fixpoints of $x$. But perhaps you can guess by my notation a suitable $(N,s)$.
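For readers who prefer the hint spelled out, the natural candidate (this is my own completion of the answer's hint via the usual free-object reasoning, not part of the original) is the natural numbers with the successor operation:

$$(N,s) \;=\; (\mathbb{N},\; n \mapsto n+1).$$

A morphism $f:(\mathbb{N},s)\rightarrow (A,x)$ must satisfy $f(n+1)=x(f(n))$, so it is completely determined by $a=f(0)$ via $f(n)=x^{n}(a)$; conversely, every $a\in A$ yields such a morphism. Hence $A\cong \mathbf{OSet}((\mathbb{N},s),(A,x))$, which is exactly the isomorphism the $\Rightarrow$ direction needs.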
### Theory:

To find the profit or loss percentage, we can use the following formulas:

1. To determine the profit percentage: If the profit and the cost price of the product are given, we can find the profit percentage.

$$\text{Profit } \% = \frac{\text{Profit}}{\text{Cost price}}\times100$$

2. To determine the loss percentage: If the loss and the product's cost price are given, we can find the loss percentage.

$$\text{Loss } \% = \frac{\text{Loss}}{\text{Cost price}}\times100$$

If any two values are known in the above formulas, we can figure out the remaining one by simply rearranging the formulas.
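As a quick worked illustration (the numbers here are invented for the example): suppose the cost price is $$200$$ and the profit is $$50$$. Then

$$\text{Profit } \% = \frac{50}{200}\times100 = 25\%$$

Rearranging the same formula, if the profit percentage and the cost price are known instead, the profit is $$\frac{25}{100}\times200 = 50$$.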
## Formula

$a+b \,=\, b+a$

The addition of two quantities is equal to their sum in reverse order. It is called the commutative property of addition.

### Introduction

$a$ and $b$ are two operands in algebraic form and their sum is written in mathematical form as $a+b$. Now, add them in reverse order and it is expressed as $b+a$. Mathematically, the two expressions are equal, and this is called the commutative law of addition.

#### Example

$2$ and $3$ are two quantities; add them. $\implies$ $2+3 = 5$ Now, add the same two operands in reverse order. $\implies$ $3+2 = 5$ $\,\,\, \therefore \,\,\,\,\,\,$ $2+3$ $\,=\,$ $3+2$ $\,=\,$ $5$ Thus, the commutative rule of addition can be verified in the numerical system.

### Proof

Learn how to derive the commutative law of addition in algebraic form by a geometrical method.
# Older blog entries for vicious (starting at number 313)

Priorities

Two things I saw recently: 1) NASA budget for climate research is 1 billion (for all those satellites and all that), 2) Facebook buys Instagram for 1 billion. Now we can see where our priorities (as a society) lie. What I don't get is that Instagram has software that a reasonably good programmer could have done in a few weekends of binge hacking. It does nothing really new. You could even take fairly off the shelf things. Perhaps the servers and the online setup might be costlier, but still, nothing all that strange. To think that this is worth as much to us as figuring out where the next hurricane will hit, or when the ice caps will melt, is "interesting". Though it is not totally out of sync with what else is happening. When the entire UC system, which is responsible for several Nobel Prizes, innumerable new cures for diseases, and leaps in our understanding of the world, not to mention educating a huge number of students, has a budget hole the size of one CEO's bonus, it's a huge hit for the university. Something is off in priorities. Actually there is a very good likelihood that this CEO will die of some cancer that wasn't cured because we don't fund science enough. Syndicated 2012-04-13 18:26:47 from The Spectre of Math

Devil's in the details

It seems that not only did some Democrats vote in the Michigan Republican primary, Satan himself also voted. Check the primary results in Mackinac County (e.g. on http://elections.msnbc.msn.com/ns/politics/2012/Michigan/Republican/primary at least this was the data on Wednesday). The final result in that county was very close. It was by one vote for Romney, but you should check how many votes Santorum got: Romney: 667 (41%) Santorum: 666 (41%) Syndicated 2012-02-29 20:40:19 from The Spectre of Math

Exponential growth (and CEO pay)

So CEO salary has increased by approximately 9.7% adjusted for inflation every year between 1990-2005 [1] (that is approx a 300% increase over that time, so 4 times what they had in 1990). Anyway, that has a doubling time of $\frac{\log 2}{\log 1.097} \approx 7.5$ years. Now the median CEO (among the top 200) made approximately 10 million a year in compensation in 2010 [2]. In 2009 there were about 8.3 trillion dollars in existence [3]. Anyway, a CEO makes approximately one millionth of the money in the world, or in other words, if we had a million CEOs we'd exhaust our money supply. It takes about $\log_2 1000000 \approx 20$ doublings to get a million. Hence in $20\times 7.5 = 150$ years one CEO will make all the money in the world. And this is all inflation adjusted. But we don't have to go so far to get into trouble. Now we did talk about the top 200, so when would the top 200 make all the money in the world? Well, that requires only $\log_2 \frac{1000000}{200} \approx 12.3$ doublings, so $12.3 \times 7.5 \approx 92$ years. OK, so in less than 100 years, the top 200 CEOs will suck out all the money in the world. Anyway, the problem is the following: The companies are not rewarding an individual CEO for good performance. They are rewarding all future CEOs. The thing is, that there is no "starting salary." A CEO that just started is (statistically) making about the same as the one who's been around for quite a while. If you would start all CEOs at a base salary, so that one particular CEO's salary could rise at 10% a year only because he'd be with the company a fixed number of years, the problem would be manageable.
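A quick way to check the doubling arithmetic above is a few lines of C (a sketch; the 9.7% growth rate and the one-millionth share are the post's own figures):

```c
#include <stdio.h>
#include <math.h>

int main(void)
{
    double growth = 0.097;                     /* 9.7% real growth per year */
    double dbl = log(2.0) / log(1.0 + growth); /* doubling time in years    */

    /* doublings needed for one CEO (1/1000000 of all money) and for
       the top 200 together (200/1000000) to reach all the money */
    double one_ceo = log2(1000000.0);
    double top200  = log2(1000000.0 / 200.0);

    printf("doubling time: %.1f years\n", dbl);           /* about 7.5 */
    printf("one CEO:       %.0f years\n", one_ceo * dbl); /* about 150 */
    printf("top 200:       %.0f years\n", top200 * dbl);  /* about 92  */
    return 0;
}
```

Compile with gcc and -lm; the output reproduces the 7.5, 150, and 92 above.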
Now, to whatever extent there is anything like a "starting salary", the increase an individual CEO makes is even higher than 10% a year. Essentially the starting salary is increasing at 10% a year. Let's look at an even more realistic example of how quickly we get into trouble. The CEO salary can easily be even 1% of the revenue for the company [4]. In fact some small private colleges are paying 1% of their budgets to their university president, a group where a similar thing has happened. Well, now think about this doubling. If it is 1% now, it will be 2% in 7.5 years, 4% in 15 years, 8% in 22.5 years, 16% in 30 years, 32% in 37.5 years, 64% in 45 years, and we get 100% at less than 50 years. So in less than 50 years the entire revenue would have to support the CEO. Now you say, well, but the revenue is also growing. Not so fast: the 10% pay increase is overall, which includes companies that did badly and those that did well. One would think that the growth of revenue on average (including failed companies) is not that much more than inflation. And this is adjusted for inflation. In any case CEO pay is definitely growing a lot faster than the economy (and hence your average revenue), and hence you hit the wall sooner or later. Even if we lob off another 2% to adjust for growth, the doubling time for CEO pay is still 9 years. Of course a problem would appear a lot earlier than in 50 years. So it's not only the "rich get richer" and "things are not fair" argument. This state of affairs is actually unsustainable even in a relatively short period of time (within our lifetimes). I think people don't understand that exponential growth is really, really fast. That's why pyramid schemes never work. It's why Ponzi schemes usually fail far quicker than the perpetrator hoped. 10% increase a year does not seem like much (just like 10% return on investment doesn't seem like that terribly much). [1] http://www2.ucsc.edu/whorulesamerica/power/wealth.html (better sources exist, but I am too lazy to search further). Syndicated 2012-02-22 23:17:39 from The Spectre of Math

So the differential equations textbook just reached 20000 downloads from unique addresses. The real analysis textbook is close behind (despite being a year younger) at around 19200. The rate is growing; it started out at around 200 per week for both in the fall and is now pushing 400 a week. As an overwhelming percentage of the hits come from Google, I think Google might have ranked the pages higher. So if you want to help out with the project of free textbooks: link to the books on your blog, page, whatever. And press those social buttons on the page, I guess that also does it. It's also interesting to see how ipv6 is doing. So far, 82 ipv6 addresses looked at the real analysis book and 43 for the diffyqs. As ipv6 was active for about half a year on the server, it's still a very tiny percentage. There were about 6-7 thousand ipv4 addresses looking at the diffyqs book during that time frame and about 8-9 thousand for the real analysis book. But at least someone is using ipv6 (if I could get an internet provider that offered ipv6, I'd use them, but I didn't find such in Madison). Syndicated 2012-02-06 00:31:34 from The Spectre of Math

It is always good to know when other people will want to cut off your head. Let us look at when most heads were cut off: that was probably the French Revolution. OK, what made people want to cut off other people's heads? Extreme inequality. OK, what was the Gini coefficient? Well, one estimate is 59 [1].
OK, so if we look at the US Gini coefficient, it seems to be rising approximately linearly since 1980. In particular, the US Census Bureau (via Wikipedia [2]) has the following data:

1980: 40.3
1990: 42.8
2000: 46.2
2009: 46.8

Let us do linear regression to obtain:

2020: 50
2030: 52.4
2040: 54.8
2050: 57.1
2058: 59
2060: 59.5
2070: 61.9
2080: 64.3

So, I guess people will start trying to cut other people's heads off around 2058. We're all safe till then. [1] Christian Morrison, Wayne Snyder, The income inequality of France in historical perspective, European Review of Economic History, 4, 59-83. [2] Wikipedia.org, Gini Coefficient. Syndicated 2012-01-21 22:40:35 from The Spectre of Math

Biking

So, it was -15 Celsius (barely positive Fahrenheit) and snowing this morning; actually it's still -15 and still snowing. Surely only a complete moron would bike to work today. Actually the ride was pretty good. Yesterday it was -18 and I think I can tell the difference (though it was not snowing). The downside of biking in this weather is that the gears are refusing to shift. The levers just sort of stick and nothing happens. Fortunately, I'm in a reasonable gear right now. Another downside is that I have these cool hybrid tires on the mountain bike. Sort of like road tires with spikes only on the sides, which is reasonable in terrain and much smoother than a regular mountain bike tire on the road. They were really good at the UCSD campus (San Diego) where I'd go off road often and it never snowed. Because they kind of suck on snow. Syndicated 2012-01-20 22:02:19 from The Spectre of Math

Newspapers will be half the size on Wednesday

With the Wikipedia blackout, what will journalists do? There will be a lot less in Wednesday's edition, though on the plus side it will have a better chance to be correct or at least original. Also, all school essays that were due Wednesday will be turned in a day late. Syndicated 2012-01-17 22:53:23 from The Spectre of Math

New chapter in real analysis notes

I finally posted a new version of the real analysis book with the new metric space chapter; it weighs in at 192 pages now (with 12pt font though). It also fixes a now record number of errata, though my standard for what is an erratum rather than just a simple obvious typo has dropped slightly. One thing that makes me feel better about errata is that it seems Rudin's 3rd edition of Principles still has some errata that are essentially the same as what I did (independently, not that I really worry about taking credit for errata:). The differential equations book nowadays seems to be hitting fewer and fewer problems: 6 errata in the past year, 2 of which have been in new stuff added before the summer. This is despite several people using the books and one ongoing partial translation. So I guess they are rather "correct" by now. My goal right now is to get them to be correct rather than perfect. So I haven't really been reordering things or rewriting things that could be improved. I've been at most doing small improvements in exposition. The main thing is that I want to spend time doing other things too of course:) Syndicated 2011-11-18 23:17:13 from The Spectre of Math

Wisconsin

On the way to Madison we were trying to see what the Wisconsin ski resorts look like. Seems like the toughest runs are rated double-black-bunny. Syndicated 2011-08-27 02:29:50 from The Spectre of Math

No more overheating

Pissed off about the CPU overheating, I wrote a simple daemon. 
It monitors the core temperatures and sets the cpufreq governor to powersave when the temperature in either core is above 90 deg Celsius, and sets it back to ondemand when it gets below 75 in both cores (those numbers I pulled out of my ass, they might need tuning). It simply polls the temperature every 2 seconds. There is no configuration or anything, simply change code and recompile. It's all the neurons I'm willing to commit to this problem. Yes, I know performance might suffer since the CPU can go even further, but I don't care about performance, I care about the machine not turning off randomly. I guess ondemand is actually better power (and heat) wise when everything is normal, but when the heat is high, powersave does come to the rescue. Here is the code for those that want to do something similar. You might need to modify it heavily. I called it noverheatd.c, I simply compile the thing with something like gcc -O2 -Wall noverheatd.c -o noverheatd, placed the resulting binary in /root and then in /etc/rc.local I added /root/noverheatd &. The parts that need modification are set_policy, where you need to set the number of cpus your kernel thinks it has, and then in the main loop you need to set the right coretemp paths for all the coretemps you have. I had to run "sensors-detect" as root from the lm_sensors package to obtain those guys.
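A minimal sketch of such a daemon, reconstructed from the description above (this is not the original noverheatd.c; the hwmon paths are hypothetical examples and differ per machine, and set_policy here only handles cpu0):

```c
/* Poll two core temperatures every 2 seconds; switch to powersave when
   either core is above 90 C, back to ondemand when both are below 75 C. */
#include <stdio.h>
#include <unistd.h>

static long read_temp(const char *path)  /* sysfs reports millidegrees C */
{
    long t = 0;
    FILE *f = fopen(path, "r");
    if (f) { if (fscanf(f, "%ld", &t) != 1) t = 0; fclose(f); }
    return t;
}

static void set_policy(const char *gov)  /* the original sets every cpu */
{
    FILE *f = fopen("/sys/devices/system/cpu/cpu0/cpufreq/scaling_governor", "w");
    if (f) { fprintf(f, "%s\n", gov); fclose(f); }
}

int main(void)
{
    int hot = 0;
    for (;;) {
        long t0 = read_temp("/sys/class/hwmon/hwmon0/temp2_input");
        long t1 = read_temp("/sys/class/hwmon/hwmon0/temp3_input");
        if (!hot && (t0 > 90000 || t1 > 90000)) { set_policy("powersave"); hot = 1; }
        else if (hot && t0 < 75000 && t1 < 75000) { set_policy("ondemand"); hot = 0; }
        sleep(2);
    }
}
```

Syndicated 2011-07-12 19:15:44 from The Spectre of Math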
# [OS X TeX] LastPage Reference Herbert Schulz herbs at wideopenwest.com Tue Mar 6 20:58:17 EST 2007 ```On Mar 6, 2007, at 7:50 PM, Roussanka Loukanova wrote: > >> Howdy, >> >> Not sure, but do you need to explicitly have >> >> \usepackage{lastpage} >> >> to use LastPage? Maybe one of the other packages loads it? > > It seems that lastpage.sty doesn't get loaded after commenting it, > i.e., > with > % \usepackage{lastpage} > I do not find > (/usr/local/gwTeX/texmf.texlive/tex/latex/lastpage/lastpage.sty) > resp. > (/usr/local/texlive/2007/texmf-dist/tex/latex/lastpage/lastpage.sty) > in the log file. > > However, the setting LastPage wold be available for an additional > pass of latex, then in a 2nd round, the number of LastPage is not > available. (This is my interpretation of what I see in the log > file: I do not know if I'm right.) > The LastPage cross reference is defined in the lastpage package. There are some comments about that in the hyperref package but it wasn't clear to me that it was defined there. >> Also \ifthenelse has three arguments: >> >> \ifthenelse{test}{true commands}{else commands} >> >> so >> >> \lfoot{\ifthenelse{\value{page}=\pageref{LastPage}}{\tiny\today}} > >> should have an extra {} pair for the else (assuming it should be >> blank if it's not the last page). > > For strictness, you are right. But since Maurino's tex code > documentation: Without being absolutely sure, ifthenelse.sty is > extending (they say compatible with) ifthen.sty and adds lasy > evaluation of the logical operators: in cases when P is true, "if > P then Q else R" wouldn't look for the else proposition, i.e. it > the same as "if P then Q". > >> >> Finally, in the command >> >> \cfoot{\ifthenelse{\not\value{page}=\pageref{LastPage}} >> {\scriptsize (over)}} >> >> the \not is only acting on \value{page} and I believe you mean it >> to be > > Again, because Maurino's tex code typeset ok (I've ran it several > times), it looked to me that \not (number1 = number 2) has also > abbreviated form > \not number1 = number 2. The documentation is more clear to me in > this aspect (than for the lasy evaluation). But they treat "number1 > = number 2" as a syntactic unit: atomic proposition, which hints to > parsing of > \not number1 = number 2 as equivalent to \not (number1 = number 2). > >> >> \not{\value{page}=\pageref{LastPage}} as well as missing the extra >> {}; i.e., it should be >> >> \cfoot{\ifthenelse{\not{\value{page}=\pageref{LastPage}}} >> {\scriptsize (over)}{}} >> >> which is the same as the simpler >> >> \cfoot{\ifthenelse{\value{page}=\pageref{LastPage}}{}{\scriptsize >> (over)}}. > > I think, it is; and I think that one would be on the safe side by > following the strict (and clear) syntax as you suggested. Esp. with > the drawbacks of the lazy evaluation (or may I just do not like it). > > Roussanka All I know is that I get different results if I define it one way or the other and I get the expected one if I include the braces: \not{...}. Good Luck, Herb Schulz (herbs at wideopenwest.com)
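Putting the thread's conclusions together, a minimal working setup might look like this (a sketch; it assumes fancyhdr provides the \lfoot and \cfoot used in the discussion, lastpage is loaded explicitly, and two latex passes are run so that \pageref{LastPage} resolves):

```latex
\documentclass{article}
\usepackage{fancyhdr,ifthen,lastpage} % load lastpage explicitly
\pagestyle{fancy}
% date in the left foot of the last page only
\lfoot{\ifthenelse{\value{page}=\pageref{LastPage}}{\tiny\today}{}}
% "(over)" centered on every page except the last
\cfoot{\ifthenelse{\value{page}=\pageref{LastPage}}{}{\scriptsize (over)}}
\begin{document}
First page.\newpage Last page.
\end{document}
```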
# underscore

## 1 latex underscore

Setting an underscore _ in LaTeX is not that easy, since it is a reserved special character. The underscore is used within math mode for setting indices, so $a_{i}$ is typeset with a subscripted index. To use the underscore within normal text mode, it must be masked with a backslash; the underscore _ is then set with \_. If the underscore is not masked, the following error message is displayed:

```
! Missing $ inserted.
<inserted text> $
```

In the event that the underscore is used often throughout the document, masking can become quite cumbersome. In such a case you can use the package underscore.

## 2 underscore Package

The underscore package (version 1.0) from 2006 provides a modified version of the underscore. The package is integrated with: \usepackage{underscore}

The advantage of the package is that the underscore can now be used as a simple keyboard character _ within text mode, while the underscore can still be used in math mode to introduce indices.

Input:
```
Word_Word $a_{i}$
```
Output: Word_Word, and $a_{i}$ with a subscripted index

Using the underscore package has the disadvantage that no external file (images, additional tex files) can be included if its file name contains a _. This means that within the commands \input{filename}, \include{filename} and \includegraphics{filename} no filename with an underscore can be used. In addition, problems can also be expected with other commands if an underscore is used.

#### 2.2.1 Option strings

If the package option strings is set, some of the above problems will be fixed, and filenames with an underscore can be used again in \input{filename} and \include{filename}. Set the option strings: \usepackage[strings]{underscore}

### 2.3 Conclusion

Apart from the remaining problem, i.e. the limited use of the underscore in file names and commands, the package is a viable alternative to masking each underscore.

## 3 latex underline

Closely related to the underscore is the underlining of words and paragraphs. If a word is to be underlined, the default LaTeX command \underline{word} can be used, or the \ul{word} command from the soul package. If a whole paragraph is to be underlined, the \uline{paragraph} command from the ulem package can be used.

Source: https://www.ctan.org/pkg/underscore
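A minimal sketch showing both behaviors side by side (the file name my_data.tex is a hypothetical example):

```latex
\documentclass{article}
\usepackage[strings]{underscore} % plain _ in text; \input{} keeps working
\begin{document}
Text mode: some_variable_name is typeset as-is.

Math mode still gives indices: $a_{i} + b_{i}$.

% with the strings option an underscored file name should work again:
% \input{my_data}
\end{document}
```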
# ISI2016-DCG-45

The value of $\underset{x \to 0}{\lim} \dfrac{\tan^{2}\:x-x\:\tan\:x}{\sin\:x}$ is

1. $\frac{\sqrt{3}}{2}$
2. $\frac{1}{2}$
3. $0$
4. None of these

Answer: $\mathbf C$

Solution: Let $$\mathrm I = \underset{x\to0}{\lim}\frac{\tan^2x - x\tan x}{\sin x}$$ On substituting $x = 0$, we can see that this limit is of the form $\color{blue}{\mathbf{\frac{0}{0}}}$, i.e., an $\color{blue}{\text{indeterminate form}}$. So we can apply l'Hôpital's rule:

$$\therefore \mathrm I = \underset{x\to0}{\lim}\frac{\frac{d}{dx}(\tan^2x-x\tan x)}{\frac{d}{dx}\sin x}$$

$$\left(\because \frac{d}{dx}\tan x = \sec^2x,\; \frac{d}{dx}\sin x = \cos x\right)$$

$$\Rightarrow \mathrm I = \underset{x \to 0}{\lim}\frac{2\tan x \sec^2 x-(x\sec^2x + \tan x)}{\cos x}$$

$$\Rightarrow \mathrm I = \underset {x \to 0}{\lim}\left \{\left (2\frac{\sin x}{\cos x}\cdot\frac{1}{\cos^2x} - \left(\frac{x}{\cos^2x}+\frac{\sin x}{\cos x}\right)\right)\right \}\cdot\frac{1}{\cos x}$$

Now, substitute $x = 0$ $\left(\because \cos 0 = 1, \;\sin 0 = 0\right)$ to get $\mathrm I = 0$.

Answer: $\therefore \mathbf C$ is the correct option.

Comments:
I think the pure mathematical notation for the differential operator (with respect to $x$) is $\frac{d}{dx}()$, not $\frac{dy}{dx}$. So writing $\frac{dy}{dx}\sin x$ is incorrect; rather, we should write $\frac{d}{dx}(\sin x)$.
Yes, correct.
By the way, it was an inadvertent error on my side. Thanks for pointing out such a minor but very important point. PS: Corrected!

Option – C. We can solve like this also. If I am wrong, please let me know.
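A quicker sanity check, via Taylor expansion instead of l'Hôpital's rule: since $\tan x = x + \frac{x^3}{3}+O(x^5)$,

$$\tan^2x - x\tan x \;=\; \tan x\,(\tan x - x) \;=\; \left(x+O(x^3)\right)\left(\tfrac{x^3}{3}+O(x^5)\right) \;=\; \tfrac{x^4}{3}+O(x^6),$$

so

$$\frac{\tan^2x - x\tan x}{\sin x} \;=\; \frac{x^4/3+O(x^6)}{x+O(x^3)} \;=\; \frac{x^3}{3}+O(x^5) \;\to\; 0.$$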
## Tuesday, February 28, 2012

### Drat!!!

Lights 4 and 5 on the back of the second board (where I am installing the blue lights) won't light. As it happens, neither will any of 54-59, but we'll save that part of the story for later. I remember the solder bridge between two pins, and one of those pins, A7, is involved with the lights that don't work. So, we carefully measure pin A7 and conclude that no matter how we program it, it acts like a high impedance. The natural conclusion is that it is burned out. So, run to Sparkfun, buy their last three ATmega328s, and a hot air rework station to pull the old chip. Carefully remove the old chip, taking care not to scorch the white paint on the board or blow away any of the passive components. When the chip comes off, it comes off great. Put in a new chip, verrrry carefully check continuity between each pin to make sure there are no bridges or missed joints. Fix a couple of missed joints. Guess what. The board acts exactly as it did. So, that theory is shot. I might have burned out pins on this chip, but exactly the same pins? I think not. Now time to gather more evidence for another theory. Write up a program that listens for commands on the serial port and turns individual signals high or low, with the rest at high-Z. Signals 1 and 16 are both no good. What is going on? The Arduino documentation says that you can use the analog input pins A0... as normal digital pins with the syntax pinMode(A0,OUTPUT); digitalWrite(A0,HIGH); etc. Which is in fact true, but only to a point. On an ATmega328, pins A6 and A7 are internally called ADC6 and ADC7, not PB...(ADC6) or anything like that. They are dedicated input pins. I had been planning (as in had a circuit board made) to use A6 and A7 as Charlieplex lines 1 and 16. Guess which Charlieplex lines don't work... And guess which signals lights 4 and 5 (and 54-59) use... I have no one to blame for this but myself. It was right on the Eagle schematic symbol the whole time that all the other A0... pins were secondary uses of GPIO pins, but A6 and A7 were only labeled as ADC. I guess the weird part is that the crystal inputs and reset line are alternate uses of GPIO lines, but A6 and A7 are not. So, here's the plan. Lots of green wires, hot air, and razor blades, to hack D11 and D12 into P1 and P16. This will require giving up two multiplexers, which completely hashes my plan to use the GPS. At least this will let me test the Charlieplex. Then, order 16 more blue lights, another FT232 and a correctly wired board which uses the crystal pins as two more digital outputs, allowing me to use D11 and D12 as multiplexer controllers on the next rev. NO getting a new board until all the green wires are fixed. Don't even touch Eagle until the last green wire is in place and CharlieTest works. Consider putting a LiPo battery power supply on the next board as well. The moral of the story is keep working the problem until you find the problem, and don't guess. I have a nice new hot air rework station which works really well, and costs more than the LEDs combined. Oh well.

## Friday, February 24, 2012

### It's alive!

I finally invent something that works! -- Dr Emmett Brown

Well, I have just built an honest to goodness Arduino, a round one connected to the most elaborate Charlieplex I have ever heard of... but no light on D13, so I can't even test it with the Hello World for microcontrollers, blinking a light. So, I used the ASCIITable example, to see if I can get code on the machine and have it run. 
I made (at least) one mistake in the board design, and had to run what may be the shortest green wire ever. The FT232 datasheet clearly states that the TEST pin must be grounded, or the device will go into test mode and not show up on USB as expected. Well, I didn't ground TEST on the circuit board, and the device didn't work as expected. However, there is one bit of good news. The pin right next to it is ground. So, one bridge removed from the ATmega, and one added to the FT232, and we are ready to go. First, load the ArduinoISP onto the Nano 3.0 I have. Next, connect it to Project Precision, while the Nano is unplugged.

Neither of the Arduinos involved look like this

I cleverly broke out the six pins needed to do this on Precision, so it just needs a ribbon cable from a breadboard to the connector on Precision. I used individual jumper wires to connect the pins on the Nano by label to the correct wire in the ribbon cable. Next, plug everything in. First carefully check that the ribbon is plugged in right, then plug the Nano into USB. Now run the bootloader instructions. All four lights on the Nano will light up simultaneously (something I hadn't seen before) and blink like crazy (except the blue power light).

Board the device and bring it to life!

Finally, pull the plug on the Nano and plug in Precision. I thought about writing a literal "Hello World" program, but it was just easier to run the ASCIITable sketch.

ASCII Table ~ Character Map
!, dec: 33, hex: 21, oct: 41, bin: 100001
", dec: 34, hex: 22, oct: 42, bin: 100010
#, dec: 35, hex: 23, oct: 43, bin: 100011

### SDHC - How do we test it?

1. Compile original Logomatic firmware and install it, and see that it runs on a fat16 SD
2. Compile original USB bootloader, install it, see that it runs on fat16 SD
3. Modify Logomatic firmware to use new library, see that it still runs on fat16 SD
4. Modify USB bootloader, install it, see that it runs on fat16 SD
5. Put an SDHC card in, see if it is visible over USB
6. Put a blink firmware on the SDHC and see if it is installed
7. Put Logomatic firmware back on the SDHC, install it, see if it runs
8. Success!

### Logomatic and Loginator with SDHC and Fat32

Your Logomatic probably doesn't need fat32. It probably doesn't need more than 2GiB files or 2GiB of storage total. It probably doesn't have the battery life to support writing 2GiB. But microSD cards that aren't SDHC are becoming hard to find. The hardware needed to interface SDHC is no different than normal microSD. It's just software. Your Logomatic (or mine for that matter, constructed in 2009) just needs a mind transplant, a simple firmware upgrade. Well, not so simple, because that firmware doesn't exist yet. That's where I come in. I am back in the Logomatic (and Loginator, which is substantially the same) business, for a while until my attention drifts elsewhere. Hope I finish, or hope that you or someone reading can pick up where I leave off. So, a word about the SD card software currently in the Logomatic and USB bootloader code. You can find the USB bootloader code on the Sparkfun USB bootloader tutorial, and the Logomatic firmware also on the Sparkfun site. Remember that both of these programs are installed at the factory on each Logomatic, so you already have these in compiled form. They share no memory, but contain much identical code. See, the USB bootloader needs to access the SD card, both to do the USB mass storage thing, and to check if your firmware FW.SFE file is there and read it and install it if necessary. 
It also has a USB driver to convert USB signals into microSD commands and vice versa. The Logomatic main firmware needs much of the same code (except for the USB part) to read its configuration files and write data to the card. This code comes in the form of some .c files, along with supporting .h files. These files were the work of Roland Riegel, and through the magic of open source, we get to read and modify them as we see fit. As it turns out, Roland has already modified his code to handle SDHC cards, and as a logical extension, the fat32 file system. So what is left to us? Modifying this code to handle the LPC2148 chip instead of the ATmega32 that he uses. Yes, I just saw you Arduino users' ears perk up. Sorry to disappoint, but I am not going to do anything with either Arduino or ATmega, but that doesn't mean that you can't. So, here is how the sd-reader (as Roland calls his whole work) is structured. At its lowest level is sd_raw.c. This file is used to control the card and read and write blocks of it. This code can be used directly if you want to use the sd card without worrying about unnecessary bloatware like file systems. It also is used by the USB mass memory driver. In this case, the USB host (probably your desktop computer) sees the USB device (the Logomatic and its SD card) as a featureless sea of blocks. The USB wire protocol just concerns itself with reading and writing those blocks, along with such bookkeeping things as determining how many blocks there are. The USB driver in the LPC2148 mostly just interprets USB commands and uses the sd_raw library to fulfill these commands. The LPC2148 in this case doesn't care about filesystems. The host sees the raw blocks and it interprets them as a filesystem. In principle, this allows the host to use any filesystem it wants, without the Logomatic caring. If you wanted to, you could format the card in a Logomatic as fat16, fat32, ext4, whatever Mac uses this week, etc. We will see why this is not a good idea later. On top of this low-level interface, we have

• fat16.c, which knows about directories and files
• rootdir.c which sits on top of fat16 and makes it so that your code doesn't have to worry about directories
• partition.c which reads the partition table on the sd card and finds the partitions so that fat16 knows where to start.

The file reader in the USB bootloader and the file reader and writer in the Logomatic main firmware are both based upon this code. This code specifically assumes that the card is formatted as FAT, FAT16 to be specific. If your card is not formatted as FAT16, then the USB bootloader will not be able to find and load FW.SFE. In this case, the code is running free, without the guidance of the USB host to interpret things. It needs to interpret the data on the card as a filesystem all by itself. If the filesystem is not FAT16, fat16.c will detect and complain about this, and the files will not be read nor written. So how are we going to use all this code which was written for an ATmega on an LPC? That's the magic of C code and compilers. 99+% of the C code in these files will compile without change. This includes all of fat16, rootdir, and partition. These use sd_raw. Sd_raw in turn is about 99% portable as well. It knows the SD wire protocol, and how it sits on top of the SPI protocol. All it needs to know about the hardware is what registers control the SPI protocol and how to use them.
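To make that concrete, the hardware-specific piece boils down to a byte-exchange routine. A hedged sketch of what the LPC2148 version of such a function might look like (the register addresses and the SPIF bit position are from my reading of NXP's LPC214x manual, and the function name is mine, not Roland's; verify before trusting):

```c
/* Exchange one byte over SPI0 on an LPC2148. */
#define S0SPSR (*(volatile unsigned long *)0xE0020004)  /* status   */
#define S0SPDR (*(volatile unsigned long *)0xE0020008)  /* data     */
#define SPIF   (1u << 7)                                /* done bit */

static unsigned char spi_exchange_byte(unsigned char b)
{
    S0SPDR = b;                    /* start the transfer            */
    while (!(S0SPSR & SPIF)) ;     /* spin until transfer completes */
    return (unsigned char)S0SPDR;  /* read data (clears the flag)   */
}
```

On the ATmega the same job is done by waiting on SPSR and reading SPDR; everything above this layer should compile unchanged.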
This knowledge is isolated into a couple of functions, and we need only change these functions to port the code from ATmega to LPC. Almost all this code is already well-isolated, there is just one bit left that can be broken out. More on this as I actually do it... ## Friday, February 3, 2012 ### Sparkfun AVC 2012 1. The Sparkfun Air Show has been cancelled. No helicopters or quadrotors allowed. It just didn't seem like a good idea to them to have an autonomous spinning blade of death out on a course surrounded by hundreds of spectators. I support their decision, but it certainly puts a damper on my enthusiasm to build Arisa. 2. I didn't buy an entry in time. I have an entry in now in backorder, but who knows if or when that will come in. 3. I was too ambitious last year. When the helicopter burned out, I only had a week to try to build a car, and didn't get it done. This year, the plan is: • Get Yukari working, which means getting the Loginator working and repairing/replacing the steering servo in the car. Yukari is done when it can drive around the school. • Build the Secret Project. I have an idea for a robot unlike any other seen at any previous AVC. Information wants to be free, especially secret information. You only own an idea as long as you don't share it (there is no such thing as intellectual property, a subject for another post) so the Secret Project will remain secret. Even the name is secret. As if from Paranoia, the classification level of the document is also classified. This is partly because a) if you knew the name, it would reveal the design, and b) I haven't even given it a name yet.
# Motion in a magnetic field and relativity

1. The problem statement, all variables and given/known data

We're working in a right-handed Cartesian coordinate system. The unit system is CGS. A conducting bar of length L is placed along the x axis, center of mass at x=0 when t=0. It is moving with constant velocity V in the +y direction. There is a uniform magnetic field B, such that: $$\vec B = B\cos\beta \,\hat y - B\sin\beta \,\hat z$$ a. Find the potential difference within the rod. Find the electric field E. b. Find E' and B' in the rod's frame of reference, far away from the rod. V=0.99c.

2. Relevant equations

Not given, but I assume I'll need the Lorentz force and the relativistic transformation of the fields.

3. The attempt at a solution

Well, I think I managed a: The moving rod's free electrons are affected by the Lorentz force (in Gaussian CGS units, $\vec F = \frac{q}{c}\,\vec v \times \vec B$): $$\vec F_{mag}=\frac{qVB\sin\beta}{c}(-\hat x)$$ This moves the electrons to one side of the rod (the +x side) and the "positive charges" to the -x side. This causes a potential difference and an electric field E inside the conductor. The force due to this electric field, at least at some time t1, cancels the magnetic component of the Lorentz force: $$\Sigma F_x = qE + F_{mag} = 0$$ $$\vec E = \frac{VB\sin\beta}{c} \,\hat x$$ $$\Delta V = \int\vec E\cdot d \vec s = EL$$ Now for b: I understand that from a we can see that the bar is basically an electric dipole. Being an electric dipole, it produces an electric field which I don't know how to derive. If I'm to derive this field, in addition to the given B field, I'll be able to apply Lorentz transformations and get E' and B' outside (and far away from) the bar. What am I missing here? I've got a feeling I'm stuck due to something really dumb. http://ift.tt/1rx7iuv
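For part b, the standard Gaussian-unit field transformations under a boost are what is needed (stated here as a hedged sketch; I write $\beta_v = V/c$ for the boost to avoid clashing with the field angle $\beta$, and $\gamma = 1/\sqrt{1-\beta_v^2} \approx 7.09$ for $V = 0.99c$):

$$\vec E'_{\parallel} = \vec E_{\parallel}, \qquad \vec B'_{\parallel} = \vec B_{\parallel},$$
$$\vec E'_{\perp} = \gamma\left(\vec E + \vec\beta_v \times \vec B\right)_{\perp}, \qquad \vec B'_{\perp} = \gamma\left(\vec B - \vec\beta_v \times \vec E\right)_{\perp}.$$

Far from the rod its own dipole field is negligible, so with $\vec E \approx 0$ and $\hat y \times \vec B = -B\sin\beta\,\hat x$, this gives, if no sign has been slipped,

$$\vec E' \approx -\gamma\,\beta_v B\sin\beta\,\hat x, \qquad \vec B' \approx B\cos\beta\,\hat y - \gamma B\sin\beta\,\hat z.$$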
# Show that Maximum and Minimum are Global

So I have the following question: Find the extrema of the function $$f(x,y)=4x-6y$$ given the constraint $$4x^2-4x+9y^2-6y-2=0$$ and determine whether these extrema are local/global on the constraint.

I found a max and min respectively at $$\left(\frac{1+\sqrt2}{2},\frac{1-\sqrt2}{3}\right),\ \left(\frac{1-\sqrt2}{2},\frac{1+\sqrt2}{3}\right)$$ with the values of $f(x,y)$ at those points being $$4\sqrt2,\ -4\sqrt2$$ I know these points are global extrema on the restriction/constraint, but I am having trouble proving that they are global.

• Show that when $4x-6y=4\sqrt{2}$ there is only one solution of the constraint equation, and when $4x-6y>4\sqrt{2}$ there is no solution. This can be done by substituting for $x$ or $y$ in the constraint equation and examining the resulting quadratic equation. Do the corresponding process at the other end, and you are done. May 11, 2020 at 0:18
• @Peter, I did that, and found a solution for the > part. I am not really sure what you are getting at. May 11, 2020 at 0:35
• If the max value of $4x-6y$ is $4\sqrt{2}$, then there is no solution for $(x,y)$ if $4x-6y>4\sqrt{2}$. If the min value of $4x-6y$ is $-4\sqrt{2}$, then there is no solution if $4x-6y<-4\sqrt{2}$. May 11, 2020 at 1:00

Your constraint is an ellipse and your objective function is a family of straight lines in 2-D. Therefore the straight line tangent to the ellipse gives you the global maximum/minimum.
We could then choose coordinate variables relative to the center of the ellipse, $$\ X \ = \ x \ - \ \frac{1}{2} \ \ , \ \ Y \ = \ y \ - \ \frac{1}{3} \$$ to write the constraint curve as $$\ \ 4X^2 \ + \ 9Y^2 \ = \ 4 \ \$$ and the linear function as $$\ 4 · \left(x \ - \ \frac{1}{2} \right) \ - \ 6 · \left(y \ - \ \frac{1}{3} \right) \ = \ c \ - \ 2 \ + \ 2 \ \ \rightarrow \ \ 4X \ - \ 6Y \ = \ c \ \ .$$ We can now solve for the absolute extrema of the linear function without a sophisticated technique such as Lagrange multipliers. Implicit differentiation of the constraint curve yields $$\ 8X \ + \ 18YY' \ = \ 0 \ \ \Rightarrow \ \ Y' \ = \ \frac{-8X}{18Y} \ = \ -\frac{4X}{9Y} \ \ .$$ Since the linear function produces parallel lines $$\ Y \ = \ \frac{2X - c}{3} \ , \$$ we find $$Y' \ = \ -\frac{4X}{9Y} \ \ = \ \ \frac{2}{3} \ \ \Rightarrow \ \ -12X \ = \ 18Y \ \ \Rightarrow \ \ Y \ = \ -\frac{2}{3}X$$ (for instance) $$\Rightarrow \ \ 4X^2 \ + \ 9 · \left(-\frac{2}{3}X \right)^2 \ \ = \ \ 4 \ \ \Rightarrow \ \ 4X^2 \ + \ 4 X^2 \ \ = \ \ 4 \ \ \Rightarrow \ \ X^2 \ = \ \frac{1}{2}$$ $$\Rightarrow \ \ X \ = \ \pm \frac{\sqrt{2}}{2} \ \ , \ \ Y \ = \ -\frac{2}{3} \ · \ \pm \frac{\sqrt{2}}{2} \ \ = \ \ \mp \frac{\sqrt{2}}{3} \ \ .$$ The absolute extrema for our linear function are $$f(X,Y) \ \ = \ \ 4X \ - \ 6Y \ \ = \ \ 4· \left(\pm \frac{\sqrt{2}}{2} \right) - \ 6· \left(\mp \frac{\sqrt{2}}{3} \right) \ = \ \ \pm 2 \sqrt{2} \ \pm \ 2 \sqrt{2} \ \ = \ \ \pm 4 \sqrt{2} \ \ .$$ So this symmetry occurs even though the associated tangent points, which you show, are not symmetrical about the origin: $$X \ = \ \frac{\sqrt{2}}{2} \ , \ Y \ = \ -\frac{\sqrt{2}}{3} \ \ \rightarrow \ \ \left( \frac{1 + \sqrt{2}}{2} \ , \ \frac{1 - \sqrt{2}}{3} \right) \ \ \approx \ \ (1.207 \ , \ -0.138) \ \ \text{for} \ \ f \ = \ 4\sqrt{2} \ \ ,$$ $$X \ = \ -\frac{\sqrt{2}}{2} \ , \ Y \ = \ \frac{\sqrt{2}}{3} \ \ \rightarrow \ \ \left( \frac{1 - \sqrt{2}}{2} \ , \ \frac{1 + \sqrt{2}}{3} \right) \ \ \approx \ \ (-0.207 \ , \ 0.805) \ \ \text{for} \ \ f \ = \ -4\sqrt{2} \ \ ,$$
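One more way to see that these extrema are global, assuming the standard parametrization of the centered ellipse $4X^2+9Y^2=4$: setting

$$X=\cos t, \qquad Y=\tfrac{2}{3}\sin t, \qquad 0\le t<2\pi,$$

covers the entire constraint curve, and on it

$$f \;=\; 4X-6Y \;=\; 4\cos t - 4\sin t \;=\; 4\sqrt{2}\,\cos\!\left(t+\tfrac{\pi}{4}\right),$$

which ranges over exactly $\left[-4\sqrt{2},\,4\sqrt{2}\right]$. Both bounds are attained, and no point of the constraint gives a larger or smaller value, so the extrema are global.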
# The Ghost of Joseph Weber

post by Kirsten Hacker (kirsten-hacker) · 2020-07-13

Original blog post: https://kirstenhacker.wordpress.com/2020/05/11/ghostly-gravitational-waves/

If I predict that a ghost will appear in the castle tower every midnight, I might put a camera with a timer in the tower and attempt to capture an image of the ghost. I could repeat this process every night for a year. Perhaps the ghost only shows up at midnight on certain holidays. My hypothesis about the ghost would be falsified if none of the images show a ghost. BUT, if I combine all of the images I've taken into one image, so that a blurry, ghostly form begins to take shape as a result of all of the dust floating around in the individual images, can I announce that I have taken a picture of a ghost? If I select only the subset of images that produce the most ghost-like form when combined, can I announce that I have taken a high-resolution picture of a ghost?

If you would like to hear this post read aloud, try this video.

I think that most people would say that the answer to those questions is 'no'. Yet in the scientific community today, gravitational wave and black hole physicists are doing both of these things and getting praised for their work.

• The Event Horizon Telescope (EHT) collaboration combined blurry telescope images into one image, fine-tuning calibration constants for regions of interest until what they wanted to see finally appeared.
• The gravitational wave observatory LIGO-VIRGO filters their noisy data with a template of what they want to see. In many ways, this is like taking a ghost-shaped filter and applying it to a photograph.

What is fascinating about these multi-billion dollar projects is that these sorts of logic errors occur at every single layer of their experimental design, and those who work on the projects show no awareness of the ways in which their work is out of line with the scientific method. In the LIGO-VIRGO experiment, a cursory examination reveals:

• The lack of attention to earlier measurements,
• The lack of a control variable or consistency with predictions,
• The use of templates or tunable filters to extract what they want to see in noisy data,
• The impossibility of distinguishing the source of what they are measuring,
• The impossibility of determining the source's properties,
• The lack of awareness of the limitations of their theoretical approximations.

LIGO claims to have confirmed general relativity by using general relativity to construct theories about black holes. Is this circular reasoning? Is the dog still chasing its tail? Obviously yes, but some people require more convincing than that, so I will go through each of the points listed above.

## Earlier Measurements

In principle, the scientific method ensures that the community will correct its errors, but if it doesn't study its own history, there is no guarantee that it won't repeat its errors over and over again. Back in the 1960s, hundreds of independent research groups constructed devices to measure gravitational waves and they all compared their results. Each research group thought that it had measured gravitational waves, but when the results were combined, they all had to conclude that no one had been measuring gravitational waves from deep space. 
They had all been measuring different sources of noise. Fast forward fifty years, and the physics community has given a Nobel Prize to a group that claimed to have measured gravitational waves with a device so expensive it is impossible to duplicate it enough times to make sure it isn't just measuring noise. I find it rather strange that the physics community keeps forgetting the science lesson in which the teacher says, "you must repeat your measurement under many different conditions for it to be worth anything" and "you can't measure control variables."

## Control Variables

If you didn't pay attention in science class, I'll remind you of the purpose of a control variable. In an experiment, the control variable is not changed. The purpose of an experiment is not to study the control variable. The control is used for comparison. In a well-designed experiment, one typically blinds the researchers to the type of variable they are measuring, so that they don't bias their data collection. In medical research, this is called a double-blind controlled study. The control variable is usually a placebo, and the researchers don't know which patient got a placebo and which got the medicine. If you can't conduct your research in this way, you should always doubt your conclusions, because it is possible that you biased your result with your expectations.

These are fundamental principles of the scientific method, and expensive physics experiments like LIGO and EHT are blind to them. In fact, all of the modern physics experiments that attempt to measure the noise floor of empty space (LHC-Higgs, COBE-CMB, LIGO-VIRGO, BICEP, etc.) are dubious for similar reasons: they have no control variable because they are trying to measure the control variable. It is like a dog chasing its tail. If your measurement tool is moving around by the same amount as the thing you want to measure, you aren't going to be able to measure it very well.

If you can't have a control variable, at a minimum, your experiment should reproduce some predictions, and LIGO has not done this either. As of December 2019, they had announced 50 detections since 2017 but, as of May 2020, they are only standing by 10 of those detections because they were forced to attribute many of those 'detections' to known sources of noise. If I compare this to the papers they wrote predicting how many collisions they would expect to see after their upgrade, these numbers are wildly off the mark. They expected to see at least a few black hole collisions and at least one neutron star collision per month since August 2019. It has been 9 months.

## Templates and Filtering Data

The purpose of the scientific method is to make sure that you are seeing things that are really there. A scientist does not want to be biased by what he wants to see, but I'm afraid that LIGO has built bias into their experimental design by using black-hole-shaped templates to filter and tune their data, sometimes by hand. In the second image below, you can see what their data looked like without their fine-tuned filtering, and it looks nothing like the 'black hole ringdown signature' that they published in their Nobel Prize winning paper.

This was what LIGO published and used to win a Nobel Prize. It later came out that the template was tuned or fitted by hand so that it matched some simple black hole calculations from a textbook.

This is what the same data looks like with only a whitening filter and a Fourier transform. 
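To make concrete what "filtering with a template" means here, a toy sketch of template matching (the code and data are illustrative inventions of mine; real pipelines add whitening, template banks, and significance estimates, but the core operation is this correlation):

```c
/* Slide a small "ghost-shaped" template across noisy data and
   report the best-matching offset and score. */
#include <stdio.h>
#include <float.h>

static double best_match(const double *data, int n,
                         const double *tmpl, int m, int *where)
{
    double best = -DBL_MAX;
    for (int off = 0; off + m <= n; off++) {
        double s = 0.0;                 /* correlation at this offset */
        for (int i = 0; i < m; i++)
            s += data[off + i] * tmpl[i];
        if (s > best) { best = s; *where = off; }
    }
    return best;
}

int main(void)
{
    double data[] = { 0.1, -0.4, 0.3, 0.9, -0.8, 0.2, -0.1, 0.5 }; /* stand-in noise */
    double tmpl[] = { 1.0, -1.0, 0.5 };    /* the shape we want to see */
    int where = 0;
    double score = best_match(data, 8, tmpl, 3, &where);
    printf("best match %.2f at offset %d\n", score, where);
    return 0;
}
```

The point being made in the post: with enough offsets and enough tunable templates, even pure noise produces a respectable "best match" somewhere.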
(Source for the figures and discussion above: http://www.preposterousuniverse.com/blog/2017/06/18/a-response-to-on-the-time-lags-of-the-ligo-signals-guest-post/)

Then there is the issue of the scientific ethics of hand-tuned data:

"If LIGO did anything wrong," [a LIGO supporter] added, "it was not making it crystal-clear that pieces of that [famous, Nobel Prize winning] figure were illustrative and the detection claim is not based on that plot." [Respected Niels Bohr Institute researchers], however, accused LIGO scientists in an email of "misconduct" and making "the conscious decision not to inform the reader that they were violating one of the central canons of good scientific practice."

https://www.quantamagazine.org/studies-rescue-ligos-gravitational-wave-signal-from-the-noise-20181213/

When the Niels Bohr Institute research group told LIGO that there was unaccounted-for correlated noise throughout their prize-winning data, LIGO replied [paraphrasing], 'we put the wrong data in our prize-winning paper, and if you look at the right data, you won't see the correlated noise. Oh, and you forgot to use an FFT windowing function, even though that is a mistake that no physicist would ever make.'

Based on these issues, I think LIGO has used a ghost-shaped filter to help them see more ghosts, but this isn't even the worst of their problems.

## Distinguish the Source

Even if a ghostly apparition or a black hole collision really happens somewhere in the universe, LIGO has no way to distinguish its signature from some more mundane, local occurrence. That is, they cannot distinguish a deep space gravitational wave from a more local wave, yet those who claim to measure black hole collisions in deep space believe that they can do this. If ghost hunters can't know whether they measured a cloud of smoke or a ghost, how can LIGO know whether they measured the sun burping or colliding black holes?

A LIGOnaut or ghost-hunter might respond to this criticism by pointing to the measurement they took of a gamma ray or ectoplasm burst occurring at the same time that they measured a ghostly collision signature, but there again, I see evidence of mass delusion. There were 4700 authors, or one third of all astronomers in existence, on a paper about correlations between something that LIGO measured and something that other astronomers measured, but the timing of the signals doesn't match up, and if a message goes out to the ghost hunting community to start looking for ectoplasm because someone thinks they saw a ghost, I think it is quite possible that ghost hunters might find a booger left by a kid and determine that it is ectoplasm.

http://iopscience.iop.org/articl… (the big Science magazine issue dedicated to the measurements was behind a paywall)

This should worry people because false alarms about weak gamma ray detections regularly prompt LIGO to dig through their noise for evidence of signals that look like black hole mergers.

"The Fermi team calculated the odds of such an event being the result of a coincidence or noise at 0.22%. However, observations from the INTEGRAL telescope's all-sky SPI-ACS instrument indicated that any energy emission in gamma-rays and hard X-rays from the event was less than one millionth of the energy emitted as gravitational waves, concluding that "this limit excludes the possibility that the event is associated with substantial gamma-ray radiation, directed towards the observer." If the signal observed by the Fermi GBM was genuinely astrophysical, SPI-ACS would have detected it with a significance of 15 sigma above the background. 
The AGILE space telescope also did not detect a gamma-ray counterpart of the event. A follow-up analysis of the Fermi report by an independent group, released in June 2016, purported to identify statistical flaws in the initial analysis, concluding that the observation was consistent with a statistical fluctuation or an Earth albedo transient on a 1-second timescale." Fermi Gamma-ray Space Telescope – Wikipedia

The sort of gamma-ray detection at Fermi that initiates the search for correlations with LIGO happens many times per day, and when thousands of people are all expecting to see something buried in noise, I think a fluke is still a possibility. Call me crazy, but I think that skepticism is a good thing to hold onto when you are dealing with noisy, sensitive measurements taken by highly motivated groups of people.

Maybe I am just being too strict in my interpretation of the scientific method. After all, if something dark and mysterious happens in deep space and causes the sun to burp, and that causes the Earth's core to gurgle, and that causes a lake to heat up, which causes a thunderstorm, which causes a lightning strike, which hits a Schumann resonance, and LIGO detects that, did LIGO detect a gravitational wave? By the standards of modern physics, many people would say 'yes'. But modern physicists do lots of strange things in their attempts to measure unmeasurable things. Neutrinos, Higgs particles, cosmic microwave background radiation, oh my.

## Determine the Source's Properties

Suppose that I believe that the blips measured by LIGO really come from deep space black hole collisions, and I believe that they have accurately estimated the size of the objects which collided, even though those estimates are absurdly larger than what they had expected to see based on other measurements and based on the theory of black holes. Suppose that I blind myself to these errors. Can I believe in the extrapolations from these estimates which are used to determine the distance to the collision, or have they made mistakes there as well? In their first gravitational wave announcement, they wrote that the collision was 1.3 billion light years away.

If you detect a wave at two locations and you know its propagation speed, you can determine the direction from which the wave came, but not the distance to the thing that caused the wave. LIGO claims that they can determine the size of the objects which created the wave they detected based on the frequency of the wave. Larger frequencies correspond to larger objects. From there, you need to guess how far away such objects typically are. They use the estimates of how big they think the objects were and how far away such objects typically are to make an estimate of how big the wave should be when it gets to us. Ignoring the circularity of logic, an absolute measurement of the amplitude of the wave should then tell you more about how far away it was, and since they assume that the signal traveled at the speed of light, they conclude when the event occurred. There are a lot of assumptions in this chain of logic, and while it might make sense at first glance, an absolute measurement of the amplitude of the wave isn't really possible with their apparatus. Every red shift, amplification, or filtering of the signal is associated with a factor which adds an error to a determination of the absolute amplitude of the signal, and by tuning these difficult-to-determine error estimates, you can give yourself just about any result you want. 
## Theoretical Approximations Surely the people who built LIGO weren’t so stupid as to ignore all of these issues. They must’ve had a fundamental, basic research justification for this experiment. It can’t have been built solely for the sake of the engineering byproducts and busy work. One would hope that the leaders of the project did not knowingly send a small army of students off on a decades-long, multi-billion-dollar wild goose chase or ghost hunt. I’ll unpack the theoretical basis of the experiment in four layers: general, colloquial, technical, and esoteric. Pick your favorite poison. The language degenerates quickly and the esoteric and technical arguments are, of course, the most tedious, so I’ll do them last. Generally speaking, by conflating relative and absolute coordinate systems and language, LIGO has convinced an influential subset of people that impossible things are possible. We can imagine having an absolute, godlike perspective, but we cannot actually adopt one through our measurements because nothing is truly stationary. We can only measure things from a relative perspective and LIGO’s experimental design implicitly assumes it can take an absolute, godlike perspective. I find that thinking about bubbles helps illuminate the folly of this way of thinking. Colloquially speaking, if you believe in the original Michelson-Morley experiment, then LIGO cannot measure the flexing of the Earth because if the Earth stretches in one direction, it must contract in the other and their device is insensitive to this motion. Also, if light and matter waves change their shape in the same way at the same time, you can’t use light waves to measure matter waves. In contrast, you can measure how the Earth flexes by comparing the path traveled by a particle beam in a linear accelerator to that of a light pulse with a stable arrival time, showing how matter aether and luminiferous aether behave in different ways. Technically speaking, previous experiments with Michelson interferometers suggested that in absolute space, light’s wavelength and the interferometer cavity dimensions will change in the same way at the same time while the amount of time required for light to be reflected by the mirrors will be constant. In relative space, the cavity dimensions are constant because they track the changes of the general relativity coordinate system while the amount of time the light requires to enter and exit a mirror changes (and you might wonder why they thought there would be an advantage to making the interferometer several kilometers long). In either case, the changes in one arm of the interferometer exactly counteract the changes in the other arm of the interferometer such that when the Earth turns or when a wave passes through, no change will be measured at the detector. LIGO insists that they can bypass this issue by adding ‘fresh light’ to the system which allows them to sample the changes in the cavity size. This complication makes the system too difficult for most people to visualize intuitively and this causes them to fall back on mathematics which opens the door to errors of approximation. From my perspective, those who claim to understand how the fresh-light concept works are akin to those who admire the emperor’s new clothes. I’ll try to debunk the fresh-light concept in the language of reflection time. 
When a gravitational wave arrives, the reflection time at the first mirror and the combiner mirror is decreased at the same time that the reflection time at the second mirror and the combiner mirror is increased. If you think sequentially, fresh light that hits the first mirror gets slowed down and it will be compared to older light that has been sped up on the second mirror. The difference between the fresh light and the old light will show up as a change at the detector. This is what would happen if the laser were pulsed, but LIGO’s laser is not pulsed; it is continuous. LIGO has conflated a discrete process with its continuous-wave apparatus. If you think of simultaneous processes, as is required for an apparatus that uses continuous waves, there is no such thing as fresh light or old light – there is just light, and you see that the concept is nonsense. They are using sequential thinking for a simultaneous process, and if a person cannot mentally animate two processes occurring in parallel, then he will be more likely to believe in LIGO. In a broader sense, most of the mathematics we use in physics is an attempt to sequentially approximate simultaneous processes, and if you think about entanglement or quantum mechanics from this perspective, the concepts lose their relativistic mystery. At this point, a determined LIGOnaut might pull out some technical jargon. Complicated figures would be employed in an attempt to convince us that sequential, discrete analysis can be used to approximate their continuous, simultaneous process. “Cannot light from a laser be considered pulsed, but at a fast rate, since it is produced with stimulated emission, so that the light entering the vacuum arms with the mirrors is pulsed at LIGO-VIRGO? By slowing the rate at which light is emitted from a laser, the pulsing becomes more obvious.” But any non-LIGO physicist knows that stimulated emission in a laser means that the photons are emitted in proportion to the particle number squared. It is a collective, oscillatory effect in which large groups of particles oscillate in sync, making the waves they generate coherent rather than random. This process is not conducive to the emission of individual photons as in an incoherent process. Pulses of laser light are produced through a different process which involves giving a continuous wave a frequency chirp and sending it through a collection of dispersive elements, like prisms or diffraction gratings. If LIGO is using this justification, it is yet another instance of the conflation of concepts that should not be mixed. The rate of light emission would have to be slowed to the level of individual photons for this pulsing effect to be measurable within a system designed for coherent light, and the detector would need to be sensitive to changes in the number of individual photons arriving. Direct measurement of individual photons is not possible with their system. In any case, if the system is only sensitive to changes on the level of individual photons, then it makes no sense to have the powerful continuous wave in the interferometer at all. I recall a similarly obtuse debate about the physics of the beam-splitter, but I’d rather not dive into that at the moment. 
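Before moving on, it helps to pin down exactly what is in dispute. Here is the textbook response that LIGO's design takes for granted, a differential arm strain producing a phase shift at the output port, reduced to a toy calculation. It is a sketch of the standard account (the very account I am arguing cannot be carried through self-consistently), not of the real, servo-controlled instrument:

```python
import numpy as np

lam = 1064e-9   # laser wavelength (m); LIGO uses a 1064 nm Nd:YAG laser
L = 4000.0      # arm length (m)
h = np.array([0.0, 1e-22, 1e-21])   # candidate strain amplitudes

# In the textbook picture, a strain h lengthens one arm and shortens the
# other, so the round-trip differential phase at the output port is
# delta_phi = 4 * pi * L * h / lam.
dphi = 4 * np.pi * L * h / lam
I_out = np.sin(dphi / 2) ** 2   # normalized intensity at the dark port

for hi, Ii in zip(h, I_out):
    print(f"h = {hi:.0e}  ->  normalized output intensity = {Ii:.1e}")
```

In that account, a strain of $10^{-21}$ shifts the differential phase by only about $5 \times 10^{-11}$ radians, which is why the arms are kilometers long and the circulating power enormous; the whole disagreement above is over whether the light used to read out that phase changes along with the cavity it is supposed to measure.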
Esoterically speaking, in Lorentzian Maxwell’s equations, transverse and longitudinal waves will have the same speed in free space and, just as in the original Michelson-Morley experiment, if you use those equations to describe the experiment, LIGO shouldn’t be able to measure anything, but if one decides that the continuous, Lorentzian Maxwell’s equations in relative, Riemannian, curved space are merely an approximation of a more discrete, grainy, recursive system described by Galilean Maxwell’s equations in absolute, Cartesian, flat space, then there will be discontinuities between transverse and longitudinal waves which might be measured as a sort of friction coefficient that corresponds to the strength of the ambient gravitational or magnetic field. In olden times, one would call these discontinuities magnetic monopoles, but today, we call them all sorts of things – displacement currents, positrons, electrons, matter, antimatter, ... black holes. Perhaps the inventors of LIGO were trying to determine the properties of the magnetic monopoles filling space. That might be worth doing. “Wait!” you might say. “Magnetic monopoles do not exist. I learned that in my first physics classes.” “If you’d known these equations 300 years ago, you’d have been very powerful.” “A student just like you came from this University and won a Nobel Prize.” She says the words “paradox,” “puzzle,” and “mystery” several times and is selling physics to impressionable young people as a discipline which can make them powerful, noble, and famous. Notice how she ignores the approximation inherent in setting the divergence of the magnetic field equal to zero in Gauss’s law. Basically, magnetic monopoles are localized swirls of space and time that exist when you don’t try to directly measure them or that exist in an instant but not over a measurable timestep. They are sort of like vortices or bubbles — ephemeral, ghostly little things. Modern physicists have taken to calling them ‘particles’ and confusing people about the concept of antimatter. A black hole is a representation of the largest ‘particle’ we can think of, and when we imagine their collision in distant space, we picture matter and antimatter annihilating and releasing energy, just as we observe in our Earthly particle colliders. This is why the idea of a black hole is so appealing to the physics community. Yet, just because an approximation like general relativity works on one length scale, there is no reason to believe that its approximations are valid on another. After all, space acts flat over some length scales and curved over others. That is why approximating black holes with general relativity and thinking that they literally exist in a mathematically perfect form is too much of a fanciful stretch for many people. It is understandable that people are curious about them because if black holes are like virtual particles in colliders, then they borrow energy from the vacuum and disappear as quickly as they form; however, if the vacuum is fed by an outside source of energy, then a black hole might form and remain for a long time. If black holes exist, it tells us something about whether positive or negative entropy rules the universe, and this has a certain quasi-religious, psychological impact on many people. ## Conclusion I don’t, of course, know how all physicists think, but I know two physicists very well: myself and my husband, a man who believes in LIGO’s measurements. 
His funding sources have nothing to do with LIGO and he is very secure in his job, but he has been conditioned to believe that it is impolite, politically unwise, and not his business to think critically about anyone’s research other than his own. The politically astute thing to do in the case of LIGO is to approve of them and give them the benefit of the doubt without spending any time thinking about the issue, even though he would be an ideal person to deconstruct their engineering designs. He genuinely has not given the matter two seconds of thought. He stopped listening to me early in my post-doctoral work when I began to have ideas independently of him. At that point, he refused to talk to me about anything related to work. Any time I tried to explain something, I got shut down. I soon figured out that the reason he shut me down was that he really couldn’t understand what I was saying. His mind works in a very narrow, specialized fashion and he couldn’t understand any of the associations I was making. I find the following argument quite clear, but he can’t even force himself to read such a thing because he is so specialized. • If general relativity is a static approximation of a more dynamic system, then its predictions about black holes will be inaccurate. Black holes might not exist. • Even if general relativity is an adequate approximation of a dynamic system and black holes do exist, the method of creating an image out of noisy, contaminated data is still inaccurate. • Using a single result produced through faulty data analysis to claim confirmation of a hypothesis based on an extrapolation from an approximation is terrible science on multiple levels. • A theory might be supported through results produced with many independent experimental methods addressing many hypotheses about the theory, but using multiple data analysis methods with a single data set to claim confirmation of a theory is absurd. Those who make images of black holes and gravitational wave measurements are doing all of these things, and physicists like my husband refuse to worry about such matters outside of their narrow purview. The best that science can do is to create a controlled experiment in which changing one variable causes another variable to change. Since this is not possible in astronomy, we must rely on astronomers to exercise restraint when they interpret their data. That restraint appears to be sorely lacking in the present-day community. …………………… I’ve been writing about these issues for a while, but I’m not sure how to hit the right buttons to get the message across. My earliest work on this topic can be found here: In recent months a few physics apostates have begun to make noise about these matters, as well, noting that LIGO has produced nothing but false alarms over the past year. As in, once they got their systems in full operation, they were forced to rule out all of the things they would’ve ordinarily proclaimed to be black hole mergers. What does that say about all of their earlier, published detections? Nothing good. A guy I met on Twitter named Thaddeus Guttierez is taking a more academic approach to being a LIGO gadfly by attempting to identify things like lightning strikes that occurred at the same time as LIGO’s claimed detections, and while he doesn’t appear to be in the mainstream academic community, he speaks their language. 
He sent me these links to mainstream LIGO gadflies: A common thing for a young or aspiring physicist to say is, “If your ideas are so great, why haven’t you published them in a peer-reviewed journal?” They might not know that for an ex-physicist, ex-post-doc like me who is unaffiliated, publishing in a peer-reviewed journal is very expensive. APS journals charge two thousand dollars and Springer journals only waive their fees if you are employed by an approved institute. I found one online peer-reviewed journal (frontiersin.org) that does not seem to charge. It appears to be a new model for scientific publishing in which the referees are not anonymous, but I’m a bit suspicious of it. As in all internet media, it is far too easy to distort an author’s impression of their work’s distribution. (I just checked into this a bit more closely, and they do charge a fee for publication – if your paper is accepted. I don’t know how much it is. I think they should pay me, not the other way around.) …… After putting effort into debunking this stuff, I do ask myself why I bother and I think that I want to demystify the pop-sci nonsense used to lure young people into physics servitude. I think they might find better things to do with their time and I’d like to help them avoid the mistakes I made. comment by samshap · 2020-07-14T01:28:45.785Z · score: 13 (9 votes) · LW(p) · GW(p) Thanks for presenting your thesis. However, one of your figures doesn't support your argument on closer inspection. The figure that you point to as being the 'unfiltered' data is measuring cross-correlation between the Hanford and Livingston datasets, so we should expect it to look completely different than the datasets themselves. I also want to push back on a particular point - there's nothing wrong in principle with using a black-hole-shaped filter to find black holes. You just have to adjust the prior based on the complexity of your filter. comment by nixtaken · 2020-07-14T07:15:46.552Z · score: 1 (3 votes) · LW(p) · GW(p) If you click on the link to the article about the data source, I think you will find that you have misinterpreted that figure and that my interpretation is correct. Thank you for pointing out that the presentation can look ambiguous without that context. comment by gjm · 2020-07-14T14:07:58.581Z · score: 2 (3 votes) · LW(p) · GW(p) I read the article. samshap is 100% right and you are 100% wrong. [EDITED to add:] "What is asserted without evidence can be dismissed without evidence" -- but I might as well give some justification for my claim. Here is what the article says: First I begin by cross-correlating the Hanford and Livingston data, after whitening and band-passing, in a very narrow 0.02s window around GW150914. This produces the following: (followed by the graph you provide here). Note two things. First: "I begin by cross-correlating the Hanford and Livingston data" -- just as samshap says. Second: "in a very narrow 0.02s window". That's about 1/10 of the time period represented by the main plots, which go from 0.25s to 0.45s "relative to September 14, 2015 at 09:50:45 UTC" (not that we can tell from your presentation, because you clipped off the bottom part of the figure which includes the time axes). So this could not possibly be an alternative to the other plots; the horizontal axes aren't in any way compatible. 
The context for this is that the (LIGO-skeptical) Cresswell et al paper is looking at the time lags between LIGO observations, and claiming to cast doubt on the idea that seeing two very similar signals at the two detectors at a certain time-lag is evidence of anything. So, in particular, Cresswell et al try to show that you can get the same 7ms lag by looking at other things without the actual signal in it. (One of the things they look at is the residual noise from the LIGO data, after subtracting off the black-hole-merger model. This is why it's relevant that the actual best-fit model is better than the "illustrative" one -- because if you subtract off a crude model, what remains will have some real signal in it, so it's unsurprising if it shows some of the same temporal correlations as the actual signal does.) So now Ian Harry shows the cross-correlation graph for the LIGO data before subtracting off the fitted model, and after subtracting the (best) fitted model. The graph you reproduce here is the cross-correlation before subtracting the model; the next one (not reproduced here) is the cross-correlation after subtracting the model, which shows no 7ms spike. Note that the context makes excellent sense of having a cross-correlation graph at this point in the article, and would make no sense at all of having a raw-LIGO-observation-data graph instead. comment by nixtaken · 2020-07-14T15:15:03.015Z · score: 1 (3 votes) · LW(p) · GW(p) This is what the author of the linked figure wrote. I find it quite clear. "With that all said I try to reproduce Figure 7. First I begin by cross-correlating the Hanford and Livingston data, after whitening and band-passing, in a very narrow 0.02s window around GW150914. This produces the following: There is a clear spike here at 7ms (which is GW150914), with some expected “ringing” behaviour around this point. This is a much less powerful method to extract the signal than matched-filtering, but it is completely signal independent, and illustrates how loud GW150914 is." How the prize-winning figure was produced was much less clear, but I didn't go into the details because I wanted to give a larger perspective on the methods employed rather than bury everyone in tedium. comment by gjm · 2020-07-14T16:05:41.741Z · score: 2 (3 votes) · LW(p) · GW(p) You may find it quite clear, but you are interpreting it quite wrong if you think it's some sort of less-processed version of "the prize-winning figure". It's not: it's something completely different. comment by nixtaken · 2020-07-14T16:09:13.269Z · score: 1 (3 votes) · LW(p) · GW(p) If you subtract chirped signals from one another with a slight phase shift, do you get a chirped signal that looks like the initial chirp? This is another method to get the signal in the prize-winning figure. It does not require templates or hand tuning. That was the point I was trying to make. comment by gjm · 2020-07-14T21:35:26.585Z · score: 2 (3 votes) · LW(p) · GW(p) (I wish you wouldn't keep calling it "the prize-winning figure". Obviously Nobel Prizes are not in fact awarded for figures, and I do not believe you have any evidence for the implied claim that if the figure had looked different then the LIGO team wouldn't have won the Nobel Prize.) I'm not sure what point you're now making; it looks to me as if it has nothing to do with what we were talking about before. Are you saying that the LIGO team should have used a different technique to identify gravitational wave events? 
If so, that claim requires much more evidence than "I thought of another way to do it". Or are you saying that some plot they made is in fact the result of subtracting two related signals with a phase shift and that this is some sort of sign of incompetence or fraud or something? Or what? In any case, it seems like you've given up defending your claim that the plot from Ian Harry's article is some sort of "original" less-cleaned-up version of the plot you keep calling "the prize-winning figure". Which is just as well, because that claim is indefensible. comment by nixtaken · 2020-07-15T06:23:41.950Z · score: -2 (5 votes) · LW(p) · GW(p) I haven't given up. I think you should try harder to understand what I wrote above. comment by gjm · 2020-07-15T07:37:43.818Z · score: 2 (3 votes) · LW(p) · GW(p) Whereas I think you should try harder to explain it, because it's not making any sense to me as a justification for your (plainly incorrect) claim about that figure and right now my leading hypothesis is that you just don't understand the mathematics and/or the physics involved well enough to see what's going on and are trying to obfuscate, and there is a (not very high) limit to how much trouble I am willing to go to to understand something that seems likely not to be worth understanding. I might as well answer your question about chirped signals. If you have a signal that looks like $f(t)\sin\phi(t)$, where $f$ is a slowly varying function (compared with the chirpy factor $\sin\phi(t)$), then subtracting a slightly time-shifted copy of it gives you roughly the derivative, which when $f$ varies slowly is roughly $f(t)\,\phi'(t)\cos\phi(t)$, which is indeed a chirped signal that resembles the initial chirp albeit with some extra variation in amplitude. If you have a phase-shifted version available instead of a time-shifted one, the resemblance is closer because the $\phi'(t)$ factor goes away. So yes, subtracting chirpy signals with a small shift gives you similar-ish chirpy signals. Now, how does this give any reason to think that that plot is a less-processed version of "the prize-winning figure"? comment by nixtaken · 2020-07-15T11:49:47.737Z · score: -3 (6 votes) · LW(p) · GW(p) You seem to be close to understanding, but not quite there. What is the difference between a phase shift and a time shift? comment by gjm · 2020-07-15T20:48:04.686Z · score: 8 (5 votes) · LW(p) · GW(p) Nope, not playing any more of that game. If you want to make a point, make it. If you want to hint vaguely that you're smarter than me by posing as Socrates, go ahead if you wish but don't expect my cooperation. comment by gjm · 2020-07-14T13:57:06.775Z · score: 12 (8 votes) · LW(p) · GW(p) Pretty much everything here seems wrong to me. Some comments, in rough order of appearance: You call the EHT a multi-billion-dollar project. I don't think I believe you. Can you provide some actual figures? You say that LIGO-VIRGO "filters their noisy data with a template of what they want to see". _Every_ kind of filtering can, with a sufficient lack of charity, be described that way. (E.g., even the most simple-minded moving-average filter amounts to saying that you're looking for signals with relatively little very high-frequency content, but you expect there to be high frequencies present in the noise.) There is nothing wrong with doing it, either; what matters is how you then analyse the results. If you think LIGO's analysis is wrong, you need to explain how it's wrong; making a complaint that amounts to "they filter their data" is no good; that's what everyone does and there's nothing wrong with it. 
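(To make that concrete, here is a toy sketch, with invented numbers and nothing to do with LIGO's actual code, showing that a moving average and a matched filter are the same operation with different templates:)

```python
import numpy as np

rng = np.random.default_rng(1)

# Made-up example: a weak, known waveform buried in white noise.
n = 2000
template = np.sin(2 * np.pi * 5 * np.linspace(0, 1, 200))  # assumed shape
data = rng.standard_normal(n)
data[900:1100] += 0.5 * template    # inject the "signal" around index 1000

# A moving average is correlation with a boxcar template: it just says
# "I expect my signal to have little high-frequency content."
boxcar = np.ones(50) / 50
smoothed = np.convolve(data, boxcar, mode="same")

# A matched filter is correlation with the waveform you are looking for.
matched = np.correlate(data, template, mode="same")

print("boxcar peak index:        ", np.argmax(np.abs(smoothed)))
print("matched-filter peak index:", np.argmax(np.abs(matched)))
# With this SNR the matched-filter peak should land near 1000, while the
# boxcar peak is essentially wherever the noise happens to pile up.
```

The point is that "they filter their data with a template" describes both; the interesting question is only what you do with the filtered output afterwards.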
You say that it's circular reasoning if you say you've confirmed GR by using GR to construct theories and then checking that your observations match the theories. It's not circular reasoning at all, it's how science works. You take a theory, you put some effort into working out what the theory says you should see, and you look at whether you see that or not. Again, it's very possible to do that wrongly -- confirmation bias is a thing -- but a complaint that amounts to "they claimed to have confirmed a theory by doing experiments based on that theory" is no good; that's what everyone does and there's nothing wrong with it. You say the gravitational wave community has exhibited a "lack of attention to earlier measurements", on the basis that earlier measurements claimed to have found black holes and turned out to be wrong, and LIGO/VIRGO isn't doing the _exact same thing_ that made it possible to check that the earlier claims were wrong, namely combining large numbers of independent verifications. But (1) your description of those earlier measurements doesn't match what's in the article you link to (you say hundreds of independent groups all thought they'd found GWs and they only discovered they were wrong when they combined their results; the article says _one_ researcher claimed to have found GWs, everyone else disagreed, and when they looked they found errors in his analysis), and (2) it is not always the case that when something goes wrong and gets fixed, next time around you should apply the exact same fix in advance; sometimes there are better ways. Repeating an experiment N times reduces the noise by a factor of sqrt(N) (at least for certain common kinds of noise) and there may be ways to reduce it more effectively per dollar spent. You say LIGO fails to use "control variables". This is nonsense. Anything they don't vary is a control variable, and "using control variables" is not a virtue. What you're actually describing in the paragraph beginning "In a well-designed experiment" is a control _group_ or simply a _control_. Some experiments use controls, some don't; it's not clear to me what it would _mean_ to use a control in the case of LIGO, and it seems to me that you could consider _all the times it doesn't detect anything_ to constitute control measurements. You say LIGO "had announced 50 detections" as of 2019-12 but as of 2020-05 "are only standing by 10 of those". But you don't quote what they actually said, or provide any links. The "Open data from the first and second observing runs of Advanced LIGO and Advanced Virgo" paper published on 2019-12-25 says that those runs produced "11 confident detections and 14 marginal triggers". That doesn't look to me like a claim of 50 detections. Could you please be more specific about what they claimed in 2019-12 and what they said in 2020-05? I am betting that if there is anything resembling a "50 detections" claim, it was something like "50 candidate events" and confirming only 10 of them is in no way evidence of anything wrong. You say "They expected to see at least a few black hole collisions and at least one neutron star collision per month since August 2019. It has been 9 months". (Obvious implication: they aren't seeing what they said they would see.) 
According to https://www.ligo.caltech.edu/news/ligo20200326, when they suspended their third observing run near the end of 2020-03 (because of COVID-19) they had seen 56 detections (I don't know whether this means candidates, fully confirmed detections, or what) in the ~400 days of run 3. That's about four per month. Seems to fit their prediction just fine. You complain, again, about LIGO's use of "templates" and compare a couple of graphs to show how different less-filtered data look compared with their published plots. But your plot purporting to be "the same data ... with only a whitening filter and a Fourier transform" is no such thing. Look at the y-axis label: "Cross-correlation". This is the cross-correlation between the Hanford and Livingston signals. It is nothing remotely like the raw data, nor should it be. So far as I can see, in the plot you show ("This was what LIGO published and used to win a Nobel Prize") the data in the top frames is _not_ the result of any sort of template-filtering at all. (I think there's some bandpass filtering, which is absolutely routine, and that's it.) You quote some accusations of "misconduct" on the basis that "pieces of that figure were illustrative and the detection claim is not based on that plot". The thing that's "illustrative" is the _second_ row in the "Nobel Prize" plot, and the point of the remark about how it's only "illustrative" is that the properly-done fit (which _was_ the basis for the detection claim) matches the observed signals _better_. The point of the "illustrative" plots is to let you see by eye that the actually-observed signals have the right sort of shape. You object to the LIGO researchers' response to the Copenhagen objectors because they said (in your paraphrase) "you forgot to use an FFT windowing function even though that is a mistake no physicist would ever make". Well, sometimes physicists make mistakes you wouldn't think they would. The relevant question here is: _Did_ Cresswell et al fail to do it, or didn't they? Green and Moffat say their results look like they did fail. I haven't seen any rebuttal to that. You say that the LIGO researchers have no way to distinguish distant gravitational waves from other possible sources nearer by. Well, obviously one can never prove that an observation doesn't come from some currently unknown source producing signals by currently unknown means, but that criticism applies equally to _all_ observational science. We think we see a supernova a long way off, but maybe it's actually some thing much nearer to us that _just happens_ to have happened exactly in between us and the star we think went supernova. Sure, it's possible, but we have a simpler explanation! And so it is for LIGO. If you have a specific alternative theory for what LIGO has been detecting, let's hear it. (You kinda-sorta, I assume mostly frivolously, mention a possible string of events. "After all, if something dark and mysterious happens in deep space and causes the sun to burp and that causes the Earth's core to gurgle and that causes a lake to heat up which causes a thunderstorm which causes a lightning strike which hits a Schumann resonance and LIGO detects that ...". But obviously that's not relevant because (1) it has no details that would enable us to tell what sort of observations such a sequence of events might produce and (2) something _that_ local would not produce results that would fool LIGO except by extreme coincidence; that's why they have multiple detectors thousands of miles apart.) 
You complain that the black-hole events LIGO claims to have found "are absurdly larger than what they had expected to see based on other measurements and based on the theory of black holes". I would like some details substantiating that complaint. I remark that LIGO would _always_ tend to detect larger-mass black-hole events for the obvious reason that they produce stronger gravitational waves and LIGO needs to be staggeringly sensitive to detect anything at all. Your theoretical objections to the whole mode of operation of LIGO look all wrong to me, in multiple ways, but I am not a general-relativist and won't get into that particular argument (but I remark that if you were right, that seems like the sort of error that I would expect The Scientific Establishment to pounce on instantly, so the fact that LIGO is generally a respectable high-prestige operation is evidence against). And, while we're talking about the actual physics, the following sentence seems to me like evidence of hopeless confusion (on your part, I'm afraid, not that of the scientific establishment): "there will be discontinuities between transverse and longitudinal waves which might be measured as a sort of friction coefficient [...] In olden times, one would call these discontinuities magnetic monopoles, but today, we call them all sorts of things -- displacement currents, positrons, electrons, matter, antimatter, ... black holes". I don't think this makes contact with reality at any point. Somewhere around here, I lost the will to live, so I've paid less detailed attention to the end than to the beginning. (I am very fallible and the chances are that there is at least one mistake in what I've written above. But there would need to be one hell of a lot of mistakes for your complaints about gravitational wave detection to be convincing to me.) comment by nixtaken · 2020-07-14T15:25:28.054Z · score: 1 (3 votes) · LW(p) · GW(p) You asked about how much money was spent on gravitational-wave projects and implied that my assertion that billions have gone into the research was incorrect. I made an estimate in: https://www.quora.com/How-much-money-was-spent-on-the-LIGO-project (I did come to this topic with an ax to grind, but I tried to be a bit less polemical in the article I wrote above.) In 1991, Congress agreed to fund the design phase of LIGO to the tune of 23 million dollars. The construction phase ended in 2002 and was funded by the NSF with an initial grant of 400 million dollars, making it the largest project ever funded by the NSF. The first version didn’t work, so more money was spent through other grant requests. I can’t find a tally of how over budget they went or the other grants which they were able to secure. The grand total between 1990 and 2010 could’ve been a billion dollars or more. Over the 5 years between 2010 and 2015, another 620 million dollars was spent on the “Advanced LIGO upgrade”. More money from international collaborators also flowed in, but that is hard to tally. Some first results were announced with great fanfare starting in 2017 and they plan to continue spending money on LIGO until it reaches its “design sensitivity” in 2021. After announcing first results, the two project leaders collected millions of dollars in prize money. Taken altogether, you have a ~2 billion dollar project which has supported probably a couple thousand professors, students, and scientific equipment manufacturers since the 1990s. On average that would be 35k per year per person - unevenly distributed. 
I find this appalling not because I hate scientists, but because I don’t believe in their interpretation of their experiment and think that the education the people in this project received about how science should be done was very bad. They would’ve been better off paying 2000 people to just think about things carefully for 30 years rather than organizing them into an anthill which builds nonsense. Meanwhile, nobody in the scientific community can openly criticize LIGO because if you want to get a grant, you probably will need approval from a LIGOnaut. Kirsten Hacker's answer to Could the LIGO gravitational wave detections have been caused by frequency shifts of the US electrical power or any other coincidences? Kirsten Hacker's answer to Why does LIGO work? Shouldn't the phase and frequency of the lasers match the expansion/contraction of the reference frame (space)? Kirsten Hacker's answer to How do LIGO scientists know that the gravitational waves detected were generated 1.3 billion years ago? They are building more of these things in Japan, Europe, and India. It is big business - all taxpayer funded. (Most of this info on funding comes from the LIGO article on Wikipedia.) comment by gjm · 2020-07-14T16:08:25.009Z · score: 4 (4 votes) · LW(p) · GW(p) I didn't ask how much was spent on LIGO. I asked how much was spent on EHT. Those are very different projects. (So I'm afraid everything you wrote above was irrelevant to the question I asked. I regret not making it clearer, though I confess I'm not quite sure how I could have made it clearer since what I wrote was "You call the EHT a multi-billion-dollar project. I don't think I believe you. Can you provide some actual figures?".) Also, in case it wasn't obvious, the question about the cost of EHT was very much the least important part of what I wrote; obviously it doesn't make much difference to the rightness or wrongness of your claims about LIGO whether you got the size of the EHT project right or not. comment by nixtaken · 2020-07-14T16:55:20.244Z · score: 3 (2 votes) · LW(p) · GW(p) I'm sorry, I didn't read as closely as I should've. You wrote a lot and I am happy to address individual points, but not all of them at once. I do not have the numbers on hand for EHT, but since it combined the efforts of ~10 radio-frequency telescopes, if EHT kept the telescopes in operation when they would've otherwise lost funding, it may have been quite expensive. The telescopes themselves have surely cost billions to build and operate. One was located at the South Pole. I find EHT absolutely absurd for reasons that I didn't go into in this article, but I gave a talk about that project at IdaLabs in Berlin in March. I wrote it up in this post: https://kirstenhacker.wordpress.com/2020/05/15/of-proteins-people-and-particles/ comment by Dustin · 2020-07-14T19:11:51.615Z · score: 1 (1 votes) · LW(p) · GW(p) It's unclear if you're claiming that you have actual figures that show the EHT actually cost billions of dollars, or if you're claiming that you think it's likely, but just a guess, that it kept all those radio telescopes "in business", or if you're taking back your claim that it cost billions of dollars. comment by nixtaken · 2020-07-14T19:54:48.186Z · score: 1 (1 votes) · LW(p) · GW(p) The telescopes are not cheap, even if they are supported by the work of many institutes. The data center alone for this one is 80 million. 
For the telescope itself: • 5,030 person-hours have been spent working on site, by 27 dedicated team members (including 130 hours by the Student Army, a team of 7 students from Curtin University) since January 2012 to ‘build’ the telescope • 7 km (4.3 miles) of trenching has been dug • 10 km of low-voltage electrical cable has been laid • 16 km of fibre optic cable has been laid – by hand • 42 km of coaxial cable has been dragged and laid – by hand • 9 tonnes of mesh (400 sheets) has been used to create the antenna bases – each lifted and placed by hand • 4,608 RF connectors have been used – each secured by hand I'm not arguing that the telescopes are useless, but rather that some experiments are not useful and that they are harmful to good pedagogy. comment by Dustin · 2020-07-14T20:55:02.693Z · score: 4 (4 votes) · LW(p) · GW(p) "I'm not arguing that the telescopes are useless" It did not seem like you were making such an argument, nor was I asserting that you were making such an argument. The telescope could have cost umpteen trillions of dollars and that fact alone would not support your claim that EHT cost billions of dollars. I'm not sure how to understand the fact that the previous statement is obvious and yet you still made your comments. I feel like the most charitable interpretation that I can come up with still does not leave a good impression of your overall argument. I'm not harping on this apparent mistake for no reason. It's just that of all the things described by gjm this seems like it might be the easiest to explicate. comment by nixtaken · 2020-07-15T06:36:59.839Z · score: 0 (4 votes) · LW(p) · GW(p) It is the easiest to explain because gjm's other points demonstrated a deeper misunderstanding of what I view as fundamental epistemological issues. What is very concerning about EHT and LIGO is that they are used as training facilities for data scientists who apply these methods in other fields. If these flaws see the light of day, an entire industry is threatened - not just black hole astronomy. comment by Dustin · 2020-07-15T20:12:35.659Z · score: 8 (7 votes) · LW(p) · GW(p) Since you seemingly can neither defend nor withdraw your claim that EHT cost billions of dollars, a reasonable person can only assume that the rest of the factual content of your post is suspect. comment by Dustin · 2020-07-19T03:08:32.194Z · score: 3 (2 votes) · LW(p) · GW(p) So, you seem to continue to use a rhetorical device wherein you do not directly address the points that your interlocutors are bringing up and just answer the question you wish was asked. For example, this comment I'm replying to here has almost zero bearing on what I said. Saying EHT is bad is not a way to address the argument that EHT did not cost billions of dollars. EHT may very well be bad, but that has no bearing on the subject at hand. In your previous comment to me in this thread you did the same thing. comment by nixtaken · 2020-07-19T06:39:41.806Z · score: -1 (2 votes) · LW(p) · GW(p) Without a detailed list of the EHT project budget over the past decades, I provided an alternative way to make a rough estimate. comment by Dustin · 2020-07-21T01:56:15.365Z · score: 2 (2 votes) · LW(p) · GW(p) That is a way to make a rough estimate in the same way that providing the construction costs for a whole shopping mall is a way of providing a rough estimate of how much it costs for me to walk in the door of said mall. 
In other words, there are too many unknowns and counterfactuals for that to even begin to be a useful way of calculating how much EHT cost. In a way it's almost beside the point. You made the positive claim, seemingly without any solid facts, that it cost billions of dollars. When you were called on it, a way to increase others' confidence in your arguments and the facts you present would be to say something like "you know, I shouldn't have left that in there, I withdraw that statement". By not doing so and sticking to your guns you increase the weight others give to the idea that you're not being intellectually honest. Your current tack might be useful in political rhetoric in some quarters, but it doesn't seem like it will be effective with your current audience. comment by nixtaken · 2020-07-21T08:16:10.027Z · score: 1 (1 votes) · LW(p) · GW(p) " That is a way to make a rough estimate in the same way that providing the construction costs for a whole shopping mall is a way of providing a rough estimate of how much it costs for me to walk in the door of said mall. " So you think that I grossly underestimated the cost by multiplying the cost of one of the cheaper facilities by the number of facilities? You are probably right, since some of the facilities were in rather inhospitable climes (the South Pole) -- and that would surely add to their cost. I am most certainly sticking to my guns. I've seen no counter-arguments here that hold even a teaspoon of water. I've got you insisting that my estimate of the project cost is dishonest because I don't have a detailed accounting of all ten facilities. I've got gjm insisting that adding and subtracting uncorrelated errors to reduce the error of a measurement is a valid way to do error propagation. (He wrote this in the comments on my The New Scientific Method post.) I've got gjm insisting that organizing randomly scrambled phase data according to 'weirdness' is a valid experimental technique. (His comments on this can be found in my New Scientific Method post.) And I've got the moderator, Oliver, defending gjm's reasoning and insisting that my five articles on the practice of the scientific method do not deserve the 'scientific methods and philosophy' tag for which he is responsible. I believe that he considers himself to be an expert in 'many worlds' quantum mechanics. In short, since I've arrived in this space dedicated to rationality, I've encountered three rather hostile people who have managed to team up to give me a karma of -87 by downvoting all of my comments and posts. I'd like to find out more about what motivates these people. comment by gjm · 2020-07-24T15:57:11.439Z · score: 6 (4 votes) · LW(p) · GW(p) Dustin's point, as I understand it, is not that you overestimated or that you underestimated, nor that you didn't give a detailed accounting of all the facilities involved, it's that you're confusing two completely different questions. (1) How much did the EHT project cost? (2) How much did the telescopes used by the EHT project cost to build and run? You made a claim about #1 and when challenged on it offered some numbers relating to #2. You do say one thing that purports to link them: "... if EHT kept the telescopes in operation when they would've otherwise lost funding ...". But that's one heck of a big if and I know of no reason to think that EHT kept any telescopes in operation that would otherwise have lost funding. 
And even if it did, that wouldn't justify including the cost of building the telescopes in your estimate of the cost of EHT, unless the telescopes in question were never used for anything other than EHT. (One journalistic outlet has given a concrete estimate for the cost of the EHT project. They say 50 to 60 million dollars. I don't know where they got that estimate or how much to trust it, but it sounds much, much more believable to me than your "billions of dollars".) comment by nixtaken · 2020-08-02T19:19:28.387Z · score: -6 (4 votes) · LW(p) · GW(p) Whenever people come to vastly different estimates of how much a project costs, one that calculates opportunity cost and another that makes a naive estimate of accounting costs, there is a lesson to be learned about the importance of multidisciplinary education that trains people to think along multidimensional lines. This would prevent discussions like this one from happening so often. comment by gjm · 2020-08-05T07:30:21.573Z · score: 2 (1 votes) · LW(p) · GW(p) Calculating opportunity costs is great, but that isn't what you did. comment by gjm · 2020-07-24T16:13:03.295Z · score: 5 (3 votes) · LW(p) · GW(p) Your descriptions of what I said in the comments on "The New Scientific Method" are not accurate. They are like your purported quotations from Katie Bouman's talk (though at least you didn't put them in quotation marks this time): in condensing what I actually said into a brief and quotable form, you have apparently attempted to make it sound as silly as possible rather than summarizing as accurately as possible. I think you shouldn't do that. (My description in terms of "weirdness" was meant to help to clarify what is going on in an algorithm that you criticized but apparently hadn't understood well. It turns out that it was a mistake to try to be as clear and helpful as possible, rather than writing defensively so as to make it as difficult as possible for someone malicious to pick things that sound silly.) I already told you (in comments on that other post) what motivates me: bad science, and especially proselytizing bad science, makes me sad. It makes me especially sad when it happens on Less Wrong, which aims to be a home for good clear thinking. Having seen the previous iteration of Less Wrong badly harmed by political cranks who exploited the (very praiseworthy) local culture of taking ideas seriously even when they are nonstandard or appear bad at a first glance, I am not keen to leave uncriticized a post that is confidently wrong about so many things. I don't know what anyone else may have done, but I at least have not downvoted all your comments and posts. I have downvoted some specific things that seem to me badly wrong; that's what downvoting is meant for. (As it happens, it looks to me as if you have downvoted all my comments on your posts.) comment by nixtaken · 2020-08-02T19:28:53.340Z · score: -6 (4 votes) · LW(p) · GW(p) When noneconomical language is used to obfuscate, it is necessary to paraphrase in order to restore clarity to the discussion and make the simple, silly, underlying errors easier to see. I have made 6 posts on Less Wrong about physics experiments that I find to be particularly bad in their understanding of the scientific method and in their experimental design. You have chosen to defend two of those experiments at length. That you frame your defense of these experiments as an attack on 'bad science' (i.e. 
me) suggests that you may be suffering from cognitive dissonance and using projection to comfort yourself. comment by Frank Bellamy (frank-bellamy) · 2020-07-14T13:44:37.991Z · score: 11 (6 votes) · LW(p) · GW(p) A lot of this reads like you are trying to apply the structure of an experiment to a thing that is, um, not an experiment. Like, we all learn the steps of an experiment in school (where they often incorrectly call the experimental method "the scientific method"). But there are whole sciences, like astronomy, and cosmology, and geology, that don't do experiments, they just make observations and analyze them in the context of what we already know from experiments in other areas of science. That is what LIGO does. We can't do experiments on gravitational waves, because we don't have the capacity to produce gravitational waves. All we can do is observe them. And that is still a perfectly valid scientific endeavor. And in particular, it is a scientific endeavor in which the notion of a "control" doesn't seem to make a whole lot of sense. Now, I don't have the technical competence to evaluate these kinds of high-level physics things for myself, I don't know the math of general relativity, so I'm not going to try. But I generally trust the scientific community, and I'm not going to update much on a blog post that seems to misunderstand what these things are trying to do. comment by nixtaken · 2020-07-14T15:31:52.435Z · score: 2 (3 votes) · LW(p) · GW(p) Thank you for explaining what confused you about how I presented this topic. I tried to draw these themes together in the final paragraph: "The best that science can do is to create a controlled experiment in which changing one variable causes another variable to change. Since this is not possible in astronomy, we must rely on astronomers to exercise restraint when they interpret their data. That restraint appears to be sorely lacking in the present-day community." But maybe I should emphasise this earlier. The social pressures within the LCDM cosmology establishment are rather unusual and they are elaborated within the links in the last paragraph. comment by Pattern · 2020-07-16T19:40:08.201Z · score: 3 (2 votes) · LW(p) · GW(p) Figuring out that past detections were false seems like a case of trying to replicate earlier findings, i.e. doing things right. LIGO has no way to distinguish its signature from some more mundane, local occurrence. Why not? This?: If you detect a wave at two locations and you know its propagation speed, you can determine the direction from which the wave came, but not the distance to the thing that caused the wave. APS journals charge two thousand dollars Wow. After putting effort into debunking this stuff, I do ask myself why I bother and I think that I want to demystify the pop-sci nonsense used to lure young people into physics servitude. I think they might find better things to do with their time and I’d like to help them avoid the mistakes I made. comment by Said Achmiz (SaidAchmiz) · 2020-07-13T23:00:46.980Z · score: 3 (2 votes) · LW(p) · GW(p) If you would like to hear this post read aloud, try this video. Meta: the video didn’t make it through the cross-posting, it seems. (I am not sure if Less Wrong supports video embedding; I think it may not. You might want to just link the video.) comment by habryka (habryka4) · 2020-07-13T23:39:29.686Z · score: 3 (2 votes) · LW(p) · GW(p) I just edited the post and added in a link. We don't currently support video embeddings. 
comment by nixtaken · 2020-07-15T17:20:26.052Z · score: 1 (1 votes) · LW(p) · GW(p) There was a second video in the post, later on. comment by Leafcraft · 2020-07-14T11:33:30.421Z · score: 2 (4 votes) · LW(p) · GW(p) Thanks for the interesting read. I absolutely lack the background to comment on your conclusions, but your post made me remember some questions I had on Black Holes that no physicist I talked to could answer, I never would have guessed the field had detractors. If you don't mind me asking, are you also a climate change skeptic? comment by nixtaken · 2020-07-15T08:04:28.353Z · score: 5 (4 votes) · LW(p) · GW(p) I am aware that the climate changes and that it is necessary to increase the efficiency of our use of fossil fuels. comment by Leafcraft · 2020-07-15T10:23:29.495Z · score: 1 (1 votes) · LW(p) · GW(p) I see. With regard to your post and the origin of the black hole "pic", do you believe that applying the same pipeline to random images or even noise would generate a similar result? comment by nixtaken · 2020-07-15T11:45:35.288Z · score: -3 (4 votes) · LW(p) · GW(p) The leader of that project said exactly that when she sold the project to the public in her TED talk. Yet she was undeterred. Amazing. comment by gjm · 2020-07-16T01:45:19.814Z · score: 5 (5 votes) · LW(p) · GW(p) There is a TED talk by (actually, an interview with, as part of TED2019) Sheperd Doeleman, head of the EHT collaboration, whose transcript you can read on the TED website. It doesn't say anything even slightly like that. Is there some other TED talk by her that you're referring to? (I can't find any evidence that there is another.) The only other thing I can find that you conceivably might be referring to is a TEDx talk by Katie Bouman, from 2017 (before the EHT picture was produced). Her title is "How to take a picture of a black hole" and it includes a prediction of roughly what the picture might be expected to look like, and includes the words "my role in helping to take the first image of a black hole is to design algorithms that find the most reasonable image that also fits the telescope measurements". Maybe that's what you mean? She doesn't say "exactly", or even approximately, that applying the same pipeline to random input would generate a similar result. Quite the reverse; let me quote her again. "What would happen if Einstein's theories didn't hold? We'd still want to reconstruct an accurate picture of what was going on. If we bake Einstein's equations too much into our algorithms, we'll just end up seeing what we expect to see. In other words, we want to leave the option open for there being a giant elephant at the centre of our galaxy." She says, in other words, that a key consideration in their work was not doing exactly what you say she said they did. (Shortly after that bit there is a slide that, if wilfully misunderstood, might seem to fit your description. Its actual meaning is pretty much the reverse. I won't go into details right now because I don't know whether you saw that slide and misunderstood it; I don't know whether this is the TED talk you're referring to at all. But I guess this is it.) Incidentally: Katie Bouman was a PhD student, was not an astronomer, and was certainly not the leader of the EHT project. The project was already happening and already funded, but I suppose you could call her talk "selling the project to the public" in the sense in which any attempt to describe anything neat one's doing is "selling the project". Bah. 
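(For concreteness: the quoted sentence, "find the most reasonable image that also fits the telescope measurements", cashes out as regularized inversion of sparse Fourier samples. What follows is a minimal toy sketch of that general idea, with made-up dimensions and a plain L2 prior; it is not the EHT pipeline:)

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "sky": a 16x16 image containing one off-center Gaussian blob.
n = 16
x, y = np.meshgrid(np.arange(n), np.arange(n))
sky = np.exp(-((x - 10)**2 + (y - 6)**2) / 4.0).ravel()

# A sparse-aperture instrument samples only a few Fourier components
# ("visibilities"), far fewer than the n*n pixels we want to recover.
m = 40
u = rng.uniform(-0.5, 0.5, m)
v = rng.uniform(-0.5, 0.5, m)
F = np.exp(-2j * np.pi * (np.outer(u, x.ravel()) + np.outer(v, y.ravel())))
A = np.vstack([F.real, F.imag])          # real-valued measurement matrix

data = A @ sky + 0.01 * rng.standard_normal(2 * m)   # noisy measurements

# "Most reasonable image that also fits the measurements": minimize
# ||A i - data||^2 + lam * ||i||^2   (least squares plus an L2 prior).
lam = 1.0
img = np.linalg.solve(A.T @ A + lam * np.eye(n * n), A.T @ data)

print("correlation with true sky:", round(np.corrcoef(img, sky)[0, 1], 3))
```

Feed the same solver pure noise and the prior still returns some image, which is exactly the failure mode the talk says they guard against by not baking the expected answer into the prior.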
comment by nixtaken · 2020-07-16T12:53:54.116Z · score: 1 (1 votes) · LW(p) · GW(p) Links to TED and CalTech talks by Bouman and to a talk I gave at IdaLabs in Berlin can be found in this post: https://kirstenhacker.wordpress.com/2020/05/15/of-proteins-people-and-particles/ comment by gjm · 2020-07-17T13:29:15.035Z · score: 8 (5 votes) · LW(p) · GW(p) So, you did mean the Bouman talk I found. As I say, she wasn't "the leader of that project" and she did not say what you say she did. The particular things that you claim there are "absurd" are not absurd, it's just that you don't understand the procedures they describe and are taking them in the most uncharitable way possible. (I haven't listened to the CalTech talk so can't comment with any authority on what Bouman meant by all the things you quote her as having said there, but it is absolutely not true that "any single one of the statements [] would disqualify an experiment", and amusingly the single statement you choose to attack there at greatest length is the most obviously not-disqualifying. You say, and I quote, "Most sensible researchers would agree that if the resolution of your experiment is equivalent to taking a picture of an orange on the moon, this means that you cannot do your experiment.". You appear to be arguing that if something sounds impossibly hard, then you should just assume that it is, literally, impossibly hard and that it can never be done. Once upon a time, "equivalent to speaking in New York and being heard in Berlin" would have sounded like it meant impossibly hard. Once upon a time, "equivalent to adding up a thousand six-digit numbers correctly in a millisecond" would have sounded like it meant impossibly hard. Some things that sound impossibly hard turn out to be possible. The EHT folks claim that taking a picture with orange-on-the-moon resolution turns out to be possible. Of course they could be wrong but they aren't obviously wrong; what they're claiming breaks no known laws of physics, for instance. And obviously they aren't unaware that getting a picture of an orange on the moon is very difficult. So I think it's downright ridiculous to say that their project is unreasonable because they're trying to do something that sounds impossibly hard.) comment by nixtaken · 2020-07-17T14:50:29.778Z · score: 1 (1 votes) · LW(p) · GW(p) She was the leader of the data analysis team and in charge of presenting the data to the public. It would be absurd if a PhD student were the leader of a billion-dollar project that spanned decades. The assumptions the EHT collaboration made about the resolution of their measurement were absurd and comparing it to taking a picture of an orange on the moon was appropriate. If you would like to see a more complete analysis of what Bouman said in her talks about EHT, I've posted another article here: https://www.lesswrong.com/posts/oAsHa6xYMTBJWJGX6/the-new-scientific-method [LW · GW] The article posted here is intended to focus on LIGO, EHT's sister project.
# On “Invention” When I was a little younger than Ahmed Mohamed is now, I invented the distance formula for Cartesian coordinates. I wanted to make a simulation of bugs that ran around and ate each other. To implement a rule like “when the predator is near the prey, it will chase the prey,” I needed to compute distances between points given their $x$- and $y$-coordinates. I knew BASIC, and I knew the Pythagorean Theorem. However many people had solved that before me, it wasn’t written down in any book that I had, so I took what I knew and figured it out. Those few pages of PowerBASIC on MS-DOS never amounted to much by themselves, but simulating ecosystems remained an interest of mine. I returned to the general idea now and then as I learned more. And then, hey, what’s this? It looks like a PhD thesis. “I bet every great mathematician started by rediscovering a bunch of ‘well known’ results.” —Donald Knuth, Surreal Numbers # Your Password Your password must contain a pound of flesh. No blood, nor less nor more, but just a pound of flesh. Your password must contain all passwords which do not contain themselves. Your password must contain any letter of the alphabet save the second. NOT THE BEES! NOT THE BEES! Your password must contain a reminder not to read the comments. Really. You’ll thank us. Your password must have contained the potential within itself all along. # Google Scholar Irregularities Google Scholar is definitely missing citations to my papers. The cited-by results for “Some Negative Remarks on Operational Approaches to Quantum Theory” [arXiv:1401.7254] on Google Scholar and on INSPIRE are completely nonoverlapping. Google Scholar can tell that “An Information-Theoretic Formalism for Multiscale Structure in Complex Systems” [arXiv:1409.4708] cites “Eco-Evolutionary Feedback in Host–Pathogen Spatial Dynamics” [arXiv:1110.3845] but not that it cites My Struggles with the Block Universe [arXiv:1405.2390]. Meanwhile, the SAO/NASA Astrophysics Data System catches both. This would be a really petty thing to complain about, if people didn’t seemingly rely on such metrics. EDIT TO ADD (17 November 2014): Google Scholar also misses that David Mermin cites MSwtBU in his “Why QBism is not the Copenhagen interpretation and what John Bell might have thought of it” [arXiv:1409.2454]. This maybe has something to do with being worse at detecting citations in footnotes than in endnotes. # Can Stephen Wolfram Catch Carmen Sandiego? Via Chris Granade, I learned we now have an actual implementation of Wolfram Language to play around with. Wolfram lauds the Wolfram Programming Cloud, the first product based on the Wolfram Language: My goal with the Wolfram Language in general—and Wolfram Programming Cloud in particular—is to redefine the process of programming, and to automate as much as possible, so that once a human can express what they want to do with sufficient clarity, all the details of how it is done should be handled automatically. [my emphasis] Ah. You mean, like programming? Wolfram’s example of the Wolfram Programming Cloud is “a piece of code that takes text, figures out what language it’s in, then shows an image based on the flag of the largest country where it’s spoken.” The demo shows how the WPC maps the string good afternoon to the English language, the United States and thence to the modern US flag. English is an official language of India, which exceeds the US in population size, and of Canada, which exceeds the US in total enclosed area. 
The Wolfram Language documentation indicates that “LargestCountry” means “place with most speakers”; by this standard, the US comes out on top (roughly 300 million speakers, versus 125 million for India and 28 million for Canada). But that’s not the problem we were supposed to solve: “place with most speakers” is not the same as “largest country where the language is spoken.” Even the programming languages which are sold as doing what you mean still just do what you say.

# Wolfram Language…

…because nothing says “stable platform for mission-critical applications” like “from the makers of Mathematica!”

Carl Zimmer linked to this VentureBeat piece on Wolfram Language with the remark, “Always interesting to hear what Stephen Wolfram is up to. But this single-source style of tech reporting? Ugh.” I’d go further: the software may well eventually provide an advance in some respect, but the reporting is so bad, we’d never know. We’re told “a developer can use some natural language.” What, like the GOTO command? That’s English. Shakespearean, even. (“Go to, I’ll no more on’t; it hath made me mad.” Hamlet, act 3, scene 1.) We’re told that “literally anything” will be “usable and malleable as a symbolic expression”—wasn’t that the idea behind LISP? We’re told, awkwardly, that “Questions in a search engine have many answers,” with the implication that this is a bad thing (and that Wolfram Alpha solved that problem). We are informed that “instead of programs being tens of thousands of lines of code, they’re 20 or 200.” Visual Basic could claim much the same. We don’t push functionality “out to libraries and modules”; we use the Wolfram Cloud. It’s very different! (Mark Chu-Carroll points out, “What’s scary is that he thinks that not pushing things to libraries is good!”) The “wink, wink, we’re not not comparing Wolfram to Einstein” got old within a sentence, too.

I have actual footage of Wolfram from the Q&A session of that presentation: “I am my own reality check.” —Stephen Wolfram (1997)

# Citing Tweets in LaTeX

Need to cite Twitter posts in your LaTeX documents? Of course you do! Want someone else to modify the utphys BibTeX style to add a “@TWEET” option so you don’t have to do it yourself? Of course you do!

Style file:

Example document:

\documentclass[aps,amsmath,amssymb]{revtex4}
\usepackage{amsmath,amssymb,hyperref}
\begin{document}
\bibliographystyle{utphystw}
\title{Test}
\author{Blake C. Stacey}
\date{\today}
\begin{abstract}
Only a test!
\end{abstract}
\maketitle
As indicated, this is only a test.\cite{stacey2011,sfi2011}
\bibliography{twtest}
\end{document}

And the example bibliography file:

@TWEET{stacey2011,
  author={Blake Stacey},
  authorid={blakestacey},
  year={2011},
  month={July},
  day={25},
  tweetid={95521600597786624},
  tweetcontent={I find it hard to tell, in some areas of science, whether I am a radical or a curmudgeon.}}

@TWEET{sfi2011,
  author={anon},
  authorid={OverheardAtSFI},
  year={2011},
  month={June},
  day={23},
  tweetid={84018131441422336},
  tweetcontent={The brilliance of the word ``Complexity'' is that it means just about anything to anybody.}}

PDF output:

# Interactivelearn

A few complaints about the place of computers in physics classrooms.

Every once in a while, I see an enthusiastic discussion somewhere on the Intertubes about bringing new technological toys into physics classrooms. Instead of having one professor lecture at a room of unengaged, unresponsive bodies, why not put tools into the students’ hands and create a new environment full of interactivity and feedback?
Put generically like that, it does sound intriguing, and new digital toys are always shiny, aren’t they? Prototypical among these schemes is MIT’s “Technology Enabled Active Learning” (traditionally and henceforth TEAL), which, again, you’d think I’d love for the whole alma mater patriotism thing. (“Bright college days, O carefree days that fly…”) I went through introductory physics at MIT a few years too early to get the TEAL deal (I didn’t have Walter Lewin as a professor, either, as it happens). For myself, I couldn’t see the point of buying all those computers and then using them in ways which did not reflect the ways working physicists actually use computers. Watching animations? Answering multiple-choice questions? Where was the model-building, the hypothesis-testing through numerical investigation? In 1963, Feynman was able to explain to Caltech undergraduates how one used a numerical simulation to get predictions out of a hypothesis when one didn’t know the advanced mathematics necessary to do so by hand, or if nobody had yet developed the mathematics in question. Surely, forty years and umpteen revolutions in computer technology later, we wouldn’t be moving backward, would we? Everything I heard about TEAL from the students younger than I — every statement without exception, mind — was that it was a dreadful experience, technological glitz with no substance. Now, I’ll freely admit there was probably a heckuva sampling bias involved here: the people I had a chance to speak with about TEAL were, by and large, other physics majors. That is, they were the ones who survived the first-year classes and dove on in to the rest of the programme. So, (a) one would expect they had a more solid grasp of the essential concepts covered in the first year, all else being equal, and (b) they may have had more prior interest and experience with physics than students who declared other majors. But, if the students who liked physics the most and were the best at it couldn’t find a single good thing to say about TEAL, then TEAL needed work. If your wonderful new education scheme makes things somewhat better for an “average” student but also makes them significantly worse for a sizeable fraction of students, you’re doing something wrong. The map is not the territory, and the average is not the population. It’s easy to dismiss such complaints. Here, let me give you a running start: “Those kids are just too accustomed to lectures. They find lecture classes fun, so fun they’re fooled into thinking they’re learning.” (We knew dull lecturers when we had them.) “Look at the improvement in attendance rates!” (Not the most controlled of experiments. At a university where everyone has far too many demands made of their time and absolutely no one can fit everything they ought to do into a day, you learn to slack where you can. If attendance is mandated in one spot, it’ll suffer elsewhere.) Or, perhaps, one could take the fact that physics majors at MIT loathed the entire TEAL experience as a sign that what TEAL did was not the best for every student involved. If interactivity within the classroom is such a wonderful thing, then is it so hard to wonder if interactivity at a larger scale, at the curricular level, might be advisable, too? It’s not just a matter of doing one thing for the serious physics enthusiasts and another for the non-majors (to use a scandalously pejorative term). 
What I had expected the Technological Enabling of Active Learning to look like is actually more like another project from MIT, StarLogo. Unfortunately, the efforts to build science curricula with StarLogo have been going on mostly at the middle- and high-school level. Their accomplishments and philosophy have not been applied to filling the gaps or shoring up the weak spots in MIT’s own curricula. For example, statistical techniques for data analysis aren’t taught to physics majors until junior year, and then they’re stuffed into Junior Lab, one of the most demanding courses offered at the Institute. To recycle part of an earlier rant: Now, there’s a great deal to be said for stress-testing your students (putting them through Degree Absolute, as it were). The real problem was that it was hard for all the wrong reasons. Not only were the experiments tricky and the concepts on which they were based abstruse, but also we students had to pick up a variety of skills we’d never needed before, none of them connected to any particular experiment but all of them necessary to get the overall job done. What’s more, all these skills required becoming competent and comfortable with one or more technological tools, mostly of the software persuasion. For example: we had to pick up statistical data analysis, curve fitting and all that pretty much by osmosis: “Here’s a MATLAB script, kids — have at it!” This is the sort of poor training which leads to sinful behaviour on log-log plots in later life. Likewise, we’d never had to write up an experiment in formal journal style, or give a technical presentation. (The few experiences with laboratory work provided in freshman and sophomore years were, to put it simply, a joke.) All this on top of the scientific theory and experimental methods we were ostensibly learning! Sure, it’s great to throw the kids in the pool to force them to swim, but the water is deep enough already! To my way of thinking, it would make more sense to offload those accessory skills like data description, simulation-building, technical writing and oral presentation to an earlier class, where the scientific content being presented is easier. Own up to the fact that you’re the most intimidating major at an elite technical university: make the sophomore-year classes a little tougher, and junior year can remain just as rough, but be so in a more useful way. We might as well go insane and start hallucinating for the right reason. Better yet, we might end up teaching these skills to a larger fraction of the students who need them. Why should education from which all scientists could benefit be the exclusive province of experimental physicists? I haven’t the foggiest idea. We have all these topics which ought to go into first- or second-year classes — everyone needs them, they don’t require advanced knowledge in physics itself — but the ways we’ve chosen to rework those introductory classes aren’t helping. To put it another way: if you’re taking “freshman physics for non-majors,” which will you use more often in life: Lenz’s Law or the concept of an error bar? # Updates In the wake of ScienceOnline2011, at which the two sessions I co-moderated went pleasingly well, my Blogohedron-related time and energy has largely gone to doing the LaTeXnical work for this year’s Open Laboratory anthology. I have also made a few small contributions to the Azimuth Project, including a Python implementation of a stochastic Hopf bifurcation model. 
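That stochastic Hopf bifurcation model is a nice excuse for a small numerical sketch. The Azimuth code itself is not reproduced here; what follows is only a generic Euler-Maruyama integration of a noisy Hopf normal form, with made-up parameter values:

```python
# Euler-Maruyama integration of a stochastic Hopf normal form:
#   dz = ((mu + i*omega) z - |z|^2 z) dt + sigma dW,  z complex.
# For mu > 0 the deterministic system has a limit cycle of radius sqrt(mu);
# the noise makes the trajectory jitter around that cycle.
import numpy as np

rng = np.random.default_rng(1)
mu, omega, sigma = 0.1, 1.0, 0.05    # illustrative parameters only
dt, steps = 1e-3, 100_000
z = 0.01 + 0j                        # start near the unstable fixed point

for _ in range(steps):
    drift = (mu + 1j * omega) * z - abs(z) ** 2 * z
    noise = sigma * np.sqrt(dt) * (rng.normal() + 1j * rng.normal())
    z += drift * dt + noise

print(abs(z))   # fluctuates near sqrt(mu) ~ 0.316
```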
I continue to fall behind in writing the book reviews I have promised (to myself, if to nobody else). At ScienceOnline, I scored a free copy of Greg Gbur’s new textbook, Mathematical Methods for Optical Physics and Engineering. Truth be told, at the book-and-author shindig where they had the books written by people attending the conference all laid out and wrapped in anonymizing brown paper, I gauged which one had the proper size and weight for a mathematical-methods textbook and snarfed that. On the logic, you see, that if anyone who was not a physics person drew that book from the pile, they’d probably be sad. (The textbook author was somewhat complicit in this plan.) I am happy to report that I’ve found it a good textbook; it should be useful for advanced undergraduates, procrastinating graduate students and those seeking a clear introduction to techniques used in optics but not commonly addressed in broad-spectrum mathematical-methods books. # Gogo Proxy Modern air travel! The worst trouble I had with the in-flight WiFi service (on my return from Skepticon 3) was that it didn’t work, or, rather, that it worked for less than the time necessary to load a page. A friend of mine travelling on the same day had a more interesting issue: the Internet connection gave him someone else’s identity. He went through the procedure to sign up for the Gogo Inflight Wifi, logged into Facebook and realized he was seeing someone else’s news feed. With someone else’s picture on the page. Using a total stranger’s account. Upon reloading, the same thing happened, but with a second stranger taking the place of the first. HTTP Proxies are strange and mysterious things. # Python Exercise: The Logistic Map Nostalgi-O-Vision, activate! A month or so after I was born, my parents bought an Atari 400 game console. It plugged into the television set, and it had a keyboard with no moving keys, intended to be child- and spill-proof. Thanks to the box of cartridges we had beside it, Asteroids and Centipede were burnt into my brain at a fundamental level. The hours I lost blowing up all my own bases in Star Raiders — for which accomplishment the game awarded you the new rank of “garbage scow captain” — I hesitate to reckon. We also had a Basic XL cartridge and an SIO cassette deck, so you could punch in a few TV screens’ worth of code to make, say, the light-cycle game from TRON, and then save your work to an audio cassette tape. From my vantage point in the twenty-first century, it seems so strange: you could push in a cartridge, close the little door, turn on your TV set and be able to program. # Right Skill, Right Time OK, first of all, let me say that there exist few better ways to procrastinate than reading an essay on time management. Terry Tao has lots of suggestions; following a fraction of them would probably make me a better human being. One item, though, is worth special attention: It also makes good sense to invest a serious amount of time and effort into learning any skill that you are likely to use repeatedly in the future. A good example in mathematics is LaTeX: if you plan to write a lot of papers, it makes sense to go beyond the bare minimum of skill needed to jerry-rig whatever you need to write your paper, and go out and seriously learn how to make tables, figures, arrays, etc. Recently I’ve been playing with using prerecorded macros to type out a standard block of LaTeX code (e.g. 
\begin{theorem} … \end{theorem} \begin{proof} … \end{proof}) in a few keystrokes; the actual time saved per instance is probably minimal, but it presumably adds up over time, and in any event feels like you’re being efficient, which is good for morale (which becomes important when writing a long paper).

The risk is that you might end up a freak like me: after you’ve defined a few macros for moments and cumulants and partial derivatives, you get bitten by a radioactive backslash key and start typing all your class notes in LaTeX while the professor is lecturing.

That aside, thinking about the proper time to learn these “accessory skills” puts me in the mood for a rant. (Well, what doesn’t?) MIT did an exasperating thing with its undergraduate physics programme shortly before my time. The way I heard the story, they’d been afraid of losing students to other majors, so they dumbed down the sophomore-year classes (virtually excising Lagrangian mechanics, for example). We were left with a “waves and vibrations” class which was rather a junk drawer of different examples; a quantum-mechanics course which lacked guts and thus forsook glory; a decent introduction to statistical mechanics; and a relativity class which, hamstrung by fear of sophistication, also suffered because it lacked a singing Max Tegmark.

# A Survey for Curmudgeons

I have a simulation happily grinding away in the background, using one core of my spiffy new dual-core system, doing my work for me, so not only do I have a moment to procrastinate, but also I should be happy about new technology. However, the headphones which came with the iPod nano I got for Christmas picked today to fall apart. The earbud doodad is beside itself with the joy it feels at being part of a cultural icon, I suppose. Given that the iPod itself had to be reformatted twice and connected to three different computers before it was able to receive music, that the interface packs more absurdity into its purported simplicity than I would have imagined possible, and that consequently it has relegated itself to the status “device which plays “Mandelbrot Set” on demand,” having the headphones cheap out on me is rather like salting the fields after Steve Jobs has burnt the city.

All this to say that today I’m in a mood for appreciating old things which work. Geoffrey Pullum wrote, four years ago,

Shall I tell you how The Cambridge Grammar of English was prepared? (I am not changing the subject; trust me.) The book is huge: 1,859 printed pages. The double-spaced manuscript was about 3,500 pages (yes, it actually had to be printed out and written on by a copy editor the old-fashioned way). It took over ten years to write. And it was done using WordPerfect 6 for DOS. Rodney Huddleston chose to upgrade to that around 1989, wrote a couple of hundred complex macros, and stuck with it. I learned the WP DOS macro language in order to collaborate on the project. WordPerfect was basically in its final, completed form before Clinton first ran for office. It works. The file format is fine for authors, and records everything we need to record. Rodney and I are still using WP6 file format today to write our planned student’s introduction to English grammar. In all the years since the late 1970s, WordPerfect has not altered the file format: all the largely pointless upgrades in the program have been backward compatible. The format really does the job. But things are different with the WordPerfect program itself.
The progress has largely been backward. The things we have noticed about version differences are minor, but they all tell in the same direction: every upgrade is a downgrade. Forget the Clinton administration: TeX basically solved the problem of representing mathematical equations as text, during Reagan’s first term. The LaTeX macro language, which handles document-scale organization, is almost as old. Perhaps we’re stuck at a local maximum, and with luck and pluck we could find a better way, and on some days, that seems almost mandatory. Still, we’re at a pretty darn good local maximum, as local extrema go. (Something deep within me finds a resonance with PyTeX, an attempt to have Python sit on top of TeX the way LaTeX does, but the project seems to be moribund.) The question for today, then, is the following: What are your favorite Old Things That Work, and which changeless relics really do need a shake-up? Previous surveys: Comments on all the above remain open. # In Which Blake Fails Not noticing the tiny, unobtrusive switch on the side of your laptop which is labeled “WIRELESS ON / OFF” and wondering why you are no longer detecting any WiFi networks: FAIL. Trying to troubleshoot your switched-off WiFi by digging through kernel module configurations: FAIL. Attempting to connect using the Ethernet card and a hub which turns out to be non-functional: FAIL. Finally switching the WiFi to the ON position, connecting to the Internet and realizing, “Hooray, now I can get back to work on that proceedings book for the conference which happened four years ago” — EPIC FAIL.
Volume 321 - Sixth Annual Conference on Large Hadron Collider Physics (LHCP2018) - Parallel Performance

Tracking, alignment, and b-tagging performance and prospects in the ATLAS experiment

N. Styles* on behalf of the ATLAS collaboration
*corresponding author

Full text: pdf
Pre-published on: 2018 September 20

Abstract

Procedures employed for the reconstruction of charged particle tracks used by the ATLAS experiment are outlined, and their performance in recent LHC Run 2 data is discussed, with focus on aspects such as the stability with respect to time and instantaneous luminosity, and performance within high-$p_T$ jets. The track-based alignment of the ATLAS Inner Detector is introduced, and its importance to track reconstruction is demonstrated. The current level of understanding of detector deformations affecting track parameter determination is shown, and planned future improvements are outlined. Methods devised to distinguish $b$- and $c$-quark jets from light-flavour jets are introduced. Techniques to extract and calibrate the performance of these methods on data and Monte Carlo are examined, and their results compared. Finally, forthcoming improvements in these areas are highlighted.

Open Access
# Interpreting the elbow plot

Based on the elbow plot you generated in the previous exercise for the lineup data, which of these interpretations are valid?
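For context, an elbow plot of this kind typically comes from running k-means for a range of cluster counts and plotting the within-cluster sum of squares against k; the "elbow" where the curve flattens suggests a reasonable number of clusters. A generic sketch, not the exercise's actual code (random data stands in for the lineup dataset):

```python
# Generate an elbow plot: within-cluster sum of squares vs. number of clusters.
import numpy as np
import matplotlib.pyplot as plt
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2))            # placeholder for the lineup data

ks = range(1, 11)
inertias = [KMeans(n_clusters=k, n_init=10, random_state=0).fit(X).inertia_
            for k in ks]                 # inertia_ = within-cluster sum of squares

plt.plot(ks, inertias, "-o")
plt.xlabel("number of clusters k")
plt.ylabel("within-cluster sum of squares")
plt.show()
```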
TECHNICAL PAPERS

# Active Vibration Control of a Flexible Beam Using a Buckling-Type End Force

Author and Article Information:
Shahin Nudehi, Steven W. Shaw (Department of Mechanical Engineering, Michigan State University, East Lansing, MI 48824-1226); Ranjan Mukherjee (Department of Mechanical Engineering, Michigan State University, East Lansing, MI), [email protected]

Footnotes: In fact, this buckling load can easily be derived by noting that the line of action of the end load always passes through the two end points of the beam, resulting in a situation that is equivalent to the buckling problem of a beam with pinned ends. If the force is applied for relatively short durations, it may be possible to utilize loads larger than the buckling load; however, in this preliminary study, we limit ourselves to the conservative assumption. $C_1$ should not be confused with the positive definite square matrix $C$ in Eq. 25. Digital signal processor. The settling time was defined as $T_s = 4/\zeta\omega$, and both $\zeta$ and $\omega$ were computed numerically from the vibration plots.

J. Dyn. Sys., Meas., Control 128(2), 278-286 (Mar 25, 2005) (9 pages); doi:10.1115/1.2192836
History: Received May 12, 2004; Revised March 25, 2005

## Abstract

In this paper, we explore the use of end forces for vibration control in structural elements. The process involves vibration measurement and observer-based estimation of modal amplitudes, which are used to determine when to apply an end load such that it will remove vibration energy from the structure. For this study, we consider transverse vibration of a cantilever beam with a buckling-type end load that can be switched between two values, both of which are below the buckling load. The stability of the control system is proven using Lyapunov stability theory and its effectiveness is demonstrated using simulations and physical experiments. It is shown that the effectiveness of the approach is affected by the bandwidth of the actuator and the attendant characteristics of the filter, the level of the control force, and the level of bias in the end force. The experiments employ a beam fitted with a cable mechanism and motor for applying the end force, and a piezoelectric patch for taking vibration measurements. It is shown that the first two modes of the beam, whose natural frequencies are less than the bandwidth of the motor, are very effectively controlled by the proposed scheme.

Copyright © 2006 by American Society of Mechanical Engineers

## Figures

Figure 1 A flexible cantilever beam with an end force
Figure 2 Simulation of (a) decay in modal amplitude $a_1$ due to damping, (b), (c) decay in modal amplitudes $a_1$ and $a_2$ due to control in the presence of damping, and (d) plot of the control action
Figure 3 Control design based on output filtering
Figure 4 Plot of modal amplitudes $a_1$ and $a_2$, and the control action $u$, for the modified control design in Sec. 4
Figure 5 Control design based on bias tension and output filtering
Figure 6 A comparison of the “smf” function of MATLAB and the memory-less nonlinearity in Figs. 3 and 5
Figure 7 Experimental setup
Figure 8 Free vibration of the beam in the (a) absence of bias tension, and (b) presence of 20 N bias tension
Figure 9 Vibration suppression using active control
Figure 10 Vibration suppression using a one-mode dynamic model results in spillover
Figure 11 The role of the low-pass filter in reducing the effect of spillover
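The paper's energy-removal idea can be illustrated with a toy model, though the sketch below is emphatically not the authors' observer-based controller: a sub-buckling end load effectively lowers a mode's stiffness, so switching the load on while the beam swings back toward equilibrium and off near the zero crossing extracts roughly (1/2)(k_off - k_on) * x_max^2 of energy per half cycle. All parameter values here are made up:

```python
# Toy single-mode stiffness-switching "damper" (illustrative only).
# Applying the end load lowers the effective modal stiffness k_off -> k_on.
k_off, k_on = 1.0, 0.8
dt, x, v = 1e-3, 1.0, 0.0            # time step, modal amplitude, velocity

for _ in range(20_000):
    # Load on while moving toward equilibrium (x*v < 0): the stiffness drop
    # happens at large |x| and costs potential energy; the switch back up
    # happens near x = 0, where it is free.
    k = k_on if x * v < 0 else k_off
    v += -k * x * dt                 # undamped oscillator with unit mass
    x += v * dt

print(abs(x), abs(v))                # amplitude decays with no viscous damping
```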
# gauge field theory: SU(3)

## Nonperturbative investigations of SU(3) gauge theory with eight dynamical flavors

We present our lattice studies of SU(3) gauge theory with Nf=8 degenerate fermions in the fundamental representation. Using nHYP-smeared staggered fermions we study finite-temperature transitions on lattice volumes as large as $L^3 \times N_t = 48^3 \times 24$, and the …

## Nonperturbative Renormalization of Operators in Near-Conformal Systems Using Gradient Flows

We propose a continuous real space renormalization group transformation based on gradient flow, allowing for a numerical study of renormalization without the need for costly ensemble matching. We apply our technique in a pilot study of SU$(3)$ gauge …

## Strongly interacting dynamics and the search for new physics at the LHC

We present results for the spectrum of a strongly interacting SU(3) gauge theory with Nf=8 light fermions in the fundamental representation. Carrying out nonperturbative lattice calculations at the lightest masses and largest volumes considered to …

## Lattice simulations with eight flavors of domain wall fermions in SU(3) gauge theory

We study an SU(3) gauge theory with …

## WW Scattering Parameters via Pseudoscalar Phase Shifts

Using domain-wall lattice simulations, we study pseudoscalar-pseudoscalar scattering in the maximal isospin channel for an SU(3) gauge theory with two and six fermion flavors in the fundamental representation. This calculation of the S-wave …

## Lattice Simulations and Infrared Conformality

We examine several recent lattice-simulation data sets, asking whether they are consistent with infrared conformality. We observe, in particular, that for an SU(3) gauge theory with 12 Dirac fermions in the fundamental representation, recent …

## Parity Doubling and the S Parameter Below the Conformal Window

We describe a lattice simulation of the masses and decay constants of the lowest-lying vector and axial resonances, and the electroweak S parameter, in an SU(3) gauge theory with $N_f = 2$ and 6 fermions in the fundamental representation. The …
## What Is The Yearly Rate Of Return Method?

The yearly rate of return method, commonly referred to as the annual percentage rate, is the amount earned on a fund throughout an entire year. The yearly rate of return is calculated by taking the amount of money gained or lost at the end of the year and dividing it by the initial investment at the beginning of the year. This method is also referred to as the annual rate of return or the nominal annual rate.

### Key Takeaways

• The yearly rate of return is computed by looking at the value of an investment at the end of one year and comparing it to the value at the beginning of the year.
• The rate of return for a stock includes capital appreciation and any dividends paid.
• A disadvantage of the yearly rate of return is that it covers only one year and does not consider the potential for compounding over many years.

## The Formula for Yearly Rate of Return

 \begin{aligned} &\text{Yearly Rate of Return} = \Big ( \frac {\text{EYP} - \text{BYP} }{\text{BYP} } \Big ) \times 100 \\ &\textbf{where:} \\ &\text{EYP} = \text{End of year price} \\ &\text{BYP} = \text{Beginning of year price} \\ \end{aligned}

## Example of Yearly Rate of Return Method Calculation

If a stock begins the year at $25.00 per share and ends the year with a market price of $45.00 a share, the stock has an annual, or yearly, rate of return of 80.00%. First, we subtract the beginning price from the end-of-year price: 45 - 25 = 20. Next, we divide by the beginning price: 20/25 = 0.80. Lastly, we multiply 0.80 by 100 to express the result as a percentage, which gives the rate of return of 80.00%.

It should be noted that this would technically be called capital appreciation, which is only one source of an equity security's return; the other component is any dividend yield. For instance, if the stock in the earlier example paid $2 in dividends, the total gain would be $22 rather than $20 and, using the same calculation, the rate of return would be 88.00% over the one-year period.

As a measure of return, the yearly rate of return is rather limiting because it delivers only a percentage increase over a single one-year period. By not taking into consideration the potential effects of compounding over many years, it is limited by not including a growth component. But as a single-period rate, it does serve its purpose.

## Other Return Measures

Other common return measures, which may be an extension of the basic return method, include adjusting for discrete or continuous time periods; this is helpful for more accurate compounding calculations over longer time periods and in certain financial market applications. Asset managers commonly use money-weighted and time-weighted rates of return to measure performance or the rate of return on an investment portfolio. While money-weighted rates of return focus on cash flows, the time-weighted rate of return looks at the compound rate of growth of the portfolio. In an effort to be more transparent with investors, particularly retail investors, measuring and disseminating investment performance has become its own niche within capital markets. The CFA Institute, a worldwide leader in the advancement of financial analysis, now offers a professional Certificate in Investment Performance Measurement (CIPM) designation.
According to the CIPM Association, the CIPM program was developed by the CFA Institute as a specialty credentialing program that develops and recognizes the performance evaluation and presentation expertise of investment professionals who "pursue excellence with a passion."
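To make the arithmetic above concrete, here is a minimal Python sketch (not from the article; the function name is illustrative):

```python
# Single-period (yearly) rate of return, in percent.
# Matches the article's worked example: capital appreciation plus dividends.
def yearly_rate_of_return(begin_price, end_price, dividends=0.0):
    return (end_price - begin_price + dividends) / begin_price * 100

print(yearly_rate_of_return(25.00, 45.00))        # 80.0 (capital appreciation only)
print(yearly_rate_of_return(25.00, 45.00, 2.00))  # 88.0 (including the $2 dividend)
```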
Operating Performance Ratios are the group of financial ratios mainly used to measure the performance of a company's operating activities, which chiefly means production and sales. Operating performance is defined as measuring results relative to the assets used to achieve those results, and efficiency, for these purposes, can be defined as the ratio of output performed by a process or activity relative to the total energy spent. Two kinds of assets drive a company's operations: its fixed assets and its staff. Accordingly, two ratios are normally included in any Operating Performance Ratio analysis, Fixed Assets Turnover and Sales Revenue per Employee, with many other ratios added depending on the type of company.

The operating ratio. An operating ratio is a comparison between the operating expenses of a company and its net sales, used to gauge the company's operational efficiency; it is also known as an expenses-to-sales ratio. It is computed by dividing operating cost by net sales and expressing the result as a percentage. The formula's basic components are operating cost and net sales, where operating cost is equal to cost of goods sold plus operating expenses:

Operating Ratio = [(Cost of goods sold + Operating expenses) / Net sales] × 100

Here, cost of goods sold = opening stock + net purchases + manufacturing expenses - closing stock. Total net sales are calculated by taking gross sales revenues minus sales returns, discounts, and allowances; net sales include both cash and credit sales. Non-operating expenses, such as interest charges and taxes, are excluded from the computation. The information for this financial ratio is usually contained in a company's income statement. The expense figure can also be a single expense or a group of expenses (cost of goods sold, labor costs, material expenses, administrative expenses, or sales and distribution expenses), in which case the relationship is:

Operating Ratio = [Expense (or group of expenses) / Net Sales] × 100

Example: cost of goods sold is $180,000, other operating expenses are $30,000, and net sales is $300,000:

Operating ratio = [(180,000 + 30,000) / 300,000] × 100 = [210,000 / 300,000] × 100 = 70%

For Blue Trust Inc., net sales are $5,000, operating expenses are $3,000, and the cost of goods sold, which is not included in the operating expenses, is $1,000, so the operating ratio is (3,000 + 1,000) / 5,000 × 100 = 80%. Similarly, for Company XYZ, with $850,000 of operating expenses on $1,000,000 of net sales, Operating Ratio = $850,000 / $1,000,000 = 0.85, or 85%: Company XYZ pays out $0.85 of operating expenses for every $1 in sales. A company with $600,000 of production expenses and $200,000 of administrative expenses on $1,000,000 of net sales has an operating ratio of 80%; thus, its operating expenses are 80% of net sales. For a larger example, take Walmart Inc. According to its annual report, the company generated net sales of $500.34 billion during 2018, with cost of sales of $373.40 billion and operating expenses of $106.51 billion incurred during the period:

Operating Ratio = ($373.40 billion + $106.51 billion) / $500.34 billion × 100 ≈ 95.9%

The operating ratio must remain below 100% for a company to realize a profit, and the smaller the ratio, the better the company's opportunity to generate profit; for example, a company with monthly operating expenses of $100 million United States Dollars (USD) and $500 million USD in net sales has an operating ratio of 20%. Another way to look at the operating ratio is as the amount of money that must be generated to pay for operating expenses. The resulting figure provides insight into how well the company will continue to generate profit if revenues decrease or expenses increase, and indicates whether a company is effectively managed and how efficiently it generates profits, even in periods when revenues have dropped. Companies can calculate their ratios and compare the results against a leading company or the industry standard; to interpret the ratio properly, compare it with previous years, with budget, and with other companies in the same industry. Such comparisons may lead directors or managers to conduct a deeper analysis of business operations and discover how their company can improve its operational ratios. The ratio has limitations: it does not factor in expansion or debt repayment. (In the insurance industry, the operating ratio instead measures a company's overall operational profitability from underwriting and investment activities.)

A term that is used commonly when the issue of improving the operating ratio arises is capacity utilization. One key change to make to ensure capacity is being utilized to its maximum benefit is to reduce the breakeven revenue point to as low as it can be, which means cutting fixed costs to a minimum. One of the biggest fixed costs is insurance; an article will be published on how to reduce this fixed cost.

Related profitability measures. Many companies use basic gross profit ratios when calculating their profit percentage. Gross income, also called gross profit, is calculated by subtracting the cost of goods sold from the net sales; the gross profit ratio is that figure divided by net sales, expressed as a profit percentage for items sold by the company. If companies desire to maintain a certain profit percentage, they can combine the operating ratio and the gross profit ratio into a more refined calculation.

The operating profit ratio (operating margin ratio) establishes a relationship between operating profit earned and net revenue generated from operations (net sales). The formula requires two variables, operating profit and total revenue: it is calculated by dividing operating profit by total revenue and expressing the result as a percentage, though it is sometimes quoted as a plain decimal number. Operating income, also called income from operations or net operating profit, is calculated by subtracting operating expenses, depreciation, and amortization from gross income; it is usually stated separately on the income statement, before income from non-operating activities like interest and dividend income, and is often classified as earnings before interest and taxes. Total revenue is the first line item in the income statement, or it can be computed by multiplying the total number of units sold during the period by the average selling price per unit. The operating margin therefore reflects the percentage of profit a company produces from its operations, prior to subtracting taxes and interest charges. A related figure is the sales to operating income ratio:

Sales to Operating Income = 53,991,600 / 17,491,600 = 3.09

This means that the net sales are about three times the operating income.

The operating expense ratio (OER) is a profitability ratio equivalent to an organization's operating expenses divided by its revenues. Using the OER formula: $40,000 / $400,000 = 10%. If we compare the ratio with other companies in the same industry, we can interpret the OER properly. The measure is very common in real estate analysis, where experts estimate the expenses of operating a piece of property versus the income it produces.

The net profit ratio (NP ratio) is a popular profitability ratio that shows the relationship between net profit after tax and net sales: it is computed by dividing the net profit (after tax) by net sales, where net profit is equal to gross profit minus operating expenses and income tax.

Fixed Assets Turnover. Fixed asset turnover compares revenues to net fixed assets and measures the efficiency of a company's long-term capital investments; the formula is net sales divided by net property, plant, and equipment (PPE). Fixed assets refer to property, plant, and equipment that drive the company's operating activities, such as machinery or buildings. The ratio tries to measure how much of the company's sales is generated from its fixed assets, reflecting the level of sales generated by investments in productive capacity; a high ratio indicates that a business is generating a large amount of sales from a relatively small fixed asset base, so a higher ratio is better. PPE averages over the period are more accurate for assessments than end-of-period figures, and sometimes you might break the calculation down into the specific assets you want to assess, for example by class, by machine, or by type of building.

Sales Revenue per Employee. This ratio measures how much in sales could be made per employee and thus assesses how efficiently the company uses its employees. It is calculated as total sales per period divided by the average number of sales staff or employees; the period can be a month or a year, based on the assessment's objective. "Staff" here can refer to any rank of employee, from sales staff to executive management. The ratio is popular for services organizations, as a salesperson KPI, and for labor-intensive companies such as garment manufacturers. A related measure, Sales Return per Employee, helps figure out the performance of the sales department.

Operating leverage. The formula for the operating leverage ratio is a simple one:

Operating Leverage Ratio = (Sales - Variable Expenses) / Operating Income

You first subtract the company's variable expenses from sales to get the numerator, then divide that by the operating income. The ratio reflects the company's cost structure, indicating how strongly operating income responds to a change in sales.

Net operating assets. Net operating assets (NOA) are the value of a company's operating assets less its operating liabilities: NOA = operating assets - operating liabilities. Operating assets are things used to generate business income, such as equipment and patents, while operating liabilities include accounts payable. The related net worth ratio indicates how much economic value the company has added from operations.

Liquidity measures. The operating cash flow ratio is an important measure of a company's liquidity, i.e.,
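Since these are all one-line formulas, a small Python sketch (not from the original text; the function names are illustrative) can make the worked examples reproducible:

```python
# Each function mirrors one of the formulas above.
def operating_ratio(cogs, operating_expenses, net_sales):
    """(COGS + operating expenses) / net sales, as a percentage."""
    return (cogs + operating_expenses) / net_sales * 100

def fixed_asset_turnover(net_sales, net_fixed_assets):
    return net_sales / net_fixed_assets

def operating_leverage(sales, variable_expenses, operating_income):
    return (sales - variable_expenses) / operating_income

print(operating_ratio(180_000, 30_000, 300_000))  # 70.0, the first worked example
print(operating_ratio(3_000, 1_000, 5_000))       # 80.0, Blue Trust Inc.
```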
# Grover's search algorithm

Grover's algorithm is a quantum algorithm for searching an unsorted database with N entries in O(N^{1/2}) time and using O(log N) storage space (see big O notation). It was invented by Lov Grover in 1996.

### Introduction

Classically, searching an unsorted database requires a linear search, which is O(N) in time. Grover's algorithm, which takes O(N^{1/2}) time, is the fastest possible quantum algorithm for searching an unsorted database. It provides "only" a quadratic speedup, unlike other quantum algorithms, which can provide exponential speedup over their classical counterparts. However, even quadratic speedup is considerable when N is large.

Like all quantum computer algorithms, Grover's algorithm is probabilistic, in the sense that it gives the correct answer with high probability. The probability of failure can be decreased by repeating the algorithm.

### Uses of Grover's algorithm

Although the purpose of Grover's algorithm is usually described as "searching a database", it may be more accurate to describe it as "inverting a function". Roughly speaking, if we have a function y = f(x) that can be evaluated on a quantum computer, Grover's algorithm allows us to calculate x when given y. Inverting a function is related to the searching of a database because we could come up with a function that produces a particular value of y if x matches a desired entry in a database, and another value of y for other values of x.

Grover's algorithm can also be used for estimating the mean and median of a set of numbers, and for solving the collision problem. In addition, it can be used to solve NP-complete problems by performing exhaustive searches over the set of possible solutions. This would result in a considerable speedup over classical solutions, even though it does not provide the "holy grail" of a polynomial-time solution.

Below, we present the basic form of Grover's algorithm, which searches for a single matching entry. The algorithm can be further optimized if there is more than one matching entry and the number of matches is known beforehand.

### Setup

Consider an unsorted database with N entries. The algorithm requires an N-dimensional state space H, which can be supplied by log_2 N qubits. Let us number the database entries by 0, 1, ... (N-1). Choose an observable, Ω, acting on H, with N distinct eigenvalues whose values are all known. Each of the eigenstates of Ω encodes one of the entries in the database, in a manner that we will describe. Denote the eigenstates (using bra-ket notation) as {|0⟩, |1⟩, ⋯, |N−1⟩} and the corresponding eigenvalues by {λ_0, λ_1, ⋯, λ_{N−1}}.

We are provided with a unitary operator, U_ω, which acts as a subroutine that compares database entries according to some search criterion. The algorithm does not specify how this subroutine works, but it must be a quantum subroutine that works with superpositions of states. Furthermore, it must act specially on one of the eigenstates, |ω⟩, which corresponds to the database entry matching the search criterion. To be precise, we require U_ω to have the following effects:

$$U_\omega|\omega\rangle = -|\omega\rangle$$

$$U_\omega|x\rangle = |x\rangle \quad \text{for all } x \neq \omega$$

Our goal is to identify this eigenstate |ω⟩, or equivalently the eigenvalue λ_ω, that U_ω acts specially upon.

### Steps of the algorithm

The steps of Grover's algorithm are as follows:

1. Initialize the system to the state
$$|s\rangle = \frac{1}{\sqrt{N}} \sum_x |x\rangle$$
2. Perform the following "Grover iteration" r(N) times. The function r(N) is described below.
   1. Apply the operator U_ω.
   2. Apply the operator
$$U_s = 2|s\rangle\langle s| - I$$
3. Perform the measurement Ω. The measurement result will be λ_ω with probability approaching 1 for N ≫ 1. From λ_ω, ω may be obtained.

### Explanation of the algorithm

Our initial state is

$$|s\rangle = \frac{1}{\sqrt{N}} \sum_x |x\rangle$$

Consider the plane spanned by |s⟩ and |ω⟩. Let |ω⊥⟩ be a ket in this plane perpendicular to |ω⟩. Since |ω⟩ is one of the basis vectors, the overlap is

$$\langle\omega|s\rangle = \frac{1}{\sqrt{N}}$$

In geometric terms, there is an angle (π/2 − θ) between |ω⟩ and |s⟩, where θ is given by:

$$\cos \left(\frac{\pi}{2} - \theta \right) = \frac{1}{\sqrt{N}}$$

$$\sin \theta = \frac{1}{\sqrt{N}}$$

The operator U_ω is a reflection at the hyperplane orthogonal to |ω⟩; for vectors in the plane spanned by |s⟩ and |ω⟩, it acts as a reflection at the line through |ω⊥⟩. The operator U_s is a reflection at the line through |s⟩. Therefore, the state vector remains in the plane spanned by |s⟩ and |ω⟩ after each application of U_s and after each application of U_ω, and it is straightforward to check that the operator U_sU_ω of each Grover iteration step rotates the state vector by an angle of 2θ toward |ω⟩.

We need to stop when the state vector passes close to |ω⟩; after this, subsequent iterations rotate the state vector away from |ω⟩, reducing the probability of obtaining the correct answer. The number of times to iterate is given by r. In order to align the state vector exactly with |ω⟩, we need:

$$\frac{\pi}{2} - \theta = 2 \theta r$$

$$r = \frac{\frac{\pi}{\theta} - 2}{4}$$

However, r must be an integer, so generally we can only set r to be the integer closest to (π/θ − 2)/4. The angle between |ω⟩ and the final state vector is O(θ), so the probability of obtaining the wrong answer is O(1 − cos²θ) = O(sin²θ). For N ≫ 1, θ ≈ N^{−1/2}, so

$$r \rightarrow \frac{\pi \sqrt{N}}{4}$$

Furthermore, the probability of obtaining the wrong answer becomes O(1/N), which goes to zero for large N.

### Extensions

If, instead of 1 matching entry, there are k matching entries, the same algorithm works, but the number of iterations must be π(N/k)^{1/2}/4 instead of πN^{1/2}/4. There are several ways to handle the case in which k is unknown. For example, one could run Grover's algorithm several times, with

$$\pi \frac{N^{1/2}}{4}, \pi \frac{(N/2)^{1/2}}{4}, \pi \frac{(N/4)^{1/2}}{4}, \ldots$$

iterations. For any k, one of these iterations will find a matching entry with a sufficiently high probability. The total number of iterations is at most

$$\pi \frac{N^{1/2}}{4} \left( 1+ \frac{1}{\sqrt{2}}+\frac{1}{2}+\cdots\right)$$

which is still O(N^{1/2}).

It is known that Grover's algorithm is optimal. That is, any algorithm that accesses the database only by using the operator U_ω must apply U_ω at least as many times as Grover's algorithm (Bennett et al., 1997).

### References

1. Grover L. K.: A fast quantum mechanical algorithm for database search. Proceedings, 28th Annual ACM Symposium on the Theory of Computing (May 1996), p. 212. http://arxiv.org/abs/quant-ph/9605043 (available online)
2. Grover L. K.: From Schrödinger's equation to the quantum search algorithm. American Journal of Physics 69(7): 769-777, 2001. Pedagogical review of the algorithm and its history.
3. http://www.bell-labs.com/user/feature/archives/lkgrover/
4. Bennett C. H., Bernstein E., Brassard G., Vazirani U.: The strengths and weaknesses of quantum computation. SIAM Journal on Computing 26(5): 1510-1523 (1997). Shows the optimality of Grover's algorithm.

Category:Quantum Algorithms
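As a classical sanity check on the geometry above, one can simulate the algorithm's state vector directly. This NumPy sketch is not part of the original article; N and the marked index are arbitrary choices:

```python
# Statevector simulation of Grover's algorithm.
import numpy as np

N, omega = 64, 42                        # database size and marked entry
s = np.ones(N) / np.sqrt(N)              # uniform superposition |s>
state = s.copy()

theta = np.arcsin(1 / np.sqrt(N))        # sin(theta) = 1/sqrt(N)
r = int(round((np.pi / theta - 2) / 4))  # integer closest to (pi/theta - 2)/4

for _ in range(r):
    state[omega] *= -1.0                       # oracle U_omega
    state = 2 * s * (s @ state) - state        # diffusion U_s = 2|s><s| - I

print(r, np.argmax(state**2), state[omega]**2)  # 6, 42, ~0.997 for N = 64
```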
## Thursday, November 20, 2008

### The Faces of Mechanical Turk

Andy Baio is yet another enthusiast of Mechanical Turk:

When you experiment with Amazon's Mechanical Turk, it feels like magic. You toss 500 questions into the ether, and the answers instantly start rolling in from anonymous workers around the world. It was great for getting work done, but who are these people? I've seen the demographics, but that was too abstract for me. Last week, I started a new Turk experiment to answer two questions: what do these people look like, and how much does it cost for someone to reveal their face?

So Andy paid Turkers 50 cents to upload a photo of themselves with a handwritten note saying why they Turk. So here is what Turkers look like and why they Turk! Needless to say, this is already hanging outside of my office door. This picture is going into a slide in my Mechanical Turk talk, so that I can give a good answer to the question "Who are these people and why do they do it?"

(via Brendan O'Connor)

## Thursday, November 13, 2008

### Social Annotation of the NYT Corpus?

While I am waiting for the arrival of the New York Times Annotated Corpus, I have been thinking about the different tasks that we could use the corpus for. For some tasks, we might have to run additional extraction systems, to identify entities that are not currently marked. So, for example, we could use the OpenCalais system to extract patent issuances, company legal issues, and so on. And then, I realized that most probably, tens of other groups will end up doing the same, over and over again. So, why not run such tasks once, and store the results for others to use? In other words, we could have a "wiki-style" contribution site, where different people could submit their annotations, letting other people use them. This would save a significant amount of computational and human resources. (Freebase is a good example of such an effort.) Extending the idea even more, we could have reputational metrics around these annotations, where other people provide feedback on the accuracy, comprehensiveness, and general quality of the submitted annotations.

Is there any practical problem with the implementation of this idea? I understand that someone needs access to the corpus to start with, but I am trying to think of more high-level obstacles (e.g., copyright, or conflict with the interests of publishers).

## Wednesday, November 5, 2008

### Use of Excel-generated HTML Considered Harmful

This was one of the strangest bugs that I have had to resolve. While I was writing the blog post about computing electoral correlations across states using prediction markets, I wanted to include a table with some results, to illustrate how different states are correlated. So, I prepared the table in Excel, and then copied and pasted it into Blogger.

Then a strange thing happened: my Feedburner feed stopped working. Nobody received any updates, and suddenly the number of subscribers fell to zero. Trying to figure out what was wrong, I got a message that my feed was bigger than 512Kb. Admittedly, my table was kind of big, with more than 300 entries. So, I decided to trim it down to 30-50 rows. After that fix my feed started working again. I was still puzzled, though, why the problem had not appeared earlier, given that I have written some pretty long posts (e.g., Why People Participate on Mechanical Turk?) and never exceeded the 512Kb limit.

Well, the problem was not over. Even though my feed was working, the post about computing electoral correlations across states using prediction markets did not appear in Google Reader, or in other readers. However, the reader on my cell phone was displaying the post. Very, very strange. I followed all the troubleshooting steps on Feedburner; nothing. So, I decided to take a closer look at the HTML source. I was in for a surprise! The table that I had copied and pasted from Excel had seriously fat, ugly, and problematic HTML code. As an example, instead of having a table cell written as "<td>NTH.DAKOTA</td>" it had the following code instead:

<td class="xl63" style="border-style: none none solid; border-color: -moz-use-text-color -moz-use-text-color rgb(149, 179, 215); border-width: medium medium 0.5pt; background: rgb(219, 229, 241) none repeat scroll 0% 0%; font-size: 10pt; color: black; font-weight: 400; text-decoration: none; font-family: Calibri; -moz-background-clip: -moz-initial; -moz-background-origin: -moz-initial; -moz-background-inline-policy: -moz-initial;">NTH.DAKOTA</td>

This not only resulted in seriously padded HTML, it was also generating validation problems, causing Google Reader to reject the post and not display it at all. Solution? Nuking by regular expression. I replaced all the "<td [^>]+>" instances with "<td>", trimming the table from 116Kb (!) down to 7Kb. After that, Google picked up the post within seconds.

Lesson? Never, ever use an Excel-generated table in Blogger. Or if you need to do that, make sure to remove all the fat...
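For reference, the same "nuking by regular expression" can be scripted. A minimal sketch in Python (not from the original post; the helper name and the extension to <tr> tags are my own illustrative choices):

```python
import re

def strip_excel_fat(html):
    """Collapse Excel's inline-styled table tags down to bare tags."""
    html = re.sub(r'<td [^>]+>', '<td>', html)  # the substitution described in the post
    html = re.sub(r'<tr [^>]+>', '<tr>', html)  # assumed: row tags carry style attributes too
    return html

fat = '<td class="xl63" style="font-size: 10pt; color: black;">NTH.DAKOTA</td>'
print(strip_excel_fat(fat))  # -> <td>NTH.DAKOTA</td>
```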
## Monday, November 3, 2008

The summer of 2004, after completing my thesis, I found myself with plenty of time on my hands. So, I decided that it would be fun to research my academic genealogy. I knew the advisor of my advisor, Hector Garcia-Molina, and it was rather easy to find his advisor, Gio Wiederhold. Gio had also listed John Amsden Starkweather as his own advisor. Going beyond that proved kind of difficult. I had to order the thesis of John Starkweather and see the dedication there: his advisor was Carl Porter Duncan. In a similar pattern, and spending considerable time at the library, I managed to dig my genealogy back to the 1800s and to Hermann von Helmholtz. After that, I hit the entry at Chemical Genealogy and relied on the tree there.

Today, through a chain of events, I happened to run into Neurotree.org, which also contains my genealogy and goes back to 1000 AD. By expanding the tree as much as possible, I managed to get a pretty impressive printout, taking four 11x17 pages :-) Until now, my tree was going back "only" to the 1500s and to Pierre Richer de Belleval, who was teaching in Avignon, France. Now, I can proudly say that my tree goes back to 1000 AD, and its oldest roots are Greek Byzantines, including names such as Ioannis Mauropous, Michail Psellos, and Grigorios Palamas. Accuracy of the information? I have no idea. But I have something to talk about when I go back to Greece for the winter break.

## Sunday, November 2, 2008

### Computing State Correlations in Elections Using (Only) Prediction Markets

(The post below is largely based on the work of Nikolay. I am responsible for all the mistakes.)

One thing that always puzzled me was how to compute correlations across electoral results at the state level. In 2006, there was some discussion about the accuracy of the prediction markets in predicting the outcome of the senate races.
From the Computational Complexity blog:

For example, the markets gave a probability of winning 60% for each of Virginia and Missouri and the democrats needed both to take the senate. If these races were independent events, the probability that the democrats take both is 36%, or a 64% chance of GOP senate control assuming no other surprises.

However, everyone will agree that races across states are not independent events. For example, if Obama wins Georgia (currently trading at 0.36), the probability of winning Ohio will be higher than 0.70, the current price of the Ohio.DEM contract. (As we will see later in the post, the price of Ohio, given that Democrats win Georgia, is close to 0.81.) So, what we would like to estimate is, for two states A and B, the probability $Pr\{A|B\}$ that a candidate wins state $A$, given that the candidate won state $B$.

One way to model these dependencies is to run conditional prediction markets, but this leads to an explosion of possible contracts. Participation and liquidity are not great even in the current markets for state-level elections; there is little hope that combinatorial markets will attract significant interest. Another way to compute these probabilities is to use and expand the model described in my previous post about modeling volatility in prediction markets. Let's see how to do that.

Expressing Conditional Contracts using Ability Processes

Following this model, for each state we have an "ability difference process" $S(t)$ that tracks the difference in abilities of the two candidates. If at expiration time $T$, $S(T)$ is positive, the candidate wins the state; otherwise the candidate loses. So, we can write:

$$Pr\{A|B\} = Pr\{ S_A(T) \geq 0 \mid S_B(T) \geq 0, F(t) \}$$

where $F(t)$ is the information available at time $t$. Using Bayes' rule:

$Pr\{A|B\} = \frac{Pr\{ S_A(T)\geq 0, S_B(T)\geq 0 \mid F(t) \}}{Pr\{ S_B(T)\geq 0 \mid F(t) \}}$

In the equation above, $Pr\{ S_B(T)\geq 0 \mid F(t) \}$ is simply the price of the contract for state $B$ at time $t$, i.e., $\pi_B(t)$.

Pricing Joint Contracts using Ito Diffusions

The challenging term is the price of the joint contract $Pr\{ S_A(T)\geq 0, S_B(T)\geq 0 \mid F(t) \}$. To price this contract, we generalize the Brownian motion model, and we assume that the joint movement of $S_A(t)$ and $S_B(t)$ is a 2d Brownian motion. Of course, we do not want the 2d motion to be independent! Since $S_A(t)$ and $S_B(t)$ represent the abilities, they are correlated! So, we assume that the two Brownian motions have some (unknown) correlation $\rho$. Intuitively, if they are perfectly correlated, when $S_A(t)$ goes up, then $S_B(t)$ goes up by the same amount. If they are not correlated, the movement of $S_A$ does not give any information about the movement of $S_B$, and in this case $Pr\{A|B\} = Pr\{A\}$.

Without going into much detail, in this model the price of the joint contract is:

$\pi_{AB}(t) = Pr\{ S_A(T)>0, S_B(T)>0 \mid F(t) \} = N_\rho ( N^{-1}(\pi_A(t)), N^{-1}(\pi_B(t)) )$

where $N_\rho$ is the CDF of the standard bivariate normal distribution with correlation $\rho$ and $N^{-1}$ is the inverse CDF of the standard normal. Intuitively, this is nothing more than the generalization of the result that we presented in the previous post. However, the big question remains: How do we compute the value $\rho$?

Now the neat part: We can infer $\rho$ by observing the past time series of the two state-level contracts. Why is that? First of all, we know that the price changes of the contracts are given by $d\pi(t) = V(\pi(t), t)\, dW$, which gives

$dW = \frac{d\pi(t)}{ V( \pi(t), t)}$

We can observe $d\pi(t)$ over time. We also know that

$V( \pi(t), t) = \frac{1}{\sqrt{T-t}} \cdot \varphi( N^{-1}( \pi(t) ) )$

is the instantaneous volatility of a contract trading at price $\pi(t)$ at time $t$. So essentially we take the price differences over time and we normalize them by the expected volatility. This process generates the "normalized" changes in abilities, over time and across states. Therefore, we can now use standard correlation measures of time series to infer the hidden correlation of the ability processes. (And then compute the conditional probability.) If the two ability processes were powered by independent Brownian motions $W_A$ and $W_B$, then $dW_A$ and $dW_B$ would not exhibit any correlation. If the two processes are correlated, then we can measure their cross-correlation by observing their past behavior. Now, by the definition of cross-correlation we get:

$\rho \approx \sum_{i=0}^{t-1} \frac{ (\pi_A(i+1) - \pi_A(i)) \cdot (\pi_B(i+1) - \pi_B(i)) }{ V(\pi_A(i), i) \cdot V(\pi_B(i), i) }$
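In code, the estimation step looks roughly as follows. This is a minimal sketch (mine, not Nikolay's implementation), assuming two equally spaced price series and using the sample correlation of the normalized increments:

```python
import numpy as np
from scipy.stats import norm, multivariate_normal

def hidden_rho(pi_A, pi_B, T):
    """Estimate the correlation of the ability processes from two
    contract price series sampled at t = 0, 1, ..., len-1."""
    t = np.arange(len(pi_A) - 1)
    V_A = norm.pdf(norm.ppf(pi_A[:-1])) / np.sqrt(T - t)  # instantaneous volatility
    V_B = norm.pdf(norm.ppf(pi_B[:-1])) / np.sqrt(T - t)
    dW_A = np.diff(pi_A) / V_A   # normalized price changes
    dW_B = np.diff(pi_B) / V_B
    return np.corrcoef(dW_A, dW_B)[0, 1]

def pr_A_given_B(pi_A_t, pi_B_t, rho):
    """Pr{A|B} = N_rho(N^{-1}(pi_A), N^{-1}(pi_B)) / pi_B."""
    joint = multivariate_normal.cdf(
        [norm.ppf(pi_A_t), norm.ppf(pi_B_t)],
        mean=[0.0, 0.0], cov=[[1.0, rho], [rho, 1.0]])
    return joint / pi_B_t
```

With ρ estimated from history, `pr_A_given_B(0.70, 0.36, rho)` gives the Ohio-given-Georgia conditional probability discussed above.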
OK, if you stayed with me so long, here are some of the strong correlations as observed and computed based on the InTrade data. How to read the table? If Democrats win state B, what is the probability Pr(A|B) that they will also win state A? To make comparisons easy, we also list the current prices of the contracts A and B. The "lift" shows how much the conditional probability increases compared to the base probability. I skipped the cases where a state has very high probability, i.e., above 0.9 (as they are uninformative), or very low probability, i.e., less than 0.2 (as they are highly unlikely to happen). I also list only state pairs with lift larger than 1.10. You can also get the list as an Excel spreadsheet. Enjoy!

## Friday, October 31, 2008

### The New York Times Annotated Corpus

Last week, I was invited to give a talk at a conference at the New York Public Library, about the preservation of news. I talked about our research in the Economining project, where we are trying to find the "economic value" of textual content on the Internet. As part of the presentation, I discussed some problems that I had in the past with obtaining well-organized news corpora that are both comprehensive and also easily accessible using standard tools. Factiva has an excellent database of articles, exported in a richly annotated XML format, but unfortunately Factiva prohibits data mining of the content of its archives. The librarians at the conference were very helpful in offering suggestions and acknowledging that providing content for data mining purposes should be one of the goals of any preservation effort.

So, yesterday I received an email from Dorothy Carner informing me about the availability of The New York Times Corpus, a corpus of 1.8 million articles from The New York Times, dating from 1987 until 2007. The details are available from http://corpus.nytimes.com but let me repeat some of the interesting facts here (the emphasis below is mine):

The New York Times Annotated Corpus is a collection of over 1.8 million articles annotated with rich metadata published by The New York Times between January 1, 1987 and June 19, 2007.
With over 650,000 individually written summaries and 1.5 million manually tagged articles, The New York Times Annotated Corpus has the potential to be a valuable resource for a number of natural language processing research areas, including document summarization, document categorization and automatic content extraction. The corpus is provided as a collection of XML documents in the News Industry Text Format (NITF). Developed by a consortium of the world's major news agencies, NITF is an internationally recognized standard for representing the content and structure of news documents. To learn more about NITF please visit the NITF website.

Highlights of The New York Times Annotated Corpus include:

• Over 1.8 million articles written and published between January 1, 1987 and June 19, 2007.
• Over 650,000 article summaries written by the staff of The New York Times Index Department.
• Over 1.5 million articles manually tagged by The New York Times Index Department with a normalized indexing vocabulary of people, organizations, locations and topic descriptors.
• Over 275,000 algorithmically-tagged articles that have been hand verified by the online production staff at NYTimes.com.
• Java tools for parsing corpus documents from XML into a memory-resident object.

Yes, 1.8 million articles, in richly annotated XML, with summaries, with hierarchically categorized articles, and with verified annotations of people, locations, and organizations! Expect the corpus to become a de facto standard for many text-centric research efforts! Hopefully more organizations are going to follow the example of The New York Times and we are going to see such publicly available corpora from other high-quality sources. (I know that the Associated Press has an archive of almost 1Tb of text, in computerized form, and hopefully we will see something similar from them as well.)

How can you get the corpus? It is available from LDC, for 300 USD for non-members; members should get it for free. I am looking forward to receiving the corpus and starting to play!

## Monday, October 20, 2008

### Modeling Volatility in Prediction Markets, Part II

In the previous post, I described how we can estimate the volatility of prediction markets using additional prediction market contracts, aka options on prediction markets. I finished by indicating that techniques that can be used to price options for stocks are not directly applicable in the prediction market context. Now, I will review a different modeling approach that builds on the spirit of Black-Scholes but is properly adapted for the prediction market context. This model has been developed by Nikolay, and is described in the paper "Modeling Volatility in Prediction Markets".

Modeling Prediction Markets as Competitions

Let's consider the simple case of a contract with a binary outcome. For example, who will win the presidential election? McCain or Obama? The basic modeling idea is to assume that each competing party has an ability $S_i(t)$ that evolves over time, moving as a Brownian motion. (A simplified example of such an ability would be the number of voters for a party, the number of points in a sports game, and so on.) At the expiration of the contract at time $T$, the party $i$ with the higher ability $S_i(T)$ wins. Actually, to have a more general case, we can use a generalized form of the Brownian motion, an Ito diffusion, that allows the abilities to have a drift $\mu_i$ over time (i.e., an average rate of growth), and different volatilities $\sigma_i$.
The quantity that we need to monitor is the difference of the two ability processes $S(t)=S_1(t)-S_2(t)$. If at the expiration of the contract at time $T$ we have $S(T)>0$, then party 1 wins. If $S(T)$ is less than 0, then party 2 wins. Interestingly, the difference $S(t)$ is also an Ito diffusion, with $\mu=\mu_1-\mu_2$ and $\sigma=\sqrt{\sigma_1^2+\sigma_2^2-2\rho \sigma_1 \sigma_2}$, where $\rho$ is the correlation of the two ability processes.

Under this scenario, the price of the contract $\pi(t)$ at time $t$ is:

$\pi(t) = Pr\{ S(T)>0 \mid S(t) \}$

which can be written as:

$\pi(t) = N\Big(\frac{S(t) + \mu \cdot (T-t)}{\sigma \cdot \sqrt{T-t} } \Big)$

where $N(x) =\frac{1}{2} \Big[ 1 + \mathrm{erf}\Big( \frac{x}{\sqrt{2}} \Big) \Big]$ is the CDF of the normal distribution with mean 0 and standard deviation 1, and $\mathrm{erf}(x)$ is the error function.

Notice that as time $t$ gets closer to the expiration, the denominator gets close to 0, which pushes the ratio toward $\infty$ or $-\infty$, and the price $\pi(t)$ gets close to 0 or 1. However, if $S(t)$ is close to 0 (i.e., the two parties are almost equivalent), then we observe increasingly higher instability as we get close to expiration, as small changes in the difference $S(t)$ can have a significant effect on the outcome.

For example, consider two parties: party 1 with an ability that has positive drift $\mu_1=0.2$ and volatility $\sigma_1=0.3$, and party 2 with negative drift $\mu_2=-0.2$ and higher volatility $\sigma_2=0.6$. In this case, assuming no correlation, the difference is a diffusion with drift $\mu=0.4$ and volatility $\sigma=0.67$. Here is one scenario of the evolution, and below it the price of the contract, as time evolves. As you may observe from the example, the red line (party 1) is for most of the time above the blue line (party 2), which causes the green line (the difference) to be above 0. As the contract gets close to expiration, the contract gets closer and closer to 1 (i.e., party 1 will win). Close to the end, the blue line catches up, which causes the prediction market contract to have a big swing from almost 1 to 0.5, but then it swings back up as party 1 finally finishes at the expiration above party 2.

So far, we generated a nice simulation but our results depend on knowing the parameters of the underlying "ability processes". Since we never get to observe these values, what is the use of all this exercise? Well, the interesting thing is that by using the price function, we can now proceed to derive its volatility. Without going into the details, we can prove that the volatility of the prediction market contract is:

$V(t) = \frac{1}{\sqrt{T-t}} \cdot \varphi( N^{-1}( \pi(t) ) )$

where $N^{-1}(x)$ is the inverse CDF of the standard normal distribution and $\varphi(x)=\frac{e^{-x^2/2}}{\sqrt{2\pi}}$ is the density of the standard normal distribution.

In other words, volatility depends only on the current price of the contract and the time to expiration! Anything else is irrelevant! Drifts do not matter: they are priced already into the current price of the contract, since we know where the drift will lead at expiration. The magnitudes of the volatilities are also priced into the current contract price: higher volatilities cause the contract price to get closer to 0.5, as it is easier for $S(t)$ to move above and below 0 when it has high volatility. Furthermore, the direction of the volatilities of the underlying abilities is indifferent, as they can move the difference in either direction with equal probability. (The only assumption is that the volatilities of the underlying ability processes do not change over time.)
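The scenario above is easy to reproduce. Here is a minimal simulation sketch (mine, not from the paper; the random seed and step count are arbitrary) of the difference process and the resulting contract price, using the post's numbers μ = 0.4 and σ = 0.67:

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)
T, steps = 1.0, 1000
dt = T / steps
mu, sigma = 0.4, 0.67   # drift and volatility of the difference process

# one path of the Ito diffusion S(t) = S_1(t) - S_2(t), starting at 0
dS = mu * dt + sigma * np.sqrt(dt) * rng.standard_normal(steps)
S = np.concatenate(([0.0], np.cumsum(dS)))

# contract price pi(t) = N( (S(t) + mu (T-t)) / (sigma sqrt(T-t)) ), excluding t = T
t = np.linspace(0.0, T, steps + 1)[:-1]
pi = norm.cdf((S[:-1] + mu * (T - t)) / (sigma * np.sqrt(T - t)))
print(pi[0], pi[len(pi) // 2], pi[-1])   # price at start, midpoint, just before expiration
```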
Volatility Surface

So, what does this model imply for the volatility of prediction markets? First of all, the model says that volatility increases as we move closer to the expiration, as long as the price of the contract is not 0 or 1. For example, with the present at $t=0$ and expiration at $T=1$, the volatility is expected to increase as $t$ approaches $T$. And how does volatility change with different contract prices? Volatility is highest when the contract trades at around 0.5, and gets close to 0 when the price is 0 or 1. Combining the two effects gives a nice 3d volatility surface over price and time, with the present at $t=0$ and expiration at $T=1$.

The experimental section in the paper "Modeling Volatility in Prediction Markets" (a shorter conference version was presented at ACM EC'09) indicates that the actual volatility observed in the InTrade prediction markets fits the model well.

Now, given this model, we can judge what is a "noise movement" and what is actually a "significant move" in prediction markets. Furthermore, we can provide an "error margin" for each day, indicating the confidence bounds for the market price (see the sketch below). I will post more applications of this model in the next few days. We will see how to price the X contracts on InTrade, and a way to compute correlations of the outcomes of state elections, given simply the past movements of their corresponding prediction markets.
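The closed-form volatility also gives that per-day "error margin". A small sketch (my illustration; all the numbers are made up):

```python
import numpy as np
from scipy.stats import norm

def contract_vol(pi_t, t, T):
    """V(t) = phi(N^{-1}(pi(t))) / sqrt(T - t): volatility from price and time alone."""
    return norm.pdf(norm.ppf(pi_t)) / np.sqrt(T - t)

# a contract at 0.70 with 30 of 100 days left; approximate one-day 95% band
pi_t, t, T, one_day = 0.70, 0.70, 1.0, 0.01
band = 1.96 * contract_vol(pi_t, t, T) * np.sqrt(one_day)
print(f"price 0.70, one-day 95% band: [{pi_t - band:.3f}, {pi_t + band:.3f}]")
```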
### Modeling Volatility in Prediction Markets, Part I

A few weeks back, I was thinking about the concept of uncertainty in prediction markets. The price of a contract in a prediction market today gives us the probability that an event will happen. For example, the contract 2008.PRES.OBAMA is trading at 84.0, indicating that there is an 84% chance that Obama will win the presidential election. Unfortunately, we have no idea about the stability and robustness of this estimate. How likely is it that the contract will fall tomorrow to 80%? How likely is it to jump to 90%? By treating the contract price as a "deterministic" number, we do not capture such information. We need to treat the price as a random variable with its own probability distribution, out of which we observe just the mean by looking at the prediction market. However, to fully understand the stability of the price we need further information, beyond just the mean of the probability revealed by the current contract price. A first step is to look at the volatility of the price. One approach is to look at the past trading behavior, but this analysis will give us the past volatility, not the expected future volatility of the contract.

Predicting Future Volatility using Options

So, how can we estimate the future volatility of a prediction market contract? There is a market approach to solve this problem. Namely, we can run prediction markets on the results of the prediction markets! Recently, Intrade has introduced such contracts, the so-called X contracts (listed under "Politics->Options: US Election" from the sidebar). For example, the contract "X.22OCT.OBAMA.>80.0" pays 1 USD if the contract "2008.PRES.OBAMA" is higher than 80.0 on Wed 22 Oct 2008. Traditionally, the threshold defined in the options contract is called the strike price (e.g., the strike price for X.22OCT.OBAMA.>80.0 is 80.0). A set of such contracts can reveal the distribution of the probability of the event for the underlying contract 2008.PRES.OBAMA. In other words, we can see not only the mean probability that Obama will be elected president, but also the expected downside risk or upside potential of the 2008.PRES.OBAMA contract. For example, X.22OCT.OBAMA.>80.0 has a price of 90.0, indicating a 90% chance that the 2008.PRES.OBAMA contract will be above 80.0 on Oct 22nd.

Now, given enough contracts, with strike prices at various levels, we can estimate the probability distribution for the likely prices of the contract. For example, we can have contracts with strike prices 10, 20, ..., 90 that will give us the probability that the contract will trade above 10, 20, ..., and 90 points at some specific point in time, which corresponds to the expiration date of the options contract. So for each date we need 9 contracts if we want a 10-column histogram that describes the distribution (see the sketch below). Note that if we want to estimate the probability distribution dynamics, we will need to set up 9 contracts for each date that we want to measure. Of course, this implies that we have plenty of liquidity in the markets if we want to rely purely on the market for such estimates.
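Turning such a ladder of option prices into a histogram is a matter of differencing. A toy sketch (the option prices below are made up for illustration):

```python
import numpy as np

strikes = np.arange(10, 100, 10)              # strike prices 10, 20, ..., 90
p_above = np.array([0.99, 0.98, 0.96, 0.94,   # market prices of Pr(contract > strike)
                    0.92, 0.88, 0.80, 0.55, 0.20])

# Pr(strike_k < price <= strike_{k+1}) = Pr(> strike_k) - Pr(> strike_{k+1})
upper = np.concatenate(([1.0], p_above))      # Pr(> 0) = 1
lower = np.concatenate((p_above, [0.0]))      # Pr(> 100) = 0
histogram = upper - lower                     # 10 bins: [0,10], (10,20], ..., (90,100]
print(histogram.round(2), histogram.sum())    # sums to 1
```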
Pricing Options and the Black-Scholes Formula

A natural question is: Can we price such "options on options" contracts? This would at least give us some guidance on the likely prices of such contracts, if for nothing else than to start the market at the appropriate level. (For example, if we have a market scoring mechanism.) There is significant research in Finance on pricing options for stocks. The Black-Scholes formula is one of the most well-known examples for deriving prices for options on stocks. The basic idea behind Black-Scholes is that the underlying stock price follows a Brownian motion, moving randomly up and down. Then, by extracting the probability that this random stock move will reach various levels, it is possible to derive the option prices. (Terrence Tao has a very easy to read 3-page note explaining the Black-Scholes formula, and a longer blog posting.)

Why not apply this model directly to price options on prediction markets? There are a few fundamental problems, but the most important one is the bounded price of the underlying prediction market contract. The price of a prediction market contract cannot go below 0 or above 1, so the Brownian motion assumption is invalid. In fact, if we try to apply the Black-Scholes model on a prediction market, we get absurd results. In the next post, I will review an adaptation of the Black-Scholes model that works well for prediction markets, and leads to some very interesting results!

## Saturday, October 4, 2008

### Reviewing the Reviewers

I received today the latest issue of TOIS, and the title of the editorial by Gary Marchionini caught my eye: "Reviewer Merits and Review Control, in an Age of Electronic Manuscript Management Systems". The article makes the case for using the electronic management systems to allow for grading of the reviewer efforts and to allow for memory of the reviewing process, including both the reviews and the reviewer ratings. In principle, I agree with the idea. Having the complete reviewing history for each reviewer, and for each journal and conference, can bring several improvements to the process:

1. Estimating and Fixing Biases

One way to see the publication process is as noisy labeling of an example, where the true labels are "accept" or "reject". The reviewers can be modeled as noisy processes, each with its own sensitivity and specificity. The perfect reviewer has sensitivity=1, i.e., marks as "accept" all the "true accepts", and has specificity=1, i.e., marks as "reject" all the "true rejects". Given enough noisy ratings, it is possible to use statistical techniques to infer what the "true label" is for each paper, and to infer at the same time the sensitivity and specificity of each reviewer. Bob Carpenter has presented a hierarchical Bayesian model that can be used for this purpose, but simpler maximum likelihood models, like the one of Dawid and Skene, also work very well. In my own (synthetic) experiments, the MLE method worked almost perfectly for recovering the quality characteristics of the reviewers and the true labels of the papers (of course, without the uncertainty estimates that the Bayesian methods provide); see the sketch below. One issue with such a model? The assumption that we have an underlying "true" label. For people with different backgrounds and research interests, what is a "true accept" and what is a "true reject" is not easy to define, even with perfect reviewing.
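For the curious, such a synthetic experiment fits in a few lines. The sketch below is my own illustration, in the style of the Dawid-Skene estimator (not Carpenter's hierarchical model): it simulates noisy reviewers and recovers labels plus per-reviewer sensitivity/specificity with a simple EM loop:

```python
import numpy as np

rng = np.random.default_rng(1)
n_papers, n_reviewers = 500, 8
true = rng.random(n_papers) < 0.3                      # hidden "true accept" labels
sens = rng.uniform(0.6, 0.95, n_reviewers)             # per-reviewer sensitivity
spec = rng.uniform(0.6, 0.95, n_reviewers)             # per-reviewer specificity
votes = np.where(true[:, None],
                 rng.random((n_papers, n_reviewers)) < sens,   # accepts w.p. sens
                 rng.random((n_papers, n_reviewers)) > spec)   # false accepts w.p. 1-spec

p = votes.mean(axis=1)                                 # initial guess: accept-vote fraction
for _ in range(50):
    # M-step: re-estimate each reviewer's sensitivity and specificity
    s_hat = (votes * p[:, None]).sum(0) / p.sum()
    c_hat = ((~votes) * (1 - p)[:, None]).sum(0) / (1 - p).sum()
    # E-step: posterior probability that each paper is a true accept
    prior = p.mean()
    like1 = prior * np.prod(np.where(votes, s_hat, 1 - s_hat), axis=1)
    like0 = (1 - prior) * np.prod(np.where(votes, 1 - c_hat, c_hat), axis=1)
    p = like1 / (like1 + like0)

print("label accuracy:", np.mean((p > 0.5) == true))
print("max sensitivity error:", np.abs(s_hat - sens).max().round(2))
```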
2. Reviewer Ratings

Reviewer reviewing by the editors

The statistical approaches described above reduce the quality of a reviewer to two metrics. However, these ratings only show agreement of the recommendations with the "true" value (publish or not). They say nothing about other aspects of the review: comprehensiveness, depth, timeliness, and helpfulness are all important aspects that need to be captured using different methods. Marchionini mentions that current manuscript management systems allow the editors to rate reviewers in terms of timeliness and in terms of quality. By following the references, I ran into the article Reviewer Merits, published in Information Processing and Management, where the Editors-in-Chief of many IR journals stated:

Electronic manuscript systems easily provide time data for reviewers and some offer rating scales and note fields for editors to evaluate review quality. Many of us (editors) are beginning to use these capabilities and, over time, we will be able to have systematic and persistent reviewer quality data. Graduate students, faculty, chairs, and deans should be aware that these data are held.

Now, while I agree with reviewer accountability, I think that this statement is not worded properly. I find the use of the phrase "should be aware" semi-threatening. ("We, the editors, are rating you... remember that!") If reviewer quality history is being kept, then the reviewers should be aware of it and have access to it. Being reminded that "your history is out there somewhere" is not the way to go. If reviewer quality is going to be a credible evaluation metric, the reviewers need to know how well they did. (Especially junior reviewers, and especially when the review does not meet the quality standards.) Furthermore, if the editors are the ones rating the reviewers, then who controls the quality of these ratings? How do we know that the evaluation is fair and accurate? Notice that if we have a single editorial quality rating per review, then the statistical approaches described above do not work.

Reviewer reviewing by the authors

In the past, I have argued that authors should rate reviewers. My main point in that post was to propose a system that will encourage reviewers to participate by rewarding the highly performing reviewers. (There is a similar letter to Science, named "Rewarding Reviewers.") Since authors will have to provide multiple feedback points, it is much easier to correct the biases in the reviewer ratings of the authors.

3. Reviewer History and Motivation

If we have a history of reviewers, we should not forget potential side-effects. One clear issue that I see is motivation. If "reviews of reviewers" become a public record, then it is not clear how easy it will be to recruit reviewers. Right now, many accept invitations to review, knowing that they will be able to do a decent job. If the expectations increase, it will be natural for people to reject invitations, focusing only on a few reviews for which they can do a great job. Arguably, reviewer record is never going to be as important for evaluation as other metrics, such as research productivity or teaching, so it is unlikely to get more time devoted to it. So, there will always be the tradeoff: more reviews or better reviews?

One solution that I have proposed in the past: Impose a budget! Any researcher should remove from the reviewing system the workload that they generate. Five papers submitted (not accepted) within a year? The researcher needs to review 3x5 = 15 papers to remove the workload that these five papers generated. (See also the article "In Search of Peer Reviewers" that has the same ideas.)

4. Training Reviewers

So, suppose that we have the system in place to keep reviewer history, we have solved the issue of motivation, and now one facet of researcher reputation is the reviewer quality score. How do we learn how to review properly? A system that generates a sensitivity and specificity for a reviewer can provide some information on how strict or lenient a reviewer is, compared to others. However, we need something more than that. What makes a review constructive? What makes a review fair? In principle, we could rely on academic advising to pass such qualities to newer generations of researchers. In practice, when someone starts reviewing a significant volume of papers, there is no advisor or mentor to oversee the process. Therefore, we need some guidelines. An excellent set of guidelines is given in the article "A Peer Review How-To". Let me highlight some nuggets:

Reviewers make two common mistakes. The first mistake is to reflexively demand that more be done. Do not require experiments beyond the scope of the paper, unless the scope is too narrow. [...] Do not reject a manuscript simply because its ideas are not original, if it offers the first strong evidence for an old but important idea. Do not reject a paper with a brilliant new idea simply because the evidence was not as comprehensive as could be imagined. Do not reject a paper simply because it is not of the highest significance, if it is beautifully executed and offers fresh ideas with strong evidence. Seek a balance among criteria in making a recommendation. Finally, step back from your own scientific prejudices.

And now excuse me, because I have to review a couple of papers...

## Thursday, October 2, 2008

### VP Debate and Prediction Market Volatility

I was watching the VP debate on CNN, and CNN was reporting the reactions of "undecided Ohio voters" to what the VP candidates were saying. Although interesting, it was not satisfying. I wanted a better way to see the real-time reactions. Blogs were relatively slow to post, and mainstream media were simply describing the minutiae of the debate. What is the solution? Easy. Prediction markets!
I remembered that Intrade has a contract VP.DEBATE.OBAMA, "Barack Obama's Intrade value will increase more than John McCain's following the VP debate". So, during the debate, I was following the fluctuations of the contract's price to measure the reactions. Here is how the contract moved from 8.30pm EST to 10.30pm EST. (The debate started at 9pm EST, and lasted until 10.30pm EST.)

At the beginning, the contract was below 50.0%, probably reflecting the fact that Palin was giving reasonable and coherent responses, disappointing perhaps those who were expecting material for a Saturday Night Live performance. However, in the second 45 minutes of the debate, as the discussion moved to foreign policy issues, the contract started moving up, as Biden started giving more immediate answers, and Palin started avoiding questions and replying with stereotypical, canned answers.

What I found interesting was the significant increase in variance as the debate came close to the end. Prices fluctuated widely during the closing statements of the two VP candidates. This increased volatility as the contract comes to a close is actually a fact that we have observed consistently in many contracts over time: when the contract is not close to 0.0 or 1.0, the price fluctuates widely as we get close to expiration. While I could explain this intuitively, I did not have a solid theoretical understanding of why. So, what to do in this case? You simply ask a PhD student to explain it to you! I asked Nikolay Archak, and within a few weeks, Nikolay had the answer. The basic results:

• Volatility increases as the contract price gets closer to 0.5,
• Volatility decreases as the contract price gets closer to 0.0 or to 1.0,
• Volatility increases as we get close to the expiration, and approaches infinity if the price is not 0.0 or 1.0.

## Tuesday, September 30, 2008

### Sarah Palin and Markov Models

How good are n-gram Markov models for language modeling? Apparently pretty good for modeling the responses of Sarah Palin during her last couple of interviews! Check them out:

http://interviewpalin.com/
http://palinspeak.com/
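Sites like those are typically only a few lines of code. Here is a minimal bigram (2-gram) Markov text generator sketch, for anyone who wants to roll their own (the corpus string is a placeholder, not an actual interview transcript):

```python
import random
from collections import defaultdict

corpus = "also I think it is important that we also talk about energy independence".split()

# build a bigram transition table: word -> list of observed next words
table = defaultdict(list)
for w1, w2 in zip(corpus, corpus[1:]):
    table[w1].append(w2)

def babble(start, length=20):
    words = [start]
    for _ in range(length):
        nxt = table.get(words[-1])
        if not nxt:
            break
        words.append(random.choice(nxt))  # sample the next word from observed continuations
    return " ".join(words)

print(babble("also"))
```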
## Friday, September 12, 2008

### How Much Turking Pays?

After reporting the results about why Turkers Turk, I received a set of questions about further things that people would like to know about the Turkers. One of the most common questions was about the compensation of Turkers: "How much do they make by Turking?" Well, there is no question about Mechanical Turk that Mechanical Turk cannot answer, so here we go. I posted the very same question on MTurk, asking people about their average compensation per week. Without further ado, here are the results:

## Tuesday, April 22, 2008

### How Much Does a Paper Submission Cost?

I have been reading the post by Lance Fortnow about the cost of a class, and what is the amount that students pay collectively for an hour of teaching. This made me think of a similar calculation for the cost of submitting a paper to a conference. We are accustomed to submitting papers and then asking for high-quality reviews, often disregarding the associated costs. "What cost?", you will ask, given that everything in academic reviewing is done on a gratis, voluntary basis. Fundamentally, our peer reviewing system is based on an implicit tit-for-tat agreement: "I will contribute a number of reviews as a reviewer, so that others can then review my own papers". In most cases, though, some employer is paying the reviewer (a university, a research lab...) and reviewing consumes some productive time.

A typical computer scientist with a PhD will have a salary above $100K per year, which roughly corresponds to a $50/hr-$100/hr salary. A typical review (at least for me) takes at least 3 hours to complete, in the best case, corresponding to a cost of $150 to $300 per review. Additionally, every paper submission gets 3-4 reviewers, which results in a cost per submission of $500 to $1000 per paper. Therefore, a conference like SIGMOD, WWW, KDD, and so on, with 500-1000 submissions per year, consumes from $250,000 to $1,000,000 in resources, just for conducting the reviewing. I simply find that amount impressive.

This leads to the next question: Have you ever thought about your balance? How many papers do you review and how many papers do you submit per year? If someone had to pay $1000 per submission, I doubt that we would see many half-baked submissions. Or, if credit was given for each conducted review, then we would have more reviewing resources available. I do not advocate a system based on monetary awards, but before complaining about the quality of the reviews that you get, think: What is your balance?
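The back-of-the-envelope numbers above are easy to re-derive for any venue. A tiny calculator sketch (the rates are just the assumptions stated in the post):

```python
def review_cost(submissions, hourly_rate=75, hours_per_review=3, reviewers_per_paper=3.5):
    """Estimated total reviewing cost in USD for one conference cycle."""
    per_review = hourly_rate * hours_per_review
    return submissions * reviewers_per_paper * per_review

for n in (500, 1000):
    print(n, "submissions ->", f"${review_cost(n):,.0f}")
# midpoints of the post's $250,000 - $1,000,000 range
```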
# How do you find the amplitude, period, phase shift for y=sin2(x+(3pi)/4)?

Jul 28, 2018

#### Explanation:

Compare this equation to $y = a \sin \left(\omega x + \phi\right)$.

Here,

$y = \sin 2 \left(x + \frac{3}{4} \pi\right) = \sin \left(2 x + \frac{3}{2} \pi\right)$

The amplitude is $a = 1$.

The period is $T = \frac{2\pi}{\omega} = \frac{2\pi}{2} = \pi$.

The phase angle is $\phi = \frac{3}{2} \pi$, which corresponds to a horizontal shift of $-\frac{\phi}{\omega} = -\frac{3}{4}\pi$, i.e., the graph of $\sin 2x$ shifted $\frac{3}{4}\pi$ to the left.

graph{sin(2x+3/2pi) [-7.02, 7.024, -3.51, 3.51]}
# statistics quiz 19

Question 1 (5 pts): You want to buy headphones, and for each headphone the store offers 5 brands, 4 colors, 2 wireless types and 3 cord lengths. How many options are available?

Question 2 (5 pts): John wants to watch a comedy and a drama. He has 10 comedies and 12 dramas to choose from. In how many ways can he make his selection of 1 comedy and 1 drama?

Question 3 (5 pts): An ice cream parlor offers 10 flavors of ice cream, 5 toppings and 3 types of cones. How many choices are possible?

Question 4 (5 pts): How many license plate codes can be made with 5 consecutive characters, if repetition of characters is allowed? (A character is any letter of the English alphabet or any digit.)

Question 5 (5 pts): A bicycle store carries 7 brands of bicycles, each one in 5 colors, 6 wheel sizes and 2 types of brakes. How many bike choices are available at this store?

Question 6 (5 pts): According to the ____________, if there are m ways to do one action and n ways to do another action, then both actions can be done in ____________ ways.
- $m + n$
- $m \cdot n$
- $m^2 + n^2$
- $m^2 \cdot n^2$

Question 7 (5 pts): In how many ways can 7 persons be arranged in a row?

Question 8 (5 pts): In how many ways can 7 different cars be arranged in a row?

Question 9 (5 pts): In how many ways can 8 people line up for concert tickets?

Question 10 (5 pts): According to the ____________, the number of arrangements of $n$ different objects is ____________.
- $n$
- $n!$
- $n^2$

Question 11 (5 pts): In how many ways can 5 students be selected as winners (with no order or ranking), out of a group of 12 finalists in a math competition?

Question 12 (5 pts): In how many ways can you select a group of 3 students from a class of 20 students?

Question 13 (5 pts): You want to prepare a smoothie using 3 ingredients selected from 5 available ingredients. How many options do you have?

Question 14 (5 pts): The number of combinations of $n$ elements, taking $r$ at a time, is given by the formula:
- $\frac{r!}{(n-r)! \cdot n!}$
- $\frac{r!}{n!}$
- $\frac{n!}{(n-r)!}$
- $\frac{n!}{(n-r)! \cdot r!}$

Question 15 (5 pts): In ____________ the order of the elements is ____________. In ____________ the order of the elements is ____________.

Question 16 (5 pts): The number of permutations of $n$ elements, taking $r$ at a time, is given by the formula:
- $\frac{(n-r)!}{n! + r!}$
- $\frac{n!}{(n-r)!}$
- $\frac{n!}{(n-r)! \cdot r!}$
- $\frac{n!}{(n-r)! + r!}$

Question 17 (5 pts): 10 students are competing for 3 prizes (gold, silver and bronze medals, with no ties). In how many ways can the prizes be awarded?
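For checking answers, the counting operations this quiz exercises map directly onto Python's math module (a reference sketch, not part of the quiz itself):

```python
import math

print(5 * 4 * 2 * 3)        # multiplication principle (Question 1): 120 options
print(36 ** 5)              # 26 letters + 10 digits, 5 characters with repetition (Question 4)
print(math.factorial(7))    # arrangements of 7 persons in a row (Question 7)
print(math.comb(12, 5))     # unordered selections: C(12,5) = 12!/(7! 5!) (Question 11)
print(math.perm(10, 3))     # ordered selections: P(10,3) = 10!/7! (Question 17)
```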
# Why are these functions commutative?

1. Feb 16, 2013

### SqueeSpleen

I'm finishing studying for my Linear Algebra final, but there are two things I didn't understand (I'll make one thread per thing). (It's a translation; the original is in Spanish, second half of page 174/184, book numbering/Acrobat Reader numbering, if you're curious.)

Let V be a K vector space with finite dimension. Let f:V→V be a linear map, with minimal polynomial $m_{f}=P \cdot Q$ and $(P,Q)=1$. Then... (...)

$V = Nu(P(f)) \oplus Nu(Q(f))$

And one step of the proof (the only one I don't understand) says:

$x = (R(f) \circ P(f) )(x)+(S(f) \circ Q(f) )(x)$

(That comes from the extended Euclid's algorithm for polynomials, I think; this is not the problem.) The problem is just here: "Now, taking into account that:"

$Q(f) \circ R(f) = (Q.R)(f) = (R.Q)(f) = R(f) \circ Q(f)$

And I don't know why. I understand that, if the composition is equal to the product, then the commutativity is obvious because the product of two polynomials is commutative. But why is the composition equal to the product in this case? It must be related to the extended Euclid's algorithm for polynomials, but I think I never had it in a course =/

Sorry for my bad English, I learned most of it while gaming xD

Last edited: Feb 16, 2013

2. Feb 16, 2013

### micromass

Staff Emeritus

I have no idea what notations you are using, but I think it's clear to me what they mean. For example, let $P(X)=X^3+X^2$ and $Q(X)=X^2+3X$. Then $P(f)=f^3 + f^2$ and $Q(f)=f^2+3f$. And we define $f^2=f\circ f$ and $f^3=f\circ f\circ f$. Then

$$(P\cdot Q)(X) = (X^3 + X^2)\cdot (X^2 + 3X) = X^5 + 4X^4 + 3X^3$$

So

$$(P\cdot Q)(f)=f^5 + 4f^4 + 3f^3$$

but

$$P(f)\circ Q(f) = (f^3 + f^2)\circ (f^2 + 3f) = f^3\circ f^2 + 3f^3\circ f + f^2\circ f^2 + 3f^2\circ f = f^5 + 4f^4 + 3f^3$$

Do you understand this?

3. Feb 16, 2013

### SqueeSpleen

I think it has to be composition because of what follows:

$Q(f)((R(f) \circ P(f))(x)) = (Q(f) \circ R(f) \circ P(f))(x) = R(f)((Q(f) \circ P(f))(x)) = R(f)(m_{f}(f)(x)) = R(f)(0) = 0$

Then R(f)∘P(f) is in the kernel of Q(f), because Q(f)(R(f)∘P(f))=0 (the idea is to prove that the sum of the kernels is equal to V). The thing I don't understand is how Q(f) evaluated on R(f)∘P(f) is equal to R(f) evaluated on Q(f)∘P(f).

PD: Nu(Something) is the kernel of something; I forgot to translate it. Probably I'm misunderstanding something of the notation... again, notation is my worst enemy.

Edit: T_T Q(f) is a polynomial of a linear transformation, so it can be represented as a matrix; then to evaluate it is to multiply it on the right by a vector... so yes, they are all products, and then f(x)=f.x and the commutativity is obvious T_T I spent a couple of hours reading this; I don't know how I got SO stuck. Sorry for my bad English, and thanks for your help.

PD: Did they move it to Linear and Abstract Algebra, or did I miss the forum when I posted it? I thought I posted it in Homework Help; anyway, I don't really know what to post outside.

Last edited: Feb 16, 2013
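A quick numerical sanity check of the commutativity discussed in this thread (my own addition, using NumPy; P and Q are micromass's example polynomials):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((4, 4))      # a random linear map f on R^4

P = lambda M: np.linalg.matrix_power(M, 3) + np.linalg.matrix_power(M, 2)  # P(X) = X^3 + X^2
Q = lambda M: np.linalg.matrix_power(M, 2) + 3 * M                         # Q(X) = X^2 + 3X

# composition of P(f) and Q(f) is matrix multiplication, and it commutes,
# since both sides expand to the same polynomial in A
print(np.allclose(P(A) @ Q(A), Q(A) @ P(A)))   # True
```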
Hub and spoke airline route structures. Los Angeles and Denver are used as hubs.

The hub-and-spoke distribution paradigm (or model or network) is a system of connections arranged like a chariot wheel, in which all traffic moves along spokes connected to the hub at the center. The model is commonly used in industry, in particular in transport, telecommunications and freight, as well as in distributed computing.

## Analysis of the model

The hub-and-spoke model is most frequently compared to the point-to-point transit model.

### Benefits

• For a network of n nodes, only n − 1 routes are necessary to connect all nodes; that is, the upper bound is n − 1, and the complexity is O(n). This compares favorably to the $\frac{n(n-1)}{2}$ routes, or O(n²), that would be required to connect each node to every other node in a point-to-point network. For example, in a system with 10 destinations, the spoke-hub system requires only 9 routes to connect all destinations, while a true point-to-point system would require 45 routes (see the sketch at the end of this section).
• The small number of routes generally leads to more efficient use of transportation resources. For example, aircraft are more likely to fly at full capacity, and can often fly routes more than once a day.
• Complicated operations, such as package sorting and accounting, can be carried out at the hub, rather than at every node.
• Spokes are simple, and new ones can be created easily.

### Drawbacks

• Because the model is centralized, day-to-day operations may be relatively inflexible. Changes at the hub, or even in a single route, could have unexpected consequences throughout the network. It may be difficult or impossible to handle occasional periods of high demand between two spokes.
• Route scheduling is complicated for the network operator. Scarce resources must be used carefully to avoid starving the hub. Careful traffic analysis and precise timing are required to keep the hub operating efficiently.
• The hub constitutes a bottleneck or single point of failure in the network. Total cargo capacity of the network is limited by the hub's capacity. Delays at the hub (caused, for example, by bad weather conditions) can result in delays throughout the network. Delays at a spoke (from mechanical problems with an airplane, for example) can also affect the network.
• Cargo must pass through the hub before reaching its destination, requiring longer journeys than direct point-to-point trips. This trade-off may be desirable for freight, which can benefit from sorting and consolidating operations at the hub, but not for time-critical cargo and passengers.
• Two trips are required to reach most of the destinations. Arriving at the hub and spending some time there increases the duration of the journey. Missing the connecting bus, flight, or train is possible and may be more troublesome than just a delay.
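The route-count comparison from the benefits list above is easy to tabulate (a small illustrative sketch, not from the original article):

```python
def hub_and_spoke_routes(n):
    return n - 1                # one spoke per non-hub node: O(n)

def point_to_point_routes(n):
    return n * (n - 1) // 2     # every pair of nodes connected: O(n^2)

for n in (5, 10, 50):
    print(n, hub_and_spoke_routes(n), point_to_point_routes(n))
# for 10 destinations: 9 routes vs. 45, as in the example above
```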
## Commercial aviation

In 1955 Delta Air Lines pioneered the hub-and-spoke system at its hub in Atlanta, Georgia,[1] in an effort to compete with Eastern Air Lines. In the mid-1970s FedEx adopted the hub-and-spoke model for overnight package delivery, and after the airline industry was deregulated in 1978, Delta's hub-and-spoke paradigm was adopted by several other airlines.

Airlines have extended the hub-and-spoke model in various ways. One method is to create additional hubs on a regional basis, and to create major routes between the hubs. This reduces the need to travel long distances between nodes that are close together. Another method is to use focus cities to implement point-to-point service for high-traffic routes, bypassing the hub entirely.

## Transportation

The spoke-hub model is applicable to other forms of transportation as well. For passenger road transport, however, the spoke-hub model does not apply, because drivers generally take the shortest or fastest route between two points.

## Industrial distribution

The hub-and-spoke model has also been used in economic geography theory to classify a particular type of industrial district. Ann Markusen, an economic geographer, theorised about industrial districts where a number of key industrial firms and facilities act as a hub, with associated businesses and suppliers benefiting from their presence and arranged around them like the spokes of a wheel. The chief characteristic of such hub-and-spoke industrial districts is the importance of one or more large companies, usually in one industrial sector, surrounded by smaller, associated businesses. Examples of cities with such districts include Seattle (where Boeing was founded), Silicon Valley (a high-tech hub), and Toyota City, with Toyota.

## East Asian relations

In the sphere of East Asian relations, according to Victor Cha, hub-and-spokes refers to the network of bilateral alliances between the United States and other individual East Asian countries. This system constructs a dominant bilateral security architecture in East Asia, differing from the multilateral security architecture in Europe. The United States acts as a "hub", and Asian countries such as the Republic of Korea (ROK), the Republic of China (ROC) and Japan fall under the category of "spokes". Whereas there is a strong alliance between the hub and each spoke, there are no firmly established connections between the spokes themselves.[2]

This system was famously inspired by John Foster Dulles, who served as US Secretary of State under the Eisenhower administration from 1953 to 1959. He used this term twice in Tokyo and once at the San Francisco Peace Treaty of September 1951, which led to talks for a bilateral peace treaty between the US and Japan. The US-Japan Security Treaty of 1951, the US-Republic of Korea Defense Treaty of 1953 and the US-Republic of China Security Treaty of 1954 are some examples that manifest these bilateral relations.[3]

In 2014 all ten ASEAN defense chiefs were summoned to Hawaii for the first time for a historic meeting with Chuck Hagel. This was part of an American attempt to get the countries to strengthen military ties between themselves.[4]
# Differentiability on R

1. Jan 8, 2008

### sinClair

1. The problem statement, all variables and given/known data

Suppose f:(0,$$\infty$$)→R satisfies f(x)−f(y)=f(x/y) for all x, y in (0,$$\infty$$), and f(1)=0. Prove f is continuous on (0,$$\infty$$) iff f is continuous at 1.

2. Relevant equations

I think I ought to use these definitions of continuity: f is continuous at a iff f(x)→f(a) as x→a, or f is continuous at a iff for every sequence Xn→a, f(Xn)→f(a) as n→$$\infty$$.

3. The attempt at a solution

The forward direction is immediate. For the backwards direction, we want to show that f(x)→f(a) as x→a for every a in (0,$$\infty$$). Since f is continuous at 1, f(x)→f(1)=0 as x→1. I tried to manipulate this but couldn't find a way to make x→a instead of x→1. Then I used the other definition and let Xn=1+1/n and Yn=a·Xn=a+(a/n). Now Yn→a, so I just want to show that f(Yn)→f(a) as n→$$\infty$$. But f(Yn)→a·0=0 as n→$$\infty$$... I know I have to use f(x)−f(y)=f(x/y) somehow. So I went backwards: I want to show that f(x)−f(a)→0 as x→a. That means I want f(x/a)→0 as x→a. But now I don't see how to incorporate the fact that f is continuous at 1.

I know this is related to the log function, but I don't think this problem requires me to appeal to that fact... Note Xn and Yn are sequences indexed by n (I'm a noob at this latex). Thanks for helping.

Last edited: Jan 8, 2008

2. Jan 8, 2008

### ircdan

Take any a in (0, inf); then a ≠ 0, so x/a makes sense, and we have f(x) − f(a) = f(x/a), so f(x) = f(x/a) + f(a). Now take the limit as x→a. Note that as x→a, x/a → 1.

3. Jan 9, 2008

### sinClair

Thank you, Dan. It makes so much sense now.
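For completeness (my addition, not part of the original thread), ircdan's limit argument written out:

$$\lim_{x \to a} f(x) = \lim_{x \to a} \left[ f\!\left(\tfrac{x}{a}\right) + f(a) \right] = \lim_{u \to 1} f(u) + f(a) = f(1) + f(a) = 0 + f(a) = f(a),$$

using the substitution $u = x/a \to 1$ and the continuity of $f$ at $1$. Hence $f$ is continuous at every $a \in (0,\infty)$.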
Error using cvx/log (line 64) Disciplined convex programming error: Illegal operation: log( {convex} )

Hi, I'm facing a problem with my code in CVX. I would appreciate it if anyone can help me. I'm trying to optimize a bunch of variables iteratively with the help of the following algorithm. When I run it for different inputs, I get "Disciplined convex programming error: Illegal operation: log( {convex} )". That is because of the term inside the \log, \sqrt{\alpha}, which is a convex function. How can I reformulate the objective so that disciplined convex programming is satisfied? Many thanks in advance for any tips and hints!

variable alpha
maximize(log(1+sqrt(alpha)))

is accepted by CVX. If the relevant coefficients are >= 0, you should be able to enter the objective function in CVX in a straightforward manner. Are you getting into trouble in a later iteration, perhaps because of coefficients not having the correct sign?

Thank you for your reply. Yes, the coefficients have the correct signs (0 ≤ α_k ≤ 1). I got different types of errors when I ran it: first I got this \log\{\text{convex}\} error, and now I get an {invalid}.*{concave} error. Can I share my code with you?

You can post the code and output. You need to examine (print out) the relevant coefficient values at the time of error. The issue in question is not the sign of the variable alpha; it's the input data (which could be output from a previous iteration). As for {invalid}, that is probably due to a variable value of NaN from a failed optimization in the previous iteration being used as input for the current optimization.

```matlab
%% algorithm for optimum alpha
%% inputs
P = 10^((30-30)/10);
NN = linspace(5,22,18);       % number of antennas
K = 5;                        % number of tags
sigma_n = 10^((-70-30)./10);  % variance of noise
sigma_SI = 0.01;              % power of RSI
iteration = 100;
Rate = zeros(iteration,length(NN));

%% scattered points
xx0 = 0; yy0 = 0;             % centre of disk
% simulate binomial point process
pointsNumber = 5;
theta = 2*pi*rand(pointsNumber,1);   % angular coordinates
rho = sqrt(rand(pointsNumber,1));    % radial coordinates (assumed; this line was lost in the forum paste)
[xx,yy] = pol2cart(theta,rho);       % x/y coordinates of the points
% shift centre of disk to (xx0,yy0)
xx = xx + xx0;
yy = yy + yy0;
a = 2.7;                      % path loss coefficient
% plotting
% scatter(xx,yy); xlabel('x'); ylabel('y'); axis square;
d1 = (xx(1)^2+yy(1)^2)^(-a);
d2 = (xx(2)^2+yy(2)^2)^(-a);
d3 = (xx(3)^2+yy(3)^2)^(-a);
d4 = (xx(4)^2+yy(4)^2)^(-a);
d5 = (xx(5)^2+yy(5)^2)^(-a);

for itr = 1:iteration
  itr
  for q = 1:length(NN)
    N = NN(q);
    % channels
    h1 = sqrt(0.5) * ( randn(N,1) + sqrt(-1)*randn(N,1) );
    h2 = sqrt(0.5) * ( randn(N,1) + sqrt(-1)*randn(N,1) );
    h3 = sqrt(0.5) * ( randn(N,1) + sqrt(-1)*randn(N,1) );
    h4 = sqrt(0.5) * ( randn(N,1) + sqrt(-1)*randn(N,1) );
    h5 = sqrt(0.5) * ( randn(N,1) + sqrt(-1)*randn(N,1) );
    h_SI = sqrt(0.5) * ( randn(N,N) + sqrt(-1)*randn(N,N) );
    % precoding and combiner
    H = [h1, h2, h3, h4, h5];
    G = H*inv(H'*H);                         % combiner
    F = conj(H)*inv(transpose(H)*conj(H));   % precoder
    e = 10^(-6);
    %%
    f1 = 0; f2 = 0; f3 = 0; f4 = 0; f5 = 0;
    c_1 = 0; c_2 = 0; c_3 = 0; c_4 = 0; c_5 = 0;
    for i = 1:K
      f1 = abs(transpose(H(:,1))*F(:,i))^2 + f1;
      f2 = abs(transpose(H(:,2))*F(:,i))^2 + f2;
      f3 = abs(transpose(H(:,3))*F(:,i))^2 + f3;
      f4 = abs(transpose(H(:,4))*F(:,i))^2 + f4;
      f5 = abs(transpose(H(:,5))*F(:,i))^2 + f5;
      c_1 = sigma_SI*abs(G(:,1)'*F(:,i)).^2 + c_1;
      c_2 = sigma_SI*abs(G(:,2)'*F(:,i)).^2 + c_2;
      c_3 = sigma_SI*abs(G(:,3)'*F(:,i)).^2 + c_3;
      c_4 = sigma_SI*abs(G(:,4)'*F(:,i)).^2 + c_4;
      c_5 = sigma_SI*abs(G(:,5)'*F(:,i)).^2 + c_5;
```
    end
    c1 = (trace(F*F')/P)*sigma_n*norm(G(:,1)).^2 + (trace(F*F')/P)*c_1;
    c2 = (trace(F*F')/P)*sigma_n*norm(G(:,2)).^2 + (trace(F*F')/P)*c_2;
    c3 = (trace(F*F')/P)*sigma_n*norm(G(:,3)).^2 + (trace(F*F')/P)*c_3;
    c4 = (trace(F*F')/P)*sigma_n*norm(G(:,4)).^2 + (trace(F*F')/P)*c_4;
    c5 = (trace(F*F')/P)*sigma_n*norm(G(:,5)).^2 + (trace(F*F')/P)*c_5;
    Gamma_Tk = [f1,f2,f3,f4,f5];
    C = [c1,c2,c3,c4,c5];
    a = [0.078;0.078;0.078;0.078;0.078];
    w = zeros(1,length(a));
    it = 1;
    R_s(it) = 0;
    while 1
      for k = 1:length(a)
        den = [abs(G(:,k)'*H(:,1)).^2, abs(G(:,k)'*H(:,2)).^2, abs(G(:,k)'*H(:,3)).^2, abs(G(:,k)'*H(:,4)).^2, abs(G(:,k)'*H(:,5)).^2];
        w(k) = abs(G(:,k)'*H(:,k))*sqrt(a(k)*Gamma_Tk(k))/(transpose(a)*transpose(Gamma_Tk.*den) - a(k)*Gamma_Tk(k)*abs(G(:,k)'*H(:,k)).^2 + C(k));
      end
      it = it + 1;
      cvx_begin quiet
        variable b(K);
        maximize( log(1+2*w(1)*abs(G(:,1)'*H(:,1))*sqrt(b(1)*Gamma_Tk(1)) - (w(1)^2)*(transpose(b)*transpose(Gamma_Tk.*den) - b(1)*Gamma_Tk(1)*abs(G(:,1)'*H(:,1)).^2 + C(1))) + ...
                  log(1+2*w(2)*abs(G(:,2)'*H(:,2))*sqrt(b(2)*Gamma_Tk(2)) - (w(2)^2)*(transpose(b)*transpose(Gamma_Tk.*den) - b(2)*Gamma_Tk(2)*abs(G(:,2)'*H(:,2)).^2 + C(2))) + ...
                  log(1+2*w(3)*abs(G(:,3)'*H(:,3))*sqrt(b(3)*Gamma_Tk(3)) - (w(3)^2)*(transpose(b)*transpose(Gamma_Tk.*den) - b(3)*Gamma_Tk(3)*abs(G(:,3)'*H(:,3)).^2 + C(3))) + ...
                  log(1+2*w(4)*abs(G(:,4)'*H(:,4))*sqrt(b(4)*Gamma_Tk(4)) - (w(4)^2)*(transpose(b)*transpose(Gamma_Tk.*den) - b(4)*Gamma_Tk(4)*abs(G(:,4)'*H(:,4)).^2 + C(4))) + ...
                  log(1+2*w(5)*abs(G(:,5)'*H(:,5))*sqrt(b(5)*Gamma_Tk(5)) - (w(5)^2)*(transpose(b)*transpose(Gamma_Tk.*den) - b(5)*Gamma_Tk(5)*abs(G(:,5)'*H(:,5)).^2 + C(5))) )
        subject to
          0.01 <= b <= 1;
      cvx_end
      a = b;
      R_s(it) = cvx_optval;
      if ~( e >= (R_s(it)-R_s(it-1)) )
        break;
      end
    end
    Rate(itr,q) = R_s(it);
  end
end
Final_Rate = mean(Rate);
figure;
plot(NN, Final_Rate, 'LineWidth', 1.5)

The first part is just the input and parameter definitions. I expanded the summation in the algorithm since I'm dealing with 5 users.

Sorry if my question is basic, but if the optimization problem failed in the previous iteration, does that mean the optimal value of the problem cannot be obtained? What is the strategy in this case?

The strategy is that the overall algorithm will fail. Have you done the diagnosis I suggested in my previous post?

Yes, at the time of error all the coefficients are NaN!

You need to worry about whether the overall algorithm is reliable and converges, and about the suitability of the input data and starting values. That is all out of scope for this forum; but there are some numbers in your code which look rather dubious.

I'll try to figure out all the issues you mention. Thank you so much for your help.
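As a minimal, self-contained illustration of the DCP rule invoked above (log is concave and nondecreasing, so applying it to a concave argument such as 1 + sqrt(alpha) is accepted), the toy problem below should run in CVX as posted. This is only a sketch: the variable is renamed alpha_v to avoid shadowing MATLAB's built-in alpha function, and the bounds mirror the 0.01 <= b <= 1 constraint in the code above.

% Minimal DCP-compliant sketch: sqrt() is concave, log() is concave and
% nondecreasing, so log(1 + sqrt(alpha_v)) is concave and CVX accepts it.
cvx_begin quiet
    variable alpha_v
    maximize( log(1 + sqrt(alpha_v)) )
    subject to
        0.01 <= alpha_v <= 1;
cvx_end
disp(cvx_optval)   % log(2), attained at alpha_v = 1

The errors in the posted code arise not from this composition but from the data multiplying it: once a previous iteration returns NaN, every coefficient becomes {invalid}.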
# 2018 Maths DSE (MS-P1).pdf - 2018 I SECTION A(1)

View 2018 Maths DSE (MS-P1).pdf from MTH MISC at St John's University.

Paper instructions: This paper consists of THREE sections, A(1), A(2) and B. Attempt ALL questions in this paper. Write your answers in the spaces provided in this Question-Answer Book. Do not write in the margins; answers written in the margins will not be marked. Graph paper and supplementary answer sheets will be supplied on request. All answers should be given as a single value.

Section A(1), Question 1. Given \frac{a+4}{b+1} = \frac{3}{2}, express b in terms of a:

2(a + 4) = 3(b + 1)
2a + 8 = 3b + 3
2a + 5 = 3b
b = \frac{2a + 5}{3}

Also from Section A(1): simplify \displaystyle\frac{(x^8 y^7)^2}{x^{-2} y^3} and express your answer with positive indices.

2018 DSE cut-off scores (all subjects): Mathematics cut-off 90% (170/190); Liberal Studies cut-off 70…

Press comment: the 2018 DSE Maths exam was much harder than expected, with the multiple-choice questions the major hurdle. Many of the geometry questions also didn't include a diagram, so students had to visualise the figures. Students faced a challenging HKDSE maths exam, with some saying the multiple-choice paper was a lot harder than they had expected.
Expansion of the Universe [duplicate]

This question already has an answer here: If the universe is expanding, how come Andromeda is heading for us? In isolated spatial areas, does gravity overcome universal expansion?

What is the gravitational force between us and Andromeda?

marked as duplicate by Jan Doggen, Rory Alsop, Steve Linton, FJC, Glorfindel Sep 27 '18 at 16:46

The masses of the Milky Way and Andromeda (depending on exactly what you consider "part of" each) are estimated at about 850 billion solar masses and 1.7 trillion solar masses respectively. Their mean distance is about 2.54 million light years, so it's easy to calculate the force using $$F = GMM'/R^2$$ We get a force of $$6.6\times 10^{29}\ \mathrm{N}$$.
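For concreteness, here is the substitution behind that figure (a rough sketch using the standard values $$G \approx 6.674\times10^{-11}\ \mathrm{N\,m^2\,kg^{-2}}$$, $$M_\odot \approx 1.989\times10^{30}\ \mathrm{kg}$$, and $$1\ \mathrm{ly} \approx 9.461\times10^{15}\ \mathrm{m}$$; the masses and distance are the estimates quoted above):

$$M \approx 8.5\times10^{11}\,M_\odot \approx 1.69\times10^{42}\ \mathrm{kg},\qquad M' \approx 1.7\times10^{12}\,M_\odot \approx 3.38\times10^{42}\ \mathrm{kg},$$

$$R \approx 2.54\times10^{6}\ \mathrm{ly} \approx 2.40\times10^{22}\ \mathrm{m},$$

$$F = \frac{GMM'}{R^2} \approx \frac{(6.674\times10^{-11})(1.69\times10^{42})(3.38\times10^{42})}{(2.40\times10^{22})^2} \approx 6.6\times10^{29}\ \mathrm{N}.$$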
# What is the difference between a meson and a boson?

Oct 3, 2017

A meson is a type of hadron: a composite particle made up of a quark and an antiquark. It has integer spin and is hence classified as a boson in accordance with the spin-statistics theorem. Mesons are not elementary.

A boson is a particle whose wavefunction is symmetric under exchange and which has integer spin (the other type being a fermion, which has a wavefunction that is antisymmetric under exchange and half-integer spin). Bosons are further classified as scalar or vector bosons according to their spin (which, for a boson, must be an integer). A scalar boson has no spin and is assigned $s = 0$; the Higgs boson is a scalar boson. A vector boson has spin; a photon is an example of a vector boson and has spin $s = 1$.

Oct 3, 2017

A meson is a composite particle and a vector boson is a spin-1 force-carrier particle.

#### Explanation:

There are two types of particles, fermions and bosons. Fermions obey Fermi-Dirac statistics: there can only be one particle in a given energy state with a given spin. Protons, neutrons and electrons are fermions.

Bosons obey Bose-Einstein statistics: multiple particles can occupy the same energy state. Bosons are often force carriers.

Mesons are bosons and are highly unstable particles consisting of one quark and one antiquark. They can have a spin of 0 or 1; spin-1 mesons are called vector mesons. Pi mesons were originally thought to transmit the strong nuclear force until the underlying colour force was discovered.

Some bosons are fundamental particles, such as the photon. A vector boson has a spin of 1. The photon and the W and Z bosons are vector bosons which transmit forces: the photon is responsible for the electromagnetic force, and the W and Z for the weak nuclear force.
LLVM 6.0.0svn
llvm::MCDwarfFile Struct Reference

#include "llvm/MC/MCDwarf.h"

## Detailed Description

Instances of this class represent the name of the dwarf .file directive and its associated dwarf file number in the MC file. MCDwarfFile's are created and uniqued by the MCContext class, where the file number for each is its index into the vector of DwarfFiles (note index 0 is not used and is not a valid dwarf file number).

Definition at line 46 of file MCDwarf.h.

## Public Attributes

std::string Name

unsigned DirIndex

## ◆ DirIndex

unsigned llvm::MCDwarfFile::DirIndex

Definition at line 52 of file MCDwarf.h.
# Plot TikZ items with respect to 3D position, not the order of appearance in the code

I have a big bunch of polygons plotted in 3D with TikZ. The code is generated by a program, so it's hard for me to draw the "foreground" polygons last so that they are layered correctly in space. In other words, if the last line of my TikZ code is a polygon "behind" the others with respect to the projection I use in TikZ, I want this last polygon to be "hidden" by the foreground ones, even though its \fill command comes earlier. Is there an option to do this?

- What you're asking for is a "z level" in TikZ. Take a look at the solutions to tex.stackexchange.com/q/20425/86. If that isn't what you want, it would help if you could edit your question to explain why not (and also include some sample code generated by your program). –  Loop Space Feb 21 '12 at 10:57
- Try including \usetikzlibrary{backgrounds} in your preamble and put the polygon drawing commands into \begin{scope}[on background layer] ...... \end{scope}. See the PGF/TikZ manual for more layering capabilities. –  percusse Feb 21 '12 at 11:15
- What I was looking for was Sketch. Thanks! The z-layer was not enough, for example to draw that kind of picture better: irisa.fr/symbiose/people/asiegel/Dessins/markov_tribo.gif –  Omit Feb 21 '12 at 13:15
- I'd like to mark this question "answered", but I can't. You should use the reply box when you provide a real answer! –  Omit Feb 21 '12 at 13:28

I don't think that's possible with TikZ alone, but you could have a look at Sketch, and also an introduction to it for TikZ users. An example, also presented on TeXample.net:

If this example should be too hard for introductory purposes, the introduction for TikZ users will take you through it step by step. I can also recommend Sketch's manual, which is quite extensive and well readable.

-

Pgfplots 1.5.1 comes with a patch type=polygon: the idea is to provide vertices and a connectivity matrix, plus some 3d view angle, and pgfplots does the rest - including z-buffer sorting. Pgfplots defines the "depth" of one polygon to be the mean of the depths of its vertices and sorts polygons according to the polygon depth ("painter's algorithm"). Here is an example taken from the pgfplots 1.5.1 manual, section "5.6 Patch plots library", page 311:

\documentclass{article}
\usepackage{pgfplots}
\usepgfplotslibrary{patchplots}
\begin{document}
\begin{tikzpicture}
\begin{axis}[view/h=120,xlabel=$x$,ylabel=$y$]
\addplot3[
    opacity=0.5,
    table/row sep=\\,
    patch,
    patch type=polygon,
    vertex count=5,
    patch table with point meta={%
        % pt1 pt2 pt3 pt4 pt5 cdata
        0 1 7 2 2 0\\
        1 6 5 5 5 1\\
        1 5 4 2 7 2\\
        2 4 3 3 3 3\\
    }
]
table {
    x y z\\
    0 2 0\\% 0
    2 2 0\\% 1
    0 1 3\\% 2
    0 0 3\\% 3
    1 0 3\\% 4
    2 0 2\\% 5
    2 0 0\\% 6
    1 1 2\\% 7
};
% replicate the vertex list to show \coordindex
% (the \addplot3 options were lost in the post; a nodes near coords
% plot is used here, as described in the text below)
\addplot3 [only marks, nodes near coords=\coordindex]
table[row sep=\\] {
    0 2 0\\
    2 2 0\\
    0 1 3\\
    0 0 3\\
    1 0 3\\
    2 0 2\\
    2 0 0\\
    1 1 2\\
};
\end{axis}
\end{tikzpicture}
\end{document}

The first \addplot command (up to the semicolon ;) provides two tables: the argument of patch table with point meta contains connectivity information, i.e. zero-based integer indices into the other table. Each row in that connectivity table makes up one polygon. The number of vertices per polygon is fixed by vertex count=5, although it is permitted for a polygon to list the same vertex multiple times. The last column of the connectivity table here is color data; it is mapped linearly into the current colormap to control which color is used to fill the patch. The outer table is the table of vertices (8 here).
The second \addplot3 command is just to add a nodes near coords plot on top of the rest (i.e. to show labels for every vertex). The patch plots library features z buffering by means of z buffer=sort, color mapping, and 3d axis support. Note that annotations on top of the plot can be added by means of TikZ drawing instructions. -
RFC6979: error in reference implementation?

If I understand RFC 6979 correctly, there is an error in the reference implementation of section 3.2. In step H2, the RFC specification says

2. While tlen < qlen, do the following: V = HMAC_K(V) T = T || V

but in the Java implementation in the RFC appendix, we find the comparison done with rolen instead. That does not give the same result. For example, when used with the Stark curve, whose prime field is 0x0800000000000011000000000000000000000000000000000000000000000001, qlen is 252 bits while rolen corresponds to 256 bits. As a consequence the loop occurs twice with rolen but only once with qlen. It doesn't change the first k value, but the subsequent k values change. The RFC code is now used by third-party software.

• Did you check the errata – kelalaka Mar 18 at 14:54
• Yes I did, nothing related to that – cslashm Mar 18 at 15:06

The RFC specifies things in terms of bits. Each call to HMAC outputs hlen bits. tlen is the count of bits obtained so far; when at least qlen bits have been obtained, this step is finished.

The sample code is written in Java, in which the elementary unit of information is the octet ("byte" in usual terminology). The supported hash functions always output a given number of bytes, and there is no fractional byte. In other words, within that code, hlen is always a multiple of 8. This implies that tlen is also a multiple of 8.

The rolen value is the length of q in octets; that is, it is equal to qlen/8, rounded up. This is OK in the code, since hlen is a multiple of 8. For instance, with a curve such that qlen is 252, the generation needs 252 bits of output from HMAC; but since hlen is a multiple of 8, you cannot get 252 bits of output without getting at least 256 bits of output. Thus, the code can keep track of things counting octets (hence, rolen) instead of bits. With HMAC/SHA-256, you get 32 bytes of output per HMAC call, and rolen is 32. Thus, one call will be enough to get rolen bytes, and there won't be a second one.
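A quick numeric restatement of that last paragraph (a sketch; qlen = 252 and hlen = 256 are the Stark-curve and HMAC/SHA-256 values discussed above):

% Bit-oriented count (RFC text): HMAC calls needed until tlen >= qlen
qlen = 252;                                % bit length of q (Stark curve)
hlen_bits = 256;                           % HMAC/SHA-256 output, in bits
calls_bits = ceil(qlen / hlen_bits)        % = 1

% Octet-oriented count (Java reference code): calls until rolen octets
rolen = ceil(qlen / 8);                    % = 32 octets
hlen_octets = hlen_bits / 8;               % = 32 octets
calls_octets = ceil(rolen / hlen_octets)   % = 1, the same single call

Both counts give one HMAC call here, which is the answer's point: when hlen is a multiple of 8, counting octets cannot terminate the loop earlier or later than counting bits.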
# In the first Cauer LC network, the first element is a series inductor when the driving point function consists of a

This question was previously asked in ESE Electronics 2015 Paper 1: Official Paper

1. Pole at ω = ∞
2. Zero at ω = ∞
3. Pole at ω = 0
4. Zero at ω = 0

## Answer (Detailed Solution Below)

Option 1 : Pole at ω = ∞

## Detailed Solution

Explanation:

Cauer form 1

(The LC ladder structure is shown in the figure, along with the system equation.)

- Consider a driving point function having a pole at infinity.
- This implies that the degree of the numerator is greater than that of the denominator.
- We always remove the pole at infinity by dividing, then inverting the remainder and dividing again.
- That means an LC driving point function can be synthesised by continued fraction expansion.

If Z(s) is the function to be synthesised, then the continued fraction expansion is as follows:

$$Z\left( s \right) = {L_1}s + \cfrac{1}{{C_2}s + \cfrac{1}{{L_2}s + \cdots}}$$

Therefore, in the first Cauer network, the inductors are connected in series and the capacitors are connected in shunt.

- If the driving point function Z(s) has a zero at infinity, the degree of its numerator is less than that of its denominator.
- In this case, the continued fraction expansion begins with a series capacitance, followed by a shunt inductance.

This structure is known as the Cauer 2 form. If Z(s) is the function to be synthesised, then the continued fraction expansion is

$$Z\left( s \right) = \frac{1}{{C_1}s} + \cfrac{1}{\cfrac{1}{{L_1}s} + \cfrac{1}{\cfrac{1}{{C_2}s} + \cfrac{1}{\cfrac{1}{{L_2}s} + \cdots}}}$$
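As a small worked example (not from the original question; a sketch illustrating the Cauer 1 expansion above), take $$Z(s) = \frac{s^3 + 2s}{s^2 + 1}$$, which has a pole at ω = ∞ since the numerator degree exceeds the denominator degree. Dividing, inverting the remainder, and dividing again gives

$$Z(s) = s + \frac{s}{s^2+1} = s + \cfrac{1}{s + \cfrac{1}{s}}$$

which the first Cauer form realises as a series inductor $$L_1 = 1\,\text{H}$$, a shunt capacitor $$C_2 = 1\,\text{F}$$, and a final series inductor $$L_2 = 1\,\text{H}$$, matching the general expansion term for term.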
Rhind Mathematical Papyrus

The Rhind Mathematical Papyrus (RMP) (also designated as papyrus British Museum 10057, and pBM 10058) is named after Alexander Henry Rhind, a Scottish antiquarian, who purchased the papyrus in 1858 in Luxor, Egypt; it was apparently found during illegal excavations in or near the Ramesseum. It dates to around 1650 B.C. The British Museum, where the papyrus is now kept, acquired it in 1864 along with the Egyptian Mathematical Leather Roll, also owned by Henry Rhind; there are a few small fragments held by the Brooklyn Museum in New York. It is one of the two well-known mathematical papyri, along with the Moscow Mathematical Papyrus. The Rhind Papyrus is larger than the Moscow Mathematical Papyrus, while the latter is older than the former. [1]

The Rhind Mathematical Papyrus dates to the Second Intermediate Period of Egypt and is the best example of Egyptian mathematics. It was copied by the scribe Ahmes (i.e., Ahmose; Ahmes is an older transcription favoured by historians of mathematics) from a now-lost text from the reign of king Amenemhat III (12th dynasty). Written in the hieratic script, this Egyptian manuscript is 33 cm tall and over 5 meters long, and began to be transliterated and mathematically translated in the late 19th century. As of 2008, the mathematical translation remained incomplete in several respects. The document is dated to Year 33 of the Hyksos king Apophis and also contains a separate, later Year 11 on its verso, likely from his successor, Khamudi.[2]

In the opening paragraphs of the papyrus, Ahmes presents the papyrus as giving "Accurate reckoning for inquiring into things, and the knowledge of all things, mysteries...all secrets".

Mathematical problems

The papyrus begins with the RMP 2/n table and follows with 84 problems, written on both sides. Taking up roughly one third of the manuscript is the RMP 2/n table, which expresses 2 divided by each of the odd numbers from 5 to 101 as a sum of Egyptian fractions, using Egyptian multiplication and division methods. The sum given in the papyrus is optimized to use few fractions, but it does not always use the sum with the fewest fractions. Several methods by which the scribe may have composed the table have been proposed. An early reporting of the $\frac{2}{p}$ method was noted by F. Hultsch in 1895 and confirmed by E.M. Bruins in 1945; today it is called the H-B method. The $\frac{2}{pq}$ method consisted of one LCM. Ahmes practiced selecting optimal LCMs; a non-optimal version of the method is found in the Egyptian Mathematical Leather Roll. The optimal LCM method did not convert $\frac{2}{95}$ into $\frac{1}{5} \times \frac{2}{19}$, with $\frac{2}{19}$ as the H-B method projected in 1895. Ahmes' actual method converted 2/95 by selecting the least common multiple 12, written as 12/12, and writing out:

$$\frac{2}{95}\cdot\frac{12}{12} = \frac{24}{1140} = \frac{19 + 3 + 2}{1140} = \frac{1}{60} + \frac{1}{380} + \frac{1}{570}$$

Ahmes listed the additive (19 + 3 + 2) numerator information in shorthand notes, omitting important steps. Ahmes' omissions confused math historians for over 100 years.

The RMP's 87 problems begin with six division-by-10 problems, the central subject of the Reisner Papyrus. There are 15 problems dealing with addition, and 18 algebra problems. Fifteen of the algebra problems are of the same type: they ask the reader to find x and a fraction of x such that the sum of x and its fraction equals a given integer.
Problem #24 is the easiest, and asks the reader to solve the equation x + (1/7)x = 19. Ahmes, the author of the RMP, worked the problem this way: (8/7)x = 19, so x = 133/8 = 16 + 5/8, where from the initial vulgar fraction 133/8 he found 16 as the quotient and 5/8 as the remainder term. Ahmes converted 5/8 to an Egyptian fraction series by (4 + 1)/8 = 1/2 + 1/8, making his final quotient-plus-remainder answer x = 16 + 1/2 + 1/8. The algebra problems, RMP 21-34, produced increasingly difficult vulgar fractions.

RMP 38 converted a hekat, written as 320 ro, by multiplying by 35/110, i.e. 7/22, obtaining 101 9/11. The initial 320 ro was recovered by multiplying 101 9/11 by 22/7. RMP 82 partitioned a hekat written as (64/64). Hekat unity problems limited n to 1/64 < n < 64, obtaining quotient (Q) and remainder (R) two-part numbers: Q/64 + (5R/n) ro. Ro answers were converted to a one-part 1/10-hekat hin unit by writing 10/n (29 times). Vulgar fractions were easily converted to an optimal (short and small last term) Egyptian fraction series in all RMP problems.

Two arithmetical progressions (A.P.) were solved, one being RMP 64. The method of solution followed the method defined in the Kahun Papyrus. The problem shares 10 hekats of barley between 10 men with a common difference of 1/8 of a hekat, finding 1 9/16 as the largest term.

The second A.P. was RMP 40. The problem divides 100 loaves of bread between five men such that the smallest two shares (summing to 12 1/2) were 1/7 of the largest three shares' sum (87 1/2). The problem asked Ahmes to find the shares for each man, which he did without finding the difference (9 1/6) or the largest term (38 1/3). All five shares (38 1/3, 29 1/6, 20, 10 2/3 1/6, and 1 2/3) were calculated by first finding the five terms of a proportional A.P. that summed to 60. The median and the smallest term, x1, were used to find the differential and each term. Ahmes then multiplied each term by 1 2/3 to obtain the terms of the A.P. summing to 100. In reproducing the problem in modern algebra, Ahmes also found the sum of the first two terms by solving x + 7x = 60.

The RMP continues with 5 hekat division problems from the Akhmim Wooden Tablet, 15 problems similar to ones from the Moscow Mathematical Papyrus, 23 problems from practical weights and measures, especially the hekat, and three problems from recreational diversion subjects, the last being the famous multiple-of-7 riddle, written in the Medieval era as "Going to St. Ives".

The Rhind Mathematical Papyrus also contains the following problem related to trigonometry:[3]

"If a pyramid is 250 cubits high and the side of its base 360 cubits long, what is its seked?"

The solution to the problem is given as the ratio of half the side of the base of the pyramid to its height, or the run-to-rise ratio of its face. In other words, the quantity found for the seked is the cotangent of the angle between the base of the pyramid and its face.[3]

Mathematical knowledge

Upon closer inspection, modern-day mathematical analyses of Ahmes' problem-solving strategies reveal a basic awareness of composite and prime numbers;[4] arithmetic, geometric and harmonic means;[4] a simplistic understanding of the Sieve of Eratosthenes;[4] and perfect numbers.[4][5] The papyrus also demonstrates knowledge of solving first-order linear equations[5] and summing arithmetic and geometric series.[5]

The papyrus calculates π as $4\cdot(8/9)^2 \simeq 3.1605$ (a margin of error of less than 1%). In addition, 255/81 was considered (3.1481481...), as was 22/7.
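The quotient-and-remainder arithmetic of RMP 24 and the π value above are easy to verify numerically; here is a small sketch (all values taken directly from the text):

% RMP 24: x + x/7 = 19  =>  (8/7)x = 19  =>  x = 133/8 = 16.625
x = 19 * 7/8;
check_eq  = abs(x + x/7 - 19) < 1e-12            % the original equation holds
check_efs = abs(x - (16 + 1/2 + 1/8)) < 1e-12    % Ahmes' answer 16 + 1/2 + 1/8

% pi approximation used by the papyrus: 4*(8/9)^2 = 256/81
pi_rmp  = 4*(8/9)^2                % 3.1605...
rel_err = abs(pi_rmp - pi)/pi      % about 0.6%, under 1% as stated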
In RMP 38, Ahmes multiplied a hekat, 320 ro, by 7/22, obtaining 101 9/11. The divisor 7/22 was inverted to 22/7 and multiplied by 101 9/11, obtaining 320 ro as a proof. Ahmes' use of 22/7 may have corrected the hekat's built-in loss arising from using 256/81 as π.

Other problems in the Rhind papyrus demonstrate knowledge of arithmetic progressions (Kahun Papyrus), algebra and geometry. The papyrus demonstrates knowledge of weights and measures, business distributions of money (paid out in arithmetic progressions, with one group proportionally being paid more than another), and several recreational diversions.
# Uniform convergence of $\sum_{n=1}^{\infty} u_n(x)$ with $u_n(x) = -2(n-1)^2 x\, e^{-(n-1)^2 x^4} + 2n^2 x\, e^{-n^2 x^2}$

I am trying to show that if $$u_{n}(x) = -2\left(n-1\right)^{2}x\, e^{-\left(n-1\right)^{2}x^{4}} + 2n^{2}x\,e^{-n^{2}x^{2}}$$ then the series $\sum_{n=1}^{\infty} u_{n}\left(x\right)$ does not converge uniformly near $x = 0$.

Due to the complexity of the expression, finding a partial sum seems rather daunting here, so the standard necessary condition for uniform convergence seems hard to verify. I also considered Weierstrass's condition for uniform convergence, but no convergent series comes to mind whose terms dominate the moduli of the terms of this series. Any help with a proof strategy would be much appreciated.

-

Note that for $s > 0$, $f(s,t) = t e^{-st}$ as a function of $t > 0$ has its maximum at $t = 1/s$, where $f(s,1/s) = e^{-1}/s$. Now $u_n(x) = -2 f(x^3, (n-1)^2 x) + 2 f(x, n^2 x)$. If we take $x = 1/\sqrt{n-1}$, we get $$u_n(1/\sqrt{n-1}) = -2 (n-1)^{3/2} e^{-1} + \frac{2n^2}{\sqrt{n-1}}\, e^{-n^2/(n-1)}$$ and it is easy to see that this goes to $-\infty$ as $n \to \infty$.

-

thank you very much, that was very educational, I hadn't considered the trick of substituting a value for x –  Hardy Mar 20 '12 at 19:49
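A quick numeric check of the bound in the answer (a sketch; it evaluates $u_n$ at $x = 1/\sqrt{n-1}$ and shows the values diverging to $-\infty$, which rules out uniform convergence near $0$ because $x = 1/\sqrt{n-1} \to 0$ as $n \to \infty$):

% u_n evaluated at x = 1/sqrt(n-1), as in the accepted answer
u = @(n,x) -2*(n-1).^2 .* x .* exp(-(n-1).^2 .* x.^4) ...
           + 2*n.^2 .* x .* exp(-n.^2 .* x.^2);
for n = [10 100 1000 10000]
    x = 1/sqrt(n-1);
    fprintf('n = %5d, x = %.4f, u_n(x) = %.4g\n', n, x, u(n,x));
end
% The values track -2*(n-1)^(3/2)/e (about -19.87 for n = 10),
% confirming the divergence claimed in the answer.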
# Useful in cryptarithmetic

Numbers ending in 76, when squared, end with 76 (e.g. 176^2 = 30976)

Numbers ending in 376, when squared, end with 376 (e.g. 376^2 = 141376 and 1376^2 = 1893376)

9376^2 ends with 9376

Numbers ending in 25, when squared, end with 625 (e.g. 125^2 = 15625 or 325^2 = 105625)

Numbers ending in 625, when squared, end with 625 (e.g. 1625^2 = 2640625 or 2625^2 = 6890625)

24 raised to an even power always ends with 76, and 24 raised to an odd power always ends with 24

Any number ending in 90625, when squared, ends with 90625

3 years, 6 months ago

Sort by:

It was just an observation. - 3 years, 6 months ago

That's an interesting pattern. Do you know how to generalize it? Is there always some $$n$$-digit number $$A$$ such that $$A^2$$ ends with $$A$$? Are there any other sequences, or do we only have these 2? For example, could we have a number that ended with 01? Why, or why not? Staff - 3 years, 6 months ago
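The endings listed in this thread are easy to check with modular arithmetic; a small sketch (every value here fits exactly in double precision):

% Automorphic endings: n^2 ends with the digits of n
mod(76^2,    1e2)    % 76
mod(376^2,   1e3)    % 376
mod(9376^2,  1e4)    % 9376
mod(625^2,   1e3)    % 625
mod(90625^2, 1e5)    % 90625

% Numbers ending in 25 square to something ending in 625
mod(125^2, 1e3)      % 625
mod(325^2, 1e3)      % 625

% Powers of 24 alternate endings: 24 for odd exponents, 76 for even
arrayfun(@(k) mod(24^k, 100), 1:6)   % 24 76 24 76 24 76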
Topology of the Real Numbers

Cite this chapter as: Holmgren R.A. (1994) The Topology of the Real Numbers. In: A First Course in Discrete Dynamical Systems. Universitext.

Introduction. The Sorgenfrey line S (cf. [E]) is the set R of real numbers with the lower limit topology. Thus it would be nice to be able to identify S among topological spaces.

Definition: The Lower Limit Topology on the set of real numbers $\mathbb{R}$ is the topology generated by all unions of intervals of the form $\{ [a, b) : a, b \in \mathbb{R}, a \leq b \}$.

A metric space is a set X where we have a notion of distance.

Usual Topology on $${\mathbb{R}^2}$$: consider the Cartesian plane $${\mathbb{R}^2}$$; the collection of subsets of $${\mathbb{R}^2}$$ which can be expressed as a union of open discs or open rectangles with edges parallel to the coordinate axes forms a topology, called the usual topology on $${\mathbb{R}^2}$$.

Example: The Zariski topology on the set R of real numbers is defined as follows: a subset U of R is open (with respect to the Zariski topology) if and only if either U = ∅ or else R∖U is finite. Note that infinite intersections of open sets do not need to be open.

A neighborhood of a point x ∈ R is any set which contains an interval of the form (x − δ, x + δ) around x.

Exercises:
1. Let S be a subset of real numbers. Prove S is compact if and only if every infinite subset of S has an accumulation point in S.
2. Use the definition of accumulation point to show that every point of the closed interval [0,1] is an accumulation point of the open interval (0,1). Also, using the definition, show x = 2 is not an accumulation point of (0,1).
3. Let {[x_j, y_j]} (j ≥ 1) be a sequence of closed, bounded intervals in R, with x_j ≤ y_j for all j ≥ 1, and suppose that the intervals which make up this sequence are disjoint, i.e. [x_j, y_j] ∩ [x_k, y_k] = ∅ for j ≠ k.

The set of all non-zero real numbers, with the relativized topology of ℝ and the operation of multiplication, forms a second-countable locally compact group ℝ* called the multiplicative group of non-zero reals. This group is not connected; its connected component of the unit is the multiplicative subgroup ℝ++ of all positive real numbers.

The set of numbers { −2^(−n) | 0 ≤ n < ω } ∪ { 1 } has order type ω + 1. Within the set of real numbers, either with the ordinary topology or the order topology, 0 is a limit point of this set, and it is also a limit point of the set of limit points.

Connected and Disconnected Sets. In the last two sections we have classified the open sets, and looked at two classes of closed set: the compact and the perfect sets. In this section we will introduce two other classes of sets: connected and disconnected sets.

A second way in which topology developed was through the generalisation of the ideas of convergence. This process really began in 1817 when Bolzano removed the association of convergence with a sequence of numbers and associated convergence with any bounded infinite subset of the real numbers. It was topology not narrowly focussed on the classical manifolds (cf. Manifold; Topology of manifolds), where much more structure exists, but the topology of spaces that have nothing but topology.

Their description can be found in Conway's book (1976), but two years earlier D.E. … They find their origin in the area of game theory.

Algebraic space curves are used in computer-aided (geometric) design and geometric modeling. Fortuna et al presented an algorithm to determine the topology of non-singular, orientable real algebraic surfaces in the projective space [8]. Computing the topology of an algebraic curve is also a basic step in computing the topology of algebraic surfaces [10, 16]; there have been many papers on guaranteed topology and meshing for plane algebraic curves [1, 3, 5, 8, 14, 18, 19, 23, 28, 33]. The topology of S with d = 2 is well known, but when d ≥ 3 there are only some special surfaces whose topology can be efficiently determined [11, 12].

Contents fragment (from a set of lecture notes): Infinitude of Prime Numbers; Continuous Functions; A Theorem of Volterra Vito; Homeomorphisms; Product, Box, and Uniform Topologies; Compact Spaces; Quotient Topology.
OFDM transmitter data rate

I am simulating an OFDM transmitter in MATLAB. As you all know, OFDM consists of different subsystems such as FEC coding, bit interleaver, modulator, IFFT block and adding the cyclic prefix. I have integrated all the subsystems. Now I have to check the data rate of the complete OFDM transmitter. How do I find the data rate of this system? I am comparing different modulation techniques, in the sense that I am using QPSK, 8-QAM and 16-QAM modulations and have to compare the data rates of these modulation techniques.

- What do you mean by "different modulation technique"? Do you mean differential modulation? – Deve Jan 16 '13 at 8:06
- Welcome to DSP.se! The community here will be glad to help with a constructive and clear question: yours, unfortunately, seems scarce on details. You did tell us that you did something on your own, and what you did, which is commendable, but then you just tell us what you want to know. You didn't actually tell us where the problem is: did you have a problem with the approach? Do you have a part of the theory you need some help understanding? What is the specific problem you are facing? Also, it's nice in questions to mention your motivation: it makes it more interesting for everybody. – penelope Jan 16 '13 at 8:47
- Hi, as I am new to the forum, "forgive me if the question is too localized". I have edited the question; please tell me if I have to give any further information about my problem. – Chethan Mantaiah Jan 16 '13 at 13:41
- It's still not clear what your question is. The data rate is whatever you choose it to be in your simulation. It's going to be a function of the OFDM symbol period, the number of carriers, the modulation used on each carrier, and any error-correction encoding that is employed. If you understand all of these subsystems, then it should be straightforward to calculate the data rate that you're looking for. – Jason R Jan 16 '13 at 13:47
- Yes sir, we can calculate it theoretically if we understand all the subsystems, but my problem is how to confirm that the expected data rate is achieved in our MATLAB model. For example, 16-QAM modulation has a data rate of some Mbps; how do I measure the data rate practically in MATLAB? This is my problem. – Chethan Mantaiah Jan 16 '13 at 18:02

A bit rate can't be "measured" like a physical quantity, but it can be calculated from the following parameters (given the subsystems you mentioned):

- $f_\mathrm{S}$ sampling rate
- $N_\mathrm{SC}$ number of subcarriers (= IFFT size)
- $N_\mathrm{U}$ number of subcarriers used for data transmission (excluding possible pilot tones)
- $N_\mathrm{G}$ length of the cyclic prefix in samples
- $M$ number of bits per subcarrier (for $L$-QAM, $M = \log_2 L$) (*)
- $r$ code rate of the FEC, where $r = k/n$, with $k$ the length of the information word and $n$ the length of the code word

The length $T$ of an OFDM symbol is the basic symbol length plus the length of the cyclic prefix:

$$T = \frac{N_\mathrm{SC}}{f_\mathrm{S}} + \frac{N_\mathrm{G}}{f_\mathrm{S}}$$

Each OFDM symbol contains $N_\mathrm{U}$ subcarriers that carry $M$ bits of information each. Thus the bit rate $R$ is (excluding FEC):

$$R = \frac{N_\mathrm{U}M}{T} = \frac{N_\mathrm{U}Mf_\mathrm{S}}{N_\mathrm{SC} + N_\mathrm{G}}$$

Finally, if some overhead by FEC is added, the net bit rate $\tilde{R}$ is given by

$$\tilde{R} = rR$$

(*) A note on $M$: as Jim Clay has pointed out in his comment, the number of bits doesn't have to be the same for all subcarriers but is sometimes chosen individually for each subcarrier.
This is often referred to as "bit loading". In this case $N_\mathrm{U}M$ in the expression for $R$ has to be replaced by $$\sum_{i=1}^{N_\mathrm{U}}M_i$$ where $M_i$ is the number of bits on the $i$th subcarrier.

- Good answer. The one qualifier that I'd add is that the number of bits per sub-carrier is sometimes dynamic, based on channel conditions. – Jim Clay Jan 18 '13 at 20:42
- @JimClay That's a good point. I'll add it. – Deve Jan 18 '13 at 21:49
- +1. Excellent answer; thank you for contributing it. – jstarr Jan 27 '13 at 4:55
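To sanity-check a simulation against the rate formula above, one can compute the expected bit rate directly from the chosen parameters. A minimal Python sketch; the parameter values in the example are illustrative (802.11a-like numbers), not taken from the question:

```python
def ofdm_bit_rate(fs, n_sc, n_used, n_guard, bits_per_sc, code_rate):
    """Net OFDM bit rate in bit/s, per the formula above.

    fs          -- sampling rate f_S in Hz
    n_sc        -- number of subcarriers N_SC (= IFFT size)
    n_used      -- data subcarriers N_U (pilots excluded)
    n_guard     -- cyclic prefix length N_G in samples
    bits_per_sc -- bits per subcarrier M (log2(L) for L-QAM)
    code_rate   -- FEC code rate r = k/n
    """
    gross = n_used * bits_per_sc * fs / (n_sc + n_guard)
    return code_rate * gross

# Example: 48 data carriers, 64-point IFFT, 16-sample prefix, 16-QAM, rate-1/2 FEC
print(ofdm_bit_rate(20e6, 64, 48, 16, 4, 0.5) / 1e6, "Mbit/s")  # 24.0
```

Comparing this number against the bits actually pushed through the simulated transmitter per unit of simulated time confirms whether the model achieves the expected rate.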
# RRB JE Profit & Loss Questions Set-2 PDF

Download the top 15 RRB JE Profit & Loss Questions Set-2 and Answers PDF. These RRB JE Maths questions are based on questions asked in previous exam papers and are very important for the Railway JE exam.

Question 1: If an article is sold at Rs. 304.5, the shopkeeper incurs a loss of 13%. What should be his selling price to gain a profit of 13%?
a) Rs. 395.5 b) Rs. 387.5 c) Rs. 399 d) Rs. 391.5 e) Rs. 401

Question 2: Ram buys toys at 8 pieces per 70 rupees. He sells toys in boxes containing 5 toys. At what price must he sell a box if he wants to realize a profit percentage of 60%?
a) Rs. 50 b) Rs. 60 c) Rs. 70 d) Rs. 80 e) Rs. 90

Question 3: A salesman makes a profit of 30% when he gives a discount of 35% on the marked price. What will be the profit if the discount given is 20%?
a) 45% b) 50% c) 63% d) 55% e) 60%

Question 4: A dishonest shopkeeper marks up the price of the goods by 50% and then offers a discount of 20%. He uses a faulty weighing machine which shows 1000 g when the actual weight is 800 g. What is his profit percentage in the sales?
a) 25% b) 20% c) 50% d) 32% e) 40%

Question 5: An article, when sold for Rs. 960, fetches 20% profit. What would be the percent profit/loss if 5 such articles are sold for Rs. 825 each?
a) 3.125% profit b) 3.125% loss c) Neither profit nor loss d) 16.5% profit e) None of these

Question 6: Mahesh bought 10 pencils for 80 rupees and sold them at 9.2 rupees per pencil. What is the profit/loss percentage?
a) 17% b) 25% c) 20% d) 15%

Question 7: The cost price of an article is Rs. 1700. If it was sold at a price of Rs. 2006, what was the percentage profit on the transaction?
a) 18 b) 12 c) 10 d) 15 e) 20

Question 8: Manoj incurred a loss of 40 percent on selling an article for Rs. 5,700. What was the cost price of the article?
a) 7,725 b) 9,080 c) 8,250 d) 9,400 e) None of these

Question 9: A wholesaler sells apples to a fruit vendor at cost price. The vendor manages to trick the wholesaler into giving him an extra apple per four apples that he buys. But the wholesaler, on sensing some foul play, decides to change the weighing machine, citing some fault in it, for measuring the remaining two-thirds of the lot. The new weighing machine is such that it shows the weight of 3 apples as equivalent to 5 apples. How much does the wholesaler originally gain/lose in the entire transaction? (Assume all apples to be of uniform size and weight.)
a) Loss of 18.33% b) Gain of 18.33% c) Loss of 37.78% d) Gain of 37.78%

Question 10: A shopkeeper, after being pressed by a customer, gives a discount of 33.33%. He later realizes that he made a loss of Rs 10. He calculates that he should have given a discount of only 20% to get a profit of Rs 10. By what % does the shopkeeper mark up the price of the item?
a) 36.36% b) 25% c) 30% d) 33.33%

Question 11: For an umbrella, the ratio of the marked price to the cost price is 9 : 8. What is the approximate profit/loss percentage if the percentage discount offered and the profit or loss percentage were in the ratio 4 : 5?
a) 6.4% loss b) 6.6% profit c) 5.8% loss d) 7.1% profit

Question 12: Arjun sells a cycle to Ben at a profit of 28%. Charan buys it from Ben at Arjun's cost price. What is Ben's percentage profit or loss in the transaction?
a) 33.33% b) 14.58% c) 21.88% d) 36.67% e) 36.58%

Question 13: A person marked up an item 16% above the cost price and gave a discount of 25%. Find the effective loss percent.
a) 15% b) 11% c) 9% d) 13%

Question 14: A person bought 50 oranges for Rs. 450 and sold them at the rate of Rs. 108 per dozen. Find the overall profit/loss percent.
a) 11.11% loss b) No profit no loss c) 12.5% profit d) 11.11% profit

Question 15: A shopkeeper purchased a TV for Rs. 2,000 and a radio for Rs. 750. He sells the TV at a profit of 20% and the radio at a loss of 5%. The total loss or gain is
a) Gain Rs. 353.50 b) Gain Rs. 362.50 c) Loss Rs. 332 d) Loss Rs. 300

### Answers & Solutions

Solution 1: Let the cost price be cp, and the two selling prices be sp1 and sp2 respectively. Loss% = (cost price – selling price)×100/(cost price), so 0.13×cp = cp – sp1, i.e. sp1 = 0.87×cp and cp = 304.5/0.87 = 350. Profit% = (selling price – cost price)×100/(cost price), so 0.13×350 = sp2 – 350, i.e. sp2 = Rs. 395.5. Hence, option A is the right choice.

Solution 2: Let us assume that Ram buys 40 pieces. He will buy 40 pieces for 70×5 = Rs. 350. Ram will pack these 40 pieces into 40/5 = 8 boxes. Ram wants to realize a profit percentage of 60%, so the selling price of the 8 boxes = 1.6×350 = Rs. 560, and the selling price of 1 box = 560/8 = Rs. 70. Therefore, option C is the right answer.

Solution 3: Let x be the marked price. With a discount of 35%, the selling price is 0.65x. Since the profit is 30%, cost price × 1.3 = 0.65x, so cost price = 0.5x. With a discount of 20%, the selling price is 0.8x. Profit% = $\frac{0.8x - 0.5x}{0.5x}$×100 = 60%. Hence, option E is the right answer.

Solution 4: Suppose he has 1000 g of goods and the cost price of this entire lot is Rs. 1000. The selling price would then be 1000×1.5×0.8 = Rs. 1200, i.e., Rs. 1.2 per gram. Now, the machine measures 1000 g as 800 g, so he can sell 1000 g as 1250 g. Thus, the amount earned by him will be 1250×1.2 = Rs. 1500. Hence, the profit percentage is 50%.

Solution 5: Let the cost price of an article be Rs. $100x$. If the selling price is Rs. 960, the profit condition gives $\frac{960-100x}{100x} \times 100=20$, so $960-100x=20x$, $120x=960$, $x=8$. Thus, the cost price of 1 article = $100 \times 8$ = Rs. 800. If the selling price is Rs. 825, then Profit% = $\frac{825-800}{800} \times 100$ = $\frac{25}{8}$ = 3.125%. => Ans – (A)

Solution 6: Cost price of 10 pencils = Rs 80. Selling price of 10 pencils = 9.2×10 = Rs 92. Profit percentage = ((92-80)/80)×100 = (12/80)×100 = 15%. So the correct option is D – 15%.

Solution 7: Profit = S.P. – C.P. = 2006 – 1700 = Rs. 306, so Profit% = $\frac{306}{1700} \times 100$ = 18%.

Solution 8: SP = 5700 and the loss percentage is 40%, so (CP-SP)/CP = 40/100, giving CP = $(5/3) \times SP$ = 9500.

Solution 9: Given, the fruit vendor buys from the wholesaler at cost price. Let us assume the cost price of an apple = Re 1. Two cases arise. Case 1 (before changing the weighing machine): since the vendor is getting an extra apple per 4 apples bought, for the wholesaler CP = Rs 5 and SP = Rs 4, so Loss% = $\left[\frac{5-4}{5}\right] \times 100$ = 20%. Case 2 (after changing the weighing machine): for the wholesaler CP = Rs 3 and SP = Rs 5, so Profit% = $\left[\frac{5-3}{3}\right] \times 100$ = 66.67%. Since the measurements in the two lots are in the ratio 1:2, we can apply alligation to find the net profit/loss % x: solving $\frac{66.67-x}{x+20}=\frac{1}{2}$ gives x $\approx$ 37.78% (a gain).

Solution 10: Let the MP of the item be $x$. According to the given conditions, $\dfrac{x-SP}{x} = \dfrac{1}{3}$, so $3x-3SP = x$ and $SP = \dfrac{2x}{3}$. He made a loss of Rs 10 on selling the item at this SP, so $CP = 10+\dfrac{2x}{3}$. After giving a discount of 20%, the SP would have been $0.8x$, and he would have made a profit of Rs 10 on selling the item at this SP.
Thus, $CP = \dfrac{4x}{5} -10$. Equating the two expressions for CP, $10+\dfrac{2x}{3} = \dfrac{4x}{5} -10$, so $\dfrac{(12-10)x}{15} = 20$ and $x = 150 = MP$. Then $CP = 0.8 \times 150-10 = 110$. Thus, the shopkeeper marks up the price of the given item by $\dfrac{100 \times (150-110)}{110}\approx36.36$%. Hence, option A is the correct answer.

Solution 11: $\frac{\text{marked price}}{\text{cost price}} = \frac{9}{8}$, so let the marked price = 9x and the cost price = 8x. $\frac{\text{percentage discount}}{\text{profit/loss percentage}} = \frac{4}{5}$, so let the percentage discount = 4y% and the profit/loss percentage = 5y%. Assuming there is a profit, selling price = (1 – 4y%) × 9x = (1 + 5y%) × 8x, so 9x – 36xy/100 = 8x + 40xy/100, giving $x = \frac{76xy}{100}$ and hence $y = \frac{100}{76}$. So the percentage profit = 5 × 100/76 ≈ 6.6%. Hence, option B is the right answer.

Solution 12: Let Arjun's CP be x. Arjun's SP will be 1.28x, so Ben's CP = 1.28x. Given, Charan's CP = x, so Ben's SP = x. Hence, Ben's loss = 0.28x and Loss% = (0.28/1.28)×100 = 21.88%.

Solution 13: Let the cost price of the item be Rs. 100. Then the marked price = 116% of Rs. 100 = Rs. 116, and the selling price after a discount of 25% = 75% of Rs. 116 = Rs. 87. Therefore, the effective loss percent = $\frac{100-87}{100}\times100 = 13$%.

Solution 14: Cost price of 50 oranges = Rs. 450, so the cost price of 1 orange = 450/50 = Rs. 9. Selling price of 12 oranges = Rs. 108, so the selling price of 1 orange = 108/12 = Rs. 9. Therefore, Profit = Rs. 9 – Rs. 9 = 0. Hence, there is no profit and no loss in this transaction.

Solution 15: Cost price of the TV = Rs. 2000 and Profit% = 20%, so the selling price of the TV = $2000+(\frac{20}{100}\times2000)$ = $2000+400=$ Rs. 2400. Similarly, the selling price of the radio = $750-(\frac{5}{100}\times750)$ = $750-37.5=$ Rs. 712.5. Thus, the total cost price = $(2000+750)=$ Rs. 2750 and the total selling price = $(2400+712.5)=$ Rs. 3112.5. $\therefore$ Gain = $3112.5-2750=$ Rs. 362.50. => Ans – (B)

We hope this Profit & Loss Questions Set-2 for the RRB JE Exam will be highly useful for your preparation.
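Percentage manipulations like these are easy to mistype, so it can help to verify the arithmetic mechanically. A small Python check of two of the solutions above (illustrative only):

```python
# Q1: selling at Rs. 304.5 gives a 13% loss, so CP = 304.5 / 0.87;
# the price for a 13% profit is then 1.13 * CP.
cp = 304.5 / 0.87
print(round(1.13 * cp, 2))  # 395.5 -> option (a)

# Q15: TV (CP 2000) sold at +20%, radio (CP 750) sold at -5%.
gain = (2000 * 1.20 + 750 * 0.95) - (2000 + 750)
print(gain)  # 362.5 -> gain of Rs. 362.50, option (b)
```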
# R

## [R: New Features on pinyin] Convert Chinese Characters into Sijiao and Wubi codes

#### What features did I add?

• Four times faster conversion.
• At the beginning of 2018 I received an issue report from psychelzh about a polyphone error. Now a new pinyin library has been added, which more or less solves the polyphone problem.
• Convert Chinese characters into Sijiao codes (literally "four corner" codes) and Wubi codes (literally "five-stroke" codes).
• Some minor bugs were fixed.

## [R: New Features on mindr] Supports the new format of FreeMind. Displays mind maps directly. Supports bookdown projects.

In recent months, I have received much kind feedback and many helpful suggestions from mindr users. I did not improve or enhance mindr until this past week. Now the new version 1.1.5 brings more exciting features.

## bookdownplus gallery: a web app for displaying and sharing bookdown templates

Many years ago, I collected some LaTeX templates when learning LaTeX. However, my interest in LaTeX was gone after submitting my PhD dissertation. I should have deleted these templates if they had not been so small. They would never be useful in the future, I thought. In 2017, I started writing the book Learning R: R for Rookies. Unexpectedly, MS Word could not satisfy me with the typesetting. You know what I mean if you have experience (and pain) in writing a long book or dissertation with Word. Actually I suffered more, but I do not want to talk about it. I was sure that LaTeX could, but I would rather not use it. Like a bolt out of the blue, I found bookdown.

## [New Features on steemr] Display all the posts of a given Steemian with statistics, and get the utopian review and upvote plan!

• A new function sposts() displays a Shiny app, which is a user-friendly interactive UI showing all the posts of a given Steemian's ID. Analysis diagrams are plotted as well, including the distribution of the post payout and votes, the active hours of the Steemian's posting, and the time series of the growth of the cumulative post number, payout, and votes.

## [New Features on steemr] Diagrams in the Shiny app for the statistics of the Steem CN community!

I added diagrams to the Shiny app scner(). These diagrams include the word clouds of the Steemians of the CN community, as well as the histograms of the distributions according to the ESP, account value, online days, etc.

## [New Features on steemr] A Shiny app for the statistics of the Steem CN community!

A new function scner() displays a Shiny app, which is a user-friendly interactive UI for the statistics of a group of Steemians, hereby those who are active in the CN community. It has the potential to be used for other groups of Steemians with some simple modifications to this function.

## A template for Copernicus academic journals

Write academic papers for Copernicus journals with R Markdown syntax.

## A template for MDPI academic journals

Write academic papers for MDPI journals with R Markdown syntax.

## [New Features on steemr] Diagrams in the follower Shiny app!

A comment by the utopian.io moderator mentioned that the wordings of both "followers but not following" and "following but not followers" are confusing. Although they are the official names, which I cannot change, a diagram could help. Therefore, I added a Venn diagram to the sfollow() Shiny app.

## [Tutorial for steemr] Retrieving and Analyzing the Comment Data

### What Will I Learn?

• Using the function gcomment() to retrieve the comment data of a given ID.
• Using other R functions to enhance and analyze the data retrieved by gcomment().
# University Seminar: Logic Across Disciplines - Measuring Complexity in Computable Structure Theory

Title: Measuring Complexity in Computable Structure Theory
Speaker: Valentina Harizanov, GWU
Date and Time: Friday, February 22, 12:00-1:00 PM
Place: Rome Hall (801 22nd Street), Room 771

Abstract: In order to measure the complexity of problems in computable structure theory, one of the main strategies is to find an optimal description of the class of structures under investigation. This often requires the use of various algebraic properties of the structures. To prove the sharpness of our description, we use the notion of many-one completeness. The complexity is often expressed using hyperarithmetical sets or their differences. As examples of different complexity problems, we will present some recent results.
# [Writer] Why, while the style is the same (Text Body), do they appear so differently?

I'm giving some style to some debating notes that a friend shared with me, but sometimes I find this: when I select the text and set the format to Text Body ("Cuerpo de texto" in Spanish), it's only partially modified. Some of the text keeps its previous format: Corbel font (12 pt) instead of Liberation Sans (11 pt). I'll leave some screenshots below and a demo ODT file attached. I'm using LO 7.0.3.1 on GNU/Linux.

Styling ODT file demo.odt

Writer offers several style categories. These categories are like layers of tracing paper over your text. The deepest layer is the paragraph style, then character styles to change attributes locally, and over everything direct formatting. Upper layers override and hide formatting applied by lower layers.

In your sample file, the third paragraph has received direct formatting for Corbel 12 pt. When you style this paragraph with Text Body (or any other paragraph style), you change the paragraph layer and do not affect in any way the character or direct-formatting layer. Consequently, the direct formatting is still in effect.

Remove direct formatting by selecting the text where you suspect there is some and pressing Ctrl+M. This clears direct formatting, making that layer "fully transparent".

As a general rule, avoid direct formatting and format your text exclusively with styles (paragraph and character). Don't mix styling and direct formatting, otherwise you'll enter formatting hell.

I see, thank you so much!! That was perfectly explained. I imagine there is no way to do that without removing the bold words, right? Maybe I could create some character style that looks exactly like the text body with bold words and change them into it before clearing the text (although this seems terribly tedious in this almost-200-page document). (2020-11-05 22:35:35 +0200)

Character styles are used to emphasise words inside a paragraph whose format is "globally" defined by a paragraph style. The common but erroneous way to draw the reader's attention is to think in terms of "bold" or "italics". As an author, you qualify some words as "important", "ironical", "mandatory", ... You translate this meaning into typographical attributes (weight, angle, font face, colour, ...). Note that due to the limited number of attributes, several semantic qualifications may end up with the same appearance. Among the built-in character styles, you have Emphasis and Strong Emphasis, which are usually rendered as italics and bold respectively. Use them to mark your words. The task may look tedious because you didn't use styles from the start, notably the underestimated character styles, because MS Word has no notion of them. Styling a document consistently allows you to drastically change its appearance through styles without editing the text at all. (2020-11-05 22:59:51 +0200)

I see, thank you so much for such a complete explanation! I was asking, before noticing your new comment, whether there is a way to automatically exchange the formats (from direct formatting to Strong Emphasis or any other style). I think this would make this incredibly tedious work much easier. I mean, I found out about replacing direct formatting with the search tool, but nothing that connects character or paragraph styles with direct formatting. (2020-11-05 23:33:59 +0200)

You can't replace direct formatting with some character style with Search & Replace or another tool. From S&R's point of view, the attributes set by the different layers are indistinguishable: it can't tell if bold is part of the paragraph style or direct formatting. Therefore: a manual job. But you do it only once. After that, edit only with styles. The hardest part is not to restyle but to design the adequate set of styles. (2020-11-06 08:41:15 +0200)

I see, thanks! (2020-11-06 17:26:26 +0200)
# On the Question on Arrays

Previously, I posted a question on arrays and asked for a solution, because I did not understand the solution given.

$\begin{bmatrix} 2&0&1&0&2&0\\ 0&2&0&1&2&0\\ 1&0&2&0&2&0\\ 0&1&0&2&2&0\\ 1&1&1&1&2&0\\ 0&0&0&0&0&0 \end{bmatrix}$

In the above $6 \times 6$ array, one can choose any $k \times k$ subarray, with $1 < k \leq 6$, and add $1$ to all its entries. Is it possible to perform the operation a finite number of times such that all entries in the array are multiples of 3?

This was the answer from the Singapore Mathematical Olympiad 2013 (Senior):

The answer is no. Let the original array be $A$. Consider the following array $M= \begin{bmatrix} 0&1&1&-1&-1&0\\ -1&0&1&-1&0&1\\ -1&-1&0&0&1&1\\ 1&1&0&0&-1&-1\\ 1&0&-1&1&0&-1\\ 0&-1&-1&1&1&0 \end{bmatrix}.$ Multiply the corresponding elements of the two arrays and compute the sum modulo 3. It's easy to verify that this sum is invariant under the given operation. Since the original sum is 2, one can never obtain an array where all the entries are multiples of 3.

What does this mean? I think that the answer given is pretty vague, and I do not understand the reason or purpose of multiplying the elements of $A$ and $M$, or even how $M$ was derived in the first place. What does this have to do with the original operation in the question?

Note by Timothy Wan, 4 years, 7 months ago

The idea is that you want to find a property that is invariant under the allowed operations. Then you can calculate the value of the property for matrix $A$, and see how that compares with the value of the property for "all entries are multiples of 3". Calculating the sum modulo 3 makes a lot of sense, since we want the entries to be multiples of 3. Other possibilities could be modulo 6 or 9.

The "multiply the corresponding elements" simply means "weight the elements by this extent", i.e. the contribution of the first row of $A$ is equal to $2 \times 0 + 0 \times 1 + 1 \times 1 + 0 \times (-1) + 2 \times (-1) + 0 \times 0$.

Coming up with the matrix $M$ is much harder. It is not immediately apparent why that would work. It takes some practice and ideas to figure out how to approach the problem.

Staff – 4 years, 7 months ago
# zbMATH — the first resource for mathematics Contributions to the study of subgroup lattices. (English) Zbl 1360.20002 Bucharest: Matrix Rom (ISBN 978-606-25-0229-4/pbk). viii, 216 p. (2016). As the author tells us in his preface, this book is an improved version of his habilitation thesis at the Faculty of Mathematics of Al. I. Cuza University of Iasi, Romania. So after a short introduction into the field (Chapter 1), he does not present some parts of the theory of subgroup lattices of groups but discusses in Chapter 2 (Main results) some loosely connected properties of finite groups which he has studied in the last ten years. We give a short list of the main subjects dealt with. For this, let $$G$$ be a finite group, $$L(G)$$ the subgroup lattice, $$N(G)$$ the lattice of normal subgroups, $$C(G)$$ the poset of cyclic subgroups of $$G$$, and $$L$$ be a finite lattice. Section 2.1 (Basic properties and structure of subgroup lattices) contains some general results on groups $$G$$ with $$L(G)$$ or $$N(G)$$ pseudocomplemented, breaking points in $$C(G)$$, solitary subgroups and solitary quotients in $$G$$, and almost $$L$$-free groups. In Section 2.2 (Computational and probabilistic aspects of subgroup lattices) and Section 2.3 (Other posets associated to finite groups), the author tries to compute $$|L(G)|$$, $$|N(G)|$$, $$|C(G)|$$, the subgroup commutativity degree $$\mathrm{sd}(G)$$ of $$G$$, that is, the probability that $$HK=KH$$ for arbitrary $$H,K\leq G$$, the number of factorizations $$G=HK$$ where $$H,K\leq G$$, and the sum and product of all $$|H|$$ where $$H\in L(G)$$, $$H\in N(G)$$, or $$H\in C(G)$$. Of course, these numbers can only be computed for groups $$G$$ for which $$L(G)$$ is well-known and so the author studies them mainly for abelian, Hamiltonian, and dihedral groups, for $$p$$-groups with a cyclic maximal subgroup and for groups with only cyclic Sylow subgroups. Section 2.4 (Generalizations of subgroup lattices) is concerned with the lattice $$\mathrm{FL}(G)$$ of fuzzy subgroups of $$G$$ and the final Chapter 3 (Further research) contains some 50 problems on the subjects mentioned above. ##### MSC: 20-02 Research exposition (monographs, survey articles) pertaining to group theory 20D30 Series and lattices of subgroups 20E07 Subgroup theorems; subgroup growth 20D60 Arithmetic and combinatorial problems involving abstract finite groups
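To make the subgroup commutativity degree mentioned above concrete, here is a brute-force Python sketch for the small group $S_3$ (an illustration, not taken from the book): it enumerates all subgroups and counts the ordered pairs $(H, K)$ with $HK = KH$.

```python
from itertools import combinations, permutations

# Elements of S3 as permutation tuples; composition (p*q)(i) = p[q[i]].
G = list(permutations(range(3)))
e = tuple(range(3))
mul = lambda p, q: tuple(p[q[i]] for i in range(3))

# Brute force: a nonempty finite subset containing e and closed under the
# product is a subgroup.
subgroups = [set(S) for r in range(1, 7) for S in combinations(G, r)
             if e in S and all(mul(a, b) in S for a in S for b in S)]

prod = lambda H, K: {mul(h, k) for h in H for k in K}
n = len(subgroups)
good = sum(prod(H, K) == prod(K, H) for H in subgroups for K in subgroups)
print(n, good, good / n**2)  # 6 subgroups, 30 permuting pairs, sd(S3) = 5/6
```

The only non-permuting pairs in $S_3$ are the ordered pairs of distinct order-2 subgroups, which is why the degree comes out below 1.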
# All Questions

### Is this homomorphism in general surjective?
Let $R$ be a commutative ring and $I$ an ideal of $R$. Pick a fixed $0 \neq a \in I$ and consider the map $\phi: R \to I$ given by $r \mapsto ra$. Is this map surjective?

### To test the following systems of linear equations for equivalence
Let F be the field of complex numbers. I have two systems of equations: $x_1 - x_2 =0$, $2x_1 + x_2 =0$; and $3x_1 + x_2 =0$, $x_1 + x_2 =0$. The definition says that each equation in the first system ...

### Proving a set is dense
Let $A$ be a dense set of real numbers in $[0,1]$. I need to prove that $B=\{na : a \in A, n \in \mathbb{N}\}$ is dense in $[0,\infty)$. This is very intuitive but I fail to prove it. Any tips?

### To prove in a group that a left identity and left inverses imply a right identity and right inverses
Let $G$ be a nonempty set closed under an associative product, which in addition satisfies: A. There exists an $e$ in $G$ such that $a \cdot e=a$ for all $a \in G$. B. Given $a \in G$, ...

### Determine if the following series is convergent or divergent: $S=\sum_{k=1}^\infty (-1)^{k+1} \frac{k}{k^{2}+1}$
Now, I started by saying: consider $\sum_{k=1}^\infty \left\lvert \frac{k}{k^{2}+1} \right\rvert$; if this converges, that means S ...

### Paths and connectivity of graphs
I am trying to show that for a graph on $n\ge 3$ vertices with minimum degree of all vertices $\ge k/2$, G connected, G has a path of length k. I know if n is greater than k but n/2 is less than ...

### Logarithmic question
In the following question I fail to understand why the A option is correct. I understand that D is wrong, and that B and C are correct, but why is A correct? If $3^x=4^{x-1}$, then $x$ cannot be ...

### For what value of $k$ is the vector field solenoidal
Problem: Let $\mathbf{r} = x \hat{i} + y \hat{j} + z \hat{k}$ be the position vector of a general point in $3$-space, and let $s= |\mathbf{r}|$ be the length of $\mathbf{r}$. For what value of the ...

### Prove that the set of all integers $>0$ is the smallest inductive set
An inductive set is a set $I$ such that $1 \in I$ and if $x \in I$ then $x+1 \in I$. Some authors define the set of all integers $>0$ as the smallest inductive set, say Apostol's Analysis. But I ...

### Choosing new teammates
My sister gave me a combinatorial riddle. It doesn't appear to be hard, but I ask you if my thoughts are right, just for certainty. Here it is. Assume you belong to a group of $100$ people, and ...

### Non-abelian group $G$ satisfying $(a \cdot b)^i=a^i \cdot b^i$ for two consecutive integers
Give an example of a non-abelian group $G$ satisfying $(a \cdot b)^i=a^i \cdot b^i, \forall a, b \in G$ for two consecutive integers. This is question 5 from Herstein, page 35. I have proved that ...

### Pseudo-surreal numbers are analogous to?
I've been exploring surreal numbers. Real equivalent of the surreal number {0.5|}. I see that pseudo-surreal numbers seem to have an interesting branch of game theory. Still having a form of {x|y}, ...

### Integration of exponentials and logarithms, $\int_{z-1}^z \log(\frac{1}{z-y}) \exp (-| y| ^{3}) \, dy$
The integral I am dealing with is: $$\frac{3}{2 \Gamma \left(\frac{1}{3}\right)}\int_{z-1}^z \log \left(\frac{1}{z-y}\right) \exp \left(-\left| y\right| ^{3}\right) \, dy$$ where $z\in \mathbb{R}$ ...

### Can we say in a finite measure space that $f_n \to f$ in measure iff every subsequence of $f_n$ converges almost everywhere to $f$?
Assume $(X, \mu)$ is a finite measure space. Can we then say that $f_n \to f$ in measure iff every subsequence of $f_n$ converges almost everywhere to $f$? My tries: I know a theorem that states ...

### How to prove that the exponential function is the inverse of the natural logarithm by power series definition alone
The exponential function has the well-known power series representation/definition: $e^x = \sum_{n=0}^\infty \frac{x^n}{n!}$. And the natural logarithm has the less well-known power series ...

### Ball and urn method (counting problems)
How many ordered triples $(a, b, c)$ of positive integers exist with the property that $abc = 500$? Since $500 = 2^2 \cdot 5^3$, I believe this can be solved using ball and urn. Let $a = 2^{x_1}5^{y_1}$ ...

### On procedure-constrained matrix factorization
Given a rank $r$ matrix $A\in\Bbb R^{n\times m}$, is there a procedure to find $XY=A$ such that $X\in\Bbb R^{n\times r}$, $Y\in\Bbb R^{r\times m}$ with the property that $\max_{i,j}|x_{i,j}|$, $\max_{i,j}|y_{i,j}|$...

### Distribution equation $(x-a)T=0$
I have to solve the equation $(x-a)T=0$, where T is a distribution. By definition: $(x-a)\int T(x)\varphi(x)\,dx=0$. I know if I pose $X=x-a$ I find $XT(X)=0$ and $T(X)=\delta(X)$. But I am stuck trying to find ...

### What is the intuition behind / how can we interpret the eigenvalues and eigenvectors of Euclidean distance matrices?
Given a set of points $x_1,x_2,\dots,x_m$ in the euclidean space $\mathbb{R}^n$, we can form an $m\times m$ Euclidean distance matrix $D$ where $D_{ij}={\|x_i-x_j\|}^2$. We know a little bit about ...

### Cauchy sequence under a uniformly continuous function
Let $f:(1,4)\to \mathbb R$ be uniformly continuous and $\{a_n\}$ be a Cauchy sequence in $(1,2)$. Consider $x_n=a_n^2f(a_n^2)$ and $y_n=\frac{1}{1+a_n^2}f(a_n^2)$. Then which is correct? ...

I am solving warm-up problem 1.2 from the Concrete Mathematics book. I've got the right answer by induction: $$f(0) = 0,\qquad f(n) = 3f(n-1) + 2,$$ but I cannot figure out how to simplify it to the closed ...

Let $G$ be a finite group, and $H$ be a core-free subgroup of $G$ (that is to say, there is no nontrivial normal subgroup of $G$ contained in $H$). Denote by $\Omega$ the set of right cosets of $H$ in $G$ ...
# Ordinal utility

Ordinal utility theory states that while the utility of a particular good or service cannot be measured using a numerical scale bearing economic meaning in and of itself, pairs of alternative bundles (combinations) of goods can be ordered such that one is considered by an individual to be worse than, equal to, or better than the other. This contrasts with cardinal utility theory, which generally treats utility as something whose numerical value is meaningful in its own right. The concept was first introduced by Pareto in 1906.[1]

## Indifference curve mappings

When a large number of bundles of goods are compared, the preferences of the individual can be seen. This information is usually put together on a graph called an indifference map.

Each indifference curve is a set of points, each representing a combination of quantities of two goods or services, all of which combinations the consumer is equally satisfied with. The further a curve is from the origin, the greater is the level of utility. The slope of the curve (the negative of the marginal rate of substitution of X for Y) at any point shows the rate at which the individual is willing to trade off good X against good Y while maintaining the same level of utility. The curve is convex to the origin, assuming the consumer has a diminishing marginal rate of substitution. It can be shown that consumer analysis with indifference curves (an ordinal approach) gives the same results as that based on cardinal utility theory; i.e., consumers will consume at the point where the marginal rate of substitution between any two goods equals the ratio of the prices of those goods (the equi-marginal principle).

## Revealed preference

Revealed preference theory addresses the problem of how to observe ordinal preference relations in the real world. The challenge of revealed preference theory lies in part in determining what goods bundles were foregone, on the basis of their being less liked, when individuals are observed choosing particular bundles of goods.[2][3]

## Ordinal utility functions

An ordinal utility function describing a consumer's preferences over, say, two goods can be written as $u(x, y)$ where x and y are the quantities of the goods consumed. Both partial derivatives of this function are positive if the consumer prefers more of both goods. But the same preferences could be expressed as another utility function that is a monotonic transformation of u: $g(x, y) \equiv f(u(x, y)),$ where f is any globally increasing function. Utility functions g and u give rise to identical indifference curve mappings. Thus in ordinal utility theory, there is no concept of diminishing marginal utility, which would correspond to the second derivative of utility being negative. For example, even if u has a negative second derivative with respect to x, the equivalent utility function g may have a positive second derivative with respect to x.

2. ^ Chiaki Hara (6 June 1998). "7th Toiro-kai meeting (1997/1998)".
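The invariance of the preference ordering under monotonic transformations is easy to see numerically. A minimal Python sketch; the utility function u(x, y) = x*y is an assumed example, not from the article:

```python
import math

# Two utility functions related by a monotonic transformation f = log:
u = lambda x, y: x * y                 # assumed example utility function
g = lambda x, y: math.log(u(x, y))     # g = f(u) with f strictly increasing

bundles = [(1, 4), (2, 3), (3, 3), (4, 1)]
order_u = sorted(bundles, key=lambda b: u(*b))
order_g = sorted(bundles, key=lambda b: g(*b))
print(order_u == order_g)  # True: same ranking, hence the same indifference map
```

Because only the ordering matters, any strictly increasing f (log, cube, affine with positive slope, ...) would give the same result, which is exactly why second derivatives of u carry no meaning in the ordinal framework.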
### Stata Annotated Output Poisson Regression This page shows an example of poisson regression analysis with footnotes explaining the output. The data collected were academic information on 316 students. The response variable is days absent during the school year (daysabs), from which we explore its relationship with math standardized tests score (mathnce), language standardized tests score  (langnce) and gender (female). As assumed for a poisson model our response variable is a count variable, and each subject has the same length of observation time. Had the observation time for subjects varied, the poisson model would need to be adjusted to account for the varying length of observation time per subject. This point is discussed later in the page. Also, the poisson model, as compared to other count models (i.e., negative binomial or zero-inflated models), is assumed the appropriate model. In other words, we assume that the dependent variable is not over-dispersed and does not have an excessive number of zeros. The first half of this page interprets the coefficients in terms of poisson regression coefficients and the second half interprets the coefficients in terms of incidence rate ratios. We also run the estat ic command to calculate the likelihood ratio chi-square statistic. use http://www.ats.ucla.edu/stat/stata/notes/lahigh, clear generate female = (gender == 1) poisson daysabs mathnce langnce female Iteration 0: log likelihood = -1547.9709 Iteration 1: log likelihood = -1547.9709 Poisson regression Number of obs = 316 LR chi2(3) = 175.27 Prob > chi2 = 0.0000 Log likelihood = -1547.9709 Pseudo R2 = 0.0536 ------------------------------------------------------------------------------ daysabs | Coef. Std. Err. z P>|z| [95% Conf. Interval] -------------+---------------------------------------------------------------- mathnce | -.0035232 .0018213 -1.93 0.053 -.007093 .0000466 langnce | -.0121521 .0018348 -6.62 0.000 -.0157483 -.0085559 female | .4009209 .0484122 8.28 0.000 .3060348 .495807 _cons | 2.286745 .0699539 32.69 0.000 2.149638 2.423852 ------------------------------------------------------------------------------ estat ic ------------------------------------------------------------------------------ Model | Obs ll(null) ll(model) df AIC BIC -------------+---------------------------------------------------------------- . | 316 -1635.608 -1547.971 4 3103.942 3118.965 ------------------------------------------------------------------------------ ### Iteration Log, Model Summary and estat ic Iteration 0: log likelihood = -1547.9709 Iteration 1: log likelihood = -1547.9709a Poisson regression Number of obsc = 316 LR chi2(3)d = 175.27 Prob > chi2e = 0.0000 Log likelihood = -1547.9709b Pseudo R2f = 0.0536 estat ic ------------------------------------------------------------------------------ Model | Obs ll(null)d ll(model)d df AIC BIC -------------+---------------------------------------------------------------- . | 316 -1635.608 -1547.971 4 3103.942 3118.965 ------------------------------------------------------------------------------ a. Iteration Log - This is a listing of the log likelihood at each iteration. Poisson regression uses maximum likelihood estimation, which is an iterative procedure to obtain parameter estimates. If you are familiar with other regression models that use maximum likelihood (e.g., logistic regression), you may notice this iteration log behaves differently. 
Specifically, the log likelihood at iteration 0 does not correspond to the likelihood for the empty (or null) model. This is evident when we look under ll(null) from the estat ic command, which provides the log likelihood for the empty model. The log likelihood for the fitted model is given in the last iteration of the iteration log and under ll(model) from estat ic; note that both values are equal (unlike ll(null) and the log likelihood from iteration 0). The log likelihood for the fitted model is then used with ll(null) to calculate the Likelihood ratio chi-square test statistic. b. Log Likelihood - This is the log likelihood of the fitted model. It is used in the calculation of the Likelihood Ratio (LR) chi-square test of whether all predictor variables' regression coefficients are simultaneously zero and in tests of nested models. c. Number of obs - This is the number of observations used in the poisson regression. It may be less than the number of cases in the dataset if there are missing values for some variables in the model. By default, Stata does a listwise deletion of incomplete cases. d. LR chi2(3), ll(null) and ll(model) from estat ic - This is the LR test statistic for the omnibus test that at least one predictor variable regression coefficient is not equal to zero in the model. The degrees of freedom (the number in parenthesis) of the LR test statistic is defined by the number of predictor variables (3). LR chi2(3) is calculated as -2*[ll(null) - ll(model)] = -2*[-1635.608 - (-1547.971)] = 175.274. e. Prob > chi2 - This is the probability of getting a LR test statistic as extreme as, or more so, than the one observed under the null hypothesis; the null hypothesis is that all of the regression coefficients are simultaneously equal to zero. In other words, this is the probability of obtaining this chi-square test statistic (175.274) if there is in fact no effect of the predictor variables. This p-value is compared to a specified alpha level, our willingness to accept a Type I error, which is typically set at 0.05 or 0.01. The small p-value from the LR test,  p < 0.00001, would lead us to conclude that at least one of the regression coefficients in the model is not equal to zero. The parameter of the chi-square distribution used to test the null hypothesis is defined by the degrees of freedom in the prior line, chi2(3). f. Pseudo R2 - This is McFadden's pseudo R-squared. It is calculated as 1 - ll(model)/ll(null) = 0.0536. Poisson regression does not have an equivalent to the R-squared found in OLS regression; however, many have tried to derive an equivalent measure.  There are a variety of pseudo-R-square statistics.  Because this statistic does not mean what R-square means in OLS regression (the proportion of variance of the response variable explained by the predictors), we suggest interpreting this statistic with caution. ### Parameter Estimates ------------------------------------------------------------------------------ daysabsg | Coef.h Std. Err.i zj P>|z|j [95% Conf. Interval]k -------------+---------------------------------------------------------------- mathnce | -.0035232 .0018213 -1.93 0.053 -.007093 .0000466 langnce | -.0121521 .0018348 -6.62 0.000 -.0157483 -.0085559 female | .4009209 .0484122 8.28 0.000 .3060348 .495807 _cons | 2.286745 .0699539 32.69 0.000 2.149638 2.423852 ------------------------------------------------------------------------------ g. daysabs - This is the response variable in the poisson regression. 
Underneath daysabs are the predictor variables and the intercept (_cons).

h. Coef. - These are the estimated poisson regression coefficients for the model. Recall that the dependent variable is a count variable, and poisson regression models the log of the expected count as a function of the predictor variables. We can interpret the poisson regression coefficient as follows: for a one unit change in the predictor variable, the difference in the logs of expected counts is expected to change by the respective regression coefficient, given the other predictor variables in the model are held constant.

mathnce - This is the poisson regression estimate for a one unit increase in math standardized test score, given the other variables are held constant in the model. If a student were to increase her mathnce test score by one point, the difference in the logs of expected counts would be expected to decrease by 0.0035 unit, while holding the other variables in the model constant.

langnce - This is the poisson regression estimate for a one unit increase in language standardized test score, given the other variables are held constant in the model. If a student were to increase her langnce test score by one point, the difference in the logs of expected counts would be expected to decrease by 0.0122 unit, while holding the other variables in the model constant.

female - This is the estimated poisson regression coefficient comparing females to males, given the other variables are held constant in the model. The difference in the logs of expected counts is expected to be 0.4010 unit higher for females compared to males, while holding the other variables constant in the model.

_cons - This is the poisson regression estimate when all variables in the model are evaluated at zero. For males (the variable female evaluated at zero) with zero mathnce and langnce test scores, the log of the expected count for daysabs is 2.2867 units. Note that evaluating mathnce and langnce at zero is out of the range of plausible test scores. If the test scores were mean-centered, the intercept would have a natural interpretation: the log of the expected count for males with average mathnce and langnce test scores.

i. Std. Err. - These are the standard errors of the individual regression coefficients. They are used both in the calculation of the z test statistic, superscript j, and the confidence interval of the regression coefficient, superscript k.

j. z and P>|z| - These are the test statistic and p-value, respectively, for testing the null hypothesis that an individual predictor's regression coefficient is zero, given that the rest of the predictors are in the model. The test statistic z is the ratio of the Coef. to the Std. Err. of the respective predictor. The z value follows a standard normal distribution, which is used to test against a two-sided alternative hypothesis that the Coef. is not equal to zero. The probability that a particular z test statistic is as extreme as, or more so, than what has been observed under the null hypothesis is defined by P>|z|.

mathnce - The z test statistic testing that the slope for mathnce on daysabs is zero, given the other variables are in the model, is (-0.0035/0.0018) = -1.93, with an associated p-value of 0.053. If we set our alpha level at 0.05, we would fail to reject the null hypothesis and conclude the poisson regression coefficient for mathnce is not statistically different from zero given langnce and female are in the model.
langnce - The z test statistic testing that the slope for langnce on daysabs is zero, given the other variables are in the model, is (-0.0122/0.0018) = -6.62, with an associated p-value of <0.0001. If we set our alpha level at 0.05, we would reject the null hypothesis and conclude the poisson regression coefficient for langnce is statistically different from zero given mathnce and female are in the model.

female - The z test statistic testing that the difference between the logs of expected counts for males and females on daysabs is zero, given the other variables are in the model, is (0.4009/0.0484) = 8.28, with an associated p-value of <0.0001. If we set our alpha level at 0.05, we would reject the null hypothesis and conclude that the coefficient for female is statistically different from zero given mathnce and langnce are in the model.

_cons - The z test statistic testing that _cons is zero, given the other variables are in the model and evaluated at zero, is (2.2867/0.0700) = 32.69, with an associated p-value of <0.0001. If we set our alpha level at 0.05, we would reject the null hypothesis and conclude that _cons on daysabs is statistically different from zero given mathnce, langnce and female are in the model and evaluated at zero.

k. [95% Conf. Interval] - This is the confidence interval (CI) of an individual poisson regression coefficient, given the other predictors are in the model. For a given predictor variable with a level of 95% confidence, we'd say that we are 95% confident that upon repeated trials, 95% of the CI's would include the "true" population poisson regression coefficient. It is calculated as Coef. ± $(z_{\alpha/2})$(Std. Err.), where $z_{\alpha/2}$ is a critical value on the standard normal distribution. The CI is equivalent to the z test statistic: if the CI includes zero, we'd fail to reject the null hypothesis that a particular regression coefficient is zero, given the other predictors are in the model. An advantage of a CI is that it is illustrative; it provides information on where the "true" parameter may lie and the precision of the point estimate.

### Incidence Rate Ratio Interpretation

The following is the interpretation of the poisson regression in terms of incidence rate ratios, which can be obtained by poisson, irr after running the poisson model or by specifying the irr option when the full model is specified. This part of the interpretation applies to the output below.

Before we interpret the coefficients in terms of incidence rate ratios, we must address how we can go from interpreting the poisson regression coefficients as a difference between the logs of expected counts to incidence rate ratios. In the discussion above, poisson regression coefficients were interpreted as the difference between the logs of expected counts, where formally this can be written as $\beta = \log(\mu_{x+1}) - \log(\mu_x)$, where $\beta$ is the regression coefficient, $\mu$ is the expected count and the subscripts represent where the predictor variable, say $x$, is evaluated at $x$ and $x+1$ (implying a one unit change in the predictor variable $x$). Recall that the difference of two logs is equal to the log of their quotient, $\log(\mu_{x+1}) - \log(\mu_x) = \log(\mu_{x+1}/\mu_x)$, and therefore we could have also interpreted the parameter estimate as the log of the ratio of expected counts: $\beta = \log(\mu_{x+1}/\mu_x)$. This explains the "ratio" in incidence rate ratios. In addition, what we referred to as a count can also be called a rate. By definition, a rate is the number of events per time (or space), which our response variable qualifies as.
Hence, we could also interpret the poisson regression coefficients as the log of the rate ratio: β = log(rate_{x+1}/rate_x). This explains the "rate" in incidence rate ratio. Finally, the rate at which events occur is called the incidence rate; thus we arrive at being able to interpret the coefficients in terms of incidence rate ratios.

Also, note that each subject in our sample was followed for one school year. If this was not the case (i.e., some subjects were followed for half a year, some for a year and the rest for two years) and we were to neglect the exposure time, our poisson regression estimates would be biased, since our model assumes all subjects had the same follow-up time. If this were an issue, we would use the exposure option, exposure(varname), where varname corresponds to the length of time an individual was followed, to adjust the poisson regression estimates.

poisson daysabs mathnce langnce female, irr

Iteration 0:   log likelihood = -1547.9709
Iteration 1:   log likelihood = -1547.9709

Poisson regression                             Number of obs =        316
                                               LR chi2(3)    =     175.27
                                               Prob > chi2   =     0.0000
Log likelihood = -1547.9709                    Pseudo R2     =     0.0536

------------------------------------------------------------------------------
     daysabs |     IRR^a   Std. Err.      z    P>|z|    [95% Conf. Interval]^b
-------------+----------------------------------------------------------------
     mathnce |   .996483   .0018149   -1.93   0.053     .9929321    1.000047
     langnce |  .9879214   .0018127   -6.62   0.000      .984375    .9914806
      female |  1.493199    .072289    8.28   0.000      1.35803    1.641823
------------------------------------------------------------------------------

a. IRR - These are the incidence rate ratios for the poisson model shown earlier. We obtain the incidence rate ratio by exponentiating the poisson regression coefficient.

mathnce - This is the estimated rate ratio for a one unit increase in math standardized test score, given the other variables are held constant in the model. If a student were to increase his mathnce test score by one point, his rate for daysabs would be expected to decrease by a factor of 0.9965, while holding all other variables in the model constant.

langnce - This is the estimated rate ratio for a one unit increase in language standardized test score, given the other variables are held constant in the model. If a student were to increase his langnce test score by one point, his rate for daysabs would be expected to decrease by a factor of 0.9880, while holding all other variables in the model constant.

female - This is the estimated rate ratio comparing females to males, given the other variables are held constant in the model. Females, compared to males and holding the other variables constant in the model, are expected to have a rate 1.493 times greater for daysabs.

b. [95% Conf. Interval] - This is the CI for the rate ratio, given the other predictors are in the model. For a given predictor with a level of 95% confidence, we'd say that upon repeated trials, 95% of the CIs would include the "true" population incidence rate ratio, given the other variables are in the model.
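To see concretely how the coefficient table relates to the IRR table, the rounded coefficients and standard errors quoted above can be exponentiated by hand. The short sketch below is illustrative only and not part of the original page; the numbers are the rounded values transcribed from the text.

```python
import math

# Rounded coefficients and standard errors transcribed from the text above.
coefs = {"mathnce": -0.0035, "langnce": -0.0122, "female": 0.4009}
ses = {"mathnce": 0.0018, "langnce": 0.0018, "female": 0.0484}
z_crit = 1.959964  # z_{alpha/2} for a 95% confidence interval

for name, b in coefs.items():
    se = ses[name]
    irr = math.exp(b)                               # IRR = exp(Coef.)
    lo, hi = math.exp(b - z_crit * se), math.exp(b + z_crit * se)
    print(f"{name}: IRR = {irr:.4f}, z = {b / se:.2f}, "
          f"95% CI = ({lo:.4f}, {hi:.4f})")
```

Up to rounding, this reproduces the IRR column and its confidence limits in the output above.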
# Why is the normal contact force horizontal on an inclined ladder? There is only one force acting on the ladder which is its weight and it acts vertically downwards. Then why does the normal contact force from the vertical wall act horizontally on the ladder? There must be a horizontal force acting on the wall to exert a horizontal force on the ladder. What causes the horizontal force on the wall and what is it called? • I have updated my answer to give you, what I think is, a more definitive explanation as to why there needs to be a horizontal reaction force at the wall. Hope it helps. Dec 27, 2019 at 16:39 Think about how a ladder stands up in real life. Would the ladder stay in the orientation shown in your image if the ground were ice? No! The reason? Friction. The friction force, represented by $$\vec{F}_{ff}$$ in the figure acts to prevent the ladder from sliding to the right. There are actually 5 forces acting on this ladder: • $$\vec{F}_g$$: the gravitational force (aka the "weight" force), which pushes the ladder toward the ground • $$\vec{F}_w$$: The normal force of the wall on the ladder, which prevents the ladder from falling into the wall. • $$\vec{F}_{fw}$$: The friction force of the wall on the ladder, which prevents the ladder from sliding down the wall • $$\vec{F}_f$$: The normal force of the floor on the ladder, which prevents the ladder from falling through the ground. • $$\vec{F}_{ff}$$: The friction force of the floor on the ladder, which prevents the ladder from sliding to the right. • I think your explanation in the first paragraph, while a correct intuitive explanation, does not explain why there is a need for a horizontal friction force when the only external forces acting on the ladder are vertical. I think that is what the OP is asking. Dec 27, 2019 at 15:14 I feel like there is something missing in this diagram, which is torque. In reality, there is a torque on the ladder, due to gravity, which causes it to want to rotate counterclockwise around the point where it touches the floor. This torque is "responsible" in some sense for the force of the top of the ladder against the wall (and the counterbalancing force of the foot of the ladder against the floor's friction.) I don't see any torques in your free body diagram, although I do see an angle "alpha" at the base of the ladder, which is suggestive that maybe there should be some. If you haven't covered torque yet, this is not a great problem to try to work through. • There is nothing missing in the diagram. The torque, or moment, that the gravitational force has about the points where the ladder contacts the ground and wall are taken into account when summing the moments about those points and setting them to zero for static equilibrium. The locations where the ladder contacts the wall and floor offer no moment reaction (like a hinge support on a simply supported beam has no moment reaction). Dec 27, 2019 at 15:52 There must be a horizontal force acting on the wall to exert a horizontal force on the ladder. What causes the horizontal force on the wall and what is it called? Actually, there must be a horizontal force acting somewhere on the ladder to require an equal and opposite normal reaction force on the wall for equilibrium. That horizontal force acting on the ladder is the friction force at the base of the ladder. So what your question really boils down to is, why is there a friction force at the base of the ladder? @Bunji has given you an intuitive explanation. 
The following is in terms of the gravitational force acting on the ladder. To answer that question, note that any force can be resolved into mutually perpendicular components. Therefore $$F_g$$ can be resolved into two components, one acting down and parallel to the ladder, $$F_g\sin\alpha$$, and one perpendicular to the ladder, $$F_g\cos\alpha$$. At the base of the ladder, the force down and parallel to the ladder has a vertical downward component acting on the ground and a horizontal component acting on the ground to the right. Per Newton's third law these forces have equal and opposite reaction forces, as shown in the free body diagram of the ladder at the base. One of those is the horizontal friction force acting to the left. For equilibrium you then need a horizontal reaction force on the wall for the sum of the horizontal forces on the ladder to be zero.

All of the above is intended only to explain the reason for a normal reaction force at the wall. Given that, you now have 4 unknown reaction forces and one known force, $$F_g$$. To solve for the 4 unknowns, you need 4 equations. From here you should be able to identify the needed equations if you realize that the sums of the moments where the ladder contacts the ground and the wall have to be zero for equilibrium. Hope this helps.

Normal forces are always perpendicular to the direction of possible slipping since they do no work. Zero work means they must be perpendicular to any displacement or motion. Since the ladder can slip downwards at the top, the only possible direction for the normal force there is horizontal. You can think of normal forces as enforcers of a specific constraint. In this case the ladder cannot interpenetrate the wall, so a force that resists pushing into the wall must be perpendicular to the wall.

Normal forces act perpendicular to the surfaces in contact. The force acting on the ladder is actually somewhere in between the horizontal and vertical forces shown. Those are just the components of the normal force.

Actually, if you see the diagram, the body is in rotational inertia, and the rod is not perfectly vertical or horizontal; the thing at work here is the frictional force between the rod and the surfaces. Friction always tries to maintain the inertia, and the friction that acts here is static friction.

• Hmmm.. I don't recall statics problems ever talking in terms of "rotational inertia", which is defined, as I understand it, as the measure of an object's resistance to change in rotation. The ladder is in static equilibrium and therefore does not, by definition, undergo rotation. If there is no rotation, there is no resistance to a change in rotation. To prevent rotation, the sum of the moments anywhere on the ladder has to equal zero. Dec 27, 2019 at 15:59
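To make the equilibrium bookkeeping concrete, here is the standard textbook special case with a frictionless wall (a simplification of mine, not part of the answers above), which leaves three unknowns and three equations for a ladder of length $$L$$:

$$\sum F_x = F_w - F_{ff} = 0, \qquad \sum F_y = F_f - F_g = 0,$$
$$\sum M_{\text{base}} = F_w\,L\sin\alpha - F_g\,\frac{L}{2}\cos\alpha = 0 \;\Longrightarrow\; F_w = F_{ff} = \frac{F_g}{2}\cot\alpha.$$

The ladder stays put only if the floor can supply this much friction, $$\frac{F_g}{2}\cot\alpha \le \mu F_f = \mu F_g$$, i.e. $$\cot\alpha \le 2\mu$$; this is why an icy floor (small $$\mu$$) or a shallow angle lets the ladder slide.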
## Ising 2d Program

Ising2d simulates $N$ Ising model spins on an $L \times L$ square lattice with periodic boundary conditions. The Metropolis Monte Carlo algorithm is used (a minimal sketch of a single Metropolis sweep is given after the problems below). The goal of this simulation is to explore the properties of the 2d Ising model, including high and low temperature behavior and the nature of the phase transition between a ferromagnetic state at low temperatures and a paramagnetic state at high temperatures.

Problem: Simulation of the two-dimensional Ising model

Use Program Ising2d to explore some of the properties of the two-dimensional Ising model on a square lattice at a given temperature $T$ and external magnetic field $H$. First choose $N=L^2 = 32^2$ and set $H=0$. The initial orientation of the spins is all spins up.

1. Choose $T = 10$ and run until equilibrium has been established. Is the orientation of the spins random and the mean magnetization approximately equal to zero? What is a typical size of a domain, a region of parallel spins?
2. Choose a low temperature such as $T = 0.5$. Are the spins still random or is there a preferred direction? You will notice that $\overline{M} \approx 0$ for sufficiently high $T$ and $\overline{M} \neq 0$ for sufficiently low $T$. Hence, there is an intermediate value of $T$ at which $\overline{M}$ first becomes nonzero.
3. Choose $L=4$ and $T=2.0$. Does the sign of $M$ change during the simulation? Choose a larger value of $L$ and observe if the sign of $M$ changes. Will the sign of $M$ change for $L \gg 1$? Should a theoretical calculation of $\langle M\rangle$ yield $\langle M \rangle \neq 0$ or $\langle M \rangle = 0$ for $T < T_c$?
4. Start at $T=4$ and determine the temperature dependence of $m$, the zero-field susceptibility $\chi$, the mean energy per spin $E/N$, and the specific heat $C$. Decrease the temperature in intervals of 0.2 until $T \approx 1.6$, equilibrating for at least $1000\,$mcs before collecting data at each value of $T$. Describe the qualitative temperature dependence of these quantities. (When the simulation is stopped, the mean magnetization and the mean of the absolute value of the magnetization are returned.) Because $M$ sometimes changes sign for small systems, the value of $\langle |m| \rangle$ is a more accurate representation of the magnetization. For this reason the susceptibility is given by $$\chi = \frac{1}{kT}\big[\langle M^2\rangle -\langle|M|\rangle^2\big], \label{eq:5/chiabs}$$ rather than by using $\langle M \rangle$.
5. Set $T = T_c \approx 2.269$ and choose $L \geq 128$. Obtain $\langle m \rangle$ for $H = 0.01$, 0.02, 0.04, 0.08, and 0.16. Make sure you equilibrate the system at each value of $H$ before collecting data. Make a log-log plot of $m$ versus $H$ and estimate the critical exponent $\delta$ using $$|m| \sim |H|^{1/\delta} \qquad (T = T_c). \label{eq:5/delta}$$

Problem: The Ising model in an external magnetic field

Use Program Ising2d to simulate the Ising model on a square lattice at a given temperature. Choose $N=L^2=32^2$. Run for at least 200 Monte Carlo steps per spin at each value of the field.

1. Set $H=0.2$ and $T = 3$ and estimate the approximate value of the magnetization. Then change the field to $H = 0.1$ and continue updating the spins (do not press New) so that the simulation is continued from the last microstate. Note the value of $m$. Continue this procedure with $H=0$, then $H = -0.1$, and then $H=-0.2$. Do your values of $m$ change abruptly as you change the field? Is there any indication of a phase transition as you change $H$?
2. 
Repeat the same procedure as in part (a) at $T = 1.8$, which is below the critical temperature. What happens now? Is there evidence of a sudden change in $m$ as you change the direction of the field?

Problem: Simulations of metastability

1. Use Program Ising2d to simulate the Ising model in a magnetic field. Choose $L=64$, $T=1$, and $H=0.7$. Run until the system reaches equilibrium. You will notice that most of the spins are aligned with the magnetic field.
2. Pause the simulation and let $H=-0.7$; we say that we have “flipped” the field. Continue the simulation after the flip of the field and watch the spins. What is the equilibrium state of the system after the flip of the field? Do the spins align themselves with the magnetic field immediately after the flip? Monitor $m$ as a function of time. Is there a time interval during which the mean value of $m$ does not change appreciably? At what time does $m$ change sign? Is the time when $m$ becomes negative the same each time you do the simulation? (The program uses a different random number seed each time it is run.)
3. Keep the temperature fixed at $T=1$, set $H = 0.6$, and flip the field as in part (b). How does the time at which $m$ changes sign compare to the mean time for $|H|=0.7$?

Problem: Finite-size scaling and critical exponents

We expect that if the correlation length $\xi(T)$ is less than the linear dimension $L$ of the system, our simulations will yield results comparable to an infinite system. However, for $T$ close to $T_c$, the results of simulations will be limited by finite-size effects. Because we can only simulate finite systems, it is difficult to obtain estimates for the critical exponents $\alpha$, $\beta$, and $\gamma$ by varying the temperature. The effects of finite system size can be made quantitative by the following argument, which assumes that the only important length near the critical point is the correlation length. Consider, for example, the critical behavior of $\chi$. If the correlation length $\xi \gg 1$ (all lengths are measured in terms of the lattice spacing) but is much less than $L$, a power law behavior is expected to hold. However, if $\xi$ is comparable to $L$, $\xi$ cannot change appreciably and $$\chi \sim |\epsilon|^{-\gamma}\label{eq:5/gamma}$$ is no longer applicable. This qualitative change in the behavior of $\chi$ and other physical quantities occurs for $$\xi \sim L \sim |T - T_c|^{-\nu}.\label{eq:5/lenscale}$$ We invert (\ref{eq:5/lenscale}) and write $$|T - T_c| \sim L^{-1/\nu}. \label{eq:5/distance}$$ If $\xi$ and $L$ are approximately the same size, we can substitute \eqref{eq:5/distance} into \eqref{eq:5/gamma} to obtain $$\chi(T=T_c) \sim [L^{-1/\nu}]^{-\gamma} \sim L^{\gamma/\nu}. \label{eq:5/finitechi}$$ The relation (\ref{eq:5/finitechi}) between $\chi$ and $L$ at $T=T_c$ is consistent with the fact that a phase transition is defined only for infinite systems. We can use the relation (\ref{eq:5/finitechi}) to determine the ratio $\gamma/\nu$. This method of analysis is known as finite-size scaling.

1. Use Program Ising2d to estimate $\chi$ at $T=T_c$ for different values of $L$. Make a log-log plot of $\chi$ versus $L$ and use the scaling relation \eqref{eq:5/finitechi} to determine the ratio $\gamma/\nu$. Use the exact result $\nu=1$ to estimate $\gamma$. Then use the same reasoning to determine the exponent $\beta$ and compare your estimates for $\beta$ and $\gamma$ with the exact values $\beta = 1/8$ and $\gamma = 7/4$.
2. Make a log-log plot of $C$ versus $L$. 
If your data for $C$ are sufficiently accurate, you will find that the log-log plot of $C$ versus $L$ is not a straight line but shows curvature. The reason is that the exponent $\alpha$ equals zero for the two-dimensional Ising model, and $C \sim C_0 \ln L$. Are your data for $C$ consistent with this form? The constant $C_0$ is approximately 0.4995.

## Resources

Problems 5.14, 5.19, 5.22 and 5.38 in Statistical and Thermal Physics: With Computer Applications, 2nd ed., Harvey Gould and Jan Tobochnik, Princeton University Press (2021).

OSP Projects:
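Since every problem above leans on the Metropolis algorithm, here is a minimal sketch of one Monte Carlo sweep for the zero-field square-lattice model with periodic boundaries. It is my own illustration, not the Ising2d program itself, and it takes $J = k_B = 1$.

```python
import numpy as np

def sweep(spins, T, rng):
    """One Metropolis Monte Carlo step per spin (one mcs) for the 2d Ising
    model at H = 0 with periodic boundary conditions."""
    L = spins.shape[0]
    for _ in range(L * L):
        i, j = rng.integers(0, L, size=2)
        # Sum of the four nearest neighbors, with periodic wrap-around.
        nn = (spins[(i + 1) % L, j] + spins[(i - 1) % L, j] +
              spins[i, (j + 1) % L] + spins[i, (j - 1) % L])
        dE = 2 * spins[i, j] * nn          # energy cost of flipping spin (i, j)
        if dE <= 0 or rng.random() < np.exp(-dE / T):
            spins[i, j] = -spins[i, j]     # accept the trial flip
    return spins

rng = np.random.default_rng(0)
spins = np.ones((32, 32), dtype=int)       # initial state: all spins up
for _ in range(100):                       # equilibrate (the problems ask for >= 1000 mcs)
    sweep(spins, T=2.0, rng=rng)
print(spins.mean())                        # magnetization per spin, m
```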
# Mean and Standard Deviation of 100 Observations Were Found to Be 40 and 10 Respectively. - Mathematics

Mean and standard deviation of 100 observations were found to be 40 and 10 respectively. If at the time of calculation two observations were wrongly taken as 30 and 70 in place of 3 and 27 respectively, find the correct standard deviation.

#### Solution

Given: Number of observations, n = 100

Mean, $\bar{x}$ = 40

Standard deviation, $\sigma$ = 10

We know that $\bar{x} = \frac{\sum_{} x_i}{100}$

$\Rightarrow \frac{\sum_{} x_i}{100} = 40$

$\Rightarrow \sum_{} x_i = 4000$

∴ Correct $\sum_{} x_i = 4000 - \left( 30 + 70 \right) + \left( 3 + 27 \right) = 3930$

Correct mean = $\frac{\text{Correct } \sum_{} x_i}{100} = \frac{3930}{100} = 39.3$

Now, incorrect variance, $\sigma^2 = \frac{\sum_{} x_i^2}{100} - \left( 40 \right)^2$

$\Rightarrow \frac{\sum_{} x_i^2}{100} = 100 + 1600 = 1700$

$\Rightarrow \sum_{} x_i^2 = 170000$

Correct $\sum_{} x_i^2 = 170000 - {30}^2 - {70}^2 + 3^2 + {27}^2 = 164938$

∴ Correct standard deviation $= \sqrt{\frac{164938}{100} - \left( 39.3 \right)^2}$

$= \sqrt{1649.38 - 1544.49}$

$= \sqrt{104.89}$

$\approx 10.24$

#### APPEARS IN

RD Sharma Class 11 Mathematics Textbook
Chapter 32 Statistics
Exercise 32.6 | Q 8 | Page 42
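The correction is easy to check numerically; the sketch below is mine and uses only the summary statistics given in the problem.

```python
n, mean, sd = 100, 40.0, 10.0
sum_x = n * mean                            # 4000
sum_x2 = n * (sd**2 + mean**2)              # 170000

# Replace the wrongly recorded 30 and 70 by the true values 3 and 27.
sum_x += (3 + 27) - (30 + 70)               # 3930
sum_x2 += (3**2 + 27**2) - (30**2 + 70**2)  # 164938

correct_mean = sum_x / n                    # 39.3
correct_var = sum_x2 / n - correct_mean**2  # 104.89
print(correct_mean, correct_var**0.5)       # 39.3, ~10.24
```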
# 2.1  Recognizing Bit Vectors

Non-negative bit vectors of a given width are recognized by the following predicate:

Definition 2.1.1   (bvecp) Let $x \in \mathbb{Z}$ and $k \in \mathbb{Z}$. Then $x$ is a $k$-bit vector iff $x \in \mathbb{N}$ and $x < 2^k$.

Note that no natural number has a well-defined width: every 4-bit vector is more generally a $k$-bit vector for all $k \geq 4$, and 0 is a $k$-bit vector for all $k \geq 0$.

(bvecp-monotone) Let $x \in \mathbb{N}$, $m \in \mathbb{N}$, and $n \in \mathbb{N}$. If $m \leq n$ and $x$ is an $m$-bit vector, then $x$ is an $n$-bit vector.

PROOF: Since $x < 2^m$ and $m \leq n$, we have $x < 2^m \leq 2^n$, which implies that $x$ is an $n$-bit vector.

(bvecp-shift-down) Let $x \in \mathbb{N}$, $n \in \mathbb{N}$, and $k \in \mathbb{N}$. If $x$ is an $n$-bit vector, then $\lfloor x/2^k \rfloor$ is an $(n-k)$-bit vector.

PROOF: By Definition 2.1.1, $\lfloor x/2^k \rfloor \leq x/2^k < 2^n/2^k = 2^{n-k}$.

(bvecp-shift-up) Let $x \in \mathbb{N}$, $n \in \mathbb{N}$, and $k \in \mathbb{N}$. If $x$ is an $n$-bit vector, then $2^k x$ is an $(n+k)$-bit vector.

PROOF: By Definition 2.1.1, $2^k x < 2^k \cdot 2^n = 2^{n+k}$.

(bvecp-product) Let $x \in \mathbb{N}$, $y \in \mathbb{N}$, $m \in \mathbb{N}$, and $n \in \mathbb{N}$. If $x$ is an $m$-bit vector and $y$ is an $n$-bit vector, then $xy$ is an $(m+n)$-bit vector.

PROOF: $x < 2^m$ and $y < 2^n$, so $xy < 2^m \cdot 2^n = 2^{m+n}$.

(bvecp-1-rewrite) Let $x \in \mathbb{N}$. If $x$ is a 1-bit vector, then $x = 0$ or $x = 1$.

PROOF: By Definition 2.1.1, $x \in \mathbb{N}$ and $x < 2$.

David Russinoff 2017-08-01
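The predicate is easy to render executably; the following Python sketch of Definition 2.1.1 and a few of the lemmas is my own illustration, not code from the library.

```python
def bvecp(x, k):
    """Definition 2.1.1: x is a k-bit vector iff x is a natural number
    with x < 2**k."""
    return isinstance(x, int) and 0 <= x < 2 ** k

# bvecp-monotone: a 4-bit vector is also a k-bit vector for every k >= 4.
assert bvecp(13, 4) and bvecp(13, 7)
# bvecp-shift-down: dropping k low-order bits shrinks the width by k.
assert bvecp(13 >> 2, 4 - 2)
# bvecp-product: widths add under multiplication.
assert bvecp(13 * 13, 4 + 4)
```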
How Different are Men and Women?

• 4.4k I definitely agree that the hormonal explanations for gender are more important than ideas about reincarnation, which are purely speculative. There is also the possibility of neuroscience leading to new findings. One important area is trying to establish whether or not gender dysphoric individuals have physical differences here, possibly related to hormones in brain development before birth, and even afterwards. There is also a lot that is not understood about the genetics of gender. One aspect is how gender differentiation, which was previously thought to be due to the sex chromosomes, is not that simple. One gene which has been identified as extremely important is Foxl2. Apparently, this switches on or off certain processes in sexual differentiation. The nature of sex chromosomes is an important area, although sometimes people may exaggerate the importance of chromosomes. Most people have never had chromosome tests. Recently, I read that it has come to light that more men have chromosome disorders than previously thought. This includes XXY (Klinefelter's syndrome) and XYY. The XYY karyotype is of significance because it was reported to be more common among men who had committed crimes.

• 4.4k It does seem that the nature of gender has been exaggerated so much culturally. Of course, in the animal kingdom there is sexual performance, so some of this may be due to the instinctual or biological patterns of nature. However, the sociology of gender has been important in pointing to the cultural aspects. In particular, the postmodernist deconstruction of gender was extremely important in the development of critical theory about gender and its dynamics.

• 4.4k I am just also writing in response to your mention of thinking about gender in relation to race. It does seem that sexual inequality and racial inequality may have coexisted. Both involve biological differences being used as a basis for subordination. During the last century there were major shifts in questioning racism and sexism. In particular, feminism identified the existence of a patriarchy in history. Thinking about the nature of biological differences and the political aspects of this has been an important area. It has led to people querying gender essentialism. It is likely that in the aftermath of postmodernism there are still a lot of questions, especially about the interplay of biology, culture and politics. Mainstream religion, especially fundamentalism, was an important dynamic force. In the twenty-first century, it may be that there is a void of uncertainty, especially in the 'post-truth' world.

• 8.8k In terms of identity men and women or trans do not exist. Those terms are societal shorthand - useful tools to make communicating a bit easier. But all that exists are unique individuals. The second the individual starts to accept these generalizations as actually defining them, the soul loses its wings. Radical freedom, where you can be whatever you want to be as long as you free your mind from external constraints, is an interesting notion, but it's more of an aspirational concept than something that really exists. Even in your metaphorical language ("the soul loses its wings"), you allude to obvious limitations. No matter how much I wish to cast aside the external restraints of my nature and nurture, I won't be able to fly (as in literally fly). The problem with freedom is that it is a pretty slippery concept of questionable metaphysical standing. 
What I mean is that there must be a driver that determines why you choose A over B, and if you've discounted your genetic composition and you've discounted your environment as being causative of that decision, then what is left? Do you mean to say that your soul, acting alone, based upon its nature, decided without constraint? Are you not then really just arguing that nature (as opposed to nurture) made you act as you did, meaning, basically, "you were born that way."

• 7k If you start with the enlightenment image of the white man as 'thinking thing', you get a physically feminised white man in relation to the physically hyper-masculine black man. This results in the need for the white woman to be ultra feminine (empty-headed), to make the white man look masculine by comparison, whereas the black woman is physically the amazon. Such is Cleaver's insight, and it still rules the unconscious to a great extent. What this means is that the question of whether gender is more physical or mental (cultural/brain-chemical) is already racialised. It already depends on which racial stereotype is being considered, and it is usually the white one.

• 4.4k The whole interplay between gender and racism in power is important, as well as the way in which stereotypes impact on life. This involves the concept of otherness. One essay on this is Homi Bhabha's 'The Other Question: Stereotype, Discrimination and the Discourse of Colonialism'. He speaks of power in discourse, saying how it involves the 'articulation of difference - racial and sexual. Such an articulation becomes crucial if it is held that the body is always simultaneously (if conflictually) inscribed in both the economy of discourse, domination and power.'

• 9.8k I suppose you could view it as a radical free choice position. I believe one can only explore that which is truly authentic to the self when one is free of external pressures on the mind. That includes both nature and nurture, and thus societally-constructed gender identities, whether they're traditional or trans. In terms of identity men and women or trans do not exist. Those terms are societal shorthand - useful tools to make communicating a bit easier. But all that exists are unique individuals. The second the individual starts to accept these generalizations as actually defining them, the soul loses its wings. In order for there to be "radical free choice" or anything near it, there would have to be no human nature. Nothing built in. We would have to be born as blank slates.

• 9.8k Culture exaggerates sexual differences where they statistically occur, and invents them everywhere else. I don't think biological sexual differences are just "statistical." I think they are obvious and significant. To deny this is to ignore the evidence of your senses. That doesn't mean we are destined and condemned to living out societal expectations, but it's not some trivial artifact of our troglodyte past. Always good to be able to use "troglodyte" in a post.

• 9.8k Thinking about the nature of biological differences and the political aspects of this has been an important area. It has led to people querying gender essentialism. It is likely that in the aftermath of postmodernism, there are still a lot of questions, especially the interplay of biology, culture and politics. I think what you write is true, but that doesn't mean that those "querying gender essentialism" have got it right. Denying who we irrefutably are for political purposes is not liberation, it's foolishness. 
• 9.8k Do you mean to say that your soul, acting alone, based upon its nature, decided without constraint? Are you not then really just arguing that nature (as opposed to nurture) made you act as you did, meaning, basically, "you were born that way." I agree with much of what you say, but I don't think @Tzeentch's position requires that we be completely ruled by our nature. I think it would have to mean that our true self, our soul, comes from somewhere outside of either nature or nurture.

• 9.8k The whole interplay between gender and racism in power is important as well as the way in which stereotypes impact on life. This involves the concept of otherness. I think overemphasizing the parallel between racial oppression and sexual discrimination is a mistake. The situations are different.

• 7k I don't think biological sexual differences are just "statistical." I think they are obvious and significant. To deny this is to ignore the evidence of your senses. If that were the case, there would be no need to differentiate them by artificial means such as designated clothing, hairstyles etc. In the days when I had long hair and a child in a pushchair, I was frequently mistaken for a woman from a short distance - despite the beard. Anecdotes of serious misidentifications with 'ladyboys' in foreign parts have also reached me, so I take your claim of infallibility on the subject with a deal of scepticism.

• 9.8k If that were the case, there would be no need to differentiate them by artificial means such as designated clothing, hairstyles etc. I don't think that's true. I don't deny there are social pressures to conform to accepted sexual behaviors, but that's clearly, to me at least, not all there is to it.

• 7k I don't think that's true. You are not saying anything. What is the need to differentiate the sexes by dress and hairstyle, then? I'm saying it's because you need to know who to fight and who to fuck, and you can't always tell by size, shape, sound... If you can always tell, then there must be some other reason.

• 9.8k What is the need to differentiate the sexes by dress and hairstyle, then? Why can't it be both - biology and society?

• 8.8k I agree with much of what you say, but I don't think Tzeentch's position requires that we be completely ruled by our nature. I think it would have to mean that our true self, our soul, comes from somewhere outside of either nature or nurture. The problem then comes from statements like "being true to yourself," as if your soul is a certain way, that you were made a certain way, which would continue to demand that you be controlled by the nature of your soul. I'm not sure why it matters if by "nature" we mean genetic composition or soul. That is, if I have a Hanover soul, I gotta be Hanoveresque, which means I can't be T Clarkesque. If I have a male soul, I have to be a male. I don't see where this gives me more freedom.

• 7k It can't be either, if you mean by 'it' the answer to my question to you. I have already given you personal testimony that people cannot always 'obviously' distinguish the sexes. This is why they have tests in sport, and why we had a female pope. Some species do have clear markers for sex of size, or plumage or shape, but humans do not. Manboobs are generally smaller than womanboobs, but small womanboobs can be smaller than merely medium manboobs. That is to say, the boobs thing is a statistical difference. Nor does one sex have colourful plumage or horns. So we exaggerate the differences with cultural codes. 
• 7.6k A very informative post Jack. I'm in your debt. Clearly there's been a lot of research since I last touched a book on biology. My files are outdated; nevertheless, since something is better than nothing, I'm OK with hanging onto what I learned many suns ago in college. That's that. I wish there was someone here on the forum who knew more about the sexual revolution, and no, I'm not talking about the one that happened in the 60s - 70s. At what stage in the evolution of life did it undergo the mitosis-to-meiosis transformation, and why? Sex, from what I know, is the dominant mode of reproduction in metazoans. The natural question then is this: is the LGBTIQ community a sign of a reproductive revolution (asexual $\to$ sexual $\to$ ?)? From a mathematical perspective it makes perfect sense - the more combinations & permutations there are, the better it is. I haven't worked out the details though, so don't ask me to explain. Have an awesome day Jack.

• 9.8k I have already given you personal testimony that people cannot always 'obviously' distinguish the sexes. This is why they have tests in sport, and why we had a female pope. Some species do have clear markers for sex of size, or plumage or shape, but humans do not. Manboobs are generally smaller than womanboobs, but small womanboobs can be smaller than merely medium manboobs. That is to say, the boobs thing is a statistical difference. Nor does one sex have colourful plumage or horns. The fact that there are strong, aggressive women and physically weaker, less assertive men is no evidence at all that there are not significant biological differences between men and women. I've heard that some people eat peas with their knives. That doesn't make me think that there is no difference between a knife and a fork. I personally usually eat them with a spoon.

• 7k The fact that there are strong, aggressive women and physically weaker, less assertive men is no evidence at all that there are not significant biological differences between men and women. I agree. It would be ridiculous to suggest there are no significant biological differences. What on earth made you think I suggested anything of the sort? Men almost never become pregnant.

• 9.8k Men almost never become pregnant. Do you believe that's the only significant difference?

• 7k Did your Mummy and Daddy not explain the facts of life to you?

• 1.3k Did your Mummy and Daddy not explain the facts of life to you?

• 466 I would add that I am trying to explore the ideas around essentialism, relating to gender and sexuality. To what extent are men and women different, what does it mean to be a man or a woman, and how is this question explored introspectively? "Male" and "female" are simply words people use. There are many others, of course, but there is no inherent content in a word (be it uttered or written) or grouping of words. In simplest form, we understand meaning (and attempt to convey it) in words by virtue of context - where/when the word is used, by whom it is spoken, to whom it is directed, the language community within which it is used, etc. "Biology" is no different than any other word. Some people mean one thing, other people understand something else, and the world turns. In this case, we are talking about essentialism - what is it, from a biological perspective, that justifies including some organism in group A and excluding them from B? 
Essentially, the biological split between male and female is in the context of sexual reproduction: it hinges on what an organism contributes to its offspring, with males providing the smaller gamete and females providing the larger gamete. In this way, the use of male and female regarding a specific reproductive act is unambiguous. Where biology becomes increasingly ambiguous is the extent to which the use of "male" and/or "female" is abstracted away from a particular reproductive act. On the first level, organisms that contribute the larger gamete exclusively are female, organisms that contribute the smaller gamete exclusively are male, and organisms that contribute both are hermaphrodites. On the second level, organisms are grouped together - those that have reproduced with one another are in the same group (species) while other organisms that have not reproduced with them are not in the group. On the third level, the criteria for group membership are expanded - organisms that are the offspring of the reproducing organism/s (parent/s) are added to the group irrespective of whether the offspring will ever reproduce. Not only are offspring added, but so are other organisms that are believed to be similar to the reproducing organisms (e.g. siblings of the parent/s). Whatever the structural account of how gametes (large or small) are made (e.g. gonads), species members that have the structural potential of making large ones are called female, those capable of producing small ones are male, and those that have the potential to do both are hermaphrodites. The move here (rather than the particular steps) is what is at issue - the act of reproduction and naming the participants (by class) turns into naming other non-participants by abstraction. The question is, what characteristic makes the use of "male" or "female" warranted in the case of an organism that either has a) not yet reproduced or b) is incapable of reproduction (e.g. injured such that gonads are non-present or non-functional)? Putting aside the taxonomical issue of what a species is, at some point characteristics of the organisms aside from contributing the larger or smaller gamete begin to be considered - those characteristics that are found with greater frequency (or exclusively) in males than in females (and vice versa) are then deemed "male" (or "female," respectively). The utility in associating other characteristics with potential gamete contribution (even if a factual impossibility) varies. Sometimes it is helpful in describing anatomy, sometimes it is helpful in predicting a disease process, etc. Each of the extended uses of "male" and "female" needs to be evaluated on its own merit (does it convey any substance in an acceptable manner). The biological use cases of "male" and "female" are not, however, prescriptive; rather, they are descriptive of statistically meaningful trends (i.e. characteristics that occur with sufficient frequency). Equally important, they are not statements of "natural law" (i.e. a limitation on how the natural world might be). Where the difficulty arises, in my mind, is when people try to subsume the biological underpinnings of sex (gamete contribution) and speak as if the correlative characteristics are what is essential to the biological categorization. I grant to you in advance that the words/concepts of male and female preceded biology and that how sexual reproduction happens is utterly irrelevant to the development of those ideas/words outside of a more contemporary biological understanding of sex. 
It is precisely this type of co-development that ends up causing confusion about what "essentialism" can even mean, because the great weight of history and historical uses is against the contemporary technical usage of a word. From my perspective, discussions of biology in conversations about sex/gender are really just rhetorical devices - appeals to authority to validate a person's claims. In large part, this relates to something another poster alluded to (whose name I might add later when I look it up; it was you) when mentioning the hardware of anatomy and whether such anatomy fundamentally dictates/limits experience/preference. If, for instance, you haven't a certain part of your brain, is there some essential difference between you and a person that has that part? If having that part of the brain is highly correlated with being in the biological bucket of male, then aren't males essentially different than females? Does a single example of a male not having that part of the brain or a female having it change whether that feature is essential to male/female? The inclusion criteria for what is male/female from a biological perspective are never the same as the essential criteria being discussed - we know in advance that it is almost certain there will be less than a perfect correlation (every male has it and every female does not). It is, therefore, a foregone conclusion both that any alleged claim regarding an essential characteristic will have exceptions and that the person making the claim will ignore those exceptions. Once we have some understanding (if not agreement) about what we mean by "essentialism" from a biological perspective, we can take up how it relates to your areas of interest. Suffice it to say, I am sympathetic to gender being performative and society enforcing/teaching individuals how to play the part (even if that part changes over time). In the same way that society molds our desires and identities with everything else (need for chocolate, being Scottish), it should come as no surprise that people believe that sex/gender is a core, immutable part of their identity that is actually based in their very being (biology).

• 3.9k Suffice it to say, I am sympathetic to gender being performative and society enforcing/teaching individuals how to play the part (even if that part changes over time). And perhaps that social part is already shaped by inborn gender disposition rather than being dictated solely by culture. “Instead of the young being socialized by society, as many people believe, they may flesh out their gender roles largely by themselves through observation and emulation of models of the gender they identify with. In our fellow primates, we have scattered evidence that the young selectively attend to same-sex models. For example, a recent orangutan study in the Sumatran forest by Beatrice Ehmann and colleagues showed that pre-pubertal daughters eat the same foods as their mother, whereas same-aged sons have a more diverse diet. Having paid attention to a wider range of models, including adult males, young males consume foods that their mother never touches. Similarly, Elizabeth Lonsdorf observed how juvenile chimpanzees at Gombe National Park, in Tanzania, learn from their mother how to extract termites by dipping twigs into the insects’ nests. Daughters faithfully copy the exact fishing technique of their mother, whereas sons do not. Despite both spending equal time with their mom, daughters seem to watch her more intently during termite feeding. 
These examples don’t yet amount to gender roles. It is much easier to measure tool-use and food habits in the forest than social attitudes and norms. But primate culture studies are evolving and will no doubt include social measures in the future. At the very least, current evidence suggests that young apes choose which adult models to emulate based on their own gender identity. Young males look for male models, young females for female models. I would therefore not exclude gender socialization in our fellow primates, nor for that matter in other animals.” (Frans De Waal, The Gendered Ape, Essay 3: Do Only Humans Have Genders?)

• 1.9k Even in your metaphorical language ("the soul loses its wings"), you allude to obvious limitations. There are biological and physical realities, of course. People can't fly, if you don't breathe you die, etc. But I don't view identity as a reality. It's a set of beliefs we have about ourselves. Or, if identity can be said to be real and to impose limitations on the individual, my view would be that reason is the means to transcend it. It's something we can control, or even dispose of altogether, if we want to, and if we develop the tools to understand it. Even if one chooses to keep some concept of identity for the sake of interpersonal communication, there is likewise no reason one should come to view it as truly defining oneself or grow attached to it. In order for there to be "radical free choice" or anything near it, there would have to be no human nature. Nothing built in. We would have to be born as blank slates. But in all seriousness, I view 'human nature' more as tendencies we humans have when we're not in control. If you let go of the steering wheel in your car, you'll probably not end up going straight; you'll end up crashing into a tree. We override our natural tendencies all the time, showing that we can be in control, if we want to.

• 8.5k We override our natural tendencies all the time, showing that we can be in control, if we want to. How on earth would you know? Do your thoughts all have labels on them declaring their origin?

• 7.6k During the last century there were major shifts in questioning racism and sexism. Good observation: women could've been, and practically are, a distinct race. Men just use women for making copies of themselves (babies). Did you know, female infanticide was a major problem in India & China a few decades ago? Ultrasonographers in India were forbidden by law to disclose the sex of the fetus - this spawned a market of back alley abortion "clinics", but that's another story. What's interesting is that Godzilla (1998) could reproduce without the aid of a male (parthenogenesis). Jesus' virgin birth may be God's way of saying men are redundant/superfluous/unnecessary. Explains why women are the first category of hostages set free and men are, absit iniuria, dispensable. Of interest to you may be the Amazons, the feared tribe of warrior women. Myth/fact I dunno!

• 4.4k It is hard to know the reality of Amazons and other aspects of mythic fables, including the idea of a matriarchy preceding a patriarchy. There are statues of goddesses, but it is difficult to know what this represents historically. Ideas about gods and gender are diverse, with the Hindus having some androgynous deities. In Christianity, there is a mixed picture because the Virgin mother is presented as a female role model against a background of Christianity and its patriarchal elements. 
The Virgin Mary may be contrasted with Mary Magdalene, who some have seen as Jesus's wife based on aspects of Gnostic writings. In some countries, there has been infanticide of female infants. Current reproductive technology has the power to choose the sex of the child being conceived. Perhaps at some point, biological men will be able to give birth. The story of the 'pregnant man' (and there may have been a number of these) caused a lot of sensation. However, it was different from a biological man giving birth because it involved a biological female who had taken male hormones but still had female internal organs and fertility. Nevertheless, unless the person was trans, I am not sure that many men would wish to give birth.

• 9.8k I view 'human nature' more as tendencies we humans have when we're not in control. You and I have very different understandings of human nature.
## Happy October 23

What is so special about October "23"? Well, as Dr. Paul Stanford has proven in his lectures, there are interesting facts about every number! Here is what he has to say about the number 23. If you have any other facts to add, or if you see any mistakes in my transcription of Dr. Stanford's notes, please use the comments below.

# 23 is…

The largest number not the sum of distinct powers.

With 23 people in a room, odds are that two share a birthday (better than 50:50).

Prime; the smallest odd prime that is not a twin.

A Woodall number: $23=3 \cdot 2^3-1$

One of the only two numbers that need 9 cubes. (The other is 239.) $23=1^3+1^3+1^3+1^3+1^3+1^3+1^3+2^3+2^3$ If negatives are allowed, $23=2^3+2^3+2^3+(-1)^3$

$23=0 \cdot 0!+1 \cdot 1!+2 \cdot 2!+3 \cdot 3!$

23 is the smallest number of rigid rods that brace a square.

The first prime $p$ for which the $p$th roots of unity form cyclotomic integers without unique factorization.

The number of trees with eight nodes.

A factor of $2^{11}-1$.

$2^{23}-1$ is composite: $47 \cdot 178481$

A Sophie Germain prime: $2(23)+1=47$ is also prime.

A Wedderburn-Etherington number.

The first pillar prime.
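Several of these facts can be verified mechanically; a small sketch of mine, not from Dr. Stanford's notes:

```python
import math

# Birthday problem: chance at least two of 23 people share a birthday.
p_distinct = 1.0
for i in range(23):
    p_distinct *= (365 - i) / 365
print(1 - p_distinct)                    # ~0.507, i.e. better than 50:50

print(3 * 2**3 - 1)                      # 23, Woodall form n*2^n - 1 with n = 3
print(2**23 - 1 == 47 * 178481)          # True: 2^23 - 1 is composite
print(sum(k * math.factorial(k) for k in range(4)))  # 0*0! + 1*1! + 2*2! + 3*3! = 23
```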
# How do you solve (x - 3) * 2 - 2x = 0?

###### Question: How do you solve (x - 3) * 2 - 2x = 0?
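#### Solution

Reading "* 2" literally as multiplication (the working below assumes that reading; the original may have intended a square), the left side simplifies to a constant: $(x - 3) \cdot 2 - 2x = 2x - 6 - 2x = -6$. Since $-6 \ne 0$, the equation has no solution. If the intended equation was $(x - 3)^2 - 2x = 0$, expanding gives $x^2 - 8x + 9 = 0$, so $x = 4 \pm \sqrt{7}$ by the quadratic formula.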
# How to evaluate $\int_{-\infty}^{\infty}(1+kx^2)^{-2}dx$

Can someone give a hint to evaluate the following integral? $$\int_{-\infty}^{\infty}(1+kx^2)^{-2}dx$$ where $k>0$.

-

By taking $x=\frac{1}{\sqrt{k}}\tan(\theta)$, you have $$\int\frac{dx}{(1+kx^2)^2}\longrightarrow\int\frac{dt}{(1+\tan^2(t))\sqrt{k}}$$ which is elementary.

-

+1 for (the method and) the exact amount of information needed to lead the OP to a full solution. –  Did Oct 29 '12 at 7:48

@did: The exact amount needed depends on the OP. It may well have been better to give a hint that leads the OP to figure out the substitution for himself, rather than just tell him what it is. –  Hurkyl Oct 29 '12 at 7:56

Thanks for the answer. Can you just explain to me how you guessed the transformation $x=\frac{1}{\sqrt{k}}\tan(\theta)$? –  Kumara Oct 29 '12 at 9:10

@Kumara: The intuition behind that substitution is that expressions like $(1+x^2)$ often become simpler if one takes $x=\tan(t)$. So, for example, $(1+x^2)^2$ would be $(1+\tan^2(t))^2=\cos^{-4}(t)$. And of course you can see why we chose the constant coefficient $\frac{1}{\sqrt{k}}$. This is a standard method, but not the only one, as Hurkyl (+1) suggested. –  Babak S. Oct 29 '12 at 11:36

@Kumara: Quadratic polynomials appearing in integrands are very often simplified by completing the square (not needed here) and then making an appropriate trigonometric substitution so that you can use the Pythagorean identities to simplify further. You've probably seen integrals like $$\int \sqrt{1 + x^2} \, dx$$ and this is the same idea. –  Hurkyl Oct 29 '12 at 18:14

Hint: Try integrating by parts in the integral $$\int\frac{dx}{1+kx^2}$$

-

By integration by parts, I see $\int_{-\infty}^{\infty}\frac{dx}{1+kx^2}=2k \int_{-\infty}^{\infty}\left(\frac{x}{1+kx^2}\right)^2dx$. After that... –  Kumara Oct 29 '12 at 9:13

Partial fractions would work, if you're comfortable with complex numbers.

-

Can you elaborate a bit more? –  Kumara Oct 29 '12 at 9:16

@Kumara: It's hard to guess what more you want to hear, since the complete algorithm for partial fractions should be in your book. I'm guessing it's the appearance of complex numbers: the roots of $1 + kx^2$ are complex, so when you factor it into linears, complex numbers will appear. (Of course, if the method of partial fractions reminds you to treat quadratics with imaginary roots by trigonometric substitution, that's just as good) –  Hurkyl Oct 29 '12 at 18:18
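For readers who want a sanity check: carrying the tangent substitution through gives the closed form $\int_{-\infty}^{\infty}(1+kx^2)^{-2}\,dx=\frac{\pi}{2\sqrt{k}}$, and a short numerical sketch can confirm it (the value of $k$ below is an arbitrary test choice):

```python
# Numerical check of ∫_{-∞}^{∞} (1 + k x²)^(-2) dx = π / (2√k).
# SciPy's adaptive quadrature handles the infinite limits directly.
import numpy as np
from scipy.integrate import quad

k = 3.7  # any k > 0 works; this value is arbitrary
value, abserr = quad(lambda x: (1.0 + k * x**2) ** -2, -np.inf, np.inf)
print(value)                     # numerical result
print(np.pi / (2 * np.sqrt(k)))  # closed form; the two should agree
```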
# Question #0147a

Feb 17, 2017

False

#### Explanation:

"Percent" or "%" means "out of 100" or "per 100". Therefore 30% can be written as $\frac{30}{100}$ and 12% can be written as $\frac{12}{100}$. When dealing with percents, the word "of" means "times" or "to multiply". We can rewrite this problem as: Is $\frac{30}{100} \times 129 = \frac{12}{100} \times 30$? $\frac{3870}{100} \stackrel{?}{=} \frac{360}{100}$ $38.70 \ne 3.60$ The two values are not equal, so the statement is false.
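As a quick cross-check, a trivial sketch of the same arithmetic (the rounding just mirrors the two-decimal values above):

```python
# "of" translates to multiplication: compare 30% of 129 with 12% of 30.
print(round(0.30 * 129, 2))     # 38.7
print(round(0.12 * 30, 2))      # 3.6
print(0.30 * 129 == 0.12 * 30)  # False
```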
# Homework 4 (due April 2)

1. The $n$th Hermite polynomial is $H_n(t,x) = {(-t)^n\over n!} e^{x^2\over 2t} {d^n\over dx^n} e^{-{x^2\over 2t}}$. Show that the $H_n$ play the role that the monomials ${x^n\over n!}$ play in ordinary calculus, $dH_{n+1} (t, B_t) = H_n(t, B_t)\,dB_t.$

2. The backward equation for the Ornstein-Uhlenbeck process is ${\partial u\over \partial t} = {1\over 2} {\partial^2 u\over \partial x^2} -\rho x {\partial u\over \partial x}.$ Show that $v(t,x) =u(t,xe^{\rho t})$ satisfies $e^{2\rho t} {\partial v\over \partial t} = {1\over 2} {\partial^2 v\over \partial x^2}$ and transform this to the heat equation by $\tau= {1-e^{-2\rho t}\over 2\rho}$. Use this to derive Mehler's formula for the transition probabilities, $p(t,x,dy)= {e^{ - { \rho(y-xe^{-\rho t})^2\over (1-e^{-2\rho t})}}\over \sqrt{ 2\pi \left( { (1-e^{-2\rho t})\over 2\rho}\right)}} dy.$

3. i. Show that $X(t)=(1-t)\int_0^t {1\over 1-s} dB(s)$ is the solution of $dX(t)= -{X(t)\over 1-t} dt + dB(t), \qquad 0\le t<1,\qquad X(0)=0.$ ii. Show that $X(t)$ is Gaussian and find the mean and covariance. iii. Show that for $0=t_0< t_1<\cdots< t_n<1$ the variables $\frac{X(t_i)}{1-t_i} - \frac{X(t_{i-1}) }{ 1-t_{i-1}}$ are independent. iv. Show that the finite-dimensional distributions are given by $P(X(t_1)\in dx_1,\ldots, X(t_n)\in dx_n) = \prod_{i=1}^n p(t_i-t_{i-1}, x_i-x_{i-1}) \frac{p(1-t_n, -x_n)}{p(1,0)} dx_1\cdots dx_n$ where $p(t,x)$ is the Gaussian kernel. v. Show that $X(t)$ is equal in distribution to a Brownian motion conditioned to have $B(1)=0$. It is the Brownian Bridge. vi. For fixed constants $a$ and $b$ solve the stochastic differential equation $dX(t) = {b-X(t)\over 1-t} dt + dB(t),\qquad 0\le t<1,\qquad X(0) =a.$ This is the Brownian Bridge from $a$ to $b$.

4. Consider the general linear stochastic differential equation $dX_t = [ A(t) X_t + a(t) ] dt + \sigma(t) dB_t,\qquad X_0=x,$ where $B_t$ is an $r$-dimensional Brownian motion independent of the initial vector $x\in {\bf R^d}$ and the $d\times d$, $d\times 1$ and $d\times r$ matrices $A(t)$, $a(t)$ and $\sigma(t)$ are non-random. Show that the solution is given by $X_t= \Phi(t) [ x+\int_0^t \Phi^{-1}(s) a(s) ds + \int_0^t\Phi^{-1}(s) \sigma(s) dB_s ]$ where $\Phi$ is the $d\times d$ matrix solution of $\dot \Phi(t) = A(t) \Phi(t), \qquad \Phi(0) = I.$

5. A common model for interest rates is the Vasicek model, $dr(t) = (\theta-\alpha r(t)) dt + \sigma dB(t)$. Relate it to the Ornstein-Uhlenbeck process. The discount function is $Z_{t, T}(\omega) = E[ e^{-\int_t^T r(s) ds} ~|~{\cal F}(t) ].$ i. Show that in fact $Z_{t,T}$ is only a function of $r(t)$ (which we may as well call $Z_{t,T}(r(t))$.) ii. Fix $t$ and show that $Z_{t,T}(r)$ is the solution of the equation ${\partial Z\over\partial T} =(\theta-\alpha r) {\partial Z\over \partial r} + {1\over 2}\sigma^2{\partial^2 Z\over \partial r^2} - r Z$ with $Z_{t,t}=1$. iii. Show that the continuously compounded interest rate $R_{t,T}= -(T-t)^{-1}\ln Z_{t,T}$ is of the special form $R(t,T) = a(T-t) + b(T-t) r(t)$ and find the functions $a(t)$ and $b(t)$. iv. Repeat i.-iii. for the CIR model $dr (t) = (\alpha -\beta r(t)) dt + \sigma\sqrt{r(t)} dB(t)$. v. Compute the mean $E[r(t)]$ and the variance ${\rm Var}(r(t))$ for the Vasicek and CIR models.

6. Let $X_1(\cdot)$ and $X_2(\cdot)$ solve the two constant-coefficient sde's $dX_1(t) = b dt + \sigma_1 dB(t)$ and $dX_2(t) = b dt + \sigma_2 dB(t)$.
How big is ${P(X_1(t_1)\in dx_1,\ldots,X_1(t_n)\in dx_n)\over P(X_2(t_1)\in dx_1,\ldots,X_2(t_n)\in dx_n)}$ as $n$ becomes large?

7. If $P$ and $\tilde P$ are equivalent and $\frac{d\tilde P }{ dP }= Z$ show that $\frac{d P }{ d\tilde P }= \frac1{Z}$. Let $P^{a,b}_x$ denote the probability measure on $C ([0, T ])$ corresponding to the solution of the stochastic differential equation $dX (t) = \sigma(t, X (t))dB(t) + b(t, X (t))dt,\quad X (0) = x$ where $a = \sigma\sigma^T$. Let $b_1 \neq b_2$. Write expressions for $\frac{dP^{ a,b_1}_ x }{dP^{ a,b_2}_ x }$ and $\frac{dP^{ a,b_2}_ x }{dP^{ a,b_1}_ x }$ using the Cameron-Martin-Girsanov formula. Is the second the inverse of the first, or not? Find an explanation.

8. Let $\alpha(x) = (\alpha_1 (x), \ldots , \alpha_n(x))$ be a smooth function from $\mathbb{R}^n$ to $\mathbb{R}^n$. Consider the partial differential equation for $x \in \mathbb{R}^n$, and $t > 0$, $\frac{\partial u}{\partial t} = \frac12 \sum^n_{i=1} \frac{\partial^2 u}{\partial x^2_i} +\sum_{i=1}^n\alpha_i (x) \frac{\partial u}{\partial x_i}$, $u(0, x) = f (x).$ i. Use the Girsanov theorem to show that the solution is $u(t, x) =E_x[ e^{\int_0^t\alpha(B(s))\cdot dB(s)-\frac12\int_0^t |\alpha(B(s))|^2 ds}f(B(t))].$ ii. Suppose that $\alpha(x) = \nabla\gamma (x)$ for some function $\gamma : \mathbb{R}^n\to \mathbb{R}$. Use Ito's formula to show that in this case $u(t, x) = e^{-\gamma (x)} E_x[e^{\gamma (B(t))- \frac12\int_0^t (|\nabla\gamma|^2 (B(s))+\Delta\gamma (B(s)))ds }f (B(t))].$ iii. Use the Feynman-Kac formula to show that $v(t, x) = e^{\gamma (x)} u(t, x)$ is the solution of $\frac{\partial v}{\partial t} = \frac12 \Delta v - \frac12 (|\nabla\gamma|^2 + \Delta\gamma )v.$
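Several of these problems lend themselves to quick Monte Carlo sanity checks. As one hedged illustration (the step and path counts below are arbitrary choices), an Euler-Maruyama discretization of the Brownian-bridge SDE in problem 3.vi reproduces two facts from problem 3: for $a=b=0$ the variance at time $t$ is $t(1-t)$, and paths are pulled into $b$ as $t\to 1$:

```python
# Euler-Maruyama sketch for dX = (b - X)/(1 - t) dt + dB, X(0) = a.
import numpy as np

rng = np.random.default_rng(0)
a, b, n_steps, n_paths = 0.0, 0.0, 2000, 5000
dt = 1.0 / n_steps
X = np.full(n_paths, a)
t = 0.0
var_at_half = None
for _ in range(n_steps - 1):              # stop one step short of t = 1
    drift = (b - X) / (1.0 - t)
    X = X + drift * dt + np.sqrt(dt) * rng.standard_normal(n_paths)
    t += dt
    if abs(t - 0.5) < dt / 2:
        var_at_half = X.var()             # sample variance at t = 1/2
print("Var X(1/2):", var_at_half, "theory:", 0.5 * 0.5)
print("mean |X(1-dt) - b|:", np.abs(X - b).mean())  # small near t = 1
```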
# Solar zenith angle

The solar zenith angle is the angle between the sun's rays and the vertical direction. It is closely related to the solar altitude angle, which is the angle between the sun's rays and a horizontal plane. Since these two angles are complementary, the cosine of either one of them equals the sine of the other. They can both be calculated with the same formula, using results from spherical trigonometry.[1][2] At solar noon, the zenith angle is at a minimum and is equal to the absolute difference between the local latitude and the solar declination angle. This is the basis by which ancient mariners navigated the oceans.[3] Solar zenith angle is normally used in combination with the solar azimuth angle to determine the position of the Sun as observed from a given location on the surface of the Earth.

## Formula

${\displaystyle \cos \theta _{s}=\sin \alpha _{s}=\sin \Phi \sin \delta +\cos \Phi \cos \delta \cos h}$

where

• ${\displaystyle \theta _{s}}$ is the solar zenith angle
• ${\displaystyle \alpha _{s}}$ is the solar altitude angle, ${\displaystyle \alpha _{s}}$ = 90° – ${\displaystyle \theta _{s}}$
• ${\displaystyle h}$ is the hour angle, in the local solar time.
• ${\displaystyle \delta }$ is the current declination of the Sun
• ${\displaystyle \Phi }$ is the local latitude.

## Derivation of the formula using the subsolar point and vector analysis

While the formula can be derived by applying the cosine law to the zenith-pole-Sun spherical triangle, spherical trigonometry is a relatively esoteric subject. By introducing the coordinates of the subsolar point and using vector analysis, the formula can be obtained straightforwardly without invoking spherical trigonometry.[4] In the Earth-Centered Earth-Fixed (ECEF) geocentric Cartesian coordinate system, let ${\displaystyle (\phi _{s},\lambda _{s})}$ and ${\displaystyle (\phi _{o},\lambda _{o})}$ be the latitudes and longitudes, or coordinates, of the subsolar point and the observer's point; then the upward-pointing unit vectors at the two points, ${\displaystyle \mathbf {S} }$ and ${\displaystyle \mathbf {V} _{oz}}$, are

${\displaystyle \mathbf {S} =\cos \phi _{s}\cos \lambda _{s}{\mathbf {i} }+\cos \phi _{s}\sin \lambda _{s}{\mathbf {j} }+\sin \phi _{s}{\mathbf {k} }}$, ${\displaystyle \mathbf {V} _{oz}=\cos \phi _{o}\cos \lambda _{o}{\mathbf {i} }+\cos \phi _{o}\sin \lambda _{o}{\mathbf {j} }+\sin \phi _{o}{\mathbf {k} }}$,

where ${\displaystyle {\mathbf {i} }}$, ${\displaystyle {\mathbf {j} }}$ and ${\displaystyle {\mathbf {k} }}$ are the basis vectors in the ECEF coordinate system. Now the cosine of the solar zenith angle, ${\displaystyle \theta _{s}}$, is simply the dot product of the above two vectors:

${\displaystyle \cos \theta _{s}=\mathbf {S} \cdot \mathbf {V} _{oz}=\sin \phi _{o}\sin \phi _{s}+\cos \phi _{o}\cos \phi _{s}\cos(\lambda _{s}-\lambda _{o})}$.

Note that ${\displaystyle \phi _{s}}$ is the same as ${\displaystyle \delta }$, the declination of the Sun, and ${\displaystyle \lambda _{s}-\lambda _{o}}$ is equivalent to ${\displaystyle -h}$, where ${\displaystyle h}$ is the hour angle defined earlier.
So the above form is mathematically identical to the one given earlier. Additionally, Ref. [4] also derived the formula for the solar azimuth angle in a similar fashion without using spherical trigonometry.

### Minimum and Maximum

[Figure: The daily minimum of the solar zenith angle as a function of latitude and day of year for the year 2020.] [Figure: The daily maximum of the solar zenith angle as a function of latitude and day of year for the year 2020.]

At any given location on any given day, the solar zenith angle, ${\displaystyle \theta _{s}}$, reaches its minimum, ${\displaystyle \theta _{min}}$, at local solar noon when the hour angle ${\displaystyle h=0}$, or ${\displaystyle \lambda _{s}-\lambda _{o}=0}$, namely, ${\displaystyle \cos \theta _{min}=\cos(|\phi _{o}-\phi _{s}|)}$, or ${\displaystyle \theta _{min}=|\phi _{o}-\phi _{s}|}$. If ${\displaystyle \theta _{min}>90^{\circ }}$, it is polar night. And at any given location on any given day, the solar zenith angle, ${\displaystyle \theta _{s}}$, reaches its maximum, ${\displaystyle \theta _{max}}$, at local midnight when the hour angle ${\displaystyle h=-180^{\circ }}$, or ${\displaystyle \lambda _{s}-\lambda _{o}=-180^{\circ }}$, namely, ${\displaystyle \cos \theta _{max}=\cos(180^{\circ }-|\phi _{o}+\phi _{s}|)}$, or ${\displaystyle \theta _{max}=180^{\circ }-|\phi _{o}+\phi _{s}|}$. If ${\displaystyle \theta _{max}<90^{\circ }}$, it is polar day.

### Caveats

The calculated values are approximations due to the distinction between common/geodetic latitude and geocentric latitude. However, the two values differ by less than 12 minutes of arc, which is less than the apparent angular radius of the sun. The formula also neglects the effect of atmospheric refraction.[5]

## Applications

### Sunrise/Sunset

Sunset and sunrise occur (approximately) when the zenith angle is 90°, where the hour angle h0 satisfies[2] ${\displaystyle \cos h_{0}=-\tan \Phi \tan \delta .}$ Precise times of sunset and sunrise occur when the upper limb of the Sun appears, as refracted by the atmosphere, to be on the horizon.

### Albedo

A weighted daily average zenith angle, used in computing the local albedo of the Earth, is given by ${\displaystyle {\overline {\cos \theta _{s}}}={\frac {\int _{-h_{0}}^{h_{0}}Q\cos \theta _{s}{\text{d}}h}{\int _{-h_{0}}^{h_{0}}Q{\text{d}}h}}}$ where Q is the instantaneous irradiance.[2]

### Summary of special angles

For example, the solar elevation angle is:

• 90° if you are on the equator, on a day of equinox, at a solar hour of twelve
• near 0° at sunset or at sunrise
• between -90° and 0° during the night (midnight)

An exact calculation is given in position of the Sun. Other approximations exist elsewhere.[6]

[Figure: Approximate subsolar point dates vs latitude superimposed on a world map; the example in blue denotes Lahaina Noon in Honolulu.]
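A small sketch of the two formulas used above, the zenith-angle formula and the sunrise/sunset hour angle (the latitude and declination below are illustrative values, not from the article):

```python
# cos θ_s = sin Φ sin δ + cos Φ cos δ cos h, and cos h0 = -tan Φ tan δ.
import numpy as np

def cos_zenith(lat_deg, decl_deg, hour_angle_deg):
    lat, decl, h = np.radians([lat_deg, decl_deg, hour_angle_deg])
    return np.sin(lat) * np.sin(decl) + np.cos(lat) * np.cos(decl) * np.cos(h)

lat, decl = 40.0, 23.44  # 40°N at the June solstice (illustrative)
theta_noon = np.degrees(np.arccos(cos_zenith(lat, decl, 0.0)))
print("zenith angle at solar noon:", theta_noon)   # equals |Φ - δ| = 16.56°
cos_h0 = -np.tan(np.radians(lat)) * np.tan(np.radians(decl))
h0 = np.degrees(np.arccos(cos_h0))                 # sunrise/sunset hour angle
print("daylight length:", 2 * h0 / 15.0, "hours")  # hour angle moves 15°/hour
```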
## Controllability of one-dimensional viscous free boundary flows

B. Geshkovski, E. Zuazua. Controllability of one-dimensional viscous free boundary flows. SIAM J. Control Optim. (2021), Vol. 59, No. 3, pp. 1830–1850. https://doi.org/10.1137/19M1285354

Abstract. In…

## The turnpike property in semilinear control

D. Pighin. The turnpike property in semilinear control. ESAIM Control Optim. Calc. Var. (2021)

Abstract. An exponential turnpike property for a semilinear control problem is proved.…

## Nonnegative control of finite-dimensional linear systems

Lohéac J., Trélat E., Zuazua E. Nonnegative control of finite-dimensional linear systems. Ann. I. H. Poincaré-An., Vol. 38, No. 2, pp. 301-346. (2021) DOI: https://doi.org/10.1016/j.anihpc.2020.07.004

## Null-controllability of perturbed porous medium gas flow

B. Geshkovski. Null-controllability of perturbed porous medium gas flow. ESAIM: COCV, Vol. 26, No. 85 (2020). DOI: 10.1051/cocv/2020009

Abstract: In this work, we investigate the null-controllability of…

## DyCon blog: Q-learning for finite-dimensional problems

Spain, 29.10.2020. Our team member Carlos Esteve made a contribution to the DyCon Blog about “Q-learning for finite-dimensional problems”: Reinforcement Learning (RL) is, together with…

## Shape turnpike for linear parabolic PDE models

Lance G., Trélat E., Zuazua E. Shape turnpike for linear parabolic PDE models. Syst. Control Lett., Vol. 142 (2020). DOI: 10.1016/j.sysconle.2020.104733

Abstract: We introduce and…

## Turnpike in optimal shape design

G. Lance, E. Trélat, E. Zuazua. Turnpike in optimal shape design. IFAC-PapersOnLine (ISSN: 24058963), 52(16): 496-501. DOI: 10.1137/17M1119160

Abstract: We investigate the turnpike problem in…

## Output controllability in a long-time horizon

Martin Lazar, Jerôme Lohéac. Output controllability in a long-time horizon. Automatica, Vol. 113 (2020). DOI: 10.1016/j.automatica.2019.108762

Abstract. In this article we consider a linear finite…

## DyCon blog

If you are looking for our DyCon ERC project output, don't miss out: our DyCon Toolbox for computational methods and tools, and our DyCon blog for…

## Averaged Control

## Wavecontrol

A Matlab guide for the numerical approximation of the exact control and stabilization of the wave equation. This webpage contains…

## Greedy algorithm for Parametric Vlasov-Fokker-Planck System

1. Numerical experiments. Consider the one-dimensional linear Vlasov-Fokker-Planck (VPFP) system as follows.
$$\partial_t f + \sigma_1 v\,\partial_x f - \frac{\sigma_2}{\epsilon}\,\partial_x\phi\,\partial_v\ldots$$

## Kolmogorov equation

1 Introduction. We are interested in the numerical discretization of the Kolmogorov equation [12], where $\mu>0$ is a diffusive function…

## Turnpike property for functionals involving L1−norm

We want to study the following optimal control problem: \begin{equation*} \left(\mathcal{P}\right) \ \ \ \ \ \ \ \hat{u}\in\argmin_{u\in L^2_T} \left\{J\left(u\right)=\alpha_c \norm{u}_{1,T} + \frac{\beta}{2}\norm{u}^2_{T}+\alpha_s \norm{Lu}_{1,T} + \frac{\gamma}{2}\norm{Lu-z}_{T}^2\right\}, \end{equation*}

## Conservation laws in the presence of shocks

The problem. We analyze a model tracking problem for a 1D scalar conservation law. It consists in optimizing the initial datum so as to…

## Numerical aspects of LTHC of Burgers equation

This issue is motivated by the challenging problem of sonic-boom minimization for supersonic aircrafts, which is governed by a Burgers-like equation. The travel time of the signal to the ground is larger than the time scale of the initial disturbance by orders of magnitude, and this motivates our study of large-time control of the sonic-boom propagation…

## Long time control and the Turnpike property

The turnpike property establishes that, when a general optimal control problem is settled in large time, for most of the time the optimal control and trajectories remain exponentially close to the optimal control and state of the corresponding steady-state or static optimal control problem…

## Control of PDEs involving non-local terms

Relevant models in Continuum Mechanics, Mathematical Physics and Biology are of non-local nature. Moreover, these models are applied for the description of several complex phenomena for which a local approach is inappropriate or limiting. In this setting, classical PDE theory fails because of non-locality. Yet many of the existing techniques can be tuned and adapted, although this is often a delicate matter…
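The Kolmogorov-equation entry above mentions a numerical discretization, but the equation itself did not survive extraction. As a loudly hedged stand-in, the sketch below discretizes the classical hypoelliptic Kolmogorov model $\partial_t u + v\,\partial_x u = \mu\,\partial_{vv} u$ with constant $\mu>0$; the model choice, the constant $\mu$, and all grid sizes are this editor's assumptions, not the post's:

```python
# Explicit finite differences for ∂_t u + v ∂_x u = μ ∂_vv u:
# upwind transport in x (periodic) and centered diffusion in v.
import numpy as np

mu, nx, nv, nt = 0.1, 64, 64, 400
x = np.linspace(0, 2 * np.pi, nx, endpoint=False)
v = np.linspace(-2, 2, nv)
dx, dv = x[1] - x[0], v[1] - v[0]
dt = 0.2 * min(dx / 2.0, dv**2 / (2 * mu))   # rough CFL/stability bound
u = np.exp(-((x[:, None] - np.pi) ** 2 + v[None, :] ** 2))
for _ in range(nt):
    # upwind difference in x, chosen by the sign of the transport speed v
    dux = np.where(v[None, :] > 0,
                   (u - np.roll(u, 1, axis=0)) / dx,
                   (np.roll(u, -1, axis=0) - u) / dx)
    duvv = (np.roll(u, -1, axis=1) - 2 * u + np.roll(u, 1, axis=1)) / dv**2
    duvv[:, [0, -1]] = 0.0                   # crude no-flux at the v-ends
    u = u + dt * (mu * duvv - v[None, :] * dux)
print("mass:", u.sum() * dx * dv)            # approximately conserved
```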
# Boundary Value Problem

matematikawan

I have a BVP of the form u'' + f(x)u = g(x), u(0) = u(1) = 0, where f(x) and g(x) are positive functions. I suspect that u(x) < 0 in the domain 0 < x < 1. How do I go about proving this? I have tried proving it by contradiction, assuming first that u > 0, but I cannot deduce that u'' > 0, which would contradict u having a maximum in the domain. Or perhaps my conjecture is wrong.

In the theory of elliptic PDE there are comparison theorems; you can try that type of argument here. Perhaps I will describe the details later.

matematikawan

Thanks. Interesting suggestion wrobel. So to which DE should I compare my equation? Although I would prefer f(x) and g(x) to be general, I would still be content if f(x) is a monotonic increasing function, since I'll be solving a specific DE with f(x) known later on.

Well. First consider a boundary value problem $$-u''(x)=h(x)\in C[0,1],\quad u(0)=u(1)=0.$$ (We can replace the spaces ##C^k[0,1]## with the Sobolev or Hölder spaces; they are also suitable. Actually I will reason very roughly; it is only to illustrate the general idea.) Solving this problem we obtain a bounded operator ##P:C[0,1]\to C^2[0,1]## defined by the formula $$u(x)=Ph=x\int_0^1d\xi\int_0^\xi h(s)ds-\int_0^xd\xi\int_0^\xi h(s)ds.$$ It is easy to see that if ##h\ge 0## then ##Ph\ge 0##. In your case the equation is as follows $$-u''=f(x)u-g(x),\quad f,g>0.\qquad (*)$$ Assume that ##f,g\in C[0,1]##, then introduce constants ##F\ge\max_{[0,1]}f## and ##G\ge\max_{[0,1]}g##. Assume that the constants ##F,G## are such that the problem $$-U''=F U-G,\quad U(0)=U(1)=0$$ has a solution ##U(x)<0,\quad x\in(0,1)##. Then the problem (*) has a solution ##\tilde u(x)## such that ##U\le \tilde u\le 0##. Indeed, consider an operator $$\mathcal F(u)=P(f(x)u-g(x)).$$ This operator takes the set $$W=\{u\in C[0,1]\mid U\le u\le 0\}$$ to itself. Moreover, the operator ##\mathcal F## is a compact operator in ##W## with respect to the ##C[0,1]## topology. By the Schauder fixed point theorem we get a fixed point of the operator ##\mathcal F##. This fixed point is the solution ##\tilde u##.

Last edited:

matematikawan

Wow! What a solution. Thank you very much Wrobel. I need some time to properly understand the solution. Hope you don't mind if I ask again in case I do have a problem understanding the argument.

matematikawan

Sorry to come back again to this thread. If I understand correctly, the mapping $\mathcal F : W \rightarrow W$, where $W=\{u\in C[0,1]\mid U\le u\le 0\}$, requires the statement: if $h\le 0$ then $Ph=x\int_0^1d\xi\int_0^\xi h(s)ds-\int_0^xd\xi\int_0^\xi h(s)ds \le 0.$ Also, in the equation $-U''=F U-G,\quad U(0)=U(1)=0,$ should it be $G=\min_{x\in[0,1]}g(x)$ instead of the maximum value?

From this:

"It is easy to see that if ##h\ge 0## then ##Ph\ge 0##."

it follows that if ##v\le V## then ##Pv\le PV##, and in particular the statement "if ##h\le 0## then ##Ph\le 0##" that the mapping ##\mathcal F : W \rightarrow W## requires.

As for the equation: oh, for me that is the hardest point in the whole argument; I get confused by it every time :) Taking into account that ##f,F>0##, we see that if ##0\ge u\ge U## then ##f(x)u-g(x)\ge FU-g(x)##; that is clear. Then it must be that ##FU-g(x)\ge FU-G##, so that ##-g\ge -G##, i.e. ##g\le G##. It seems everything has been written ok.

Last edited:

matematikawan

You are right Wrobel. I overlooked that U is negative (the assumption). Sorry.
The result now is consistent with an example that I have.

u'' + 4u = x^2, u(0) = u(1) = 0
Solution: u(x) = (2x^2 + sin(1-2x)/sin(1) - 1)/8

Lower bound solution, with F = 4, G = 1:
U'' + 4U = 1, U(0) = U(1) = 0
Solution: U(x) = -sin(1-x)sin(x)/(2cos(1))
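A quick finite-difference cross-check of this example (a sketch; the grid size is an arbitrary choice) confirms both the sign conjecture and the ordering U ≤ u ≤ 0 from the comparison argument:

```python
# Solve u'' + 4u = x^2 and U'' + 4U = 1 with zero boundary values by
# second-order central differences, then verify u <= 0 and U <= u.
import numpy as np

n = 200
x = np.linspace(0, 1, n + 2)[1:-1]           # interior grid nodes
h = x[1] - x[0]
# tridiagonal discretization of u'' + 4u with u(0) = u(1) = 0
A = (np.diag(np.full(n, -2.0 / h**2 + 4.0))
     + np.diag(np.full(n - 1, 1.0 / h**2), 1)
     + np.diag(np.full(n - 1, 1.0 / h**2), -1))
u = np.linalg.solve(A, x**2)
U = np.linalg.solve(A, np.ones(n))
print("max u:", u.max(), "(should be <= 0)")
print("min (u - U):", (u - U).min(), "(should be >= 0, i.e. U <= u)")
```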
Review

# Genesis of Suicide Terrorism

Science 07 Mar 2003: Vol. 299, Issue 5612, pp. 1534-1539 DOI: 10.1126/science.1078854

## Abstract

Contemporary suicide terrorists from the Middle East are publicly deemed crazed cowards bent on senseless destruction who thrive in poverty and ignorance. Recent research indicates they have no appreciable psychopathology and are as educated and economically well-off as surrounding populations. A first line of defense is to get the communities from which suicide attackers stem to stop the attacks by learning how to minimize the receptivity of mostly ordinary people to recruiting organizations.

According to the U.S. Department of State report Patterns of Global Terrorism 2001 (1), no single definition of terrorism is universally accepted; however, for purposes of statistical analysis and policy-making: “The term ‘terrorism’ means premeditated, politically motivated violence perpetrated against noncombatant targets by subnational groups or clandestine agents, usually intended to influence an audience.” Of course, one side's “terrorists” may well be another side's “freedom fighters” (Fig. 1). For example, in this definition's sense, the Nazi occupiers of France rightly denounced the “subnational” and “clandestine” French Resistance fighters as terrorists. During the 1980s, the International Court of Justice used the U.S. Administration's own definition of terrorism to call for an end to U.S. support for “terrorism” on the part of Nicaraguan Contras opposing peace talks. For the U.S. Congress, “‘act of terrorism’ means an activity that—(A) involves a violent act or an act dangerous to human life that is a violation of the criminal laws of the United States or any State, or that would be a criminal violation if committed within the jurisdiction of the United States or of any State; and (B) appears to be intended (i) to intimidate or coerce a civilian population; (ii) to influence the policy of a government by intimidation or coercion; or (iii) to affect the conduct of a government by assassination or kidnapping.” (2). When suitable, the definition can be broadened to include states hostile to U.S. policy. Apparently, two official definitions of terrorism have existed since the early 1980s: that used by the Department of State “for statistical and analytical purposes” and that used by Congress for criminal proceedings. Together, the definitions allow great flexibility in selective application of the concept of terrorism to fluctuating U.S. priorities. The special category of “State-sponsored terrorism” could be invoked to handle some issues (3), but the highly selective and politically tendentious use of the label terrorism would continue all the same. Indeed, there appears to be no principled distinction between “terror” as defined by the U.S. Congress and “counterinsurgency” as allowed in U.S. armed forces manuals (4). Rather than attempt to produce a stipulative and all-encompassing definition of terrorism, this article restricts its focus to “suicide terrorism” characterized as follows: the targeted use of self-destructing humans against noncombatant—typically civilian—populations to effect political change. Although a suicide attack aims to physically destroy an initial target, its primary use is typically as a weapon of psychological warfare intended to affect a larger public audience. The primary target is not those actually killed or injured in the attack, but those made to witness it.
The enemy's own information media amplify the attack's effects to the larger target population. Through indoctrination and training and under charismatic leaders, self-contained suicide cells canalize disparate religious or political sentiments of individuals into an emotionally bonded group of fictive kin who willfully commit to die spectacularly for one another and for what is perceived as the common good of alleviating the community's onerous political and social realities.

## Recent History

Suicide attack is an ancient practice with a modern history (supporting online text). Its use by the Jewish sect of Zealots (sicari) in Roman-occupied Judea and by the Islamic Order of Assassins (hashashin) during the early Christian Crusades are legendary examples (5). The concept of “terror” as systematic use of violence to attain political ends was first codified by Maximilien Robespierre during the French Revolution. He deemed it an “emanation of virtue” that delivers “prompt, severe, and inflexible” justice, as “a consequence of the general principle of democracy applied to our country's most pressing needs.” (6). The Reign of Terror, during which the ruling Jacobin faction exterminated thousands of potential enemies, of whatever sex, age, or condition, lasted until Robespierre's fall (July 1794). Similar justification for state-sponsored terror was common to 20th-century revolutions, as in Russia (Lenin), Cambodia (Pol Pot), and Iran (Khomeini). Whether subnational (e.g., Russian anarchists) or state-supported (e.g., Japanese kamikaze), suicide attack as a weapon of terror is usually chosen by weaker parties against materially stronger foes when fighting methods of lesser cost seem unlikely to succeed. Choice is often voluntary, but typically under conditions of group pressure and charismatic leadership. Thus, the kamikaze (“divine wind”) first used in the battle of the Philippines (November 1944) were young, fairly well educated pilots who understood that pursuing conventional warfare would likely end in defeat. When collectively asked by Adm. Takijiro Onishi to volunteer for “special attack” (tokkotai) “transcending life and death,” all stepped forward, despite assurances that refusal would carry no shame or punishment. In the Battle of Okinawa (April 1945) some 2000 kamikaze rammed fully fueled fighter planes into more than 300 ships, killing 5000 Americans in the most costly naval battle in U.S. history. Because of such losses, there was support for using the atomic bomb to end World War II (7). The first major contemporary suicide terrorist attack in the Middle East was the December 1981 destruction of the Iraqi embassy in Beirut (27 dead, over 100 wounded). Its precise authors are still unknown, although it is likely that Ayatollah Khomeini approved its use by parties sponsored by Iranian intelligence. With the assassination of pro-Israeli Lebanese President Bashir Gemayel in September 1982, suicide bombing became a strategic political weapon. Under the pro-Iranian Lebanese Party of God (Hezbollah), this strategy soon achieved geopolitical effect with the October 1983 truck-bomb killing of nearly 300 American and French servicemen. America and France abandoned the multinational force policing Lebanon. By 1985, these attacks arguably led Israel to cede most of the gains made during its 1982 invasion of Lebanon.
In Israel-Palestine, suicide terrorism began in 1993, with attacks by Hezbollah-trained members of the Islamic Resistance Movement (Hamas) and Palestine Islamic Jihad (PIJ) aimed at derailing the Oslo Peace Accords (8). As early as 1988, however, PIJ founder Fathi Shiqaqi established guidelines for “exceptional” martyrdom operations involving human bombs. He followed Hezbollah in stressing that God extols martyrdom but abhors suicide: “Allah may cause to be known those who believe and may make some of you martyrs, and Allah may purify those who believe and may utterly destroy the disbelievers”; however, “no one can die except by Allah's leave” (9,10) (Fig. 2). The recent radicalization and networking through Al-Qaida of militant Islamic groups from North Africa, Arabia, and Central and Southeast Asia stems from the Soviet-Afghan War (1979–1989). With financial backing from the United States, members of these various groups were provided opportunities to pool and to unify doctrine, aims, training, equipment, and methods, including suicide attack. Through its multifaceted association with regional groups (by way of finance, personnel, and logistics), Al-Qaida aims to realize flexibly its global ambition of destroying Western dominance through local initiatives to expel Western influences (11). According to Jane's Intelligence Review: “All the suicide terrorist groups have support infrastructures in Europe and North America.” (12). Calling the current wave of radical Islam “fundamentalism” (in the sense of “traditionalism”) is misleading, approaching an oxymoron (supporting online text). Present-day radicals, whether Shi'ite (Iran, Hezbollah) or Sunni (Taliban, Al-Qaida), are much closer in spirit and action to Europe's post-Renaissance Counter-Reformation than to any traditional aspect of Moslem history. The idea of a ruling ecclesiastical authority, a state or national council of clergy, and a religious police devoted to physically rooting out heretics and blasphemers has its clearest historical model in the Holy Inquisition. The idea that religion must struggle to assert control over politics is radically new to Islam (13). ## Dubious Public Perceptions Recent treatments of Homeland Security research concentrate on how to spend billions to protect sensitive installations from attack (14, 15). But this last line of defense is probably easiest to breach because of the multitude of vulnerable and likely targets (including discotheques, restaurants, and malls), the abundance of would-be attackers (needing little supervision once embarked on a mission), the relatively low costs of attack (hardware store ingredients, no escape needs), the difficulty of detection (little use of electronics), and the unlikelihood that attackers would divulge sensitive information (being unaware of connections beyond their operational cells). Exhortations to put duct tape on windows may assuage (or incite) fear, but will not prevent massive loss of life, and public realization of such paltry defense can undermine trust. Security agencies also attend to prior lines of defense, such as penetrating agent-handling networks of terrorist groups, with only intermittent success. A first line of defense is to prevent people from becoming terrorists. Here, success appears doubtful should current government and media opinions about why people become human bombs translate into policy (see also supporting online text on contrary academic explanations). 
Suicide terrorists often are labeled crazed cowards bent on senseless destruction who thrive in the midst of poverty and ignorance. The obvious course becomes to hunt down terrorists while simultaneously transforming their supporting cultural and economic environment from despair to hope. What research there is, however, indicates that suicide terrorists have no appreciable psychopathology and are at least as educated and economically well off as their surrounding populations.

## Psychopathology: A Fundamental Attribution Error

U.S. President George W. Bush initially branded 9/11 hijackers “evil cowards.” For U.S. Senator John Warner, preemptive assaults on terrorists and those supporting terrorism are justified because: “Those who would commit suicide in their assaults on the free world are not rational and are not deterred by rational concepts” (16). In attempting to counter anti-Moslem sentiment, some groups advised their members to respond that “terrorists are extremist maniacs who don't represent Islam at all” (17). Social psychologists have investigated the “fundamental attribution error,” a tendency for people to explain behavior in terms of individual personality traits, even when significant situational factors in the larger society are at work. U.S. government and media characterizations of Middle East suicide bombers as craven homicidal lunatics may suffer from a fundamental attribution error: No instances of religious or political suicide terrorism stem from lone actions of cowering or unstable bombers. Psychologist Stanley Milgram found that ordinary Americans also readily obey destructive orders under the right circumstances (18). When told, in the role of “teacher,” to administer potentially life-threatening electric shocks to “learners” who fail to memorize word pairs, most comply. Even when subjects stressfully protest as victims plead and scream, use of extreme violence continues—not because of murderous tendencies but from a sense of obligation in situations of authority, no matter how trite. A legitimate hypothesis is that apparently extreme behaviors may be elicited and rendered commonplace by particular historical, political, social, and ideological contexts. With suicide terrorism, the attributional problem is to understand why nonpathological individuals respond to novel situational factors in numbers sufficient for recruiting organizations to implement policies. In the Middle East, perceived contexts in which suicide bombers and supporters express themselves include a collective sense of historical injustice, political subservience, and social humiliation vis-à-vis global powers and allies, as well as countervailing religious hope (supporting online text on radical Islam's historical novelty). Addressing such perceptions does not entail accepting them as simple reality; however, ignoring the causes of these perceptions risks misidentifying causes and solutions for suicide bombing. There is also evidence that people tend to believe that their behavior speaks for itself, that they see the world objectively, and that only other people are biased and misconstrue events (19). Moreover, individuals tend to misperceive differences between group norms as more extreme than they really are. Resulting misunderstandings—encouraged by religious and ideological propaganda—lead antagonistic groups to interpret each other's views of events, such as terrorism/freedom-fighting, as wrong, radical, and/or irrational. Mutual demonization and warfare readily ensue.
The problem is to stop this spiral from escalating in opposing camps (Fig. 3). ## Poverty and Lack of Education Are Not Reliable Factors Across our society, there is wide consensus that ridding society of poverty rids it of crime (20). According to President Bush, “We fight poverty because hope is the answer to terror. … We will challenge the poverty and hopelessness and lack of education and failed governments that too often allow conditions that terrorists can seize” (21). At a gathering of Nobel Peace Prize laureates, South Africa's Desmond Tutu and South Korea's Kim Dae Jong opined, “at the bottom of terrorism is poverty”; Elie Wiesel and the Dalai Lama concluded, “education is the way to eliminate terrorism” (22). Support for this comes from research pioneered by economist Gary Becker showing that property crimes are predicted by poverty and lack of education (23). In his incentive-based model, criminals are rational individuals acting on self-interest. Individuals choose illegal activity if rewards exceed probability of detection and incarceration together with expected loss of income from legal activity (“opportunity costs”). Insofar as criminals lack skill and education, as in much blue-collar crime, opportunity costs may be minimal; so crime pays. Such rational-choice theories based on economic opportunities do not reliably account for some types of violent crimes (domestic homicide, hate killings). These calculations make even less sense for suicide terrorism. Suicide terrorists generally are not lacking in legitimate life opportunities relative to their general population. As the Arab press emphasizes, if martyrs had nothing to lose, sacrifice would be senseless (24): “He who commits suicide kills himself for his own benefit, he who commits martyrdom sacrifices himself for the sake of his religion and his nation… . The Mujahed is full of hope” (25). Research by Krueger and Maleckova suggests that education may be uncorrelated, or even positively correlated, with supporting terrorism (26). In a December 2001 poll of 1357 West Bank and Gaza Palestinians 18 years of age or older, those having 12 or more years of schooling supported armed attacks by 68 points, those with up to 11 years of schooling by 63 points, and illiterates by 46 points. Only 40% of persons with advanced degrees supported dialogue with Israel versus 53% with college degrees and 60% with 9 years or less of schooling. In a comparison of Hezbollah militants who died in action with a random sample of Lebanese from the same age group and region, militants were less likely to come from poor homes and more likely to have had secondary-school education. Nevertheless, relative loss of economic or social advantage by educated persons might encourage support for terrorism. In the period leading to the first Intifada (1982–1988), the number of Palestinian men with 12 years or more of schooling more than doubled; those with less schooling increased only 30%. This coincided with a sharp increase in unemployment for college graduates relative to high school graduates. Real daily wages of college graduates fell some 30%; wages for those with only secondary schooling held steady. Underemployment also seems to be a factor among those recruited to Al-Qaida and its allies from the Arabian peninsula (27). 
## The Institutional Factor: Organizing Fictive Kin Although humiliation and despair may help account for susceptibility to martyrdom in some situations, this is neither a complete explanation nor one applicable to other circumstances. Studies by psychologist Ariel Merari point to the importance of institutions in suicide terrorism (28). His team interviewed 32 of 34 bomber families in Palestine/Israel (before 1998), surviving attackers, and captured recruiters. Suicide terrorists apparently span their population's normal distribution in terms of education, socioeconomic status, and personality type (introvert vs. extrovert). Mean age for bombers was early twenties. Almost all were unmarried and expressed religious belief before recruitment (but no more than did the general population). Except for being young, unattached males, suicide bombers differ from members of violent racist organizations with whom they are often compared (29). Overall, suicide terrorists exhibit no socially dysfunctional attributes (fatherless, friendless, or jobless) or suicidal symptoms. They do not vent fear of enemies or express “hopelessness” or a sense of “nothing to lose” for lack of life alternatives that would be consistent with economic rationality. Merari attributes primary responsibility for attacks to recruiting organizations, which enlist prospective candidates from this youthful and relatively unattached population. Charismatic trainers then intensely cultivate mutual commitment to die within small cells of three to six members. The final step before a martyrdom operation is a formal social contract, usually in the form of a video testament. From 1996 to 1999 Nasra Hassan, a Pakistani relief worker, interviewed nearly 250 Palestinian recruiters and trainers, failed suicide bombers, and relatives of deceased bombers. Bombers were men aged 18 to 38: “None were uneducated, desperately poor, simple-minded, or depressed. … They all seemed to be entirely normal members of their families” (30). Yet “all were deeply religious,” believing their actions “sanctioned by the divinely revealed religion of Islam.” Leaders of sponsoring organizations complained, “Our biggest problem is the hordes of young men who beat on our doors.” Psychologist Brian Barber surveyed 900 Moslem adolescents during Gaza's first Intifada (1987–1993) (31). Results show high levels of participation in and victimization from violence. For males, 81% reported throwing stones, 66% suffered physical assault, and 63% were shot at (versus 51, 38, and 20% for females). Involvement in violence was not strongly correlated with depression or antisocial behavior. Adolescents most involved displayed strong individual pride and social cohesion. This was reflected in activities: for males, 87% delivered supplies to activists, 83% visited martyred families, and 71% tended the wounded (57, 46, and 37% for females). A follow-up during the second Intifada (2000–2002) indicates that those still unmarried act in ways considered personally more dangerous but socially more meaningful. Increasingly, many view martyr acts as most meaningful. By summer 2002, 70 to 80% of Palestinians endorsed martyr operations (32). Previously, recruiters scouted mosques, schools, and refugee camps for candidates deemed susceptible to intense religious indoctrination and logistical training. During the second Intifada, there has been a surfeit of volunteers and increasing involvement of secular organizations (allowing women). 
The frequency and violence of suicide attacks have escalated (more bombings since February 2002 than during 1993–2000); planning has been less painstaking. Despite these changes, there is little to indicate overall change in bomber profiles (mostly unmarried, average socioeconomic status, moderately religious) (28, 30). In contrast to Palestinians, surveys with a control group of Bosnian Moslem adolescents from the same time period reveal markedly weaker expressions of self-esteem, hope for the future, and prosocial behavior (30). A key difference is that Palestinians routinely invoke religion to invest personal trauma with proactive social meaning that takes injury as a badge of honor. Bosnian Moslems typically report not considering religious affiliation a significant part of personal or collective identity until seemingly arbitrary violence forced awareness upon them. Thus, a critical factor determining suicide terrorism behavior is arguably loyalty to intimate cohorts of peers, which recruiting organizations often promote through religious communion (supporting online text on religion's role) (33). Consider data on 39 recruits to Harkat al-Ansar, a Pakistani-based ally of Al-Qaida. All were unmarried males, most had studied the Quran. All believed that by sacrificing themselves they would help secure the future of their “family” of fictive kin: “Each [martyr] has a special place—among them are brothers, just as there are sons and those even more dear” (34). A Singapore Parliamentary report on 31 captured operatives from Jemaah Islamiyah and other Al-Qaida allies in Southeast Asia underscores the pattern: “These men were not ignorant, destitute or disenfranchised. All 31 had received secular education… . Like many of their counterparts in militant Islamic organizations in the region, they held normal, respectable jobs… . As a group, most of the detainees regarded religion as their most important personal value… secrecy over the true knowledge of jihad, helped create a sense of sharing and empowerment vis-à-vis others.” (35). Such sentiments characterize institutional manipulation of emotionally driven commitments that may have emerged under natural selection's influence to refine or override short-term rational calculations that would otherwise preclude achieving goals against long odds. Most typically, such emotionally driven commitments serve as survival mechanisms to inspire action in otherwise paralyzing circumstances, as when a weaker person convincingly menaces a stronger person into thinking twice before attempting to take advantage. In religiously inspired suicide terrorism, however, these emotions are purposely manipulated by organizational leaders, recruiters, and trainers to benefit the organization rather than the individual (supporting online text on religion) (36).

## Rational Choice Is the Sponsor's Prerogative, Not the Agent's

Little tangible benefit (in terms of rational-choice theories) accrues to the suicide bomber, certainly not enough to make the likely gain one of maximized “expected utility.” Heightened social recognition occurs only after death, obviating personal material benefit. But for leaders who almost never consider killing themselves (despite declarations of readiness to die), material benefits more likely outweigh losses in martyrdom operations. Hassan cites one Palestinian official's prescription for a successful mission: “a willing young man… nails, gunpowder, a light switch and a short cable, mercury (readily obtainable from thermometers), acetone … .
The most expensive item is transportation to an Israeli town” (30). The total cost is about $150. For the sponsoring organization, suicide bombers are expendable assets whose losses generate more assets by expanding public support and pools of potential recruits. Shortly after 9/11, an intelligence survey of educated Saudis (ages 25 to 41) concluded that 95% supported Al-Qaida (37). In a December 2002 Pew Research Center survey on growing anti-Americanism, only 6% of Egyptians viewed America and its “War on Terror” favorably (38). Money flows from those willing to let others die, easily offsetting operational costs (training, supporting personnel, safe houses, explosives and other arms, transportation, and communication). After a Jerusalem supermarket bombing by an 18-year-old Palestinian female, a Saudi telethon raised more than $100 million for “the Al-Quds Intifada.” Massive retaliation further increases people's sense of victimization and readiness to behave according to organizational doctrines and policies structured to take advantage of such feelings. In a poll of 1179 West Bank and Gaza Palestinians in spring 2002, 66% said army operations increased their backing for suicide bombings (39). By year's end, 73% of Lebanese Moslems considered suicide bombings justifiable (38). This radicalization of opinion increases both demand and supply for martyrdom operations. A December 2002 UN report credited volunteers with swelling a reviving Al-Qaida in 40 countries (40). The organization's influence in the larger society—most significantly its directing elites—increases in turn.

## Priorities for Homeland Security

The last line of defense against suicide terrorism—preventing bombers from reaching targets—may be the most expensive and least likely to succeed. Random bag or body searches cannot be very effective against people willing to die, although this may provide some semblance of security and hence psychological defense against suicide terrorism's psychological warfare. A middle line of defense, penetrating and destroying recruiting organizations and isolating their leaders, may be successful in the near term, but even more resistant organizations could emerge instead. The first line of defense is to drastically reduce receptivity of potential recruits to recruiting organizations. But how? It is important to know what probably will not work. Raising literacy rates may have no effect and could be counterproductive should greater literacy translate into greater exposure to terrorist propaganda (in Pakistan, literacy and dislike for the United States increased as the number of religious madrasa schools increased from 3000 to 39,000 since 1978) (27, 38). Lessening poverty may have no effect, and could be counterproductive if poverty reduction for the entire population amounted to a downward redistribution of wealth that left those initially better off with fewer opportunities than before. Ending occupation or reducing perceived humiliation may help, but not if the population believes this to be a victory inspired by terror (e.g., Israel's apparently forced withdrawal from Lebanon). If suicide-bombing is crucially (though not exclusively) an institution-level phenomenon, it may require finding the right mix of pressure and inducements to get the communities themselves to abandon support for institutions that recruit suicide attackers.
One way is to so damage the community's social and political fabric that any support by the local population or authorities for sponsors of suicide attacks collapses, as happened regarding the kamikaze as a by-product of the nuclear destruction of Hiroshima and Nagasaki. In the present world, however, such a strategy would neither be morally justifiable nor practical to implement, given the dispersed and distributed organization of terrorist institutions among distantly separated populations that collectively number in the hundreds of millions. Likewise, retaliation in kind (“tit-for-tat”) is not morally acceptable if allies are sought (41). Even in more localized settings, such as the Israeli-Palestinian conflict, coercive policies alone may not achieve lasting relief from attack and can exacerbate the problem over time. On the inducement side, social psychology research indicates that people who identify with antagonistic groups use conflicting information from the other group to reinforce antagonism (19). Thus, simply trying to persuade others from without by bombarding them with more self-serving information may only increase hostility. Other research suggests that most people have more moderate views than what they consider their group norm to be. Inciting and empowering moderates from within to confront inadequacies and inconsistencies in their own knowledge (of others as evil), values (respect for life), and behavior (support for killing), and other members of their group (42), can produce emotional dissatisfaction leading to lasting change and influence on the part of these individuals (43). Funding for civic education and debate may help, also interfaith confidence-building through intercommunity interaction initiatives (as Singapore's government proposes) (35). Ethnic profiling, isolation, and preemptive attack on potential (but not yet actual) supporters of terrorism probably will not help. Another strategy is for the United States and its allies to change behavior by directly addressing and lessening sentiments of grievance and humiliation, especially in Palestine (where images of daily violence have made it the global focus of Moslem attention) (44) (Fig. 4). For no evidence (historical or otherwise) indicates that support for suicide terrorism will evaporate without complicity in achieving at least some fundamental goals that suicide bombers and supporting communities share. Of course, this does not mean negotiating over all goals, such as Al-Qaida's quest to replace the Western-inspired system of nation-states with a global caliphate, first in Moslem lands and then everywhere (see supporting online text for history and agenda of suicide-sponsoring groups). Unlike other groups, Al-Qaida publicizes no specific demands after martyr actions. As with an avenging army, it seeks no compromise. But most people who currently sympathize with it might. Perhaps to stop the bombing we need research to understand which configurations of psychological and cultural relationships are luring and binding thousands, possibly millions, of mostly ordinary people into the terrorist organization's martyr-making web. Study is needed on how terrorist institutions form and on similarities and differences across organizational structures, recruiting practices, and populations recruited. Are there reliable differences between religious and secular groups, or between ideologically driven and grievance-driven terrorism? 
Interviews with surviving Hamas bombers and captured Al-Qaida operatives suggest that ideology and grievance are factors for both groups but relative weights and consequences may differ. We also need to investigate any significant causal relations between our society's policies and actions and those of terrorist organizations and supporters. We may find that the global economic, political, and cultural agenda of our own society has a catalyzing role in moves to retreat from our world view (Taliban) or to create a global counterweight (Al-Qaida). Funding such research may be difficult. As with the somewhat tendentious and self-serving use of “terror” as a policy concept (45), to reduce dissonance our governments and media may wish to ignore these relations as legitimate topics for inquiry into what terrorism is all about and why it exists. This call for research may demand more patience than any administration could politically tolerate during times of crisis. In the long run, however, our society can ill afford to ignore either the consequences of its own actions or the causes behind the actions of others. Potential costs of such ignorance are terrible to contemplate. The comparatively minor expense of research into such consequences and causes could have inestimable benefit.
# Angle between Coordinate Vector and Normal Vector of Facet in a Convex Polytope, Asking for a Counterexample

## Definitions

Let $\mathcal{C}$ be a convex polytope in $\mathbb{R}^{D}$ with $K$ facets $F_{1},\ldots,F_{K}$. I denote the normal vector of the $k^\mathrm{th}$ facet as $\mathbf{w}_k=(w_{k1},\ldots,w_{kD})$. In the sequel, I will use $k$ as the index of the $K$ facets and $d$ as the index of the $D$ dimensions. Namely, $d\in \{1,\ldots,D\}$ and $k\in \{1,\ldots,K\}$. Let $\mathbf{p}=(p_{1},\ldots,p_{D})$ be a point in $\mathbb{R}^{D}$. Define $L_{d}=\{\mathbf{p}+\theta\mathbf{u}_{d}\mid\theta\in \mathbb{R}\},$ where $\mathbf{u}_{d}$ is the vector of the form $(0,\ldots,0,1,0,\ldots,0)$ with a $1$ only at the $d^{\mathrm{th}}$ dimension. For $k=1,\ldots, K$, define $G_{k}=\{d\mid L_{d}\cap F_{k}\neq \emptyset\}.$ Define $f:\mathbb{R}^{D}\times\mathbb{R}^{D}\rightarrow [0,1]$ as $f(\mathbf{x},\mathbf{y})=\frac{|\mathbf{x}^\mathrm{T}\mathbf{y}|}{\left\|\mathbf{x}\right\|\left\|\mathbf{y}\right\|}.$

## My conjecture

For any $\mathbf{p}\in \mathrm{int}\,\mathcal{C}$, there exist $d$ and $k$ such that $d\in G_{k}$ and $f(\mathbf{u}_{d},\mathbf{w}_{k})=\max \{f(\mathbf{u}_{1},\mathbf{w}_{k}),\ldots,f(\mathbf{u}_{D},\mathbf{w}_{k})\}$. Can anyone provide a counterexample?

### An illustrative example in $\mathbb{R}^2$

In particular, if we restrict ourselves to $\mathbb{R}^2$, the above conjecture can be restated as follows: Let $p$ be a point in the interior of a convex polygon $\mathcal{C}$. Let $L_x$ and $L_y$ be two lines through $p$, which are parallel to the $x$-axis and $y$-axis respectively. Consider all acute angles at the intersections of $L_x$, $L_y$ and $\partial \mathcal{C}$; there is at least one angle $\geq 45°$. The figure below gives an example. I haven't found any counterexample in $\mathbb{R}^2$, and that's why I'm considering generalising this conjecture to high-dimensional space. Finally, any problem reformulation is also welcome.

-

Your conjecture (at this writing) asserts what appears to me to be an obvious equality, and says nothing about the maximum value with respect to d. You might edit your conjecture to be more in line with your example. Gerhard "Ask Me About System Design" Paseman, 2011.09.14 –  Gerhard Paseman Sep 14 '11 at 15:46

In case Gerhard's point isn't clear, your conjecture has the form, $c = \max \lbrace a, b, c, d, e, \ldots \rbrace$: it only states that the max of a finite set of numbers is one of the numbers. –  Joseph O'Rourke Sep 14 '11 at 16:04

$\def\u{{\bf u}}\def\p{{\bf p}}\def\q{{\bf q}}$ Consider all the points of intersection of the lines $L_d$ with the hyperplanes $H_k$ defining the facets $F_k$. Let $\q$ be the one closest to $\p$; suppose $\q=L_d\cap H_k$. Then $(d,k)$ is a desired pair. Firstly, $\q$ should belong to $F_k$, otherwise the segment $[\p,\q]$ would intersect the boundary of the polytope at a point on another facet; thus $d\in G_k$. Next, let $\q_1,\dots,\q_D$ be the intersection points of the hyperplane $H_k$ with the lines $L_1,\dots,L_D$ (some of these points may be ideal). Then $\|\p-\q\|=\min_i\|\p-\q_i\|$, which is equivalent to your relation.
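The accepted argument above can also be probed numerically. The following sketch (random polygons via SciPy's convex hull; the trial and vertex counts are arbitrary choices) tests the 2D restatement: for every random convex polygon and interior point tried, some axis-parallel line through the point meets an edge whose normal has its largest $f$-value in that axis direction:

```python
# Randomized test of the 2D form of the conjecture.
import numpy as np
from scipy.spatial import ConvexHull

rng = np.random.default_rng(1)

def hits(p, d, a, b):
    """Does the full line through p parallel to axis d meet segment ab?"""
    o = 1 - d                # index of the coordinate fixed along the line
    return (a[o] - p[o]) * (b[o] - p[o]) <= 0.0

for trial in range(2000):
    pts = rng.standard_normal((8, 2))
    V = pts[ConvexHull(pts).vertices]      # polygon vertices, CCW order
    p = V.mean(axis=0)                     # an interior point
    ok = False
    for i in range(len(V)):
        a, b = V[i], V[(i + 1) % len(V)]
        w = np.array([b[1] - a[1], a[0] - b[0]])  # edge normal
        fvals = np.abs(w) / np.linalg.norm(w)     # f(u_1, w), f(u_2, w)
        for d in (0, 1):
            if hits(p, d, a, b) and fvals[d] == fvals.max():
                ok = True
    assert ok, "counterexample found"
print("no counterexample in 2000 random trials")
```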
# American Institute of Mathematical Sciences

November 2016, 15(6): 2509-2526. doi: 10.3934/cpaa.2016047

## On the existence and uniqueness of a limit cycle for a Liénard system with a discontinuity line

1 School of Science, Jiangnan University, Wuxi, 214122, China
2 Department of Mathematics, College of William and Mary, Williamsburg, Virginia, 23187-8795
3 Institute for Intelligent Systems, the University of Johannesburg, South Africa
4 Department of Mathematics, Tongji University, Shanghai, 200092, China

Received November 2015 Revised May 2016 Published September 2016

In this paper, we investigate the existence and uniqueness of a crossing limit cycle for a planar nonlinear Liénard system that is discontinuous along a straight line (called a discontinuity line). By using the Poincaré mapping method and some analysis techniques, a criterion for the existence, uniqueness, and stability of a crossing limit cycle in the discontinuous differential system is established. An application to the Schnakenberg model of an autocatalytic chemical reaction is given to illustrate the effectiveness of our result. We also consider a class of discontinuous piecewise linear differential systems and give a necessary condition for the existence of a crossing limit cycle, which can be used to prove the non-existence of crossing limit cycles.

Citation: Fangfang Jiang, Junping Shi, Qing-guo Wang, Jitao Sun. On the existence and uniqueness of a limit cycle for a Liénard system with a discontinuity line. Communications on Pure & Applied Analysis, 2016, 15 (6): 2509-2526. doi: 10.3934/cpaa.2016047
E. Petya and Pipes

time limit per test: 1 second
memory limit per test: 256 megabytes
input: standard input
output: standard output

A little boy Petya dreams of growing up and becoming the Head Berland Plumber. He is thinking of the problems he will have to solve in the future. Unfortunately, Petya is too inexperienced, so you are about to solve one of those problems for Petya, the one he's most interested in.

The Berland capital has n water tanks numbered from 1 to n. These tanks are connected by unidirectional pipes in some manner. Any pair of water tanks is connected by at most one pipe in each direction. Each pipe has a strictly positive integer width. The width determines the number of liters of water per unit of time the pipe can transport. The water goes to the city from the main water tank (its number is 1). The water must go through some pipe path and get to the sewer tank with the cleaning system (its number is n).

Petya wants to increase the widths of some subset of pipes by at most k units in total so that the width of each pipe remains an integer. Help him determine the maximum amount of water that can be transmitted per unit of time from the main tank to the sewer tank after such an operation is completed.

Input
The first line contains two space-separated integers n and k (2 ≤ n ≤ 50, 0 ≤ k ≤ 1000). Then follow n lines, each containing n integers separated by single spaces. The j-th number in the (i + 1)-th line is cij — the width of the pipe that goes from tank i to tank j (0 ≤ cij ≤ 10^6, cii = 0). If cij = 0, then there is no pipe from tank i to tank j.

Output
Print a single integer — the maximum amount of water that can be transmitted from the main tank to the sewer tank per unit of time.

Examples

Input
5 7
0 1 0 2 0
0 0 4 10 0
0 0 0 0 5
0 0 0 0 10
0 0 0 0 0
Output
10

Input
5 10
0 1 0 0 0
0 0 2 0 0
0 0 0 3 0
0 0 0 0 4
100 0 0 0 0
Output
5

Note
In the first test Petya can increase the width of the pipe that goes from the 1st to the 2nd water tank by 7 units. In the second test Petya can increase the width of the pipe that goes from the 1st to the 2nd water tank by 4 units, from the 2nd to the 3rd water tank by 3 units, from the 3rd to the 4th water tank by 2 units and from the 4th to the 5th water tank by 1 unit.
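One way to attack this (my own sketch, not the official editorial; the name max_water is invented) is a budgeted min-cost flow: each unit pushed through pipe i→j is free while the pipe is below its width and costs one budget point per unit of widening beyond it, and we keep augmenting along the cheapest path while the total widening cost stays within k. The one-unit-at-a-time augmentation below is only fast enough for small inputs like the samples; a real solution would augment in larger steps.

```python
from itertools import product

def max_water(n, k, c):
    """Budgeted min-cost flow, one unit at a time: pushing a unit on
    i->j costs 0 below width c[i][j] and 1 per unit of widening."""
    INF = float("inf")
    flow = [[0] * n for _ in range(n)]
    spent = shipped = 0
    while True:
        # Bellman-Ford on the residual graph (step costs are -1, 0 or 1).
        dist, prev = [INF] * n, [None] * n
        dist[0] = 0
        for _ in range(n - 1):
            for i, j in product(range(n), repeat=2):
                if i == j or dist[i] == INF:
                    continue
                if flow[j][i] > 0:
                    # Cancelling a unit refunds its widening cost, if any.
                    step = -1 if flow[j][i] > c[j][i] else 0
                elif c[i][j] > 0:
                    step = 0 if flow[i][j] < c[i][j] else 1
                else:
                    continue  # no pipe and nothing to cancel
                if dist[i] + step < dist[j]:
                    dist[j], prev[j] = dist[i] + step, i
        if dist[n - 1] == INF or spent + dist[n - 1] > k:
            return shipped
        spent += dist[n - 1]
        j = n - 1
        while j != 0:  # apply the one-unit augmentation along the path
            i = prev[j]
            if flow[j][i] > 0:
                flow[j][i] -= 1
            else:
                flow[i][j] += 1
            j = i
        shipped += 1

print(max_water(5, 7, [[0, 1, 0, 2, 0],
                       [0, 0, 4, 10, 0],
                       [0, 0, 0, 0, 5],
                       [0, 0, 0, 0, 10],
                       [0, 0, 0, 0, 0]]))   # 10, first sample
print(max_water(5, 10, [[0, 1, 0, 0, 0],
                        [0, 0, 2, 0, 0],
                        [0, 0, 0, 3, 0],
                        [0, 0, 0, 0, 4],
                        [100, 0, 0, 0, 0]]))  # 5, second sample
```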
# Subsequence

In mathematics, a subsequence is a sequence that can be derived from another sequence by deleting some or no elements without changing the order of the remaining elements. For example, the sequence ${\displaystyle \langle A,B,D\rangle }$ is a subsequence of ${\displaystyle \langle A,B,C,D,E,F\rangle }$ obtained after removal of the elements ${\displaystyle C}$, ${\displaystyle E}$, and ${\displaystyle F}$. The relation of one sequence being the subsequence of another is a preorder.

The subsequence should not be confused with the substring ${\displaystyle \langle A,B,C,D\rangle }$, which can be derived from the above string ${\displaystyle \langle A,B,C,D,E,F\rangle }$ by deleting the substring ${\displaystyle \langle E,F\rangle }$. The substring is a refinement of the subsequence.

The list of all subsequences for the word "apple" would be "a, ap, al, ae, app, apl, ape, ale, appl, appe, aple, apple, p, pp, pl, pe, ppl, ppe, ple, pple, l, le, e".

## Common subsequence

Given two sequences X and Y, a sequence Z is said to be a common subsequence of X and Y if Z is a subsequence of both X and Y. For example, if ${\displaystyle X=\langle A,C,B,D,E,G,C,E,D,B,G\rangle }$ and ${\displaystyle Y=\langle B,E,G,C,F,E,U,B,K\rangle }$ then a common subsequence of X and Y could be ${\displaystyle Z=\langle B,E,E\rangle .}$ This would not be the longest common subsequence, since Z only has length 3, and the common subsequence ${\displaystyle \langle B,E,E,B\rangle }$ has length 4. The longest common subsequence of X and Y is ${\displaystyle \langle B,E,G,C,E,B\rangle }$.

## Applications

Subsequences have applications to computer science,[1] especially in the discipline of bioinformatics, where computers are used to compare, analyze, and store DNA, RNA, and protein sequences.

Take two sequences of DNA containing 37 elements, say:

SEQ1 = ACGGTGTCGTGCTATGCTGATGCTGACTTATATGCTA
SEQ2 = CGTTCGGCTATCGTACGTTCTATTCTATGATTTCTAA

The longest common subsequence of sequences 1 and 2 is:

LCS(SEQ1,SEQ2) = CGTTCGGCTATGCTTCTACTTATTCTA

This can be illustrated by highlighting the 27 elements of the longest common subsequence in the initial sequences:

SEQ1 = ACGGTGTCGTGCTATGCTGATGCTGACTTATATGCTA
SEQ2 = CGTTCGGCTATCGTACGTTCTATTCTATGATTTCTAA

Another way to show this is to align the two sequences, i.e., to position elements of the longest common subsequence in the same column (indicated by the vertical bar) and to introduce a special character (here, a dash) in one sequence when two elements in the same column differ:

SEQ1 = ACGGTGTCGTGCTAT-G--C-TGATGCTGA--CT-T-ATATG-CTA-
        | || ||| ||||| |  | |  | || |  || | || |  |||
SEQ2 = -C-GT-TCG-GCTATCGTACGT--T-CT-ATTCTATGAT-T-TCTAA

Subsequences are used to determine how similar the two strands of DNA are, using the DNA bases: adenine, guanine, cytosine and thymine.
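The longest common subsequence in the examples above can be computed with the textbook dynamic program; here is a minimal, self-contained Python sketch (my own illustration, not part of the original article):

```python
def lcs(x, y):
    """Length-table dynamic program for a longest common subsequence."""
    m, n = len(x), len(y)
    # table[i][j] = length of an LCS of x[:i] and y[:j]
    table = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            if x[i - 1] == y[j - 1]:
                table[i][j] = table[i - 1][j - 1] + 1
            else:
                table[i][j] = max(table[i - 1][j], table[i][j - 1])
    # walk back through the table to recover one LCS
    out, i, j = [], m, n
    while i > 0 and j > 0:
        if x[i - 1] == y[j - 1]:
            out.append(x[i - 1])
            i, j = i - 1, j - 1
        elif table[i - 1][j] >= table[i][j - 1]:
            i -= 1
        else:
            j -= 1
    return "".join(reversed(out))

# A length-6 LCS such as "BEGCEB"; ties may resolve differently.
print(lcs("ACBDEGCEDBG", "BEGCFEUBK"))
```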
# Validity of the thin wall approximation

Inspired by the question How can I understand the tunneling problem by Euclidean path integral where the quadratic fluctuation has a negative eigenvalue?, I decided to come back to the first paper by Coleman, Fate of the false vacuum: Semiclassical theory, in order to better understand the limits of validity of the approximations made. Going through the calculations, I paused to reflect on the thin-wall approximation. I found two points that are not so clear (at my level of comprehension):

1. I have read many articles in the literature, but none of them clearly describe why the instanton solution $\phi_1$ can be approximated by expression (4.10) of the original paper. This is an important statement because equation (4.18) is often cited as the condition for the validity of the thin-wall approximation. Can someone help me understand this approximation?

2. Furthermore, it is often said that inside the bubble there is the true vacuum, and outside there is the false one. However, this is a simplification, since the wall is thin but of finite thickness. How can the wall thickness be expressed in terms of the variables of the problem?
Question

# A solid sphere of radius R made of a material of bulk modulus B is surrounded by a liquid in a cylindrical container. A massless piston of area A floats on the surface of the liquid. Find the fractional decrease in the radius of the sphere $$\left( \frac { dR }{ R } \right)$$ when a mass M is placed on the piston to compress the liquid:

A $$\left( \dfrac { 3Mg }{ AB } \right)$$
B $$\left( \dfrac { 2Mg }{ AB } \right)$$
C $$\left( \dfrac { Mg }{ 3AB } \right)$$
D $$\left( \dfrac { Mg }{ 2AB } \right)$$

Solution

## The correct option is C $$\left( \dfrac { Mg }{ 3AB } \right)$$

Given: radius of sphere $$R$$, mass placed on the massless piston $$M$$, area of piston $$A$$, bulk modulus $$B$$.

Change in pressure: $$\Delta P=\dfrac{\Delta F}{A}=\dfrac{Mg-0}{A}=\dfrac{Mg}{A}$$

Volume of sphere: $$v=\dfrac{4}{3}\pi {{R}^{3}}$$

Small decrease in volume: $$-dv=d\left( \dfrac{4}{3}\pi {{R}^{^{3}}} \right)=4\pi {{R}^{2}}dR$$

From the definition of the bulk modulus:
$$B=\dfrac{dp}{-\dfrac{dv}{v}}=\dfrac{\dfrac{Mg}{A}}{-\dfrac{4\pi {{R}^{2}}dR}{\dfrac{4}{3}\pi {{R}^{3}}}}=\dfrac{Mg}{-3A\dfrac{dR}{R}}$$

$$-\dfrac{dR}{R}=\dfrac{Mg}{3AB}$$

Hence, the fractional decrease in the radius of the sphere is $$\dfrac{Mg}{3AB}$$.
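As a quick check of the algebra, the same derivation can be reproduced symbolically (a sketch of mine using sympy; the variable names are arbitrary):

```python
import sympy as sp

M, g, A, B, R, dR = sp.symbols("M g A B R dR", positive=True)

dP = M * g / A                        # extra pressure from the weight on the piston
V = sp.Rational(4, 3) * sp.pi * R**3  # sphere volume
dV = sp.diff(V, R) * dR               # first-order volume change: 4*pi*R^2*dR
# bulk modulus definition B = dP / (dV/V), taking magnitudes of the shrinkage
shrink = sp.solve(sp.Eq(B, dP / (dV / V)), dR)[0] / R
print(sp.simplify(shrink))            # M*g/(3*A*B), the fractional decrease
```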
# Use of Slutsky equation

I know that the Slutsky equation is defined as: $\frac{\partial x_1^s}{\partial p_1} = \frac{\partial x_1^m}{\partial p_1} + x_1^o \frac{\partial x_1^m}{\partial m}$

My problem right now is making use of the information given (I am aware of how to take partial derivatives) but I cannot seem to understand how to apply it to problem sets. Here's an example (I'm more concerned about the steps on how to get to the answer, not just the answer):

A consumer has preferences given by $U(x_1,x_2)= x_1^2x_2$

(a) Derive the demand curves for $x_1, x_2$ when prices and income are given by $p_1, p_2$ and $m$: $x_1^*=2m/3p_1$ and $x_2^*=m/3p_2$.
- I think I understood how to do that.

(b) Illustrate the equilibrium on a diagram when $p_1 = p_2 = 1$ and $m = 12$.
- The way I did this was by graphing and simply finding the equilibrium point graphically, based on the demands for goods $1$ and $2$ on the budget line.

(c) Calculate the exact income and substitution effects for $x_1$ when $p_1$ rises to $3$.
- The only way I'm currently able to do this is without calculus, as described in this video, which doesn't sit well with me given that the Slutsky equation is defined very clearly with the use of calculus. I just don't know how to apply it.

(d) Explain your exact results using the appropriate Slutsky equation.
- Same problem here.

Note: I'm not simply looking for someone to "do my homework"; my primary interest is in knowing how to apply the Slutsky equation when facing similar problems.

Utility function $u(x_1, x_2) = x_1^2x_2$.

Q. Derive the demand for $x_1$ and $x_2$ as a function of $p_1$, $p_2$ and $m$.

Here are the demand functions for $x_1$ and $x_2$: $$x_1(p_1, p_2, m) = \frac{2m}{3p_1}$$ $$x_2(p_1, p_2, m) = \frac{m}{3p_2}$$

Q. Illustrate the equilibrium in a diagram when $p_1=1$, $p_2=1$ and $m=12$.

Q. Suppose $p_1$ rises to 3. Calculate the substitution effect and income effect.

If $p_1$ rises to 3, the new equilibrium choice is $\left(\frac{8}{3}, 4\right)$. To find the substitution effect and income effect using the Slutsky approach, we find the equilibrium at the new set of prices when the consumer has just enough money to buy the old equilibrium bundle, i.e. we find the demand at prices $(3,1)$ when income is $m' = 3(8) + 1(4) = 28$. Substituting this data into the demand functions, we get the equilibrium choice: $\left(\frac{56}{9}, \frac{28}{3}\right)$. Here is how the situation looks in the graph:

Substitution effect = $\displaystyle\frac{56}{9} - 8 = -\frac{16}{9}$

Income effect = $\displaystyle\frac{8}{3} - \frac{56}{9} = - \frac{32}{9}$

Q. Explain your exact results using the appropriate Slutsky equation.

Slutsky equation: Change in demand = change in demand due to the substitution effect + change in demand due to the income effect: $$\Delta x_1 = \Delta^s x_1 + \Delta^i x_1 = -\frac{16}{9} - \frac{32}{9} = -\frac{16}{3}$$

The Slutsky equation links Hicksian and Marshallian demand functions. Hicksian demand minimizes the cost necessary to reach a certain utility. In your question, you've labeled this $x_1^s$ (although I'm not familiar with this notation). Because Hicksian demand holds utility constant, it measures the pure substitution effect. Marshallian demand maximizes utility given a fixed income. In your question, you've labeled this $x_1^m$.
Because Marshallian demand holds income constant, price affects Marshallian demand both because a price increase for one good makes the other good more attractive (substitution effect) and because it shrinks the set of baskets you can afford (income effect). Note that, in order to find $x_1^* = 2m/3p_1$ and $x_2^* = m/3p_2$, you maximized utility given a fixed income, by solving $MU_{x_1}/MU_{x_2} = p_1/p_2$ with $m=p_1x_1+p_2x_2$. And so, what you've found and labeled $x_1^*$ and $x_2^*$ are Marshallian demands. That takes us back to the Slutsky equation $$\frac{\partial x_1^s}{\partial p_1} = \frac{\partial x_1^m}{\partial p_1}+x_1\frac{\partial x_1^m}{\partial m}$$ What is this saying? Remember, Hicksian demand is pure substitution effect. So $\frac{\partial x_1^s}{\partial p_1}$ is the substitution effect. And Marshallian demand includes both the substitution effect and the income effect. So what this is saying is that the substitution effect ($\frac{\partial x_1^s}{\partial p_1}$) is what you get if you start with the total effect ($\frac{\partial x_1^m}{\partial p_1}$), including income and substitution effects, and then take out the income effect ($x_1\frac{\partial x_1^m}{\partial m}$).[note below]

So where does that leave you? You can calculate the income effect as $x_1\frac{\partial x_1^m}{\partial m}$. In other words, as the price rises, it rises for every one of the $x_1$ units you're buying, and so the effect on income is the price change (1) times the number of units ($x_1$). And income affects your demand for $x_1$ through $\frac{\partial x_1^m}{\partial m}$, so the income effect is $x_1\frac{\partial x_1^m}{\partial m}$. Since you have Marshallian demand, just take the derivative of $x_1^m$ with respect to $m$ to get $\frac{\partial x_1^m}{\partial m}$, and then multiply by $x_1$ to get the income effect.

You can then calculate the substitution effect in one of two ways. You can use the Slutsky equation: calculate the total effect $\frac{\partial x_1^m}{\partial p_1}$ by taking the derivative of $x_1^m$ with respect to $p_1$, and then plug that into the Slutsky equation with the income effect to get the substitution effect, $\frac{\partial x_1^s}{\partial p_1}$. Or, you can calculate Hicksian demand $x_1^s$ directly by solving $MU_{x_1}/MU_{x_2} = p_1/p_2$ with $U(x_1,x_2) = s$. Then, take the derivative of $x_1^s$ with respect to $p_1$ to get the substitution effect.

[Note: It might seem weird to "take out" the income effect by adding it. But remember that the total effect is negative, since a higher $p_1$ leads to less $x_1$. And so adding on a positive income effect (higher income leads to more consumption of normal goods) is indeed "taking out" the income effect.]
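To tie the two answers together, here is a small Python check of my own (using fractions and sympy, neither of which appears in the original question) that reproduces the discrete decomposition above and verifies the derivative form of the Slutsky equation for $U(x_1,x_2)=x_1^2x_2$:

```python
from fractions import Fraction as F
import sympy as sp

# Discrete Slutsky decomposition from the worked answer above.
def x1_marshall(p1, p2, m):          # x1 = 2m / (3 p1)
    return F(2, 3) * m / p1

old = x1_marshall(1, 1, 12)          # 8 at prices (1, 1), income 12
new = x1_marshall(3, 1, 12)          # 8/3 after p1 rises to 3
m_comp = 3 * old + 1 * 4             # 28: just enough for the old bundle (8, 4)
pivot = x1_marshall(3, 1, m_comp)    # 56/9
print(pivot - old, new - pivot)      # -16/9 (substitution), -32/9 (income)

# Derivative form of the Slutsky equation, checked symbolically.
p1, p2, m, u = sp.symbols("p1 p2 m u", positive=True)
x1m = 2 * m / (3 * p1)                          # Marshallian demand
x1s = (2 * p2 * u / p1) ** sp.Rational(1, 3)    # Hicksian demand for this utility
v = x1m**2 * (m / (3 * p2))                     # indirect utility at the optimum
lhs = sp.diff(x1s, p1).subs(u, v)
rhs = sp.diff(x1m, p1) + x1m * sp.diff(x1m, m)
print(sp.simplify(lhs - rhs))                   # 0: both sides agree
```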
# Does Ex^∞ send homotopy inverse limits of ∞-categories to homotopy inverse limits of spaces?

Kan's functor $$\operatorname{Ex}^{\infty}$$ plays the role of total localization or groupoid completion in the theory of $$\infty$$-categories; specifically, it can be viewed as the left adjoint of the inclusion $$\operatorname{Kan} \hookrightarrow \operatorname{Cat}_\infty$$. The following question came up in a discussion:

Suppose $$D$$ is an inversely-directed poset, and suppose $$F:D\to \operatorname{Cat}_\infty$$ is an injectively fibrant diagram of $$\infty$$-categories (with respect to the Joyal model structure). Is it true that the canonical map $$\operatorname{Ex}^\infty(\lim F) \to \operatorname{lim}\operatorname{Ex}^\infty(F)$$ is a weak homotopy equivalence (perhaps allowing all of the limits in question to be appropriately homotopical)? If not, is it true when $$F(d)$$ is isomorphic to the nerve of a poset for all $$d\in D$$?

• If you allow "for all of the limits in question to be appropriately homotopical", then your statement becomes tautologically true, since the underlying ∞-functor of Ex^∞ is a right adjoint functor of ∞-categories. For the strict version, injectively fibrant towers can be characterized by the condition that all objects are fibrant and all maps are fibrations. So if you know that Ex^∞ applied to your maps yields Kan fibrations, then your map is a weak equivalence. – Dmitri Pavlov Mar 22 at 16:36
• @DmitriPavlov I'm pretty sure that Ex^∞ is a left adjoint, not a right adjoint. In particular, if G is an ∞-groupoid and ι: Kan -> Cat_∞ is the inclusion, then for any ∞-category X, Map_{Cat_∞}(X,ιG) ~ Map_{Kan}(Ex^∞ X, G). If Ex^∞ is also a right adjoint in the ∞-categorical sense, this is news to me. On the nose, it definitely doesn't preserve arbitrary cofiltered limits of simplicial sets. – Steve Mar 22 at 17:20
• Yes, I confused the directions of all arrows in my answer, so the claim was actually meant for direct limits, not inverse limits. Still, the strict version works just fine if you know that Ex^∞ applied to the maps in your tower produces Kan fibrations. – Dmitri Pavlov Mar 22 at 19:54
• If this were true, it would imply that $Ex^\infty$ commutes with countable products of $\infty$-categories up to weak equivalence (seen as a directed limit of finite products). An explicit counterexample with a countable product of nerves of finite directed categories may be found in the introduction of Thomason's paper "Cat as a closed model category". – Denis-Charles Cisinski Mar 23 at 9:46
• @Denis-CharlesCisinski Thanks, perfect! – Steve Mar 23 at 16:56
verde.distance_mask(data_coordinates, maxdist, coordinates=None, grid=None, projection=None)[source]

Mask grid points that are too far from the given data points.

Distances are Euclidean norms. If using geographic data, provide a projection function to convert coordinates to Cartesian before distance calculations.

Either coordinates or grid must be given:

• If coordinates is not None, produces an array that is False when a point is more than maxdist from the closest data point and True otherwise.
• If grid is not None, produces a mask and applies it to grid (an xarray.Dataset).

Note
If installed, package pykdtree will be used instead of scipy.spatial.cKDTree for better performance.

Parameters

• data_coordinates (tuple of arrays) – Same as coordinates but for the data points.
• maxdist (float) – The maximum distance that a point can be from the closest data point.
• coordinates (None or tuple of arrays) – Arrays with the coordinates of each point that will be masked. Should be in the following order: (easting, northing, …). Only easting and northing will be used; all subsequent coordinates will be ignored.
• grid (None or xarray.Dataset) – 2D grid with values to be masked. Will use the first two dimensions of the grid as northing and easting coordinates, respectively. For this to work, the grid dimensions must be ordered as northing then easting. The mask will be applied to grid using the xarray.Dataset.where method.
• projection (callable or None) – If not None, then should be a callable object projection(easting, northing) -> (proj_easting, proj_northing) that takes in easting and northing coordinate arrays and returns projected easting and northing coordinate arrays. This function will be used to project the given coordinates (or the ones extracted from the grid) before calculating distances.

Returns

mask (array or xarray.Dataset) – If coordinates was given, then a boolean array with the same shape as the elements of coordinates. If grid was given, then an xarray.Dataset with the mask applied to it.

Examples

>>> import numpy as np
>>> from verde import distance_mask, grid_coordinates
>>> region = (0, 5, -10, -4)
>>> spacing = 1
>>> coords = grid_coordinates(region, spacing=spacing)
>>> mask = distance_mask((2.5, -7.5), maxdist=2, coordinates=coords)
>>> print(mask)
[[False False False False False False]
 [False False True True False False]
 [False True True True True False]
 [False True True True True False]
 [False False True True False False]
 [False False False False False False]
 [False False False False False False]]
>>> # Mask an xarray.Dataset directly
>>> import xarray as xr
>>> coords_dict = {"easting": coords[0][0, :], "northing": coords[1][:, 0]}
>>> data_vars = {"scalars": (["northing", "easting"], np.ones(mask.shape))}
>>> grid = xr.Dataset(data_vars, coords=coords_dict)
>>> masked = distance_mask((3.5, -7.5), maxdist=2, grid=grid)
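As a further illustration of the projection parameter, here is a sketch of my own (not from the Verde docs): the callable toy_projection below is an arbitrary stand-in for a real map projection such as one built with pyproj, and maxdist is then interpreted in the projected units.

```python
import numpy as np
from verde import distance_mask, grid_coordinates

# A toy "projection" that shrinks east-west distances by cos(35 deg),
# roughly what a proper projection would do at 35 degrees latitude.
def toy_projection(easting, northing):
    return easting * np.cos(np.radians(35.0)), northing

coords = grid_coordinates((0, 5, 30, 40), spacing=1)
mask = distance_mask(
    (2.5, 35.0), maxdist=2, coordinates=coords, projection=toy_projection
)
print(mask.shape)  # boolean array, same shape as each coordinate array
```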
# Prove (Q+, *) is isomorphic to a proper subgroup of itself

## Homework Statement

Prove that Q+, the group of positive rational numbers under multiplication, is isomorphic to a proper subgroup of itself.

None

## The Attempt at a Solution

Not at all sure if this is legit.

Let G = {x^2 | x in Q+} and let phi: Q+ --> G be given by phi(x) = x^2 for x in Q+.

We will demonstrate that G is a proper subgroup of Q+.

It is a subgroup: 1 = e is in G, and for x^2, y^2 in G we have x^2 y^-2 = (xy^-1)^2, which is in G.

It is a proper subgroup: 2 is in Q+, but 2 is not in G, since the only positive real number whose square is 2 is sqrt(2), which is irrational and hence not in Q+.

One-to-one: phi(x) = phi(y) gives x^2 = y^2, and since x, y > 0, x = y.

Onto: Take some g in G. Then g = t^2 for some t in Q+, so sqrt(g) = t is in Q+ and satisfies phi(sqrt(g)) = sqrt(g)^2 = g. Therefore, there is an element of Q+ that phi sends to g.

Operation preservation: We have phi(x*y) = (xy)^2 = x^2 y^2 and phi(x)phi(y) = x^2 y^2, so phi(x*y) = phi(x)*phi(y).

Therefore, phi is an isomorphism between Q+ and a proper subgroup of itself.
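The proof can also be sanity-checked numerically (my own sketch; is_square_in_Qplus and phi are hypothetical helper names, not from the problem). A positive rational in lowest terms is a square in Q+ exactly when its numerator and denominator are perfect squares, which gives an exact test that 2 is outside the image:

```python
from fractions import Fraction
from math import isqrt
import random

def is_square_in_Qplus(q: Fraction) -> bool:
    """p/q in lowest terms is a square in Q+ iff p and q are perfect squares."""
    n, d = q.numerator, q.denominator
    return isqrt(n) ** 2 == n and isqrt(d) ** 2 == d

phi = lambda x: x * x   # the homomorphism from the proof

# Operation preservation on random positive rationals:
for _ in range(1000):
    a = Fraction(random.randint(1, 99), random.randint(1, 99))
    b = Fraction(random.randint(1, 99), random.randint(1, 99))
    assert phi(a * b) == phi(a) * phi(b)

# 2 lies in Q+ but not in the image, so the image is a proper subgroup:
print(is_square_in_Qplus(Fraction(2)))  # False
```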
Topic: Inequalities: a problem
Replies: 10   Last Post: Jul 12, 1996 9:56 AM

Stephen.Donnelly
Posts: 14
Registered: 12/12/04

Re: Inequalities: a problem
Posted: Jul 10, 1996 5:31 AM

In fact I think it can be generalised to any number k of variables:

FACT: Let a_1, ..., a_k be distinct integers. Then \sum [a_i - a_(i+1)]^2 >= 4k - 6.

You can prove it by induction on k now. The case k=2 is clear. Then if there is a counterexample to the case k, you can easily make it into a counterexample to the case k-1 (by obliterating the largest integer). So this is really induction in the style of Fermat's method of infinite descent.

OK?

Steve
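Reading the sum cyclically (indices mod k, so a_{k+1} means a_1; this is what makes the k = 2 case give 2 rather than 1), both the bound and its tightness are easy to confirm by brute force. A quick Python sketch of mine, on the heuristic that the tight case uses some ordering of k consecutive integers:

```python
from itertools import permutations
import random

def cyclic_sum(a):
    return sum((a[i] - a[(i + 1) % len(a)]) ** 2 for i in range(len(a)))

for k in range(2, 8):
    # The bound is attained by some ordering of k consecutive integers...
    tight = min(cyclic_sum(p) for p in permutations(range(k)))
    # ...and random distinct integers never dip below it.
    low = min(cyclic_sum(random.sample(range(-50, 50), k)) for _ in range(2000))
    print(k, tight, low, 4 * k - 6)   # tight == 4k-6 and low >= 4k-6
```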
# Draw Polygons with LaTeX [closed]

Good afternoon, I would like to know how my professor drew the six polygons in the two photos below.

4 polygons
2 hexagons

I should mention that I use \usepackage{stix}, and I would like to learn the TikZ package. Thank you very much.

• Hi, welcome to TeX.SE! In order to learn how your professor did it you'll have to ask them, but there are several packages that'll make creating such illustrations relatively easy. TikZ, which you mention, is perhaps the biggest and most well-known one. Have you looked at the tutorials contained in its manual? If not, I recommend doing so, and then simply jumping in, trying to create your own TikZ drawings, and asking questions here when you're stuck and Google doesn't turn up an answer. All the best! – chsk Jun 27, 2021 at 20:28
• @Puck And feel free to upvote and accept the answer that fits your needs. It helps members to know that you found what you were looking for. Jun 27, 2021 at 21:25

Here's a simple way to do it in TikZ with some loops and manually connecting the nodes.

\documentclass[tikz, border=20]{standalone}
\begin{document}
\begin{tikzpicture}
% 2 nodes
\foreach \i in {0, 1} {
    \coordinate (node\i) at (360/2*\i+90:2);
}
\draw (node0) -- (node1);
\begin{scope}[xshift=5cm]
    % 3 nodes
    \foreach \i in {0, 1, 2} {
        \coordinate (node\i) at (360/3*\i+90:2);
    }
    \draw (node0) -- (node1);
    \draw (node1) -- (node2);
    \draw (node2) -- (node0);
\end{scope}
\begin{scope}[xshift=10cm]
    % 4 nodes
    \foreach \i in {0, 1, 2, 3} {
        \coordinate (node\i) at (360/4*\i+90:2);
    }
    \draw (node0) -- (node1);
    \draw (node1) -- (node2);
    \draw (node2) -- (node3);
    \draw (node3) -- (node0);
    \draw (node0) -- (node2);
    \draw (node1) -- (node3);
\end{scope}
\begin{scope}[yshift=-5cm]
    % 5 nodes
    \foreach \i in {0, 1, 2, 3, 4} {
        \coordinate (node\i) at (360/5*\i+90:2);
    }
    \draw (node0) -- (node1);
    \draw (node1) -- (node2);
    \draw (node2) -- (node3);
    \draw (node3) -- (node4);
    \draw (node4) -- (node0);
    \draw (node0) -- (node2);
    \draw (node0) -- (node3);
    \draw (node1) -- (node3);
    \draw (node1) -- (node4);
    \draw (node2) -- (node4);
\end{scope}
\begin{scope}[xshift=5cm, yshift=-5cm]
    % 6 nodes
    \foreach \i in {0, 1, 2, 3, 4, 5} {
        \coordinate (node\i) at (360/6*\i+90:2);
        % \node at (node\i) {\Huge\color{red}\i};
    }
    \draw (node0) -- (node1);
    \draw (node1) -- (node2);
    \draw (node2) -- (node3);
    \draw (node3) -- (node4);
    \draw (node4) -- (node5);
    \draw (node5) -- (node0);
    \draw (node0) -- (node2);
    \draw (node0) -- (node3);
    \draw (node0) -- (node4);
    \draw (node1) -- (node3);
    \draw (node1) -- (node4);
    \draw (node1) -- (node5);
    \draw (node2) -- (node4);
    \draw (node2) -- (node5);
    \draw (node3) -- (node5);
\end{scope}
\begin{scope}[xshift=10cm, yshift=-5cm]
    % 6 nodes (again)
    \foreach \i in {0, 1, 2, 3, 4, 5} {
        \coordinate (node\i) at (360/6*\i+90+rand*20:2);
    }
    \draw (node0) -- (node1);
    \draw (node1) -- (node2);
    \draw (node2) -- (node3);
    \draw (node3) -- (node4);
    \draw (node4) -- (node5);
    \draw (node5) -- (node0);
    \draw (node0) -- (node2);
    \draw (node0) -- (node3);
    \draw (node0) -- (node4);
    \draw (node1) -- (node3);
    \draw (node1) -- (node4);
    \draw (node1) -- (node5);
    \draw (node2) -- (node4);
    \draw (node2) -- (node5);
    \draw (node3) -- (node5);
\end{scope}
\end{tikzpicture}
\end{document}

## Explanation

The key elements being used here are

\foreach \i in {0, 1, 2} { <do something with \i> }

This is a simple loop in TikZ. The code <do something with \i> is executed three times, with \i taking the values 0, 1, and 2 in that order.
The code used in this case is

\coordinate (node\i) at (360/2*\i+90:2);

which places a coordinate marker at a position and names it node\i (with the current value of \i substituted). The position is given in polar coordinates (using degrees); the general syntax for polar coordinates in TikZ is (a:r), where r is the radius and a is the angle (measured above the horizontal axis, hence the constant offset of +90 to measure from the top of the circle). In the second answer below, an additional \draw ... circle command marks each such coordinate with a small circle.

After the loop, simply connect all of the nodes that you wish to be connected. The final irregular drawing is done by adding a random offset to each angle, which can be done using rand, scaled by a factor of 20 to get a noticeable effect.

Each drawing (apart from the first) is placed in a scope environment, and every element of this scope has an xshift and/or yshift applied. This moves the origin by the specified amount (you must put units here or it defaults to pt).

## Hints

TikZ loops can parse things like \foreach \i in {0, ..., 5} {}, which will execute 6 times with \i taking the values 0, 1, 2, 3, 4, and 5. Or you can use \foreach \i in {0, 0.5, ..., 2} {}, which will execute 5 times with \i taking the values 0, 0.5, 1, 1.5, and 2.

Don't be afraid to draw things that won't be in the final picture; for example, when connecting the nodes I had \node at (node\i) {\Huge \i}; within the loop, which numbers each node so I can easily see what I need to connect.

Finally, the best way to learn TikZ is to just keep using it and look things up as and when you need them. Have fun.

• +1 Nice that you took time to explain the process. And for the last one, a good idea to randomize a bit. But connecting manually is such a pain when you can do it with loops. Jun 27, 2021 at 14:07
• Thanks, I agree about the manual connections being a pain, but for a small number of points it's not too bad; hopefully anyone who needs to can combine our two answers to produce their desired output without having to do that. Jun 27, 2021 at 14:10
• Thank you very much – Puck Jun 27, 2021 at 16:01
• @Willoughby You can factorize your code by using foreach on pairs of nodes to draw the segments. Jun 27, 2021 at 18:28
• I forgot to say that I use stix and \documentclass{book}. Do these tips also apply to TeX code with book and stix? Thanks again – Puck Jun 28, 2021 at 6:55

I think you could find more than one answer by searching on the site, but here's a way to draw whatever complete graph you need (designed as a regular polygon).

\documentclass[tikz,border=3.14mm]{standalone}
\begin{document}
\begin{tikzpicture}
\def\R{3}
\def\N{5}
\draw (0,0) circle(\R);
\foreach \i in {1,...,\N}
{
    \coordinate (P-\i) at (\i*360/\N:\R);
    \draw (P-\i) circle(5pt);
}
\pgfmathtruncatemacro\n{\N-1}
\foreach \i in {1,...,\n}
{
    \pgfmathtruncatemacro\j{\i+1}
    \foreach \k in {\j,...,\N}
        \draw (P-\i) -- (P-\k);
}
\end{tikzpicture}
\end{document}

Result for N=5:
Result for N=8:
Result for N=19:

• It would be very cool to explain a little the use of \pgfmathtruncatemacro. Jun 27, 2021 at 18:30
• @projetmbc I thought it was really self-explanatory in this case. Jun 27, 2021 at 18:48
• You are right. Sorry for my excessive wish for pedagogy. :-) Jun 27, 2021 at 18:53
• @projetmbc Honestly, as someone who's starting out with LaTeX and TikZ, I would like to know. Especially considering that Googling it leads me to random websites completely unrelated to the command.
Sep 18, 2021 at 14:47
• \pgfmathtruncatemacro computes the formula inside the curly brackets and returns the truncated value of the result, i.e. always an integer. By contrast, \pgfmathsetmacro returns a float, which is not suitable for, say, naming nodes. Sep 18, 2021 at 14:55
# Trig? Geometry Level 2

L, M and N are the midpoints of the sides AB, BC and CA of $$\Delta ABC$$. AM, BN and CL intersect at G. If $$AM = 5x - 3$$ units and $$GM = x + 2$$ units, then what is the length of AG?
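A worked solution (my own, not from the problem page): the medians of a triangle meet at the centroid, which divides each median in the ratio $$2:1$$ from the vertex, so $$AG = 2\,GM$$ and hence $$AM = AG + GM = 3\,GM$$:

$$5x - 3 = 3(x + 2) \implies 2x = 9 \implies x = \tfrac{9}{2}.$$

Then $$GM = x + 2 = \tfrac{13}{2}$$ and $$AG = 2\,GM = 13$$ units.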
## Wednesday, May 26, 2010

### BloGTK: Blogging on the Train

In this blog post I will explain how this post was written while riding on the train. I commute a lot. Every working day I ride the train. A single trip takes one hour. This is a nice amount of time to get things done: reading a book, practicing a code kata or pondering the big questions in life. Now I can add blogging to this list. Seeing that I spend a lot of time on a train, it seemed a nice way to create a greater supply of posts. Although there is a promise to introduce Internet on trains by the end of the year, I have no Internet connection while I am on the train. So it is not possible to connect to the Blogger site and manage my posts. So I started a search for a blogging client. The search was over quickly: I found BloGTK in the repository. Unfortunately that version uses a package which is not available in Ubuntu 10.04 - Lucid Lynx. Luckily, building from source was a breeze. So this is the first of many posts written while riding the train.

## Friday, May 21, 2010

### Estimating Collisions in URL Shortening

In this blog post I estimate the time before the first collision for URL shorteners such as bit.ly occurs.

### Increasing tweet density

In this post I will summarize the various ways in which I will increase the number of tweets over time. In a previous blog post, I outlined reasons to start using Twitter. Although the reasons I stated there are still valid, they do not by themselves help you tweet regularly. Since I started twittering, the density of my tweets has fluctuated. In this blog post I will outline various ways to produce a steadier stream of tweets. It will be a reminder for myself, if I ever find myself in calmer tweet weather.

List of regular tweet opportunities:

• Tweet what you are reading.
• Tweet what you are pondering.
• Tweet your experiences as a commuter.

Although the list isn't very long, it is a start for a steadier Twitter stream. Let's find out how it will hold up against time.

## Tuesday, May 11, 2010

### Pitfalls of Reflection

In this post I will discuss a pitfall of the use of reflection in Java, specifically how reflection can obscure and even change the semantics of a piece of code. What is wrong with the following example of the JavaBean naming convention?

public Boolean isCorrect() { /* implementation not shown. */ }

Did you spot the capital B on the Boolean return type? According to the JavaBean naming convention, the property does not fall under the special rules for booleans and should be named accordingly.

public Boolean getCorrect() { /* implementation not shown. */ }

Although there is a slight syntactic difference, the apparent semantic difference is non-existent. According to William Shakespeare:

A rose by any other name would smell as sweet

Enter reflection. By using reflection it is possible to perform hugely different behaviour depending on the name of a method. The following whimsical example is a clear demonstration of this fact.

Class aClass = ReflectedClass.class;
Method method;
try {
    method = aClass.getDeclaredMethod("isCorrect");
    protect(Planet.EARTH);
} catch (NoSuchMethodException e) {
    destroy(Planet.EARTH);
}

So one of the pitfalls of reflection is the hidden semantics associated with code. Stated in other words: the influence of code cannot be inferred from the syntactic definition of that code.
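For comparison, here is the same pitfall sketched in Python (my own illustration, not from the original post): getattr plays the role of getDeclaredMethod, and behavior again hinges on a method's name rather than on anything its type or body says.

```python
class ReflectedClass:
    def is_correct(self):          # rename this to get_correct and
        return True                # watch the branch below flip

def act(obj):
    # Reflection: look the method up by name at runtime.
    if getattr(obj, "is_correct", None) is not None:
        return "protect Earth"
    return "destroy Earth"

print(act(ReflectedClass()))       # "protect Earth" -- purely name-dependent
```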
# 246 imperial pints in centiliters

## Conversion

246 imperial pints is equivalent to 13979.22675 centiliters.[1]

## Conversion formula

How to convert 246 imperial pints to centiliters? We know (by definition) that: $1\ \mathrm{imperial\ pint}\approx 56.826125\ \mathrm{centiliters}$

We can set up a proportion to solve for the number of centiliters:

$\frac{1\ \mathrm{imperial\ pint}}{246\ \mathrm{imperial\ pints}}\approx\frac{56.826125\ \mathrm{centiliters}}{x\ \mathrm{centiliters}}$

Now, we cross-multiply to solve for our unknown $x$:

$x\ \mathrm{centiliters}\approx \frac{246\ \mathrm{imperial\ pints}}{1\ \mathrm{imperial\ pint}}\times 56.826125\ \mathrm{centiliters}\to x\approx 13979.22675\ \mathrm{centiliters}$

Conclusion: $246\ \mathrm{imperial\ pints}\approx 13979.22675\ \mathrm{centiliters}$

## Conversion in the opposite direction

The inverse of the conversion factor is that 1 centiliter is equal to 7.15347148940123e-05 times 246 imperial pints. It can also be expressed as: 246 imperial pints is equal to $\frac{1}{7.15347148940123\times 10^{-5}}$ centiliters.

## Approximation

An approximate numerical result would be: two hundred and forty-six imperial pints is about thirteen thousand, nine hundred and seventy-nine point two two centiliters, or alternatively, a centiliter is about 0.0000715 times two hundred and forty-six imperial pints.

## Footnotes

[1] The precision is 15 significant digits (fourteen digits to the right of the decimal point). Results may contain small errors due to the use of floating point arithmetic.
# Plain stopping time and conditional complexities revisited

2 ESCAPE - Systèmes complexes, automates et pavages
LIRMM - Laboratoire d'Informatique de Robotique et de Microélectronique de Montpellier

Abstract: In this paper we analyze the notion of "stopping time complexity", informally defined as the amount of information needed to specify when to stop while reading an infinite sequence. This notion was introduced by Vovk and Pavlovic (2016). It turns out that the plain stopping time complexity of a binary string $x$ can be equivalently defined as (a) the minimal plain complexity of a Turing machine that stops after reading $x$ on a one-directional input tape; (b) the minimal plain complexity of an algorithm that enumerates a prefix-free set containing $x$; (c) the conditional complexity $C(x|x*)$ where $x$ in the condition is understood as a prefix of an infinite binary sequence while the first $x$ is understood as a terminated binary string; (d) the minimal upper semicomputable function $K$ such that each binary sequence has at most $2^n$ prefixes $z$ such that $K(z)<n$.

Document type: Other publications

https://hal-lirmm.ccsd.cnrs.fr/lirmm-01803546
Contributor: Alexander Shen
Submitted on: Wednesday, May 30, 2018 - 2:55:43 PM
Last modification on: Friday, May 21, 2021 - 8:22:02 PM

### Identifiers

• HAL Id: lirmm-01803546, version 1
• ARXIV: 1708.08100

### Citation

Mikhail Andreev, Gleb Posobin, Alexander Shen. Plain stopping time and conditional complexities revisited. 2017. ⟨lirmm-01803546⟩