axiom-developer [Top][All Lists] ## [Axiom-developer] Re: MAC OSX 10.4 and GCL From: root Subject: [Axiom-developer] Re: MAC OSX 10.4 and GCL Date: Wed, 14 Dec 2005 14:41:53 -0500 attached are the patches. essentially i had to forcefully reorder the include files because the mac insists on including "usr/include/sys" before "usr/include" and i can't seem to override it. plus one of the graphics routines was just plain wrong so i rewrote it. it's been a while since i've hacked X11 though so there may be bugs. caveat: i haven't yet build axiom on gcl so while these may compile they may not run. you should be able to save this mail file and then do patch <mailfile once you apply the patches do: export PATH=/sw/bin:/sw/sbin:/usr/local/bin:$PATH export LIBRARY_PATH=/sw/lib export C_INCLUDE_PATH=/sw/include export CPPFLAGS="-no-cpp-precomp" export AXIOM=pwd/mnt/MACOSX (be sure to use MACOS) make AWK=awk (OSX 10.4 doesn't have nawk/gawk) =================================================================== patch 1: Makefile.pamphlet.patch =================================================================== --- Makefile.pamphlet 2005-10-30 21:05:30.000000000 -0500 +++ Makefile.pamphlet 2005-12-07 21:50:50.000000000 -0500 @@ -698,7 +698,8 @@ #GCLVERSION=gcl-2.6.5 #GCLVERSION=gcl-2.6.6 #GCLVERSION=gcl-2.6.7pre -GCLVERSION=gcl-2.6.7 +#GCLVERSION=gcl-2.6.7 +GCLVERSION=gcl-2.6.6 @ \subsubsection{The [[GCLOPTS]] configure variable} @@ -715,6 +716,12 @@ GCLOPTS="--enable-vssize=65536*2 --enable-locbfd --disable-dynsysbfd \ --disable-statsysbfd --enable-maxpage=256*1024" @ +For the MACOSX port we need the following options +<<GCLOPTS-CUSTRELOC>>= +GCLOPTS="--enable-vssize=65536*2 --enable-maxpage=256*1024 --disable-locbfd \ + --disable-statsysbfd --enable-custreloc --disable-tkconfig \ + --enable-machine=pwerpc-macosx" +@ \subsection{Makefile.axposf1v3} <<Makefile.axposf1v3>>= # System dependent Makefile for the AXP/OSF platform @@ -1872,7 +1879,7 @@ PLF=MACOSXplatform # C compiler flags CCF="-O2 -fno-strength-reduce -Wall -D_GNU_SOURCE -D${PLF} \ - -I/usr/X11/include -I/usr/include/sys" + -I/usr/X11/include -I/usr/include -I/usr/include/sys" LDF= -L/usr/X11R6/lib # C compiler to use @@ -1888,7 +1895,7 @@ DAASE=${SRC}/share # where the libXpm.a library lives XLIB=/usr/X11R6/lib -<<GCLOPTS>> +<<GCLOPTS-CUSTRELOC>> <<SRCDIRS>> PATCH=patch =================================================================== patch 2: Makefile.patch =================================================================== --- Makefile 2005-10-30 21:05:30.000000000 -0500 +++ Makefile 2005-12-07 21:51:03.000000000 -0500 @@ -1,4 +1,4 @@ -VERSION="Axiom 3.9 (September 2005)" +VERSION="Axiom 3.10 (November 2005)" SPD=$(shell pwd) SYS=$(notdir$(AXIOM)) SPAD=${SPD}/mnt/${SYS} @@ -13,7 +13,8 @@ #GCLVERSION=gcl-2.6.5 #GCLVERSION=gcl-2.6.6 #GCLVERSION=gcl-2.6.7pre -GCLVERSION=gcl-2.6.7 +#GCLVERSION=gcl-2.6.7 +GCLVERSION=gcl-2.6.6 AWK=gawk GCLDIR=${LSP}/${GCLVERSION} SRC=\${SPD}/src =================================================================== patch 3: src/lib/bsdsignal.c.pamphlet.patch =================================================================== --- src/lib/bsdsignal.c.pamphlet 2005-10-30 20:56:15.000000000 -0500 +++ src/lib/bsdsignal.c.pamphlet 2005-12-04 22:52:53.000000000 -0500 @@ -8,56 +8,223 @@ \end{abstract} \eject \tableofcontents -\eject +\newpage +\section{Executive Overview} +\section{Signals} +The system defines a set of signals that may be delivered to a process. 
Signal +delivery resembles the occurrence of a hardware interrupt: the signal is +normally blocked from further occurrence, the current process context is saved, +and a new one is built. A process may specify a {\sl handler} to which a signal +is delivered, or specify that a signal is to be {\sl ignored}. A process may +also specify that a default action is to be taken by the system when a signal +occurs. A signal may also be {\sl blocked}, in which case its delivery is +postponed until it is {\sl unblocked}. The action to be taken on delivery is +determined at the time of delivery. Normally, signal handlers execute on the +current stack of the process. This may be changed, on a per-handler basis, so +that signals are taken on a special {\sl signal stack}. + +Signal routines normally execute with the signal that caused their invocation +{\sl blocked}, but other signals may yet occur. A global {\sl signal mask} +defines the set of signals currently blocked from delivery to a process. +The signal mask for a process is initialized from that of its parent +(normally empty). It may be changed with a {\bf sigprocmask(2)} call, or +when a signal is delivered to the process. + +When a signal condition arises for a process, the signal is added to a set of +signals pending for the process. If the signal is not currently {\sl blocked} +by the process then it is delivered to the process. Signals may be delivered +any time a process enters the operating system (e.g., during a system call, +page fault or trap, or clock interrupt). If muliple signals are ready to be +delivered at the same time, any signals that could be caused by traps are +delivered first. Additional signals may be processed at the same time, with +each appearing to interrupt the handlers for the previous signals before +their first instructions. The set of pending signals is retuned by the +{\bf sigpending(2)} system call. When a caught signal is delivered, the current +state of the process is saved, a new signal mask is calculated (as described +below), and the signal handler is invoked. The call to the handler is arranged +so that if the signal handling routine returns normally the process will resume +execution in the context from before the signal's delivery. If the process +wishes to resume in a different context, then it must arrange to restore +the previous context itself. + +When a signal is delivered to a proces a new signal mask is installed for the +duration of the process's signal handler (or until a {\bf sigprocmask(2)} +system call is made). This mask is formed by taking the union of the current +signal mask set, the signal to be delivered, and the signal mask associated +with the handler to be invoked. + +The {\bf sigaction()} system call assigns an action for a signal specified by +{\sl sig}. If {\sl act} is non-zero, it specifies an action (SIG\_DFL, SIG\_IGN, +or a handler routine) and mask to be used when delivering the specified signal. +If {\sl oact} is non-zero, the previous handling information for the signal is +returned to the user. + +Once a signal handler is installed, it normally remains installed until another +{\bf sigaction()} system call is made, or an {\sl execve(2)} is performed. A +signal-specific default action may be reset by setting {\sl sa\_handler} to +SIG\_DFL. The defaults are process termination, possibly with core dump; +no action; stopping the process; or continuing the process. See the signal +list below for each signal's default action. 
If {\sl sa\_handler} is SIG\_DFL, +the default action for the signal is to discard the signal, and if a signal +is pending, the pending signal is discarded even if the signal is masked. If +{\sl sa\_handler} is set to SIG\_IGN current and pending instances of the signal + +Options may be specified by setting {\sl sa\_flags}. The meaning of the various +bits is as follows: +\begin{tabular}{ll} +SA\_NOCLDSTOP & If this bit is set when installing a catching function for\\ + & the SIGCHLD signal, the SIGCHLD signal will be generated only\\ + & when a child process exits, not when a child process stops.\\ +SA\_NOCLDWAIT & If this bit is set when calling {\sl sigaction()} for the\\ + & SIGCHLD signal, the system will not create zombie processes\\ + & when children of the calling process exit. If the calling\\ + & process subsequently issues a {\wf wait()} (or equivalent),\\ + & it blocks until all of the calling process's child processes\\ + & terminate, and then returns a value of -1 with errno set to\\ + & ECHILD.\\ +SA\_ONSTACK & If this bit is set, the system will deliver the signal to\\ + & the process on a {\sl signal stack}, specified with\\ + & {\bf sigaltstack(2)}.\\ +SA\_NODEFER & If this bit is set, further occurrences of the delivered\\ + & signal are not masked during the execution of the handler.\\ +SA\_RESETHAND & If this bit is set, the handler is reset back to SIG\_DFL\\ + & at the moment the signal is delivered.\\ +SA\_RESTART & See the paragraph below\\ +SA\_SIGINFO & If this bit is set, the handler function is assumed to be\\ + & pointed to by the sa\_sigaction member of struct sigaction\\ + & and should match the prototype shown above or as below in\\ + & EXAMPLES. This bit should not be set when assigning SIG\_DFL\\ + & or SIG\_IGN +\end{tabular} + +If a signal is caught during the system calls listed below, the call may be +forced to terminate with the error EINTR, the call may return with a data +transfer shorter than requested, or the call may be restarted. Restart of +pending calls is requested by setting the SA\_RESTART bit in {\sl sa\_flags}. +The affected system calls include {\bf open(2)}, {\bf read(2)}, {\bf write(2)}, +{\bf sendto(2)}, {\bf recvfrom(2)}, {\bf sendmsg(2)} and {\bf recvmsg(2)} +on a communications channel or a slow device (such as a terminal, but not a +regular file) and during a {\bf wait(2)} or {\bf ioctl(2)}. However, calls +that have already committed are not restarted, but instead return a partial +success (for example, a short read count). + +After a {\bf fork(2)} or {\bf vfork(2)} all signals, the signal mask, the +signal stack, and the restart/interrupt flags are inherited by the child. + +The {\bf execve(2)} system call reinstates the default action for all signals +which were caught and resets all signals to be caught on the user stack. +Ignored signals remain ignored; the signal mask remains the same; signals +that restart pending system calls continue to do so. 
+ +The following is a list of all signals with names as in the include file +{\sl <signal.h>}: + +\begin{tabular}{lll} +{\bf NAME} & {\bf Default Action} & Description\\ +SIGHUP & terminate process & terminal line hangup\\ +SIGINT & terminate process & interrupt program\\ +SIGQUIT & create core image & quit program\\ +SIGILL & create core image & illegal instruction\\ +SIGTRAP & create core image & trace trap\\ +SIGABRT & create core image & {\bf abort(3)} call (formerly SIGIOT)\\ +SIGEMT & create core image & emulate instruction executed\\ +SIGFPE & create core image & floating-point exception\\ +SIGKILL & terminate process & kill program\\ +SIGBUS & create core image & bus error\\ +SIGSEGV & create core image & segmentation violation\\ +SIGSYS & create core image & non-existent system call invoked\\ +SIGPIPE & terminate process & write on a pipe with no reader\\ +SIGALRM & terminate process & real-time timer expired\\ +SIGTERM & terminate process & software termination signal\\ +SIGURG & discard signal & urgent condition present on socket\\ +SIGSTOP & stop process & stop (cannot be caught or ignored)\\ +SIGSTP & stop process & stop signal generated from keyboard\\ +SIGCONT & discard signal & continue after stop\\ +SIGCHLD & discard signal & child status has changed\\ +SIGTTIN & stop process & background read attempted from \\ + & & control terminal\\ +SIGTTOU & stop process & background write attempted from\\ + & & control terminal\\ +SIGIO & discard signal & I/O is possible on a descriptor (\bf fcntl(2)}\\ +SIGXCPU & terminate process & cpu time limit exceeded {\bf setrlimit(2)}\\ +SIGXFSZ & terminate process & file size limit exceeded {\bf setrlimit(2)}\\ +SIGVTALRM & terminate process & virtual time alarm {\bf setitimer(2)}\\ +SIGPROF & terminate process & profiling timer alarm {\bf setitimer(2)}\\ +SIGWINCH & discard signal & Window size change\\ +SIGINFO & discard signal & status request from keyboard\\ +SIGUSR1 & terminate process & User defined signal 1\\ +SIGUSR2 & terminate process & User defined signal 2 +\end{tabular} + +The {\sl sigaction()} function returns the value 0 if successful; otherwise +the value -1 is returned and the global variable {\sl errno} is set to indicate +the error. + +Signal handlers should have either the ANSI C prototype: +\begin{verbatim} + void handler(int); +\end{verbatim} +or the POSIX SA\_SIGINFO prototype: +\begin{verbatim} + void handler(int, siginfo\_t *info, ucontext\_t *uap); +\end{verbatim} + +The handler function should match the SA\_SIGINFO prototype if the SA\_SIGINFO +bit is set in flags. It then should be pointed to by the sa\_sigaction member +of struct sigaction. Note that you should not assign SIG\_DFL or SIG\_IGN this way. + +If the SA\_SIGINFO flag is not set, the handler function should match either +the ANSI C or traditional BSD prototype and be pointed to by the sa\_handler +member of struct sigaction. In practice, FreeBSD always sends the three +arguments of the latter and since the ANSI C prototype is a subset, both +will work. The sa\_handler member declaration in FreeBSD include files is +that of ANSI C (as required by POSIX), so a function pointer of a BSD-style +function needs to be casted to compile without warning. The traditional BSD +style is not portable and since its capabilities are a full subset of a +SA\_SIGNFO handler its use is deprecated. + +The {\sl sig} argument is the signal number, one of the SIG\ldots values from +{\sl <signal.h>}. 
+ +The {\sl code} argument of the BSD-style handler and the si\_code member of the +info argument to a SA\_SIGINFO handler contain a numeric code explaining the +cause of the signal, usually on of the SI\_\ldots values from {\sl <sys/signal.h>} +or codes specific to a signal, i.e. one of the FPE\_\ldots values for SIGFPE. + +The {\sl uap} argument to a POSIX SA_SIGINFO handler points to an instance of +ucontext\_t. + +The {\bf sigaction()} system call will fail and no new signal handler will be +installed if one of the following occurs: +\begin{tabular}{ll} +[EFAULT] & Either {\sl act} or {\sl oact} points to memory that is not a\\ + & valid part of the process address space\\ +[EINVAL] & The {\sl sig} argument is not a valid signal number\\ +[EINVAL] & An attempt is made to ignore or supply a handler for SIGKILL\\ + & or SIGSTOP +\end{tabular} \section{MAC OSX and BSD platform change} -We needed to change [[SIGCLD]] to [[SIGCHLD]] for the [[MAC OSX]] platform -and we need to create a new platform variable. This change is made to -propogate that platform variable. -<<mac osx platform change>>= -#if defined(LINUXplatform) || defined (ALPHAplatform)|| defined(RIOSplatform) || defined(SUN4OS5platform) ||defined(SGIplatform) ||defined(HP10platform) || defined(MACOSXplatform) || defined(BSDplatform) -@ -/* -Copyright (c) 1991-2002, The Numerical ALgorithms Group Ltd. - -Redistribution and use in source and binary forms, with or without -modification, are permitted provided that the following conditions are -met: - - - Redistributions of source code must retain the above copyright - notice, this list of conditions and the following disclaimer. - - - Redistributions in binary form must reproduce the above copyright - notice, this list of conditions and the following disclaimer in - the documentation and/or other materials provided with the - distribution. - - - Neither the name of The Numerical ALgorithms Group Ltd. nor the - names of its contributors may be used to endorse or promote products - derived from this software without specific prior written permission. - -THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS -IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED -TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A -PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER -OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, -EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, -PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR -PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF -LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING -NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS -SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. -*/ -@ <<*>>= #include "useproto.h" #include "bsdsignal.h" -#include <signal.h> +@ +The MACOSX platform is broken because no matter what you do it seems to +include files from [[/usr/include/sys]] ahead of [[/usr/include]]. On linux +systems these files include themselves which causes an infinite regression +of includes that fails. GCC gracefully steps over that problem but the +build fails anyway. On MACOSX the [[/usr/include/sys]] versions +of files are badly broken with respect to the [[/usr/include]] versions. 
+<<*>>= +#if defined(MACOSXplatform) +#include "/usr/include/signal.h" +#else +#include <signal.h> +#endif + #include "bsdsignal.H1" @@ -76,7 +243,12 @@ struct sigaction in,out; in.sa_handler = action; /* handler is reinstalled - calls are restarted if restartSystemCall */ -<<mac osx platform change>> +@ +We needed to change [[SIGCLD]] to [[SIGCHLD]] for the [[MAC OSX]] platform +and we need to create a new platform variable. This change is made to +propogate that platform variable. +<<*>>= +#if defined(LINUXplatform) || defined (ALPHAplatform)|| defined(RIOSplatform) || defined(SUN4OS5platform) ||defined(SGIplatform) ||defined(HP10platform) || defined(MACOSXplatform) || defined(BSDplatform) if(restartSystemCall) in.sa_flags = SA_RESTART; else in.sa_flags = 0; #elif defined(SUNplatform) @@ -98,7 +270,42 @@ @ -\eject +/* +Copyright (c) 1991-2002, The Numerical ALgorithms Group Ltd. + +Redistribution and use in source and binary forms, with or without +modification, are permitted provided that the following conditions are +met: + + - Redistributions of source code must retain the above copyright + notice, this list of conditions and the following disclaimer. + + - Redistributions in binary form must reproduce the above copyright + notice, this list of conditions and the following disclaimer in + the documentation and/or other materials provided with the + distribution. + + - Neither the name of The Numerical ALgorithms Group Ltd. nor the + names of its contributors may be used to endorse or promote products + derived from this software without specific prior written permission. + +THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS +IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED +TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A +PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER +OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, +EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, +PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR +PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF +LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING +NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS +SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. +*/ +@ +\newpage \begin{thebibliography}{99} \bibitem{1} nothing \end{thebibliography} =================================================================== patch 4: src/lib/cfuns-c.c.pamphlet.patch =================================================================== --- src/lib/cfuns-c.c.pamphlet 2005-10-30 20:56:15.000000000 -0500 +++ src/lib/cfuns-c.c.pamphlet 2005-12-04 23:10:59.000000000 -0500 @@ -49,7 +49,19 @@ #include "useproto.h" #include <stdio.h> +@ +The MACOSX platform is broken because no matter what you do it seems to +include files from [[/usr/include/sys]] ahead of [[/usr/include]]. On linux +systems these files include themselves which causes an infinite regression +of includes that fails. GCC gracefully steps over that problem but the +build fails anyway. On MACOSX the [[/usr/include/sys]] versions +of files are badly broken with respect to the [[/usr/include]] versions. 
+<<*>>= +#if defined(MACOSXplatform) +#include "/usr/include/unistd.h" +#else #include <unistd.h> +#endif #include <stdlib.h> #include <string.h> #if !defined(BSDplatform) =================================================================== patch 5: src/lib/edin.c.pamphlet.patch =================================================================== --- src/lib/edin.c.pamphlet 2005-10-30 20:46:40.000000000 -0500 +++ src/lib/edin.c.pamphlet 2005-12-04 21:19:50.000000000 -0500 @@ -51,7 +51,19 @@ #include "useproto.h" #include <stdlib.h> +@ +The MACOSX platform is broken because no matter what you do it seems to +include files from [[/usr/include/sys]] ahead of [[/usr/include]]. On linux +systems these files include themselves which causes an infinite regression +of includes that fails. GCC gracefully steps over that problem but the +build fails anyway. On MACOSX the [[/usr/include/sys]] versions +of files are badly broken with respect to the [[/usr/include]] versions. +<<*>>= +#if defined(MACOSXplatform) +#include "/usr/include/unistd.h" +#else #include <unistd.h> +#endif #include <string.h> #include <stdio.h> #include <sys/types.h> =================================================================== patch 6: src/lib/fnct_key.c.pamphlet.patch =================================================================== --- src/lib/fnct_key.c.pamphlet 2005-10-30 20:56:15.000000000 -0500 +++ src/lib/fnct_key.c.pamphlet 2005-12-04 21:19:59.000000000 -0500 @@ -60,8 +60,19 @@ #include "useproto.h" - +@ +The MACOSX platform is broken because no matter what you do it seems to +include files from [[/usr/include/sys]] ahead of [[/usr/include]]. On linux +systems these files include themselves which causes an infinite regression +of includes that fails. GCC gracefully steps over that problem but the +build fails anyway. On MACOSX the [[/usr/include/sys]] versions +of files are badly broken with respect to the [[/usr/include]] versions. +<<*>>= +#if defined(MACOSXplatform) +#include "/usr/include/unistd.h" +#else #include <unistd.h> +#endif #include <stdlib.h> #include <stdio.h> #include <string.h> =================================================================== patch 7: src/lib/sockio-c.c.pamphlet.patch =================================================================== --- src/lib/sockio-c.c.pamphlet 2005-10-30 20:56:03.000000000 -0500 +++ src/lib/sockio-c.c.pamphlet 2005-12-04 21:24:27.000000000 -0500 @@ -53,12 +53,28 @@ #include <stdio.h> #include <stdlib.h> +@ +The MACOSX platform is broken because no matter what you do it seems to +include files from [[/usr/include/sys]] ahead of [[/usr/include]]. On linux +systems these files include themselves which causes an infinite regression +of includes that fails. GCC gracefully steps over that problem but the +build fails anyway. On MACOSX the [[/usr/include/sys]] versions +of files are badly broken with respect to the [[/usr/include]] versions. 
+<<*>>= +#if defined(MACOSXplatform) +#include "/usr/include/unistd.h" +#else #include <unistd.h> +#endif #include <sys/time.h> #include <sys/stat.h> #include <errno.h> #include <string.h> +#if defined(MACOSXplatform) +#include "/usr/include/signal.h" +#else #include <signal.h> +#endif #if defined(SGIplatform) #include <bstring.h> =================================================================== =================================================================== @@ -89,7 +89,10 @@ RGB rgb; float h, f, p, q, t; int i; - + + rgb.r = 0.0; + rgb.g = 0.0; + rgb.b = 0.0; if (hsv.s == 0.0) { rgb.r = rgb.g = rgb.b = hsv.v; return (rgb); @@ -562,7 +565,29 @@ #else AllocCells(Display *dsply, Colormap colorMap, int smoothHue) #endif - +@ +This routine used to have the following code block. However this +code block makes no sense. To see why you need to know that an +XColor object looks like: +\begin{verbatim} +/* + * Data structure used by color operations + */ +typedef struct { + unsigned long pixel; + unsigned short red, green, blue; + char flags; /* do_red, do_green, do_blue */ +} XColor; +\end{verbatim} +This routine used to set the values of all of the elements of the XColor struct +except [[pixel]]. This is usually done to specify a desired color in RGB +values. To try to get a pixel value close to that color you call XAllocColor. +This routine sets up the desired color values but it never asks for the pixel +(which is really an index into the colormap of the nearest color) value that +corresponds to the desired color. In fact it uses pixel without ever giving +it a value. I've rewritten that code. +\begin{verbatim} { int i, count; @@ -578,9 +603,9 @@ hls.l = lightness; hls.s = saturation; rgb = HLStoRGB(hls); - xcolor.red = rgb.r *((1<<16)-1); - xcolor.green = rgb.g *((1<<16)-1); - xcolor.blue = rgb.b *((1<<16)-1); + xcolor.red = rgb.r *((1@<<16)-1); + xcolor.green = rgb.g *((1@<<16)-1); + xcolor.blue = rgb.b *((1@<<16)-1); xcolor.flags = DoRed | DoGreen | DoBlue; /* fprintf(stderr,"%f\t%f\t%f\n",rgb.r,rgb.g,rgb.b); @@ -597,6 +622,54 @@ return (0); } } +\end{verbatim} +<<*>>= +{ + int i, count; + float lightness; + RGB rgb; + XColor xcolor; + HLS hls; + + count = 0; + for (i = 0; i < (smoothConst + 1); i++) { + lightness = (float) (i) / (float) (smoothConst); + hls.h = (float) smoothHue; + hls.l = lightness; + hls.s = saturation; + rgb = HLStoRGB(hls); + xcolor.red = rgb.r *((1<<16)-1); + xcolor.green = rgb.g *((1<<16)-1); + xcolor.blue = rgb.b *((1<<16)-1); + xcolor.flags = DoRed | DoGreen | DoBlue; + /* + fprintf(stderr,"%f\t%f\t%f\n",rgb.r,rgb.g,rgb.b); + fprintf(stderr,"%d\t%d\t%d\n",xcolor.red,xcolor.green,xcolor.blue); + */ +@ +Here I've modified the code to actually as for the pixel (colormap index) that +most closely matches our requested RGB values. 
+<<*>>= + if (XAllocColor(dsply, colorMap, &xcolor)) { + pixels[count] = xcolor.pixel; + count++; + } + } + /* count says how many succeeded */ + if (count != (smoothConst+1) ) { + /* we have failed to get all of them - free the ones we got */ + FreePixels(dsply,colorMap,count); + return (0); + } + if (XAllocColorCells(dsply, colorMap, False, + plane_masks, 0, pixels, smoothConst + 1)) { + return (smoothConst + 1); + } + else { + return (0); + } +} @ \eject \begin{thebibliography}{99} =================================================================== patch 9: src/lib/util.c.pamphlet.patch =================================================================== --- src/lib/util.c.pamphlet 2005-10-30 20:46:40.000000000 -0500 +++ src/lib/util.c.pamphlet 2005-12-04 22:53:29.000000000 -0500 @@ -8,49 +8,26 @@ \end{abstract} \eject \tableofcontents -\eject -/* -Copyright (c) 1991-2002, The Numerical ALgorithms Group Ltd. - -Redistribution and use in source and binary forms, with or without -modification, are permitted provided that the following conditions are -met: - - - Redistributions of source code must retain the above copyright - notice, this list of conditions and the following disclaimer. - - - Redistributions in binary form must reproduce the above copyright - notice, this list of conditions and the following disclaimer in - the documentation and/or other materials provided with the - distribution. - - - Neither the name of The Numerical ALgorithms Group Ltd. nor the - names of its contributors may be used to endorse or promote products - derived from this software without specific prior written permission. - -THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS -IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED -TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A -PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER -OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, -EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, -PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR -PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF -LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING -NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS -SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. -*/ -@ +\newpage <<*>>= #include "useproto.h" #include <stdlib.h> +@ +The MACOSX platform is broken because no matter what you do it seems to +include files from [[/usr/include/sys]] ahead of [[/usr/include]]. On linux +systems these files include themselves which causes an infinite regression +of includes that fails. GCC gracefully steps over that problem but the +build fails anyway. On MACOSX the [[/usr/include/sys]] versions +of files are badly broken with respect to the [[/usr/include]] versions. +<<*>>= +#if defined(MACOSXplatform) +#include "/usr/include/unistd.h" +#else #include <unistd.h> +#endif #include <sys/types.h> #include <stdio.h> #include <errno.h> @@ -206,7 +183,42 @@ return (size); } @ -\eject +/* +Copyright (c) 1991-2002, The Numerical ALgorithms Group Ltd. + +Redistribution and use in source and binary forms, with or without +modification, are permitted provided that the following conditions are +met: + + - Redistributions of source code must retain the above copyright + notice, this list of conditions and the following disclaimer. 
+ + - Redistributions in binary form must reproduce the above copyright + notice, this list of conditions and the following disclaimer in + the documentation and/or other materials provided with the + distribution. + + - Neither the name of The Numerical ALgorithms Group Ltd. nor the + names of its contributors may be used to endorse or promote products + derived from this software without specific prior written permission. + +THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS +IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED +TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A +PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER +OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, +EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, +PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR +PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF +LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING +NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS +SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. +*/ +@ +\newpage \begin{thebibliography}{99} \bibitem{1} nothing \end{thebibliography} =================================================================== patch 10: src/lib/wct.c.pamphlet.patch =================================================================== --- src/lib/wct.c.pamphlet 2005-10-30 20:56:15.000000000 -0500 +++ src/lib/wct.c.pamphlet 2005-12-04 23:08:44.000000000 -0500 @@ -8,42 +8,7 @@ \end{abstract} \eject \tableofcontents -\eject -/* -Copyright (c) 1991-2002, The Numerical ALgorithms Group Ltd. - -Redistribution and use in source and binary forms, with or without -modification, are permitted provided that the following conditions are -met: - - - Redistributions of source code must retain the above copyright - notice, this list of conditions and the following disclaimer. - - - Redistributions in binary form must reproduce the above copyright - notice, this list of conditions and the following disclaimer in - the documentation and/or other materials provided with the - distribution. - - - Neither the name of The Numerical ALgorithms Group Ltd. nor the - names of its contributors may be used to endorse or promote products - derived from this software without specific prior written permission. - -THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS -IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED -TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A -PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER -OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, -EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, -PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR -PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF -LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING -NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS -SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. -*/ -@ +\newpage <<*>>= /* @@ -59,15 +24,31 @@ #include <stdio.h> #include <stdlib.h> +@ +The MACOSX platform is broken because no matter what you do it seems to +include files from [[/usr/include/sys]] ahead of [[/usr/include]]. On linux +systems these files include themselves which causes an infinite regression +of includes that fails. 
GCC gracefully steps over that problem but the +build fails anyway. On MACOSX the [[/usr/include/sys]] versions +of files are badly broken with respect to the [[/usr/include]] versions. +<<*>>= +#if defined(MACOSXplatform) +#include "/usr/include/unistd.h" +#else #include <unistd.h> +#endif #include <string.h> #include <fcntl.h> +#if defined(MACOSXplatform) +#include "/usr/include/time.h" +#else #include <time.h> +#endif #include <ctype.h> #include <sys/types.h> #include <sys/stat.h> -/* #define PINFO *//* A floag for suprresing the printing of the file info */ +/* #define PINFO *//* A flag to suppress printing of the file info */ #define WCT /* A flag needed because ctype.h stole some * of my constants */ @@ -869,7 +850,42 @@ } @ -\eject +/* +Copyright (c) 1991-2002, The Numerical ALgorithms Group Ltd. + +Redistribution and use in source and binary forms, with or without +modification, are permitted provided that the following conditions are +met: + + - Redistributions of source code must retain the above copyright + notice, this list of conditions and the following disclaimer. + + - Redistributions in binary form must reproduce the above copyright + notice, this list of conditions and the following disclaimer in + the documentation and/or other materials provided with the + distribution. + + - Neither the name of The Numerical ALgorithms Group Ltd. nor the + names of its contributors may be used to endorse or promote products + derived from this software without specific prior written permission. + +THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS +IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED +TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A +PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER +OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, +EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, +PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR +PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF +LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING +NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS +SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. +*/ +@ +\newpage \begin{thebibliography}{99} \bibitem{1} nothing \end{thebibliography}
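The bsdsignal.c pamphlet text above walks through how sigaction() installs a handler and how the SA_RESTART flag decides whether interrupted system calls are restarted. The patches themselves only touch C sources; purely as an illustration of the same idea, here is a minimal Python sketch using the standard signal module. The handler name on_child is mine, not part of the patches, and signal.siginterrupt(sig, False) plays the role of setting SA_RESTART.

```python
import os
import signal

def on_child(signum, frame):
    # Reap any children that have exited so no zombies remain.
    while True:
        try:
            pid, status = os.waitpid(-1, os.WNOHANG)
        except ChildProcessError:
            break          # no children at all
        if pid == 0:
            break          # children exist but none have exited yet
        print(f"child {pid} exited with status {status}")

# Install the handler (the C code in the pamphlet does this with sigaction()).
signal.signal(signal.SIGCHLD, on_child)

# Ask that slow system calls interrupted by SIGCHLD be restarted,
# i.e. the behaviour SA_RESTART selects in the C version.
signal.siginterrupt(signal.SIGCHLD, False)
```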
## Fractions in business

Fractions in business is part of the NUMERACY SKILLS FOR BUSINESS: FREE COURSE. Accounting Trainees and Bookkeepers need to understand fractions.

We normally express money in its decimal form, e.g. £12.50 or $698,388.32, but sometimes we express money in its fraction form, e.g. three quarters (3/4) of a million pounds or half (1/2) a million US dollars.

A fraction represents a portion of something and is merely the relationship between two numbers, e.g. 1/4, 2/5, 12/8, etc. The bottom half of a fraction is called the denominator and the top half the numerator, i.e., in 3/4, 3 is the numerator and 4 is the denominator.

As Accounting Trainees you have to be comfortable with both fractions and decimals, and you often need to convert one into the other. To get the decimal form you divide the top figure by the bottom figure. Consider these fractions: 1/4, 2/5, 12/8; the corresponding decimal forms are 0.25, 0.4, 1.5.

Now you try – what is the decimal form of these fractions:

Question 1  5/6
Question 2  7/8
Question 3  3/4
Question 4  16/5

Calculate the sum of 2/3 plus 1/4. The first step is to represent each fraction as the ratio of a pair of numbers with the same denominator. For this example, we multiply the top and bottom of 2/3 by 4, and the top and bottom of 1/4 by 3. The fractions now look like 8/12 and 3/12 and both have the same denominator, which is 12. In this new form we just add the two numerators.

(2/3) + (1/4) = (8/12) + (3/12) = (8 + 3) / 12 = 11/12

Now you try – calculate the sum of these fractions:

Question 5  4/5 + 1/8
Question 6  1/3 + 3/7

## Multiplication of fractions in business

If we wish to multiply 3/4 by 2/9, then what we are trying to do is take 3/4 of 2/9, so we form the new fraction:

3/4 x 2/9 = (3 x 2) / (4 x 9) = 6/36, or 1/6 in its simplest form.

In general, we multiply two fractions by forming a new fraction where the new numerator is the result of multiplying together the two numerators, and the new denominator is the result of multiplying together the two denominators.

Now you try – what is the product of these fractions:

Question 7  1/3 x 1/4
Question 8  3/8 x 1/7
Question 9  1/4 x 5/7
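If you want to check your working, the conversions and the worked examples above can be reproduced with Python's built-in fractions module; a minimal sketch:

```python
from fractions import Fraction

# Fraction-to-decimal conversion: divide the top figure by the bottom figure.
print(float(Fraction(1, 4)))   # 0.25
print(float(Fraction(2, 5)))   # 0.4
print(float(Fraction(12, 8)))  # 1.5

# Addition over a common denominator: 2/3 + 1/4 = 8/12 + 3/12 = 11/12.
print(Fraction(2, 3) + Fraction(1, 4))   # 11/12

# Multiplication: numerators together, denominators together, then reduce:
# 3/4 x 2/9 = 6/36 = 1/6 in its simplest form.
print(Fraction(3, 4) * Fraction(2, 9))   # 1/6
```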
# American Institute of Mathematical Sciences

September 2009, 8(5): 1469-1492. doi: 10.3934/cpaa.2009.8.1469

## Spectral properties of general advection operators and weighted translation semigroups

1 Laboratoire de Mathématiques, CNRS UMR 6620, Université Blaise Pascal (Clermont-Ferrand 2), 63177 Aubière Cedex
2 Université de Franche–Comté, Laboratoire de Mathématiques, CNRS UMR 6623, 16, route de Gray, 25030 Besançon Cedex, France

Received July 2008 | Revised January 2009 | Published April 2009

We investigate the spectral properties of a class of weighted shift semigroups $(\mathcal{U}(t))_{t \geq 0}$ associated to abstract transport equations with a Lipschitz continuous vector field $\mathcal{F}$ and no-reentry boundary conditions. Generalizing the results of [25], we prove that the semigroup $(\mathcal{U}(t))_{t \geq 0}$ admits a canonical decomposition into three $C_0$-semigroups $(\mathcal{U}_1(t))_{t \geq 0}$, $(\mathcal{U}_2(t))_{t \geq 0}$ and $(\mathcal{U}_3(t))_{t \geq 0}$ with independent dynamics. A complete description of the spectra of the semigroups $(\mathcal{U}_i(t))_{t \geq 0}$ and their generators $\mathcal{T}_i$, $i=1,2$, is given. In particular, we prove that the spectrum of $\mathcal{T}_i$ is a left half-plane and that the Spectral Mapping Theorem holds: $\mathfrak{S}(\mathcal{U}_i(t))=\exp\{t\,\mathfrak{S}(\mathcal{T}_i)\}$, $i=1,2$. Moreover, the semigroup $(\mathcal{U}_3(t))_{t \geq 0}$ extends to a $C_0$-group and its spectral properties are investigated by means of abstract results from positive semigroup theory. The properties of the flow associated to $\mathcal{F}$ are particularly relevant here and we investigate separately the cases of periodic and aperiodic flows. In particular, we show that, for periodic flows, the Spectral Mapping Theorem fails in general but $(\mathcal{U}_3(t))_{t \geq 0}$ and its generator $\mathcal{T}_3$ satisfy the so-called Annular Hull Theorem. We illustrate our results with various examples taken from collisionless kinetic theory.

Citation: Bertrand Lods, Mustapha Mokhtar-Kharroubi, Mohammed Sbihi. Spectral properties of general advection operators and weighted translation semigroups. Communications on Pure and Applied Analysis, 2009, 8 (5) : 1469-1492. doi: 10.3934/cpaa.2009.8.1469
Navier-Stokes Problem

1. Jul 8, 2012 — iloc86

Hi everybody, I'm doing an analysis of a paper about a tool used in oil well drilling. They use the 3D Navier-Stokes equations and, by the lubrication approximation, they get this:

-(1/r) ∂p/∂θ + k (∂( 1/r ∂(rw)/∂r )/∂r ) = 0

It's necessary to get w, so I integrate, but I obtain an unreal result (I used Wolfram Mathematica). But in the paper they obtain

w = 1/2k ∂p/∂θ (ln r - 1/2) + C1 1/2 r + C2/r

Could somebody confirm this?

2. Jul 8, 2012 — iloc86

Sorry — I meant imaginary.

3. Jul 9, 2012 — JJacquelin

-(1/r) ∂p/∂θ + k (∂( 1/r ∂(rw)/∂r )/∂r ) = 0
∂( 1/r ∂(rw)/∂r )/∂r = (1/k) (∂p/∂θ) (1/r)
1/r ∂(rw)/∂r = (1/k) (∂p/∂θ) ln(r) + C1
∂(rw)/∂r = (1/k) (∂p/∂θ) r ln(r) + C1 r
(rw) = (1/k) (∂p/∂θ) [(r²/2) ln(r) - r²/4] + C1 r²/2 + C2
w = (1/2k) (∂p/∂θ) [r ln(r) - (r/2)] + C1 r/2 + C2/r

4. Jul 9, 2012 — jackmell

I have issues with this. In particular, you have:

$$\int \left(\frac{1}{k} \frac{\partial p}{\partial \theta} \frac{1}{r}\right)dr=\int \partial\left(\frac{1}{r}\frac{\partial}{\partial r}(rw)\right)$$

I do not see how you can integrate the left side with respect to r and get:

$$\int \left(\frac{1}{k} \frac{\partial p}{\partial \theta} \frac{1}{r}\right)dr=\frac{1}{k}\frac{\partial p}{\partial \theta} \ln(r)+c$$

You're assuming $\frac{\partial p}{\partial \theta}$ is not a function of r and I do not think you can assume this. Also, wouldn't the constant of integration be an arbitrary function of theta in this case?

Last edited: Jul 9, 2012

5. Jul 9, 2012 — iloc86

Hi everybody, thanks for the replies. JJacquelin, you obtain the same as the paper. And jackmell, in the paper they do assume p doesn't change in r, but as you say, the constants could depend on theta. What do you think?
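JJacquelin's result can be checked by substituting it back into the original equation. A minimal Python/sympy sketch, under the same assumption the paper makes (∂p/∂θ treated as a constant in r — the very point jackmell questions); the symbol names are mine:

```python
import sympy as sp

r, k = sp.symbols('r k', positive=True)
dpdtheta, C1, C2 = sp.symbols('dpdtheta C1 C2')  # dp/dtheta assumed independent of r

# Candidate solution from the thread:
# w = (1/2k)(dp/dtheta)[r ln r - r/2] + C1 r/2 + C2/r
w = dpdtheta/(2*k)*(r*sp.log(r) - r/2) + C1*r/2 + C2/r

# Plug it back into  -(1/r) dp/dtheta + k * d/dr( (1/r) * d(r*w)/dr ) = 0
residual = -(1/r)*dpdtheta + k*sp.diff((1/r)*sp.diff(r*w, r), r)
print(sp.simplify(residual))   # -> 0, so the proposed w satisfies the ODE
```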
TOC – IRMS

Thermal oxidation with unrivalled sensitivity
Aqueous matrices with a solids option
TOC, TC, TIC, NPOC, POC, TNb

Sercon have developed, in close collaboration with Thermalox, a hyphenated TOC-IRMS system. The Thermalox TOC uses a combustion technique, which is superior to the persulphate oxidation method in that water samples containing high humic material can be analysed. The TOC is connected to the Sercon HS2022 via a modified version of the Cryoprep, which has been optimised for TOC hyphenation. The system is capable of measuring small (sub-1 ml) sample volumes and analysing DOC concentrations from 0.5 to 10 ppm at high precision (typically 0.1‰ for 13CO2 determinations).
# Belt Pulley

Power transmission element with frictional belt wrapped around pulley circumference

• Library: Simscape / Driveline / Couplings & Drives

## Description

The Belt Pulley block represents a pulley wrapped in a flexible ideal, flat, or V-shaped belt. The ideal belt does not slip relative to the pulley surface. The pulley can optionally translate through port C, as is the case in a block and tackle system.

The model accounts for friction between the flexible belt and the pulley periphery. If the friction force is not sufficient to drive the load, the model allows slip. The relationship between the tensions in the tight and loose branches conforms to the Euler equation. The model accounts for centrifugal loading in the flexible belt, pulley inertia, and bearing friction.

The block allows you to select the relative belt direction of motion. The two belt ends can move in equal or opposite directions. The block model assumes noncompliance in the belt and no resistance to motion due to wrapping around the pulley.

The block equations model power transmission between the belt branches or to/from the pulley. The tight and loose branches use the same calculation. Without sufficient tension, the frictional force is not enough to transmit power between the pulley and belt.

The model is valid when both ends of the belt are in tension. An optional warning can display in the Simulink® Diagnostic Viewer when the leading belt end loses tension. When assembling a model, ensure that tension is maintained throughout the simulation. This can be done by adding mass to at least one of the belt ends or by adding a tensioner into your model. Use the Variable Viewer to ensure that any springs attached to the belt are in tension. For more details on building a tensioner, see Best Practices for Modeling Pulley Networks.

### Equations

If the relative velocity between the belt and pulley is positive or zero, that is ${V}_{rel}\ge 0$, the Belt Pulley block calculates friction force as

`${F}_{fr}={F}_{B}-{F}_{centrifugal}=\left({F}_{A}-{F}_{centrifugal}\right)*\mathrm{exp}\left(f*\theta \right).$`

If the relative velocity is negative, that is ${V}_{rel}<0$, the friction force is calculated as

`${F}_{fr}={F}_{A}-{F}_{centrifugal}=\left({F}_{B}-{F}_{centrifugal}\right)\ast \mathrm{exp}\left(f\ast \theta \right).$`

The relative velocity is:

`${V}_{rel}={V}_{A}-{\omega }_{S}\ast R-{V}_{C}$`

`${V}_{rel}=-{V}_{B}+{\omega }_{S}\ast R+{V}_{C}$`

If Belt type is set to either `V-belt` or `Flat belt` and Centrifugal force is set to `Model centrifugal force`, the centrifugal force is:

`${F}_{centrifugal}=\rho \ast {\left({V}_{B}-{V}_{C}\right)}^{2}$`

where:

• Vrel is the relative velocity between the belt and pulley periphery.
• VA is the branch A linear velocity.
• VB is the branch B linear velocity.
• VC is the pulley linear velocity at its center. If the pulley is not translating, this value is 0.
• ωS is the pulley angular velocity.
• R is the pulley radius.
• Fcentrifugal is the belt centrifugal force.
• ρ is the belt linear density.
• Ffr is the friction force between the pulley and the belt.
• FA is the force acting along branch A.
• FB is the force acting along branch B.
• f is the friction coefficient.
• θ is the contact wrap angle.

For a flat belt, specify the value of f directly in the block parameters dialog box. For a V-belt, the model calculates the value as

`$f\text{'}=\frac{f}{\mathrm{sin}\left(\frac{\varphi }{2}\right)},$`

where:

• f' is the effective friction coefficient for a V-belt.
• Φ is the V-belt sheave angle.

The idealization of the discontinuity at Vrel = 0 is both difficult for the solver to resolve and not physically accurate. To alleviate this issue, the friction coefficient is assumed to change its value as a function of the relative velocity such that

`$\mu =-f\ast \mathrm{tanh}\left(4\ast \frac{{V}_{rel}}{{V}_{thr}}\right),$`

where

• μ is the instantaneous value of the friction coefficient.
• f is the steady-state value of the friction coefficient.
• Vthr is the friction velocity threshold.

The friction velocity threshold controls the width of the region within which the friction coefficient changes its value from zero to a steady-state maximum. The friction velocity threshold specifies the velocity at which the hyperbolic tangent equals 0.999. The smaller the value, the steeper is the change of μ.

This friction force is calculated as

`${F}_{fr}={F}_{A}-{F}_{centrifugal}=\left({F}_{B}-{F}_{centrifugal}\right)\ast \mathrm{exp}\left(\mu \ast \theta \right).$`

The resulting torque delivered by the pulley is given as

`${T}_{S}=\left({F}_{A}+{F}_{B}\right)\ast R\ast \text{tanh}\left(4\frac{{V}_{\text{rel}}}{{V}_{\text{thr}}}\right)\ast \text{tanh}\left(\frac{{F}_{B}}{{F}_{\text{thr}}}\right)-{\omega }_{S}\ast b.$`

where:

• TS is the pulley torque.
• b is the bearing viscous damping.
• Fthr is the force threshold.

The resulting force exerted by the pulley center is:

`${F}_{C}=\left({F}_{A}+{F}_{B}\right)\ast \mathrm{sin}\left(\frac{\varphi }{2}\right).$`

## Assumptions and Limitations

• The model does not account for compliance along the length of the belt.
• Both belt ends maintain adequate tension throughout the simulation.
• The translation of the pulley center is assumed to be planar and travels along the bisect of the pulley wrap angle. The center velocity VC and force FC only account for the component along this line of motion.

## Ports

The sign convention is such that, when Belt direction is set to `Ends move in opposite direction`, a positive rotation in port S tends to give a negative translation for port A and a positive translation for port B.

### Conserving

Rotational conserving port associated with the pulley shaft.

Translational conserving port associated with belt end A.

Translational conserving port associated with belt end B.

Translational conserving port associated with pulley translational velocity. The pulley moves within the plane and along the bisect of the pulley wrap angle. When the relative velocity is positive and pulley translation is enabled, the pulley center moves.

#### Dependencies

To expose this port, set Pulley translation to `On`.

## Parameters

### Belt

Belt model:

• `Ideal - No slip` — Model an ideal belt, which does not slip relative to the pulley.
• `Flat belt` — Model a belt with a rectangular cross-section.
• `V-belt` — Model a belt with a V-shaped cross-section.

#### Dependencies

This parameter affects the visibility of related belt parameters and the Contact settings.

Sheave angle of the V-belt.

#### Dependencies

This parameter is visible only when Belt type is set to `V-belt`.

Number of V-belts. Noninteger values are rounded to the nearest integer. Increasing the number of belts increases the friction force, effective mass per unit length, and maximum allowable tension.

#### Dependencies

This parameter is visible only when Belt type is set to `V-belt`.

Option to include the effects of centrifugal force.
If included, centrifugal force saturates to approximately 90 percent of the value of the force on each belt end.

#### Dependencies

This parameter is visible only when Belt type is set to `Flat belt` or `V-belt`. If this parameter is set to `Model centrifugal force`, the Belt mass per unit length parameter is exposed.

Centrifugal force contribution in terms of linear density expressed as mass per unit length.

#### Dependencies

Selecting `Model centrifugal force` for the Centrifugal force parameter exposes this parameter.

Relative direction of translational motion of one belt end with respect to the other.

#### Dependencies

This parameter is visible only when Belt type is set to `Flat belt` or `V-belt`.

Tension threshold model. If `Specify maximum tension` is selected and the belt tension on either end of the belt meets or exceeds the value that you specify for Belt maximum tension, the simulation stops and generates an assertion error.

#### Dependencies

Selecting `Specify maximum tension` exposes the Belt maximum tension parameter.

Maximum allowable tension for each belt. When the tension on either end of the belt meets or exceeds this value, the simulation stops and generates an assertion error. The Belt maximum tension parameter is visible only when the Maximum tension parameter is set to `Specify maximum tension`.

Whether the block generates a warning when the tension at either end of the belt falls below zero.

### Pulley

Whether to model pulley linear motion. Setting this parameter to `On` exposes port C.

Radius of the pulley.

Viscous friction associated with the bearings that hold the axis of the pulley.

Rotational inertia model.

#### Dependencies

Selecting `Specify inertia and initial velocity` exposes the Pulley inertia and Pulley initial velocity parameters.

Rotational inertia of the pulley.

#### Dependencies

Selecting `Specify inertia and initial velocity` for the Inertia parameter exposes this parameter.

Initial rotational velocity of the pulley.

#### Dependencies

Selecting `Specify inertia and initial velocity` for the Inertia parameter exposes this parameter.

Pulley mass for inertia calculation.

#### Dependencies

Selecting `Specify inertia and initial velocity` for the Inertia parameter when Pulley translation is set to `On` exposes this parameter.

Initial translational velocity of the pulley.

#### Dependencies

Selecting `Specify inertia and initial velocity` for the Inertia parameter when Pulley translation is set to `On` exposes this parameter.

### Contact

Contact settings are only visible if the Belt type parameter in the Belt settings is set to `Flat belt` or `V-belt`.

Coulomb friction coefficient between the belt and the pulley surface.

Radial contact angle between the belt and the pulley.

Relative velocity required for peak kinetic friction in the contact. The friction velocity threshold improves the numerical stability of the simulation by ensuring that the force is continuous when the direction of the velocity changes.

## See Also

### Topics

Introduced in R2012a
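The tension-ratio and smoothed-friction relationships in the Equations section above can be checked numerically. Below is a minimal Python sketch of the Euler (capstan) ratio and the tanh-smoothed friction coefficient; the friction coefficient, wrap angle, and sheave angle used in the example are illustrative values of my own choosing, not defaults of the Simscape block.

```python
import math

def smoothed_mu(f, v_rel, v_thr):
    """Friction coefficient smoothed around v_rel = 0, as in the block
    equations: mu = -f * tanh(4 * v_rel / v_thr)."""
    return -f * math.tanh(4.0 * v_rel / v_thr)

def branch_force_ratio(f, wrap_angle, sheave_angle=None):
    """Euler (capstan) tension ratio exp(f' * theta); for a V-belt the
    effective friction coefficient is f / sin(sheave_angle / 2)."""
    f_eff = f if sheave_angle is None else f / math.sin(sheave_angle / 2.0)
    return math.exp(f_eff * wrap_angle)

# Flat belt vs. V-belt with the same friction coefficient and 180 deg wrap:
f, theta = 0.3, math.pi
print(branch_force_ratio(f, theta))                      # ~2.57 for the flat belt
print(branch_force_ratio(f, theta, math.radians(38.0)))  # much larger for the V-belt

# Smoothed friction coefficient near zero relative velocity:
print(smoothed_mu(f, v_rel=0.001, v_thr=0.01))
```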
# Kirchhoff's Loop Rule as applied to Capacitors?

1. Jan 19, 2008

### fatcat39

1. The problem statement, all variables and given/known data

How does the loop rule apply to capacitors? I can't find any examples of circuits containing capacitors and resistors where the loop rule is used. I know the loop rule measures potential differences, but I'm not quite sure if that has anything to do with capacitors? All the examples are 0 = V - IR - IR, etc.

2. Relevant equations

3. The attempt at a solution

2. Jan 19, 2008

### Tom Mattson, Staff Emeritus

Yes, the loop rule is used with capacitors all the time. The element law for a capacitor is $v=q/C$. In more advanced (calculus-based) courses this is written $i=C\frac{dv}{dt}$. Solving this for the voltage, one obtains:

$$v=\frac{1}{C}\int_{t_0}^t i(\tau)\,d\tau+v(t_0)$$

3. Jan 19, 2008

### esalihm

All the basics of RC circuits (RL and RLC circuits too) come from a basic application of Kirchhoff's loop principle.

4. Jan 20, 2008

### fatcat39

So when finding currents, the current in the branch that a capacitor is on is 0, right? Since when a capacitor is full, no current flows.

5. Jan 20, 2008

### tim_lou

Not necessarily; it depends on the situation. Since the charging rate equals the current, the current is 0 if and only if the charge of the capacitor is constant. This happens when the capacitor has been (dis)charging for a long time, or when the circuit reaches steady state.

Last edited: Jan 20, 2008

6. Jan 20, 2008

### fatcat39

The problem says that the currents reach equilibrium. Isn't that steady state?
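Combining the capacitor element law quoted above with the loop rule is enough to solve a simple series RC charging circuit. A minimal Python/sympy sketch; the circuit (one battery V, one resistor R, one initially uncharged capacitor C in a single loop) and the symbol names are my own example, not taken from the thread:

```python
import sympy as sp

t = sp.symbols('t')
V, R, C = sp.symbols('V R C', positive=True)
q = sp.Function('q')

# Kirchhoff's loop rule around the single loop:
#   V - R*dq/dt - q/C = 0,  with the capacitor initially uncharged, q(0) = 0.
loop = sp.Eq(V - R*q(t).diff(t) - q(t)/C, 0)
sol = sp.dsolve(loop, q(t), ics={q(0): 0})
print(sol)                      # q(t) = C*V*(1 - exp(-t/(R*C)))

i = sol.rhs.diff(t)             # current i = dq/dt
print(i.subs(t, 0))             # V/R right after the switch closes
print(sp.limit(i, t, sp.oo))    # 0 at steady state, as discussed in the thread
```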
# Enhanced C-V2X mode-4 subchannel selection

L.F. Abanto Leon, Arie G.C. Koppelaar, S.M. Heemstra

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › Academic › peer review
2 Citations (Scopus)

### Abstract

In Release 14, the 3rd Generation Partnership Project (3GPP) introduced Cellular Vehicle-to-Everything (C-V2X) *mode-4* as a novel disruptive technology to support sidelink vehicular communications in out-of-coverage scenarios. C-V2X *mode-4* has been engineered to operate in a distributed manner, wherein vehicles autonomously monitor the received power across sidelink subchannels before selecting one for utilization. By means of such a strategy, vehicles attempt to (i) discover and (ii) reserve subchannels with low interference that may have the potential to maximize the reception likelihood of their own broadcasted safety messages. However, due to the dynamicity of the vehicular environment, the subchannels' optimality may fluctuate rapidly over time. As a consequence, vehicles are required to make a new selection every few hundreds of milliseconds. In consonance with 3GPP, the subchannel selection phase relies on the linear average of the perceived power intensities on each of the subchannels during a monitoring window. However, in this paper we propose a nonlinear power averaging phase, where the most up-to-date measurements are assigned higher priority via exponential weighting. We show through simulations that the overall system performance can be leveraged in both urban and freeway scenarios. Furthermore, the linear averaging can be considered as a special case of the exponentially-weighted moving average, ensuring backward compatibility with the standardized method. Finally, the 3GPP *mode-4* scheduling approach is described in detail.

Language: English
Host publication: IEEE 88th Vehicular Technology Conference: VTC 2018-Fall
Place of publication: Piscataway
Publisher: Institute of Electrical and Electronics Engineers
Number of pages: 5
ISBN: 9781538663585
DOI: 10.1109/VTCFall.2018.8690754
Published: 12 Apr 2019
Keywords: C-V2X, LTE-V, mode-4, semi-persistent scheduling, sidelink, vehicular communications

### Conference

88th IEEE Vehicular Technology Conference, VTC-Fall 2018 (VTC2018-Fall), conference number 88, Chicago, United States of America, 27 Aug 2018 → 30 Aug 2018. http://www.ieeevtc.org/vtc2018fall/

### Cite this

APA: Abanto Leon, L. F., Koppelaar, A. G. C., & Heemstra, S. M. (2019). Enhanced C-V2X mode-4 subchannel selection. In IEEE 88th Vehicular Technology Conference: VTC 2018-Fall [8690754]. Piscataway: Institute of Electrical and Electronics Engineers. DOI: 10.1109/VTCFall.2018.8690754

Harvard: Abanto Leon, LF, Koppelaar, AGC & Heemstra, SM 2019, Enhanced C-V2X mode-4 subchannel selection. in IEEE 88th Vehicular Technology Conference: VTC 2018-Fall, 8690754, Institute of Electrical and Electronics Engineers, Piscataway. DOI: 10.1109/VTCFall.2018.8690754

Vancouver: Abanto Leon LF, Koppelaar AGC, Heemstra SM. Enhanced C-V2X mode-4 subchannel selection. In IEEE 88th Vehicular Technology Conference: VTC 2018-Fall. Piscataway: Institute of Electrical and Electronics Engineers. 2019. 8690754. DOI: 10.1109/VTCFall.2018.8690754

BibTeX:

@inproceedings{2e392f4cd29a4f599e0602dd963233c7,
  title = "Enhanced C-V2X mode-4 subchannel selection",
  keywords = "C-V2X, LTE-V, mode-4, semi-persistent scheduling, sidelink, vehicular communications",
  author = "{Abanto Leon}, L.F. and Koppelaar, {Arie G.C.} and S.M. Heemstra",
  year = "2019",
  month = "4",
  day = "12",
  doi = "10.1109/VTCFall.2018.8690754",
  language = "English",
  booktitle = "IEEE 88th Vehicular Technology Conference",
  publisher = "Institute of Electrical and Electronics Engineers",
  address = "United States",
}
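To illustrate the averaging idea from the abstract, here is a minimal sketch of an exponentially weighted average of per-subchannel power measurements; with the forgetting factor set to 1 it degenerates to the plain linear average of the standardized procedure. All names, array shapes, the forgetting factor, and the selection rule are assumptions made for the sketch, not the paper's algorithm or the 3GPP specification.

```python
import numpy as np

def averaged_power(p, lam=0.9):
    """Weighted average of per-subchannel received-power measurements.

    p   -- array of shape (n_windows, n_subchannels): power sensed in each
           past monitoring window, oldest first (assumed layout).
    lam -- forgetting factor in (0, 1]; lam == 1.0 reproduces the plain
           linear average used by the standardized procedure.
    """
    n = p.shape[0]
    # Exponential weights: the most recent window gets the largest weight.
    w = lam ** np.arange(n - 1, -1, -1, dtype=float)
    w /= w.sum()
    return w @ p  # one averaged power value per subchannel

# Toy usage: pick the subchannel with the lowest averaged power.
rng = np.random.default_rng(0)
measurements = rng.exponential(scale=1.0, size=(10, 4))  # fake sensing data
print(np.argmin(averaged_power(measurements, lam=0.8)))
```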
# Advantages of a Very Cold Frosty weapon?

It's a common little tidbit in most fantastical magic novels to have little men and women in funny hats flinging ice cubes (albeit a little more dangerous than ice cubes, but it's practically the same) at each other. I'm not questioning their choice of projectile. If anything is shot with enough force it will cause damage, for example, an indestructible teddy bear. Rather, I would like to know what advantages a melee weapon (spear, sword, knives, etc.) that is close to the temperature of absolute zero, which is −273.15° Celsius / −459.67° Fahrenheit, would have. This is provided that my rabbits can hold the weapon properly and use it properly. Also assume that the weapon is always at the temperature of absolute zero with the help of handwavium; the pointy part, not the handle. And assume that the weapon still functions as a weapon; we can't have it breaking in the middle of a duel.

Now, regarding the weapon's structure/material. I'm not sure whether this would affect the question but I'll state it anyway. I would assume most metals would shatter like glass at such temperatures, so I would assume it would be made of ice or solid nitrogen or solid helium. But since it's starting to sound like a stupid idea and the temperatures are already infeasible, assume that the weapon is made of unobtanium and handwavium. What advantages would my Winterwrath-wielding rabbits have on the battlefield or in duels? And before I forget, it's in a medieval-esque world. Firearms are still superior and bringing a knife to a gunfight is stupid unless you run out of bullets. Oh, the rabbits don't matter much, it's just more fun to have rabbits instead of humans.

• The swords would be perpetually covered in solid air. Oct 3 '16 at 12:29
• On the bright side, they can throw it at their enemies :D – Skye Oct 3 '16 at 12:32
• Note that answers seem to assume that "Absolute Zero" in this case is basically the same as having a melee weapon which is incredibly close to Absolute Zero. This is because as far as real physics is concerned, trying to explain the properties of "Absolute Zero" is completely theoretical, like trying to explain the properties of something moving faster than the speed of light. Oct 3 '16 at 19:19
• To make this less painful for those who are more familiar with thermodynamics, can you change this to be "almost absolute zero"? You're going to run into a horrible amount of thermodynamic trouble already with this weapon, but making it exactly absolute zero is going to induce all sorts of really undesirable side effects. You'll avoid a lot of answers that won't help you if you change the temperature just a little. (I recommend having it be 4 kelvin, because that'll behave virtually identically, but will be above cosmic background radiation, which makes me comfortable.) Oct 3 '16 at 19:28
• It can keep your beer cold on a hot day – jean Oct 3 '16 at 19:55

They could set their enemies on fire, and power their artillery. Hear me out. A perfect heatsink of that kind would liquefy the air around it. It would solidify if you left it in one place for long enough, but liquid will do. 21% or so of the atmosphere is oxygen, which means that these creatures would have an unlimited supply of liquid oxygen available to them. The nitrogen is almost irrelevant; all they have to do is stand their swords on end and 'stir' them above a bucket; the air around would condense and drip off, and they could rapidly have a gallon or so of LOx to soak rag balls in.
Being cut by something cold is really not that much worse than being cut by something at room temperature, and the accretion of ice on the blade (remember, there's a lot of water vapour in the air) would make them heavy, blunt and unwieldy. I suspect that this technology would mostly be used to make really, really big explosive devices and rocket fuel. The fog previously mentioned would also be handy, both militarily and as a permanent source of fresh water condensed from the air. It would also revolutionise logistics, since transporting food long distances while keeping it fresh would be trivial; no small thing for a medieval society. (I would agree about superconductors, but nothing has been said that implies these creatures have electricity. Or, indeed, physics. What we in the 21st century could do with a material at 0 K which exhibited an infinite heat capacity is almost beyond imagination.)

• I don't think you can split oxygen from nitrogen just by stirring them with a cold stick – their boiling points are quite near each other, so you would get a liquid mix of both. (I also don't really get how your liquid oxygen would give you explosives; that might need a bit of elaboration.) Oct 3 '16 at 23:17
• @PaŭloEbermann Wikipedia mentions turning things into unstable bombs by soaking them in LOx. Oct 4 '16 at 3:03
• Per Wikipedia: "Liquid nitrogen has a lower boiling point at −196 °C (77 K) than oxygen's −183 °C (90 K)" Oct 4 '16 at 10:01
• "if you left it in one place for long enough" - this "long enough" is as close to "instantly" as the weapon's temperature is close to absolute zero. The ice on the blade would actually be frozen air, not just the water vapour. Oct 4 '16 at 10:35

tl;dr: You now have a universe-destroying weapon.

We need to clarify what Absolute Zero really is. You can treat it as a temperature, but it's really more of a concept: matter having zero energy (theoretically, it still won't, because of the uncertainty principle and the zero-point energy that comes from that). So for your weapon to maintain a "temperature" of absolute zero, it must be somehow discarding all energy which comes into it from normal heat transfer from its surroundings. Physically, it's regarded as unattainable because of this. Let's say that you can magically hand-wave that away, and this weapon can always maintain absolute zero. To come up with a rough analogy, you now have a thermal "hole" in the universe, into which all heat will eventually flow and disappear. Temperature change relates to the difference in temperature between two bodies. The fun bit here is that one of your bodies will never heat up, so everything around it, in contact with it, must always be cooling due to conduction. Well, it will always be losing heat to the weapon, but it could still gain heat from some other surface it is also in contact with. But that other surface would then be transferring heat energy to it (and losing that energy in the process), which would cool the other surface down, and eventually both would still be losing heat to the awesome power of the Absolute Zero Weapon. And that would continue to happen. All energy would keep flowing in the direction of the Absolute Zero Weapon, and it would slowly siphon all the heat from the universe. If you put some kind of magical restriction in place, stopping heat transfer beyond a certain radius, it would remove all the heat energy from that radius.
No matter what, you're creating something that is capable of completely, totally, and absolutely destroying anything it comes in contact with (up to and including the universe itself). Disadvantages: will destroy you, too.

• While this is technically true, I'm guessing it would be at least a few billion years before anything outside the earth's orbit would be noticeably affected. (For instance, viewed from another planet, cooling the Earth to absolute zero is almost identical to replacing the earth with empty space. The only difference outside of eclipses would be a tiny, tiny patch of sky with a background radiation of 0 K instead of 4 K.) Oct 4 '16 at 2:03
• This isn't entirely true. While, on the one hand, you're right that the handwavium is violating the laws of physics on some level, you still need to consider the fact that it does so only on a small scale. It's only sucking the heat out of the universe at a slow rate and is incapable of freezing larger objects faster than they warm up due to conductivity (e.g. things reach an equilibrium). You can also avoid great thermodynamic issues by having that heat appear elsewhere, say, anti-handwavium hot weapons. Oct 4 '16 at 9:06
• True, if you can keep from violating conservation laws by having heat-creating weapons as well (well, you're still violating conservation laws in a sense, but it's better), you're mostly fine. It wouldn't be incapable of freezing larger objects, though; it would just take a very long time, unless you had the handwavium heat-creating weapons. Oct 4 '16 at 11:52
• Make weapons 3-part: hot part, cold part and input. Basically a magic refrigerator: you input power and the hot part gets hot, the cold part gets cold. Thermodynamics are intact. And, since it's magic, you don't need those parts to be connected by something tangible. Oct 4 '16 at 14:04

As a melee weapon, this actually doesn't really confer much effect. Unlike hot temperatures, which can go to the extreme and create almost instant incinerating effects, cooling is a lot harder to do and requires a mixture of prolonged contact and/or a cascade of coolant (e.g. immersing in a stream of liquid nitrogen rather than a pool) to achieve a rapid effect. At best your blades might have a cauterizing effect that may make their wounds less severe. Arrowheads or bullets, on the other hand, if they continued to suck the heat out of their target by simply resting inside them, could be very dangerous indeed. A handwavium splinter-based weapon, even more so. Your rabbits might have more luck turning their cold handwavium into superconductors. Temperatures like that enable fun magnetic manipulation that could lead to giant electromagnets for countering metallic armour (especially in a siege situation), maybe primitive coil guns, or, if you want to be fancy, multi-piece blades that are held together with magnetic fields or telescopic spears.

# Fog

Your bunnies would shield themselves in a cool fog and hide from their enemies. It's a good thing that they have fur because they'd get pretty cold standing/hopping around in that supercooled air. Stabbing-wise, the wounds would freeze-dry the flesh, so slashing flesh wounds would not bleed and also wouldn't hurt that much (nerves would freeze, so cuts would be numb). Rabbits would be better off with traditional edged weapons at room temperature, I think. Unless they like frozen sushi.

• Actually I was thinking it might cause instant frostbite, so any nicks or scratches could be damaging.
– Skye Oct 3 '16 at 12:33
• Frostbite takes a while to set in after the flesh is frozen enough for the tissue to die. Like gangrene, it takes time for the dead flesh to rot. – user10945 Oct 3 '16 at 12:37
• My point, you don't have to kill your enemies then. An injured soldier would retreat with a thawing wound that would be bound to fester all manner of diseases. – Skye Oct 3 '16 at 12:41
• This answer seems to be forgetting that no matter how cold something is, it won't have infinite heat conductivity, nor will the materials you use it on. Oct 3 '16 at 15:01

# Cold burn

Otherwise known as frostbite, this can occur. See https://en.wikipedia.org/wiki/Frostbite for more details. It is more likely that the solid air on your weapon would be left behind in wounds; this would result in the death of the surrounding skin cells and potential freezing of blood and tissue fluid. This means that attacks from this weapon could be deadly, and it would have substantial intimidation effects after enemies see its effects. And who expects a spear to shoot out sheets of solid air?

• Maybe if you're leaving the blade in them for several seconds. But I feel like if your opponent has time to just leave a cold blade in you for a few seconds, you've got bigger problems. Oct 3 '16 at 15:10

"Science-based": they could work as bombs. (Un)fortunately there are many problems with such swords, as they completely disobey the laws of physics, yet you still want us to somehow use them. They are terrible in the sense that you can't really apply logic and the real world to them, and even if you could (e.g., just a very cold sword), this wouldn't help much anyway. Let's ignore how stupid such an idea is and try to use it anyway. The temperature is so cold that it would freeze the air itself. This would kill our bunnies real quick, so let's assume that they have a scabbard that protects the outside world from freezing. Bunnies would be quite fit. They would have even fitter commandos that would go to the enemy sites unnoticed, and take out their swords all at once. This would cause something like an explosion (I believe it'd be called an implosion?) that would make everything around freeze, starting with the air. Send this commando into the enemy base, and in no time you have it frozen. This is also terrifying, as you have virtually no means to undo it, so reclaiming such ground or rescuing people isn't an option. The only problem is that theoretically, after some time, the cold would get to you. But I'm sure some handwavium or magic could trap or limit the sword's power.

How cold can you make something? 0 K is cold, but it doesn't describe how cold it feels. Something with zero heat capacity but at 0 K would not cool down anything. It would be a perfect mirror-finish and be a perfect insulator. Suppose we made a 0 K object that was the opposite of a perfect insulator. A perfect heat-suck. Every particle that touches it loses almost all kinetic energy. The air turns solid. Maybe, to make it useful, it isn't quite zero, and it actually pushes the resulting solids away so they don't stick or somesuch. How far does this energy-sucking aura go? The blade isn't really stuff at this point (as even stuff isn't stuff). We could imagine a blade that cooled things off rapidly within a range of a few mm or cm, unless contained by handwavium. The hilt and scabbard would contain said handwavium, preventing the rabbit from being frozen. Swung through a target, it would freeze the flesh. This would prevent immediate bleeding, but effectively kill everything near the blade.
If we add in the repulsion thing, where it prevents solids from being solid, it could cut through the frozen flesh as if it wasn't there. So now you have a weapon that is an "ice blade". It generates flakes of frost when exposed to air. When it cuts into something, it leaves a clean cut with frozen edges. It behaves a lot like you'd expect a "frost blade" to behave. The blade cutting straight through flesh is most of the damage, but whatever radius of leftover frozen flesh remains both increases the long-term damage and reduces the short-term damage (as the target doesn't bleed out ... until it defrosts). The weapon is an infinite black, with frozen flakes of "snow" falling off it. This "snow" is mostly frozen air. The weapon has a long handle and a large guard. The scabbards have a "funnel" of handwavium at the entrance to keep the blade away from the rabbits' flesh. "Grabbing" the blade leaves you with a frozen hand that dies as it thaws out.

The very cold temperature will freeze everything around it, including the air. Assuming the blade material is nonstick, the frozen air would be projected in the direction of the blade trajectory, allowing some kind of distance fighting (your character could make some nice choreography, and the iced air would cut the enemy a few meters further away).

• We might have a dripping air problem, but I'm sure it's fine... – Skye Oct 3 '16 at 12:36
• Probably yes, but you may assume that, due to fast motion, the low pressure behind the blade makes the solid sublime, and the liquid state is only visible on the front side (in the direction of the enemy), dealing additional damage over a short time. Oct 3 '16 at 12:42
• At best you'll leave a trail of rapidly subliming dust behind the blade, not a sheet of solid air. The blade might be cooling the air, but the rest of the air and everything else around (via thermal radiation) is warming it up a lot more. Oct 3 '16 at 15:09
• In the real world, sure! I am not even sure that a fast-moving blade could absorb enough energy to solidify any air around. Oct 3 '16 at 15:32

I think that the sword could easily penetrate most armors and cut through most things like a knife through butter, because most metals become brittle when supercooled and everything organic just becomes extremely easily breakable.

• That would require prolonged contact. – Skye Oct 3 '16 at 12:47
• Well, I would say if you parry the enemy's attack once, his sword is done for. Oct 3 '16 at 14:47
• @HopefullyHelpful You'd be wrong. Even if you were to fiat that the cold sword itself had infinite heat conductivity, your opponent's sword most certainly will not. Its heat can only leave it so quickly, and contact for a tiny fraction of a second won't provide nearly enough surface area or time. I haven't done the math, but I wouldn't be surprised if the amount of heat generated by the friction of the sword contact were greater than the heat lost due to conductive cooling. It could easily be that your sword could be slightly warmer after such a brief contact. Oct 3 '16 at 15:06
# Multiplying and dividing radical expressions

This exercise practices work with rational expressions that need to be combined by multiplication and division. There is one type of problem in this exercise: combine the rational expressions. The problem has the product or quotient of some rational expressions. A companion exercise also has one type of problem: simplify the expression by removing all factors that are perfect squares from inside the radicals, and combine the terms.

A radical is an expression or a number under the root symbol. A perfect square, such as 4, 9, 16 or 25, has a whole number square root. One way of simplifying radical expressions is to break down the expression into perfect squares multiplying each other. Simplifying radical expressions becomes especially important in Geometry when solving formulas and in using the Pythagorean Theorem.

To multiply two single-term radical expressions, multiply the coefficients and multiply the radicands. As long as the roots of the radical expressions are the same, you can use the Product Raised to a Power Rule to multiply and simplify. Essentially, this definition states that when two radical expressions are multiplied together, the corresponding parts multiply together: numbers outside the radical multiply together, and numbers inside the radical multiply together. In short: 1) multiply the numbers/variables outside the radicand (square root), 2) multiply the numbers/variables inside the radicand (square root), 3) simplify if needed. Apply the distributive property when multiplying a radical expression with multiple terms; multiplying binomials that contain radicals works just like FOIL does for ordinary binomials, after which you simplify and combine all like radicals.

The same ideas connect to rational exponents. Learn about expressions with rational exponents such as x^(2/3), radical expressions such as √(2t^5), and the relationship between these two forms of notation. For any base a and any exponents n and m, aⁿ⋅aᵐ = aⁿ⁺ᵐ; for example, x²⋅x⁵ can be written as x⁷. For any nonzero base, aⁿ/aᵐ = aⁿ⁻ᵐ. Square roots that contain variables, such as √(8x³), can be simplified in the same spirit.

Worked examples from the videos (this video looks at multiplying and dividing radical expressions (square roots); four examples are included): a product of the form (x² + √2)(x² − √6) is handled by using the distributive property twice, giving x⁴ − √6·x² + √2·x² − √12, and since √12 = √(4·3) = 2√3, the last term simplifies to −2√3. Another example simplifies √(60x²y)/√(48x): both the numerator and the denominator are divisible by 12 and by x (60 divided by 12 is 5, 48 divided by 12 is 4, and x² divided by x is just x), leaving √(5xy/4) = √(1/4)·√(5xy); 1/4 is just 1/2 times 1/2, so the result is (1/2)√(5xy). (The translation project for these videos was made possible by ClickMaths: www.clickmaths.org.)

A good, quick answer to this kind of question: Khan Academy has a great video on doing exactly what you are asking for. I suggest signing up for their free service and continuing above and beyond radicals. One teacher's plan for a Khan Academy based class session uses two modules, Simplifying Radicals (accessed Sept 9 2014) and Multiplying Radicals (accessed Sept 9 2014): "My idea is to have them log in and work during class and then we review their progress at the end via a whole class discussion." The accompanying unit calendar: 11/4 Multiplying Radicals; 11/7 Review, graphic organizer on simplifying radicals; 11/8 Election, no school; 11/9 Practice Multiplying/Partner Formative; 11/10 Dividing; 11/11 Conjugates (Khan Academy); 11/14 and 11/15 Review for Test; 11/16 U2IF2 Test over simplifying radicals; 11/17 Solving Radicals Day 1; 11/18 Solving Radicals Day 2.

It is common practice to write radical expressions without radicals in the denominator. The process of finding such an equivalent expression is called rationalizing the denominator. Multiplying a two-term radical expression involving square roots by its conjugate results in a rational expression.
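As an illustration of that last statement (this worked example is added for clarity and is not part of the original page), rationalizing a denominator with a conjugate looks like this:

$$\frac{1}{3+\sqrt{2}} = \frac{1}{3+\sqrt{2}}\cdot\frac{3-\sqrt{2}}{3-\sqrt{2}} = \frac{3-\sqrt{2}}{9-2} = \frac{3-\sqrt{2}}{7}$$

The denominator ends up rational, exactly as the conjugate rule promises.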
Below are the basic rules in multiplying radical expressions. When you multiply radicals without coefficients, make sure that the radicals have the same index; the radicands then multiply under a single radical, and the commutative property of multiplication allows you to switch the order of the factors as convenient. A common student question shows where this can go wrong: "I know in this problem that you would multiply 21 and 14, which would equal √294 for that part. Khan Academy, in the steps, just adds the z's, turning out as z^5. Why is it not z^6? If you are multiplying everything in the radical, then why would you not do the same to everything under the radical?" The answer is that everything under the radical is indeed multiplied, but multiplying powers of the same base means adding their exponents (z²⋅z³ = z⁵), not multiplying them.

The related lessons in this unit are: Simplifying square-root expressions: no variables; Simplifying rational exponent expressions: mixed exponents and radicals; Simplifying square-root expressions: no variables (advanced); Worked example: rationalizing the denominator; Simplifying radical expressions (addition); Simplifying radical expressions (subtraction); Simplifying radical expressions: two variables; Simplifying radical expressions: three variables; Simplifying hairy expression with fractional exponents. Teacher's Note: There are currently 12 exponent exercise sets on Khan Academy. They don't all apply to the 8th grade standards, but these two are both critical to my curriculum (they also tie …)

A typical exercise problem: multiply and simplify 5 times the cube root of 2x² times 3 times the cube root of 4x⁴.
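Working that product out (the solution steps are supplied here for completeness; only the problem statement appears in the original):

$$5\sqrt[3]{2x^2}\cdot 3\sqrt[3]{4x^4} = 15\sqrt[3]{8x^6} = 15\cdot 2x^2 = 30x^2$$

The coefficients 5 and 3 multiply outside the radical, the radicands 2x² and 4x⁴ multiply inside it, and 8x⁶ is a perfect cube, so the radical disappears entirely.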
distributed this out, if you distribute this x Quiz: Multiplying Radical Expressions Previous Multiplying Radical Expressions. If you view a as x squared-- the principal square root of 2 minus the principal square Or another way you A few examples on how to multiply things together when they have roots in them. algebra classes, which might be a little bit So, then why would you not multiply the z's? So then multiply-- They don't all apply to the 8th grade standards, but these two are both critical to my curriculum (they also tie … Created by Sal Khan and Monterey Institute for Technology and Education. You're just applying Por exemplo, x²⋅x⁵ pode ser escrito como x⁷. Multiplying Binomials with Radicals. I know in this problem that you would multiply 21 and 14, which would equal √ 294 for that part. root of 6 times x squared. squared and square root of 2. And the square root of 4, or And so one possibility And FOIL just says, look, Then multiply the outside. I'm raising each of them to some power and make a radical sign-- and then we have 5/4. And let's see if we can And so we can do the So negative square root of I'll show you the If you're behind a web filter, please make sure that the domains *.kastatic.org and *.kasandbox.org are unblocked. Polynomial expressions, equations, & functions. really don't know you're doing. denominator is divisible by 12. Nothing new, nothing fancy. Khan Academy; All. Types of Problems. We're asked to multiply And then you have square root But all this is is It's just like they say. property that this is the same thing as ax plus ay. Reading Pie Charts, Khan Academy. Khan Academy is a nonprofit with the mission of providing a free, world-class education for anyone, anywhere. then dividing, that's the same thing as radical expression over another radical expression. Next Dividing Radical Expressions. Solving Radical Equations Part 1 of 2, Mathispower4u. If you're behind a web filter, please make sure that the domains *.kastatic.org and *.kasandbox.org are unblocked. Khan Academy is a nonprofit with the mission of providing a free, world-class education for anyone, anywhere. As long as the roots of the radical expressions are the same, you can use the Product Raised to a Power Rule to multiply and simplify. Multiply and Divide Radical Expressions, Mathispower4u. Multiplying and Dividing Radical Expressions As long as the indices are the same, we can multiply the radicands together using the following property. That is, numbers outside the radical multiply together, and numbers inside the radical multiply together. According to the definition above, the expression is equal to $$8\sqrt {15}$$. square root of 12, you might be able If you're seeing this message, it means we're having trouble loading external resources on our website. faster, but requires a little bit of memorization. Multiplying Binomials with Radicals. root of 2 x squareds and then I'm going to subtract The Multiplying and dividing rational expressions 2 exercise appears under the Algebra II Math Mission. in a slightly different way, but I'll write it principal square root of 2. is equal to square root of 3 times square root of 4. Spende oder arbeite heute noch ehrenamtlich mit ! We're just distributing it. dividing first and then raising them to that power. And then, if you want, A good, quick answer to your question: Khan Academy has a great video on doing exactly what you are asking for. This expression And that's all we have left. 
at the coefficients of each of these expressions and it's 1/2, you say, hey, this is the same thing And there's nothing Main content. we're just switching the order of the multiplication. Adding and Subtracting Radical Expressions, Mathispower4u. know from simplifying radicals that this is the exact same If you are multiplying everything in the radical, then why would you not do the same to everything under the radical? that's that times that, and then minus x squared times the Khan Academy ist eine 501(c)(3) gemeinnützige Organisation. They have a great reward system baked in that makes learning new things fun and provides the gamification needed to drive continued work. you the way it's taught in some If you're behind a web filter, please make sure that the domains *.kastatic.org and *.kasandbox.org are unblocked. Multiplying a two-term radical expression involving square roots by its conjugate results in a rational expression. Both the numerator and the have square root of 2 times x squared, so plus x squared Math. The Add, subtract, multiply, and divide numerical radical terms exercise appears under the Pre-algebra Math Mission.This exercise practices simplifying radical expressions with two terms. And then we have plus To multiply radical expressions, we follow the typical rules of multiplication, including such rules as the distributive property, etc. The Add, subtract, multiply, and divide numerical radical terms exercise appears under the Pre-algebra Math Mission.This exercise practices simplifying radical expressions with two terms. So the outside terms are x Uma possibilidade que pode fazer é dizer que isso é a mesma coisa que isso, que é igual a 1/4 vezes 5 vezes 5xy, tudo dentro do radical, e isso é igual a raiz quadrada de, ou a raiz quadrada de 1/4 vezes a raiz quadrada de 5xy. Simplifying Radicals (accessed Sept 9 2014); Multiplying Radicals (accessed Sept 9 2014); My idea is to have them log in and work during class and then we review their progress at the end via a whole class discussion. We have something principal root of 5xy. Khan Academy ist eine Non-profit Organisation mit dem Zweck eine kostenlose, weltklasse Ausbildung für jeden Menschen auf der ganzen Welt zugänglich zu machen. really the same thing as-- this is equal to 1/4 times 5xy, all For this Khan Academy based class session, I have students work on two modules:. You have an x to Subtracting and Simplifying Radicals. Converting fractions to decimals. It is talks about rationalizing the denominator. have two binomials, two two-term expressions Khan Academy is a 501(c)(3) nonprofit organization. And you can see (x+y)(x−y)=x2−xy+xy−y2=x−y. https://www.khanacademy.org/.../v/adding-and-simplifying-radicals We can distribute all Or if you don't realize 12 is the same thing as 2 square roots of 3. In both problems, the Product Raised to a Power Rule is used right away and then the expression is simplified. So we get x squared minus Apply the distributive property when multiplying a radical expression with multiple terms. is the same thing as taking it to the 1/2 power-- if Fractional Exponents - Find the expression's value #114992 Rational exponents & radicals | Algebra I | Math | Khan Academy #114993 Grade 9: Mathematics … Khan Academy is a nonprofit with the mission of providing a free, world-class education for anyone, anywhere. 
that you can do is you could say that this is you're multiplying two binomials times I'm not a big fan of https://www.khanacademy.org/.../v/multiplying-binomials-with-radicals And you see, if you And this is the same Multiplication of Polynomials Factoring and the Distributive Property 3 This original Khan Academy video was translated into isiZulu by Thembani Mati. thing as 3 times 4. And just to see the pattern, how things out of the radical sign. this, you'd get that term. as opposed to understanding that this is really just from of 60x squared y over 48x. Para qualquer base a e quaisquer expoentes n e m, aⁿ⋅aᵐ=aⁿ⁺ᵐ. If possible, simplify the result. You multiply radical expressions that contain variables in the same manner. https://www.khanacademy.org/.../v/multiply-and-simplify-a-radical-expression-2 Look at the two examples that follow. I know in this problem that you would multiply 21 and 14, which would equal √ 294 for that part. And then out here you have this entire term onto this term and onto that term. And then multiply the inside. Lerne Algebra 1—Lineare Gleichungen, Funktionen, Polynome, Faktorisieren und mehr. that we want to multiply, and there's multiple News; Auswirkungen; Unser Team; Unsere Praktikanten; Unsere Spezialisten für den Inhalt; Unser Führungsteam; Unsere Unterstützer; Unsere Mitwirkenden; Unsere Finanzen; Karriere; Praktika; Kontakt . The key to simplify side-- square root of 2 times the square root of 6, we that this way I just did the distributive the principal square root of 3. If you're seeing this message, it means we're having trouble loading external resources on our website. times x squared, and we have something Mas, talvez queremos tirar mais coisas do sinal de radical. going to be ax plus ay. Multiplying Radical Expressions. Types of Problems. of this onto-- let me do it this way-- distribute If you're seeing this message, it means we're having trouble loading external resources on our website. 2) And it really just comes out So let's apply that over here. intuitive way first. Why is it not z^6? this thing again. Website Navigation. 1/2 times 1/2 is 1/4. Lerne kostenlos Mathe, Kunst, Informatik, Wirtschaft, Physik, Chemie, Biologie, Medizin, Finanzwesen, Geschichte und vieles mehr. as square root of 2 minus the square root of 6, or Then you'll multiply the inside. same thing over here. So if you want, the principal square root of 6 times x squared plus the each other like this. Examples: 1) On the outside, 5.3=15 and on the inside, 2.7=14. If you're seeing this message, it means we're having trouble loading external resources on our website. more intuitive way, and then I'll show this expression over here. Über. Radical Expression Playlist on YouTube. Since multiplication is commutative, you can multiply the coefficients and … simplify this at all. already know that-- that is negative square root of Multiply and Simplify a Radical Expression 2 Urdu. same thing as here, it's just you could imagine root of 6, x squared. As long as the roots of the radical expressions are the same, you can use the Product Raised to a Power Rule to multiply and simplify. So if you have Now I mentioned Khan Academy is a nonprofit with the mission of providing a free, world-class education for anyone, anywhere. If you're seeing this message, it means we're having trouble loading external resources on our website. Anything we divide 12, which you can also then simplify to that expression Main content. 
Khan Academy is a 501(c)(3) nonprofit organization with the mission of providing a free, world-class education for anyone, anywhere. Its videos on radicals, by Sal Khan and the Monterey Institute for Technology and Education, cover how to calculate roots and simplify algebraic expressions that contain them.

Multiplying radical expressions (square roots) follows three steps: 1) multiply the numbers and variables outside the radical, 2) multiply the numbers and variables inside the radical (the radicands), and 3) simplify the result if possible. As long as the indices are the same, the corresponding parts of two radical expressions multiply together; the Product Raised to a Power Rule is used, and for any base a and integer exponents n and m, aⁿ·aᵐ = aⁿ⁺ᵐ. According to this definition, the expression discussed in the video simplifies to \(8\sqrt{15}\) (see the worked illustration below). Note that multiplying a two-term radical expression involving square roots by its conjugate gives a rational expression, and it is common practice to write radical expressions without radicals in the denominator. Multiplying and simplifying radical expressions becomes especially important in geometry when solving formulas and when using the Pythagorean theorem.

Related videos:
https://www.khanacademy.org/... /v/multiply-and-simplify-a-radical-expression-2
https://www.khanacademy.org/... /v/multiplying-binomials-with-radicals
https://www.khanacademy.org/... /v/adding-and-simplifying-radicals
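As a worked illustration of the product rule for square roots (the factors 2√5 and 4√3 are an assumed example chosen to match the quoted result; the source states only that the expression simplifies to 8√15):

\[ \sqrt{a}\cdot\sqrt{b}=\sqrt{ab} \qquad\Rightarrow\qquad 2\sqrt{5}\cdot 4\sqrt{3}=(2\cdot 4)\sqrt{5\cdot 3}=8\sqrt{15}. \]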
# A-level Physics (Advancing Physics)/Data Handling ## Data Handling ### Data Tables Data should be collected in tables in a systematic way that makes it clear and easy to understand. Headings should make it easy to find the information that is required and should include appropriate units and uncertainty. • Reference tables of data may be given in an appendix as they may be long. • Tables in the body of a report should be designed to convey a message clearly and may only include summary data such as averages. The layout and contents of a results table, whether it is for recording numerical data or observations, should be decided before the experiment is performed. ‘Making it up as you go along’ often results in tables that are difficult to follow and don’t make the best use of space. Space should be allocated within the table for any manipulation of the data that will be required. The heading of each column must include both the quantity being measured and the units in which the measurement is made. Readings made directly from measuring instruments should be given to the number of decimal places that is appropriate for the measuring instrument used (for example, readings from a metre rule should be given to the nearest mm). Quantities calculated from raw data should be shown to the correct number of significant figures. ### Uncertainty All instruments have a level of uncertainty. In an experiment the biggest source of uncertainty is the most important to consider and try to reduce. • using a ruler to the nearest mm • a voltmeter reading to the nearest 0.1V • a set of scales measuring to the nearest 0.01g This is the resolution of the instrument. The smallest change it can observe or 'see'. Any readings should only be taken to ±1 of the last digit at best. However there may be reasons to be even more pedantic about results. • The stability. The results on a meter may flicker randomly or in response to changing conditions like being knocked. • The range. Repeated readings of the same experiment may vary. • The calibration of the instrument. Is it giving a true reading compared to other supposedly identical instruments of against a known standard? A result should be given including ± the uncertainty. The uncertainty can be displayed on a graph as an Error Bar. ### Systematic Errors Systematic errors can arise from the experimental procedures or other bias. • Zero error on an instrument making all readings too large or small by a set amount • A micrometer that reads -0.01mm when fully closed • Not realising a 30 cm ruler has an extra few mm before the scale starts. • A set of scales that has not been zeroed first. • Calibration of an instrument giving false readings. • An ammeter consistently giving reading that are too high. • A measuring tape becoming stretched over years of use. • Experimental design flaws • Friction on a sloping runway not being accounted for. • A resistor changing its value as it gets hot. • Resistance of connecting wires. These will often result in a line of best fit on a graph that doesn't go through an intercept where expected. An experimental design can be improved to try to remove systematic errors. ### Random Errors These are often noise or random fluctuations in a repeated reading. • The height a ball bounces to when dropped from the same height. • The small variations in voltage when repeating a reading on the same length of wire. 
They may also be due to a mistake or human error in the reading; for example, using your eyes to measure the height that a ball bounces to may lead to an outlier due to an error in judgement. Human error would be the most likely source of an outlier.

### Spread and identifying possible outliers

It is often useful to simply plot the data and look at it as a quick way to assess its quality. A dotplot shows the distribution of a set of data. The following data would look like this:

5.5 6.2 6.7 5.9 7.7

Average = 6.4

The average of a dataset is often denoted by the Greek letter "mu", ${\displaystyle \mu }$.

awaiting image

Range: the maximum value minus the minimum value in a set of repeated readings. The spread can be quoted as ± half the range.

Standard deviation: this is one of the most common ways to measure how spread out the data are. For example, let's calculate the standard deviation of the data above. First, calculate how much each data point deviates from the average, or mean, of the data. Then, square that difference:

${\displaystyle {\begin{array}{lll}(5.5-6.4)^{2}=0.81\\(6.2-6.4)^{2}=0.04\\(6.7-6.4)^{2}=0.09\\(5.9-6.4)^{2}=0.25\\(7.7-6.4)^{2}=1.69\\\end{array}}}$

The variance is the mean of these values:

${\displaystyle {\frac {0.81+0.04+0.09+0.25+1.69}{5}}=0.576}$

Finally, the standard deviation is the square root of the variance:

${\displaystyle {\sqrt {0.576}}=0.759}$

More generally, consider the case of discrete random variables. In the case where X takes random values from a finite data set x1, x2, ..., xN, with each value having the same probability of occurrence, the standard deviation, commonly denoted by the Greek letter "sigma", ${\displaystyle \sigma }$, is

${\displaystyle \sigma ={\sqrt {{\frac {1}{N}}\left[(x_{1}-\mu )^{2}+(x_{2}-\mu )^{2}+\cdots +(x_{N}-\mu )^{2}\right]}},{\rm {\ \ where\ \ }}\mu ={\frac {1}{N}}(x_{1}+\cdots +x_{N}),}$

or, using summation notation,

${\displaystyle \sigma ={\sqrt {{\frac {1}{N}}\sum _{i=1}^{N}(x_{i}-\mu )^{2}}},{\rm {\ \ where\ \ }}\mu ={\frac {1}{N}}\sum _{i=1}^{N}x_{i}.}$

If, instead of having equal probabilities, the values have different probabilities, let x1 have probability p1, x2 have probability p2, ..., xN have probability pN. In this case, the standard deviation will be

${\displaystyle \sigma ={\sqrt {\sum _{i=1}^{N}p_{i}(x_{i}-\mu )^{2}}},{\rm {\ \ where\ \ }}\mu =\sum _{i=1}^{N}p_{i}x_{i}.}$

Outlier: a value is likely to be an outlier if it is further than 2 × the spread from the mean average. This should only be used as a guide, and the possible reasons for an anomalous result should be considered before dismissing it.

### Graphs

#### Scales

(Figures: "How not to plot" and "A better plot".)

The scales on a graph must include a suitable legend and units. At Advanced level, we write the axis label followed by a stroke (solidus) and then the unit, e.g. "force / N", rather than "force (N)". If the forces had all been kilonewtons, then the axis label might read "force / 10³ N"; a graph showing cross-sectional area on one axis might have the title "cross-sectional area / 10⁻⁶ m²", rather than "cross-sectional area (mm²)". A density scale could be labelled "density / 10³ kg m⁻³", rather than "density (thousands of kg/m³)". Notice that the units are expressed using negative powers rather than a stroke (solidus). If producing graphs on a computer, they should be as big as is reasonably possible so that values can be read from them easily. The scales should include minor unit markers or even grid lines.
Small graphs within the text of a report may be used for illustrative purposes, but a full-sized version should be included in an appendix. Points should be plotted as an ×, rather than a blob or dot, so it is clear exactly where the point is (where the two lines cross). The purpose of displaying the data in a particular graph should be made clear from a caption or title.

#### Lines of best fit

Error bars show the uncertainty in the data. If you plot the raw data on the graph, its uncertainty is equal to the length of the uncertainty bars. If you plot processed data, however, you need to process (propagate) the uncertainties too. (If you don't know how to do this, write me and I will send you a document.) You do not have to draw error bars on the graph if they are too small to draw, but you have to give a reason for this in your report, otherwise you lose points. Scaling of the graph is important: the plotted data should fill the graph, so rescale an axis if you use less than half of it (remember, axes do not have to start from zero).

Uncertainties in slopes: the slope of the line of best fit gives you the value used in calculating the experimental value of n.
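As a quick numeric check of the spread calculations earlier in this section, here is a minimal Python sketch (illustrative only, not part of the original course text; the readings are the five values used above):

    # Population mean, standard deviation, and a simple "2 x spread" outlier check
    readings = [5.5, 6.2, 6.7, 5.9, 7.7]

    mu = sum(readings) / len(readings)                                # 6.4
    variance = sum((x - mu) ** 2 for x in readings) / len(readings)   # 0.576
    sigma = variance ** 0.5                                           # ~0.759

    # Flag values further than 2 x the spread from the mean (a guide only)
    outliers = [x for x in readings if abs(x - mu) > 2 * sigma]
    print(mu, sigma, outliers)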
# A cookbook in LaTeX? I am interested in making a cookbook in LaTeX. Each page will contain a recipe, including ingredients, instructions, and a photo of the finished food. Has this been done previously? Where do I start? Should I make my own document class for a cookbook? Does one exist? Any other ideas or pointers? • I've been looking at the illustrated cookbooks lulu.com (self-publishing agency) lets you set up & been wondering whether it might be feasible to do something like that using a modification of beamer/beamerposter - one slide per page, possibly combined with one of the packages mentioned in the answers. Alas, did not have the time to pursue it. – prettygully Jun 12 '11 at 22:40 There are two existing classes/packages for this sort of thing: • cuisine Simple, but with good recipe formatting • cookybooky Complex, very stylish, requires pstricks Another package is xcookybooky. Here some information given by the author in the documentation: When I was looking for template for recipes, I found the cookybooky package … It looks very good, but I was unable to compile it correctly (e.g. I haven’t got the Lucida fonts). Also there are some packages which have to be downloaded by hand, because there are not available at CTAN. Other handicaps are the missing possibility to create a PDF-file directly and a recipe cannot be longer than a single page. So decided to take a look at the code. Step by step I replaced all critical parts. … There is no compatibility between xcookybooky and cookybooky, even the name is associating it. The reason for the naming is nearly similar design. A comparison between xcookybooky and cookybooky (also taken from the documentation): Characteristic xcookybooky cookybooky -------------- ----------- ---------- Maximum recipe length unlimited 1 page Support missing pictures yes no Transparent background graphic not part of yes package […] Main Layout wraptable minipages Support twoside option only changing full pictures above Generate recipe environment macro An example, also taken from the documentation (though it’s in German, the rest of it is in English): Note: Loading of the (in a strict meaning) non-free font package emerald is optional. Actually all is Freeware (Wikipedia link). • please you can include the full example's code. Thank you very much – Federico Calio' Apr 6 '16 at 10:46 • I failed to find the example in the provided link, is it your own code with the package? – Gigili Sep 9 '18 at 6:07 For the sake of completeness, take also a look cookingsymbol, which contains some symbols useful for recipes. It is created by the same author as xcookybooky. The symbols are metafont based and has less dependency than other packages. It is useful when you use any style other than xcookybooky and want to use the symbols. Here is a screenshot from documentation of cookingsymbol. • It’s made by the same author like xcookybooky and used by it. In the posted example (see my answer) one can see three symbols from cookingsymbol. – Speravir Sep 5 '12 at 23:44 • @Speravir Oh, ofcourse. Well, the fact is, I haven't used it since some 5-6 years! So, my bad ;) However, it might come handy if you use it with other styles than xcookybooky. – Hasan Zakeri Sep 6 '12 at 8:46 There is also a third LaTeX class, recipe. This formats recipes with an ingredients list first, followed by the instructions. 
The cuisine class typesets the recipe and ingredients list as two columns, ingredients on the left and recipe on the right, with the position of the ingredients in the column aligned with the step in which the ingredient is used.

You could start by taking a look at the different options available on CTAN: Cooking recipes. The description of cookybooky sounds promising.
# tcolorbox: Is it possible to put breakable boxes behind each other (overlap)?

Breakable boxes inside breakable boxes are not possible, as discussed in Breakable box in breakable box with tcolorbox and in the manual of tcolorbox. So I thought of an alternative: is it possible to put breakable boxes behind each other (overlapping)? The box in the background is a little bit larger than the box in the foreground, so it looks like 'a box in a box', but in reality they are just breakable boxes placed outside each other. One would expect them to break normally?

For example, from the manual of tcolorbox. How can I simulate the 'box in a box' by overlapping, so that one would expect the boxes to break normally?

• I think you can always play with overlays and borders to simulate a box inside the other. But with a more accurate description of this problem and even some graphic scheme, it will be easier to understand and answer the question. – Ignasi Mar 16 '18 at 8:03
• @Ignasi Thanks for your answer! I added an example from the manual of tcolorbox – Faceb Faceb Mar 16 '18 at 16:32

This is just an idea. As nested boxes are unbreakable, a possible solution when we have only two nested boxes consists in defining the inner box as breakable and drawing the outer one with some overlay options. To simulate the inner box, we have to adjust margins with the enlarge ... options and also define the corresponding width. Title, colframe and colback for the outer box are fixed in the overlay commands. The outer box geometry is probably not exact, but I leave it to you to use better-adjusted dimensions.

\documentclass{article}
\usepackage[most]{tcolorbox}
\usepackage{lipsum}

\newtcolorbox{myfakebreakablebox}[2][]{
  title=#2,
  enhanced,
  breakable,
  enlarge top initially by=1cm,
  enlarge bottom finally by=5mm,
  enlarge left by=5mm,
  enlarge right by=5mm,
  width=\linewidth-10mm,
  overlay first={
    \draw[green!70!black, line width=.5mm, rounded corners]
      ([xshift=-5mm]frame.south west)|-([yshift=1cm]frame.north)-|
      ([xshift=5mm]frame.south east);
    \node[fill=green!70!black, minimum height=5mm, minimum width=\linewidth,
      anchor=north] at ([yshift=1cm]frame.north) (outertitle) {};
    \node[text=white, anchor=west] at ([xshift=3mm]outertitle.west) {Outer title};
  },
  overlay middle={
    \draw[green!70!black, line width=.5mm, rounded corners]
      ([xshift=-5mm]frame.north west)--([xshift=-5mm]frame.south west);
    \draw[green!70!black, line width=.5mm, rounded corners]
      ([xshift=5mm]frame.north east)--([xshift=5mm]frame.south east);
  },
  overlay last={
    \draw[green!70!black, line width=.5mm, rounded corners]
      ([xshift=-5mm]frame.north west)|-([yshift=-5mm]frame.south)
      -|([xshift=5mm]frame.north east);
  }
}

\begin{document}
\lipsum[1-3]
\begin{myfakebreakablebox}{this is the title}
\lipsum[1-3]
\end{myfakebreakablebox}
\lipsum[1]
\end{document}

• may I ask, where have you defined the inner box? Where have you defined it to be black? – Faceb Faceb Mar 23 '18 at 16:58
• @FacebFaceb Not default but initial colors: colframe=black!75!white, colback=black!5!white. Page 27 in the tcolorbox documentation. – Ignasi Mar 23 '18 at 18:55
• but where is 'colframe = black!75!white, colback = black!5!white' in your code? – Faceb Faceb Mar 23 '18 at 19:20
• @FacebFaceb If you don't fix them, these are the values used. – Ignasi Mar 23 '18 at 23:50
• is it possible that you comment every line/command in your code, so I can know how this has been achieved? Thanks! :) – Faceb Faceb Mar 24 '18 at 6:59
# Forces In point O acts three orthogonal forces: F1 = 20 N, F2 = 7 N and F3 = 19 N. Determine the resultant of F and the angles between F and forces F1, F2 and F3. Correct result: F =  28.5 N α =  45.4 ° β =  75.8 ° γ =  48.1 ° #### Solution: $F = \sqrt{ 20^2+ 7^2+ 19^2} = 28.5 \ \text{N}$ $\alpha = \arccos(\dfrac{ 20 }{F}) = 45.4 ^\circ$ $\beta = \arccos(\dfrac{ 7 }{F}) = 75.8 ^\circ$ $\gamma = \arccos(\dfrac{ 19 }{F}) = 48.1 ^\circ$ Our examples were largely sent or created by pupils and students themselves. Therefore, we would be pleased if you could send us any errors you found, spelling mistakes, or rephasing the example. Thank you! Leave us a comment of this math problem and its solution (i.e. if it is still somewhat unclear...): Be the first to comment! Tips to related online calculators For Basic calculations in analytic geometry is helpful line slope calculator. From coordinates of two points in the plane it calculate slope, normal and parametric line equation(s), slope, directional angle, direction vector, the length of segment, intersections the coordinate axes etc. Do you want to convert length units? Pythagorean theorem is the base for the right triangle calculator. #### You need to know the following knowledge to solve this word math problem: We encourage you to watch this tutorial video on this math problem: ## Next similar math problems: • Distance Wha is the distance between the origin and the point (18; 22)? • Circle On the circle k with diameter |MN| = 61 J lies point J. Line |MJ|=22. Calculate the length of a segment JN. • Vertices of RT Show that the points P1 (5,0), P2 (2,1) & P3 (4,7) are the vertices of a right triangle. • Medians and sides Determine the size of a triangle KLM and the size of the medians in the triangle. K=(-5; -6), L=(7; -2), M=(5; 6). • The right triangle In the right triangle ABC with right angle at C we know the side lengths AC = 9 cm and BC = 7 cm. Calculate the length of the remaining side of the triangle and the size of all angles. • Triangle IRT In isosceles right triangle ABC with right angle at vertex C is coordinates: A (-1, 2); C (-5, -2) Calculate the length of segment AB. • Right 24 Right isosceles triangle has an altitude x drawn from the right angle to the hypotenuse dividing it into 2 unequal segments. The length of one segment is 5 cm. What is the area of the triangle? Thank you. • Calculation How much is sum of square root of six and the square root of 225? • Spruce height How tall was spruce that was cut at an altitude of 8m above the ground and the top landed at a distance of 15m from the heel of the tree? • Sea How far can you see from the ship's mast, whose peak is at 14 meters above sea level? (Earth's radius is 6370 km). • The ditch Ditch with a cross-section of an isosceles trapezoid with bases 2m and 6m are deep 1.5m. How long is the slope of the ditch? • Height of the room Given the floor area of a room as 24 feet by 48 feet and space diagonal of a room as 56 feet. Can you find the height of the room?
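A quick numeric check of the Forces problem above (an illustrative sketch, not part of the original solution):

    import math

    # Three mutually orthogonal forces acting at point O
    F1, F2, F3 = 20.0, 7.0, 19.0

    # Magnitude of the resultant and its angles to each component force
    F = math.sqrt(F1**2 + F2**2 + F3**2)        # ~28.5 N
    alpha = math.degrees(math.acos(F1 / F))     # ~45.4 degrees
    beta  = math.degrees(math.acos(F2 / F))     # ~75.8 degrees
    gamma = math.degrees(math.acos(F3 / F))     # ~48.1 degrees
    print(F, alpha, beta, gamma)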
## On the Limit Values of Probabilities for the First Order Properties of Graphs ### Authors: Joel Spencer and Lubos Thoma ABSTRACT Consider the random graph ${\cal G}(n,p),$ where $p=p(n)$ is any threshold function satisfying $p(n) = \Theta(\ln n / n).$ We give a full characterization of the limit values of probabilities of ${\cal G}(n,p)$ having a property $\psi,$ where $\psi$ is any sentence of the first order theory of graphs. Paper Available at: ftp://dimacs.rutgers.edu/pub/dimacs/TechnicalReports/TechReports/1997/97-35.ps.gz
# Motion in a Straight Line ## Class 11 NCERT Physics ### NCERT 1   In which of the following examples of motion, can the body be considered approximately a point object:$\\$ (a) a railway carriage moving without jerks between two stations.$\\$ (b) a monkey sitting on top of a man cycling smoothly on a circular track.$\\$ (c) a spinning cricket ball that turns sharply on hitting the ground.$\\$ (d) a tumbling beaker that has slipped off the edge of a table. ##### Solution : (a) The size of a carriage is very small as compared to the distance between two stations. Therefore, the carriage can be treated as a point sized object.$\\$ (b) The size of a monkey is very small as compared to the size of a circular track. Therefore, the monkey can be considered as a point sized object on the track.$\\$ (c) The size of a spinning cricket ball is comparable to the distance through which it turns sharply on hitting the ground. Hence, the cricket ball cannot be considered as a point object.$\\$ (d) The size of a beaker is comparable to the height of the table from which it slipped. Hence, the beaker cannot be considered as a point object. 2   The position-time $(x-t)$ graphs for two children $A$ and $B$ returning from their school $O$ to their homes $P$ and $Q$ respectively are shown in figure. Choose the correct entries in the brackets below;$\\$ (a) $(A/B)$ lives closer to the school than $(B/A)$$\\ (b) (A/B) starts from the school earlier than (B/A)$$\\$ (c) $(A/B)$ walks faster than $(B/A)$$\\ (d) A and B reach home at the (same/different) time\\ (e) (A/B) overtakes (B/A) on the road (once/twice). ##### Solution : (a) As OP < OQ, A lives closer to the school than B.$$\\$ (b) For $x = 0, t = 0$ for $A$; while $t$ has some finite value for $B$. Therefore, $A$ starts from the school earlier than $B.$$\\ (c) Since the velocity is equal to slope of x-t graph in case of uniform motion and slope of x-t graph for B is greater that that for A =, hence B walks faster than A.$$\\$ (d) It is clear from the given graph that both $A$ and $B$ reach their respective homes at the same time.$\\$ (e) $B$ moves later than $A$ and his/her speed is greater than that of $A.$ From the graph, it is clear that $B$ overtakes$A$ only once on the road. 3   A woman starts from her home at $9.00$ am, walks with a speed of $5 km /h ^{- 1}$ on a straight road up to her office $2.5 km$ away, stays at the office up to $5.00$ pm, and returns home by an auto with a speed of $25 km /h^{- 1 }$. Choose suitable scales and plot the $x-t$ graph of her motion Speed of the woman $= 5 km/h$$\\ Distance between her office and home = 2.5 km$$\\$ Time taken = Distance / Speed$\\$ $= \frac{2.5} {5 }= 0.5 h = 30 min$$\\ It is given that she covers the same distance in the evening by an auto. Now, speed of the auto = 25 km/h$$\\$ Time taken = Distance /Speed $\\$ $= \frac{2.5 }{ 25} = \frac{1}{ 10 }= 0.1 h = 6 min$$\\ The suitable x-t graph of the motion of the woman is shown in the given figure. 4 A drunkard walking in a narrow lane takes 5 steps forward and 3 steps backward, followed again by 5 steps forward and 3 steps backward, and so on. Each step is 1 m long and requires 1 s. Plot the x-t graph of his motion. Determine graphically and otherwise how long the drunkard takes to fall in a pit 13 m away from the start. 
##### Solution :

Distance covered with 1 step = 1 m
Time taken per step = 1 s
Time taken to move first 5 m forward = 5 s
Time taken to move 3 m backward = 3 s
Net distance covered = 5 - 3 = 2 m
Net time taken to cover 2 m = 8 s
Drunkard covers 2 m in 8 s.
Drunkard covers 4 m in 16 s.
Drunkard covers 6 m in 24 s.
Drunkard covers 8 m in 32 s.
In the next 5 s, the drunkard will cover a further 5 m, i.e. a total distance of 13 m, and fall into the pit.
Net time taken by the drunkard to cover 13 m = 32 + 5 = 37 s
The x-t graph of the drunkard's motion can be shown as:

5   A jet airplane travelling at the speed of $500\ km\ h^{-1}$ ejects its products of combustion at the speed of $1500\ km\ h^{-1}$ relative to the jet plane. What is the speed of the latter with respect to an observer on the ground?

##### Solution :

Speed of the jet airplane with respect to the ground, $v_{jet} = 500\ km/h$
Relative speed of its products of combustion with respect to the plane, $v_{smoke/jet} = -1500\ km/h$
Let the speed of its products of combustion with respect to the ground be $v_{smoke}$.
Then $v_{smoke} - v_{jet} = -1500\ km/h$, so $v_{smoke} = -1500 + 500 = -1000\ km/h$.
The negative sign indicates that the direction of the products of combustion is opposite to the direction of motion of the jet airplane; their speed relative to the ground observer is $1000\ km/h$.

6   A car moving along a straight highway with speed of $126\ km\ h^{-1}$ is brought to a stop within a distance of 200 m. What is the retardation of the car (assumed uniform), and how long does it take for the car to stop?

##### Solution :

Initial velocity of the car, $u = 126\ km/h = 35\ m/s$
Final velocity of the car, $v = 0$
Distance covered by the car before coming to rest, $s = 200\ m$
From the third equation of motion, the retardation $a$ can be calculated as:
$v^2 = u^2 + 2as$
$0 = (35)^2 + 2 \times a \times 200$
$a = -1225/400 = -3.06\ m\,s^{-2}$
From the first equation of motion, the time $t$ taken by the car to stop is:
$v = u + at$
$t = (v - u)/a = (-35)/(-3.06) = 11.44\ s$

7   Two trains A and B of length 400 m each are moving on two parallel tracks with a uniform speed of $72\ km\ h^{-1}$ in the same direction, with A ahead of B. The driver of B decides to overtake A and accelerates by $1\ m\ s^{-2}$. If after 50 s, the guard of B just brushes past the driver of A, what was the original distance between them?

##### Solution :

For train A:
Initial velocity, $u = 72\ km/h = 20\ m/s$
Time, $t = 50\ s$
Acceleration, $a_I = 0$ (since it is moving with a uniform velocity)
From the second equation of motion, the distance $s_I$ covered by train A is:
$s_I = ut + (1/2) a_I t^2 = 20 \times 50 + 0 = 1000\ m$

For train B:
Initial velocity, $u = 72\ km/h = 20\ m/s$
Acceleration, $a = 1\ m/s^2$
Time, $t = 50\ s$
From the second equation of motion, the distance $s_{II}$ covered by train B is:
$s_{II} = ut + (1/2) a t^2 = 20 \times 50 + (1/2) \times 1 \times (50)^2 = 2250\ m$

Length of both trains = 2 × 400 m = 800 m
Hence, the original distance between the driver of train A and the guard of train B is $2250 - 1000 - 800 = 450\ m$.

8   On a two-lane road, car A is travelling with a speed of $36\ km\ h^{-1}$. Two cars B and C approach car A in opposite directions with a speed of $54\ km\ h^{-1}$ each. At a certain instant, when the distance AB is equal to AC, both being 1 km, B decides to overtake A before C does. What minimum acceleration of car B is required to avoid an accident?
##### Solution :

Velocity of car A, $v_A = 36\ km/h = 10\ m/s$
Velocity of car B, $v_B = 54\ km/h = 15\ m/s$
Velocity of car C, $v_C = 54\ km/h = 15\ m/s$
Relative velocity of car B with respect to car A $= 15 - 10 = 5\ m/s$
Relative velocity of car C with respect to car A $= 15 + 10 = 25\ m/s$
At a certain instant, both cars B and C are at the same distance from car A, i.e.
$s = 1\ km = 1000\ m$
Time taken (t) by car C to cover 1000 m = 1000 / 25 = 40 s
Hence, to avoid an accident, car B must cover the same distance in a maximum of 40 s.
From the second equation of motion, the minimum acceleration (a) produced by car B can be obtained as:
$s = ut + (1/2) a t^2$
$1000 = 5 \times 40 + (1/2) \times a \times (40)^2$
$800 = 800\,a$
$a = 1\ m\,s^{-2}$

9   Two towns A and B are connected by a regular bus service with a bus leaving in either direction every T minutes. A man cycling with a speed of $20\ km\ h^{-1}$ in the direction A to B notices that a bus goes past him every 18 min in the direction of his motion, and every 6 min in the opposite direction. What is the period T of the bus service and with what speed (assumed constant) do the buses ply on the road?

##### Solution :

Let V be the speed of the bus running between towns A and B.
Speed of the cyclist, $v = 20\ km/h$
Relative speed of the bus moving in the direction of the cyclist $= V - v = (V - 20)\ km/h$
The bus goes past the cyclist every 18 min, i.e. 18/60 h (when he moves in the direction of the bus).
Distance covered by the bus = (V - 20) × 18/60 km .... (i)
Since one bus leaves after every T minutes, the distance travelled by the bus will be equal to V × T/60 .... (ii)
Both equations (i) and (ii) are equal:
(V - 20) × 18/60 = VT/60 ...... (iii)
Relative speed of the bus moving in the opposite direction of the cyclist = (V + 20) km/h
Time taken by the bus to go past the cyclist = 6 min = 6/60 h
$\therefore$ (V + 20) × 6/60 = VT/60 .... (iv)
From equations (iii) and (iv), we get
(V + 20) × 6/60 = (V - 20) × 18/60
V + 20 = 3V - 60
2V = 80
V = 40 km/h
Substituting the value of V in equation (iv), we get
(40 + 20) × 6/60 = 40T/60
T = 360 / 40 = 9 min

10   A player throws a ball upwards with an initial speed of $29.4\ m\ s^{-1}$.
(a) What is the direction of acceleration during the upward motion of the ball?
(b) What are the velocity and acceleration of the ball at the highest point of its motion?
(c) Choose the $x = 0\ m$ and $t = 0\ s$ to be the location and time of the ball at its highest point, vertically downward direction to be the positive direction of x-axis, and give the signs of position, velocity and acceleration of the ball during its upward, and downward motion.
(d) To what height does the ball rise and after how long does the ball return to the player's hands? (Take $g = 9.8\ m\ s^{-2}$ and neglect air resistance.)

##### Solution :

(a) Irrespective of the direction of the motion of the ball, acceleration (which is actually acceleration due to gravity) always acts in the downward direction towards the centre of the Earth.
(b) At maximum height, the velocity of the ball becomes zero. Acceleration due to gravity at a given place is constant and acts on the ball at all points (including the highest point) with a constant value, i.e. $9.8\ m/s^2$.
(c) During upward motion, the sign of position is positive, the sign of velocity is negative, and the sign of acceleration is positive.
During downward motion, the signs of position, velocity, and acceleration are all positive.

(d) Initial velocity of the ball, $u = 29.4\ m/s$
Final velocity of the ball, $v = 0$ (at maximum height, the velocity of the ball becomes zero)
Acceleration, $a = -g = -9.8\ m/s^2$
From the third equation of motion, the height (s) can be calculated as:
$v^2 - u^2 = 2as$
$s = (v^2 - u^2)/(2a) = ((0)^2 - (29.4)^2)/(2 \times (-9.8)) = 44.1\ m$
From the first equation of motion, the time of ascent (t) is:
$v = u + at$
$t = (v - u)/a = (0 - 29.4)/(-9.8) = 3\ s$
Time of ascent = Time of descent
Hence, the total time taken by the ball to return to the player's hands $= 3 + 3 = 6\ s$.

11   Read each statement below carefully and state with reasons and examples, if it is true or false; A particle in one-dimensional motion
(a) with zero speed at an instant may have non-zero acceleration at that instant
(b) with zero speed may have non-zero velocity,
(c) with constant speed must have zero acceleration,
(d) with positive value of acceleration must be speeding up.

##### Solution :

(a) True, when an object is thrown vertically up in the air, its speed becomes zero at maximum height. However, it has acceleration equal to the acceleration due to gravity (g) that acts in the downward direction at that point.
(b) False, speed is the magnitude of velocity. When speed is zero, the magnitude of velocity along with the velocity is zero.
(c) True, a car moving on a straight highway with constant speed will have constant velocity. Since acceleration is defined as the rate of change of velocity, acceleration of the car is also zero.
(d) False, this statement is false in the situation when acceleration is positive and velocity is negative at the instant of time taken as origin. Then, for all the time before the velocity becomes zero, the particle is slowing down. Such a case happens when a particle is projected upwards.
This statement is true when both velocity and acceleration are positive at the instant of time taken as origin. Such a case happens when a particle is moving with positive acceleration or falling vertically downwards from a height.

12   A ball is dropped from a height of 90 m on a floor. At each collision with the floor, the ball loses one tenth of its speed. Plot the speed-time graph of its motion between t = 0 to 12 s.
##### Solution :

Ball is dropped from a height, $s = 90\ m$
Initial velocity of the ball, $u = 0$
Acceleration, $a = g = 9.8\ m/s^2$
Final velocity of the ball = v
From the second equation of motion, the time (t) taken by the ball to hit the ground is obtained as:
$s = ut + (1/2) a t^2$
$90 = 0 + (1/2) \times 9.8 \times t^2$
$t = \sqrt{18.38} = 4.29\ s$
From the first equation of motion, the final velocity is given as:
$v = u + at = 0 + 9.8 \times 4.29 = 42.04\ m/s$
Rebound velocity of the ball, $u_r = 9v/10 = 9 \times 42.04 / 10 = 37.84\ m/s$
Time (t') taken by the ball to reach maximum height is obtained with the help of the first equation of motion as:
$v = u_r + at'$
$0 = 37.84 + (-9.8)\,t'$
$t' = -37.84 / -9.8 = 3.86\ s$
Total time taken by the ball $= t + t' = 4.29 + 3.86 = 8.15\ s$
As the time of ascent is equal to the time of descent, the ball takes 3.86 s to strike back on the floor for the second time.
The velocity with which the ball rebounds from the floor $= 9 \times 37.84 / 10 = 34.05\ m/s$
Total time taken by the ball for the second rebound $= 8.15 + 3.86 = 12.01\ s$
The speed-time graph of the ball is represented in the given figure as:

13   Explain clearly, with examples, the distinction between:
(a) magnitude of displacement (sometimes called distance) over an interval of time, and the total length of path covered by a particle over the same interval;
(b) magnitude of average velocity over an interval of time, and the average speed over the same interval. [Average speed of a particle over an interval of time is defined as the total path length divided by the time interval]. Show in both (a) and (b) that the second quantity is either greater than or equal to the first.
When is the equality sign true? [For simplicity, consider one-dimensional motion only].

##### Solution :

(a) The magnitude of displacement over an interval of time is the shortest distance (which is a straight line) between the initial and final positions of the particle. The total path length of a particle is the actual path length covered by the particle in a given interval of time. For example, suppose a particle moves from point A to point B and then comes back to a point C, taking a total time t, as shown below. Then, the magnitude of displacement of the particle = AC.
Whereas, total path length = AB + BC
It is also important to note that the magnitude of displacement can never be greater than the total path length. However, in some cases, both quantities are equal to each other.
(b) Magnitude of average velocity = Magnitude of displacement / Time interval
For the given particle,
Average velocity = AC / t
Average speed = Total path length / Time interval
= (AB + BC) / t
Since AB + BC > AC, average speed is greater than the magnitude of average velocity. The two quantities will be equal if the particle continues to move along a straight line.

14   In Exercises 13 and 14, we have carefully distinguished between average speed and magnitude of average velocity. No such distinction is necessary when we consider instantaneous speed and magnitude of velocity. The instantaneous speed is always equal to the magnitude of instantaneous velocity. Why?

##### Solution :

Instantaneous velocity is given by the first derivative of distance with respect to time, i.e. $v = dx/dt$. Here, the time interval is so small that it is assumed that the particle does not change its direction of motion. As a result, both the total path length and the magnitude of displacement become equal in this interval of time.
Therefore, instantaneous speed is always equal to instantaneous velocity. 15   Look at the graphs (a) to (d) (figure) carefully and state, with reasons, which of these cannot possibly represent one-dimensional motion of a particle. ##### Solution : a) The given x-t graph, shown in (a), does not represent one-dimensional motion of the particle. This is because a particle cannot have two positions at the same instant of time.$\\$ (b) The given v-t graph, shown in (b), does not represent one-dimensional motion of the particle. This is because a particle can never have two values of velocity at the same instant of time.$\\$ (c) The given v-t graph, shown in (c), does not represent one-dimensional motion of the particle. This is because speed being a scalar quantity cannot be negative.$\\$ (d) The given v-t graph, shown in (d), does not represent one-dimensional motion of the particle. This is because the total path length travelled by the particle cannot decrease with time. 16   Figure shows the x-t plot of one-dimensional motion of a particle. Is it correct to say from the graph that the particle moves in a straight line for t < 0 and on a parabolic path for t > 0 ? If not, suggest a suitable physical context for this graph. ##### Solution : No, because the x-t graph does not represent the trajectory of the path followed by a particles. From the graph, it is noted that at t=0, x=0. 17   A police van moving on a highway with a speed of 30 km h $^{- 1}$ fires a bullet at a thief’s car speeding away in the same direction with a speed of 192 km h $^{- 1}$ . If the muzzle speed of the bullet is 150 m s $^ {-1}$ , with what speed does the bullet hit the thief’s car ? (Note: Obtain that speed which is relevant for damaging the thief’s car). Speed of the police van, $v _p= 30 km / h= 8.33 m / s$$\\ Muzzle speed of the bullet, v _b = 150 m / s$$\\$ Speed of the thief’s car, $v _t = 192km/ h = 53.33 m / s$$\\ Since the bullet is fired from a moving van, its resultant speed can be obtained as: = 150 + 8.33 = 158.33 m / s$$\\$ Since both the vehicles are moving in the same direction, the velocity with which the bullet hits the thief’s car can be obtained as: $v bt = v b – v t = 158.33 – 53.33 = 105 m/s$$\\ 18 Suggest a suitable physical situation for each of the following graphs (figure): ##### Solution : (a) The given x-t graph shows that initially a body was at rest. Then, its velocity increases with time and attains an instantaneous constant value. The velocity then reduces to zero with an increase in time. Then, its velocity increases with time in the opposite direction and acquires a constant value. A similar physical situation arises when a football (initially kept at rest) is kicked and gets rebound from a rigid wall so that its speed gets reduced. Then, it passes from the player who has kicked it and ultimately gets stopped after sometime.\\ (b) In the given v-t graph, the sign of velocity changes and its magnitude decreases with a passage of time. A similar situation arises when a ball is dropped on the hard floor from a height. It strikes the floor with some velocity and upon rebound, its velocity decreases by a factor. This continues till the velocity of the ball eventually becomes zero.\\ (c) The given a-t graph reveals that initially the body is moving with a certain uniform velocity. Its acceleration increases for a short interval of time, which again drops to zero. This indicates that the body again starts moving with the same constant velocity. 
A similar physical situation arises when a hammer moving with a uniform velocity strikes a nail.

19   Figure gives the x-t plot of a particle executing one-dimensional simple harmonic motion. (You will learn about this motion in more detail in Chapter 14). Give the signs of position, velocity and acceleration variables of the particle at $t = 0.3\ s$, $1.2\ s$, $-1.2\ s$.

##### Solution :

At $t = 0.3\ s$: position negative, velocity negative, acceleration positive.
At $t = 1.2\ s$: position positive, velocity positive, acceleration negative.
At $t = -1.2\ s$: position negative, velocity positive, acceleration positive.
For simple harmonic motion (SHM) of a particle, acceleration (a) is given by the relation:
$a = -\omega^2 x$, where $\omega$ is the angular frequency ... (i)
At $t = 0.3\ s$:
In this time interval, x is negative. Thus, the slope of the x-t plot will also be negative. Therefore, both position and velocity are negative. However, using equation (i), the acceleration of the particle will be positive.
At $t = 1.2\ s$:
In this time interval, x is positive. Thus, the slope of the x-t plot will also be positive. Therefore, both position and velocity are positive. However, using equation (i), the acceleration of the particle comes out to be negative.
At $t = -1.2\ s$:
In this time interval, x is negative. Thus, the slope of the x-t plot will also be negative. Since both x and t are negative, the velocity comes out to be positive. From equation (i), it can be inferred that the acceleration of the particle will be positive.

20   Figure gives the x-t plot of a particle in one-dimensional motion. Three different equal intervals of time are shown. In which interval is the average speed greatest, and in which is it the least? Give the sign of average velocity for each interval.

##### Solution :

Average speed: interval 3 (greatest), interval 2 (least).
Sign of average velocity: positive (intervals 1 and 2), negative (interval 3).
The average speed of a particle shown in the x-t graph is obtained from the slope of the graph in a particular interval of time.
It is clear from the graph that the magnitude of the slope is greatest in interval 3 and least in interval 2. Therefore, the average speed of the particle is the greatest in interval 3 and the least in interval 2. The sign of average velocity is positive in both intervals 1 and 2 as the slope is positive in these intervals. However, it is negative in interval 3 because the slope is negative in this interval.

21   Figure gives a speed-time graph of a particle in motion along a constant direction. Three equal intervals of time are shown. In which interval is the average acceleration greatest in magnitude? In which interval is the average speed greatest? Choosing the positive direction as the constant direction of motion, give the signs of v and a in the three intervals. What are the accelerations at the points A, B, C and D?

##### Solution :

Average acceleration is greatest in magnitude in interval 2.
Average speed is greatest in interval 3.
v is positive in intervals 1, 2, and 3.
a is positive in intervals 1 and 3 and negative in interval 2.
a = 0 at A, B, C, D.
Acceleration is given by the slope of the speed-time graph. In the given case, it is given by the slope of the speed-time graph within the given interval of time. Since the slope of the given speed-time graph is maximum in interval 2, the average acceleration will be the greatest in this interval.
The height of the curve from the time-axis gives the average speed of the particle. It is clear that the height is the greatest in interval 3. Hence, the average speed of the particle is the greatest in interval 3.
In interval 1:
The slope of the speed-time graph is positive. Hence, acceleration is positive.
Similarly, the speed of the particle is positive in this interval.
In interval 2:
The slope of the speed-time graph is negative. Hence, acceleration is negative in this interval. However, speed is positive because it is a scalar quantity.
In interval 3:
The slope of the speed-time graph is zero. Hence, acceleration is zero in this interval. However, here the particle acquires some uniform speed. It is positive in this interval.
Points A, B, C, and D are all parallel to the time-axis. Hence, the slope is zero at these points. Therefore, at points A, B, C, and D, the acceleration of the particle is zero.

22   A three-wheeler starts from rest, accelerates uniformly with $1\ m\ s^{-2}$ on a straight road for 10 s, and then moves with uniform velocity. Plot the distance covered by the vehicle during the nth second (n = 1, 2, 3 ....) versus n. What do you expect this plot to be during accelerated motion: a straight line or a parabola?

##### Solution :

A straight line.
The distance covered by a body in the nth second is given by the relation
$D_n = u + \dfrac{a}{2}(2n - 1)$ ... (i)
Where,
u = Initial velocity
a = Acceleration
n = Time = 1, 2, 3, ..... , n
In the given case, $u = 0$ and $a = 1\ m/s^2$, so
$D_n = \dfrac{1}{2}(2n - 1)$ ... (ii)
This relation shows that $D_n$ depends linearly on n.
Substituting different values of n in equation (ii) gives $D_n = 0.5, 1.5, 2.5, 3.5, \ldots$ m for $n = 1, 2, 3, 4, \ldots$ up to n = 10.
The plot of $D_n$ against n during the accelerated motion (up to n = 10) will therefore be a straight line, as shown in the figure below.

23   A boy standing on a stationary lift (open from above) throws a ball upwards with the maximum initial speed he can, equal to $49\ m\ s^{-1}$. How much time does the ball take to return to his hands? If the lift starts moving up with a uniform speed of $5\ m\ s^{-1}$ and the boy again throws the ball up with the maximum speed he can, how long does the ball take to return to his hands?

##### Solution :

Initial velocity of the ball, $u = 49\ m/s$
Acceleration, $a = -g = -9.8\ m/s^2$
Case I:
When the lift was stationary, the boy throws the ball. Taking upward motion of the ball, the final velocity v of the ball becomes zero at the highest point.
From the first equation of motion, the time of ascent (t) is given as:
$v = u + at$
$t = (v - u)/a = -49 / -9.8 = 5\ s$
But the time of ascent is equal to the time of descent. Hence, the total time taken by the ball to return to the boy's hands $= 5 + 5 = 10\ s$.
Case II:
The lift was moving up with a uniform velocity of 5 m/s. In this case, the relative velocity of the ball with respect to the boy remains the same, i.e. 49 m/s. Therefore, in this case also, the ball will return to the boy's hands after 10 s.

24   On a long horizontally moving belt (figure), a child runs to and fro with a speed 9 km h⁻¹ (with respect to the belt) between his father and mother located 50 m apart on the moving belt. The belt moves with a speed of 4 km h⁻¹. For an observer on a stationary platform outside, what is the
(a) speed of the child running in the direction of motion of the belt?
(b) speed of the child running opposite to the direction of motion of the belt?
(c) time taken by the child in (a) and (b)?
Which of the answers alter if the motion is viewed by one of the parents?

##### Solution :

(a) Since the boy is running in the same direction as the motion of the belt, his speed (as observed by the stationary observer) can be obtained as:
$v_{bB} = v_b + v_B = 9 + 4 = 13\ km/h$
(b) Since the boy is running in the direction opposite to the direction of the motion of the belt, his speed (as observed by the stationary observer) can be obtained as:
$v_{bB} = v_b + (-v_B) = 9 - 4 = 5\ km/h$
(c) Distance between the child's parents = 50 m. As both parents are standing on the moving belt, the speed of the child in either direction as observed by the parents will remain the same, i.e. 9 km/h = 2.5 m/s.
Hence, the time taken by the child to move towards one of his parents is 50 / 2.5 = 20 s.
If the motion is viewed by any one of the parents, the answers obtained in (a) and (b) get altered. This is because the child and his parents are standing on the same belt and hence are equally affected by the motion of the belt. Therefore, for both parents (irrespective of the direction of motion) the speed of the child remains the same, i.e. 9 km/h.
For this reason, it can be concluded that the time taken by the child to reach any one of his parents remains unaltered.

25   Two stones are thrown up simultaneously from the edge of a cliff 200 m high with initial speeds of $15\ m\ s^{-1}$ and $30\ m\ s^{-1}$. Verify that the graph shown in figure correctly represents the time variation of the relative position of the second stone with respect to the first. Neglect air resistance and assume that the stones do not rebound after hitting the ground. Take $g = 10\ m\ s^{-2}$. Give the equations for the linear and curved parts of the plot.

##### Solution :

For the first stone:
Initial velocity, $u_I = 15\ m/s$
Acceleration, $a = -g = -10\ m/s^2$
Using the relation
$x_1 = x_0 + u_I t + (1/2) a t^2$
Where, height of the cliff, $x_0 = 200\ m$
$x_1 = 200 + 15t - 5t^2$ ...... (i)
When this stone hits the ground, $x_1 = 0$
$\therefore -5t^2 + 15t + 200 = 0$
$t^2 - 3t - 40 = 0$
$t^2 - 8t + 5t - 40 = 0$
$t(t - 8) + 5(t - 8) = 0$
$(t - 8)(t + 5) = 0$
$t = 8\ s$ or $t = -5\ s$
Since the stone was projected at time t = 0, the negative value of time is meaningless.
$\therefore t = 8\ s$
For the second stone:
Initial velocity, $u_{II} = 30\ m/s$
Acceleration, $a = -g = -10\ m/s^2$
Using the relation
$x_2 = x_0 + u_{II} t + (1/2) a t^2$
$= 200 + 30t - 5t^2$ ........ (ii)
At the moment when this stone hits the ground, $x_2 = 0$
$-5t^2 + 30t + 200 = 0$
$t^2 - 6t - 40 = 0$
$t^2 - 10t + 4t - 40 = 0$
$t(t - 10) + 4(t - 10) = 0$
$(t - 10)(t + 4) = 0$
$t = 10\ s$ or $t = -4\ s$
Here again, the negative value is meaningless.
$\therefore t = 10\ s$
Subtracting equations (i) and (ii), we get
$x_2 - x_1 = (200 + 30t - 5t^2) - (200 + 15t - 5t^2)$
$x_2 - x_1 = 15t$ ....... (iii)
Equation (iii) represents the linear part of the plot. Due to this linear relation between $(x_2 - x_1)$ and t, the path remains a straight line till 8 s.
Maximum separation between the two stones is at t = 8 s:
$(x_2 - x_1)_{max} = 15 \times 8 = 120\ m$
This is in accordance with the given graph.
After 8 s, only the second stone is in motion, and its separation from the first varies with time according to the quadratic equation:
$x_2 - x_1 = 200 + 30t - 5t^2$
Hence, the equations of the linear and curved parts of the plot are:
$x_2 - x_1 = 15t$ (linear part)
$x_2 - x_1 = 200 + 30t - 5t^2$ (curved part)

26   The speed-time graph of a particle moving along a fixed direction is shown in figure. Obtain the distance traversed by the particle between (a) t = 0 s to 10 s, (b) t = 2 s to 6 s. What is the average speed of the particle over the intervals in (a) and (b)?

##### Solution :

(a) Distance travelled by the particle = Area under the given graph
$= (1/2) \times (10 - 0) \times (12 - 0) = 60\ m$
Average speed = Distance / Time = 60 / 10 = 6 m/s
(b) Let $s_1$ and $s_2$ be the distances covered by the particle between time t = 2 s to 5 s and t = 5 s to 6 s respectively.
Total distance (s) covered by the particle in time t = 2 s to 6 s:
$s = s_1 + s_2$ ......... (i)
For distance $s_1$:
Let u' be the velocity of the particle after 2 s and a' be the acceleration of the particle in t = 0 to t = 5 s.
Since the particle undergoes uniform acceleration in the interval t = 0 to t = 5 s, from the first equation of motion, the acceleration can be obtained as:
$v = u + at$
Where, v = Final velocity of the particle
$12 = 0 + a' \times 5$
$a' = 12/5 = 2.4\ m\,s^{-2}$
Again, from the first equation of motion, we have
$u' = u + a't = 0 + 2.4 \times 2 = 4.8\ m/s$
Distance travelled by the particle between time 2 s and 5 s, i.e. in 3 s:
$s_1 = u't + (1/2) a' t^2 = 4.8 \times 3 + (1/2) \times 2.4 \times (3)^2 = 25.2\ m$ ........ (ii)
For distance $s_2$:
Let a'' be the acceleration of the particle between time t = 5 s and t = 10 s.
From the first equation of motion,
$v = u + at$ (where v = 0 as the particle finally comes to rest)
$0 = 12 + a'' \times 5$
$a'' = -12/5 = -2.4\ m\,s^{-2}$
Distance travelled by the particle in 1 s (i.e., between t = 5 s and t = 6 s):
$s_2 = u''t + (1/2) a'' t^2 = 12 \times 1 + (1/2) \times (-2.4) \times (1)^2 = 12 - 1.2 = 10.8\ m$ ......... (iii)
From equations (i), (ii), and (iii), we get
$s = 25.2 + 10.8 = 36\ m$
$\therefore$ Average speed = 36 / 4 = 9 m/s

27   The velocity-time graph of a particle in one-dimensional motion is shown in Fig. 3.29:
Which of the following formulae are correct for describing the motion of the particle over the time-interval $t_1$ to $t_2$:
(a) $x(t_2) = x(t_1) + v(t_1)(t_2 - t_1) + (1/2)\,a\,(t_2 - t_1)^2$
(b) $v(t_2) = v(t_1) + a(t_2 - t_1)$
(c) $v_{average} = (x(t_2) - x(t_1)) / (t_2 - t_1)$
(d) $a_{average} = (v(t_2) - v(t_1)) / (t_2 - t_1)$
(e) $x(t_2) = x(t_1) + v_{average}(t_2 - t_1) + (1/2)\,a_{average}(t_2 - t_1)^2$
(f) $x(t_2) - x(t_1)$ = area under the v-t curve bounded by the t-axis and the dotted line shown.

##### Solution :

The correct formulae describing the motion of the particle are (c), (d) and (f).
The given graph has a non-uniform slope. Hence, the formulae given in (a), (b), and (e) cannot describe the motion of the particle. Only the relations given in (c), (d), and (f) are correct equations of motion.
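A quick numeric check of problem 25 above (an illustrative Python sketch, not part of the NCERT solution):

    import math

    # Positions of the two stones above the ground (cliff height 200 m, g = 10 m/s^2)
    x1 = lambda t: 200 + 15 * t - 5 * t ** 2   # first stone,  u = 15 m/s
    x2 = lambda t: 200 + 30 * t - 5 * t ** 2   # second stone, u = 30 m/s

    # Positive roots of x(t) = 0 give the landing times of each stone
    t1 = (15 + math.sqrt(15 ** 2 + 4 * 5 * 200)) / (2 * 5)   # 8.0 s
    t2 = (30 + math.sqrt(30 ** 2 + 4 * 5 * 200)) / (2 * 5)   # 10.0 s

    # Maximum separation occurs when the first stone lands
    print(t1, t2, x2(t1) - x1(t1))   # 8.0 10.0 120.0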
# DDE Driver for ModBus

#### Renato Pellegrini

We are searching for a DDE driver able to manage Modbus RTU over a serial link, running in a Windows NT4 SP4 environment. The tool must be a slave using a fixed address (the master is an ABB DCS). This driver must be able to receive information from the DCS and to send information to the DCS. We also need well-detailed documentation. Regards.

Renato PELLEGRINI
PF Sistemi Srl
Viale Montenero, 34
20135 MILANO
phone +39 (0)2 59901522
fax +39 (0)2 55014899
mail [email protected]

#### Gary Graham

Hello Renato, Modicon has a DDE driver/server for Modbus RTU called Modlink. I suggest that you get hold of a Schneider/Square D representative in your area to get your hands on it. If you have a problem getting hold of someone, please contact me and give your details with your area and I will have someone call you.

These are the part numbers and the pricing:
352SMD49300 MODLINK SOFTWARE $395.00
352SMD49310 MODLINK LITE SOFTWARE $195.00

Andy Piereder
Pinnacle IDC

#### Rob Entzinger (Schneider Automation)

Look at Modlink, a Schneider product.

Rob
Day to day bioinformatics involves interfacing and executing many programs to process data. We end up with some refinement of the data from which we extract biological meaning through data analysis. Given how much interfacing bioinformatics involves, this process undergoes very little thought or design optimization. Much more attention is needed on the design of interfaces in bioinformatics, to improve their ease of use, robustness, and scalability. Interfacing is a low-level problem that we shouldn’t be wasting time on when there are much better high-level problems out there. More bluntly, interfacing is currently an inconvenient glue that full time bioinformaticians waste too many hours on. There is no better illustration of this than by looking at how much time we waste in file parsing tasks. Parsers are most commonly employed in bioinformatics as crappy interfaces to non-standard formats. We need better designed interfaces and cleaner interface patterns to help. Estimating from my own experience and observation of my colleagues, most bioinformaticists today spend between 90% and 100% of their time stuck in cruft. Methods are chosen because the file formats are compatible, not because of any underlying suitability. Second, algorithms vanish from the field…. I’m worried about the number of bioinformaticists who don’t understand the difference between an O(n) and an O(n^2) algorithm, and don’t realize that it matters. He’s parting with bioinformatics, leaving our field with one less person to fix things. However, if practices are suboptimal and frustrating now, it’s not because people are unprepared to implement better approaches, it’s because they’re content with the status quo because it does work. But, as I’ll argue, we shouldn’t be wasting our time on this and much more elegant solutions exist. The Current Interfacing Practice and its Paradigm The current practice is characterized by system calls from a scripting language to execute larger bioinformatics programs, parse any output files created, and repeat. In my mind, I see this as a paradigm in which state is stored on the file system, and execution is achieved by passing a stringified set of command-line arguments to a system call. This practice and paradigm has been the standard for bioinformatics for ages. It’s clunky and inelegant, but it works for routine bioinformatics tasks. However, the current practice isn’t well suited for large, embarrassingly parallel tasks that will grow increasingly common as the number of samples increases in genomics projects. Portability of these pipelines is usually terrible, and involves awkward tasks like ensuring all called programs are in a user’s PATH (good luck with version differences too). Program state is stored on the disk, the slowest component apart from the network. Here’s a better look at how to really understand how costly storing state on a disk can be (zoom into this image). Storing state on a slow, highly-mutable, non-concurrent component is only acceptable if it’s too big to store in memory. Bioinformatics certainly has tasks that produce files too large to store in memory. However, if a user had a task with little memory overhead, the complete lack of interfaces other than the command-line to all aligners, mappers, assemblers, etc would require the user to write to the disk. If they’re clever, they can invoke a subprocess from a script and capture standard out, but then they’re back to parsing text as a sloppy interface, rather than handling higher-level models or objects. 
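To make that pattern concrete, here is a minimal sketch of the serialize, shell-out, and parse cycle described above (the aligner name and its flags are placeholders, not a real tool's interface):

    import subprocess

    # The pattern criticized above: serialize records to FASTA, shell out,
    # and get plain text back that must be re-parsed.
    # "some-aligner" and its arguments are hypothetical, not a real CLI.
    reads = [("read1", "ACGTACGT"), ("read2", "TTGACCAA")]
    fasta = "".join(f">{name}\n{seq}\n" for name, seq in reads)

    proc = subprocess.run(
        ["some-aligner", "--reference", "ref.fa", "-"],   # hypothetical command
        input=fasta, capture_output=True, text=True, check=True)

    # Back to square one: the "interface" is tab-delimited text we must parse.
    alignments = [line.split("\t") for line in proc.stdout.splitlines()]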
This needs to change. While I'm maybe being a little harsh on the current paradigm, I should say that some parts of it are elegant. Unix piping is extremely elegant, and avoids any latency due to writing to the disk between execution steps. Workflows like this example from the samtools mpileup man page are clear and powerful:

    samtools mpileup -uf ref.fa aln1.bam aln2.bam | bcftools view -bvcg - > var.raw.bcf
    bcftools view var.raw.bcf | vcfutils.pl varFilter -D100 > var.flt.vcf

The real drawback of the current system appears when a user wants to leverage the larger command-line programs for something novel. Most aligners assume you want to align a few sequences, output the results, and then stop. A user who wants an aligner to align a few sequences and then proceed down different paths depending on the output has the hassle of writing the sequences to a FASTA file (or serializing them in the FASTA format), invoking a subprocess, and then either passing it a filename or, if the tool supports it, passing the serialized string to the subprocess through standard in. If the command-line tool has overhead for starting up, it is incurred on every subprocess call, even though it could be shared and amortized across calls.

## File Formats and Interfacing

To make matters worse, many bioinformatics formats like FASTA, FASTQ, and GTF are either ill-defined or implemented differently across libraries, making them risky interface formats. In contrast, consider the elegance of Google's protocol buffers. These allow users to describe their data structures in the protocol buffer interface description language and compile interfaces for C++, Java, and Python. This is the type of high-level functionality bioinformatics needs to interface incredibly complex data structures, yet we're still stuck in the text-parsing stone age.

## Foreign Function Interfaces and Shared Libraries

One way to avoid unnecessary, clunky system calls is through foreign function interfaces (FFIs) to shared libraries, using thin wrappers in a user's scripting language of choice. Large bioinformatics programs like aligners, assemblers, and mappers are most commonly written in C or C++, which allows the code to be compiled as a shared library relatively painlessly. FFIs could solve a few problems for bioinformaticians. First, they would allow much more code recycling in low-level projects: rather than writing your own highly optimized C FASTA/FASTQ parser, you can just link against a shared library with that routine. Additionally, that shared library can be separately developed and improved. Second, FFIs allow modularity and high-level access to low-level routines. Genome assemblers are packed to the gills with useful functions. So are aligners. Yet unless the developer took the time to expose this functionality via subcommands or an API (as git and samtools do), you're unlikely ever to be able to access it. Developers with an eye for better program design can write higher-level functions that can be used through an FFI. Novel bioinformatics tasks that require some sequence assembly, or a few parallel calls to an aligner, can then be tackled without the system call rubbish, and without re-implementing all the low-level algorithms. For higher-level functionality with FFIs and shared libraries, wrappers work beautifully. Rather than wrapping entire programs through the command line (as BioPython does), scripting-language libraries could interact more directly with low-level programs.
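As a rough illustration of what such a thin wrapper looks like, here is a minimal ctypes sketch. The shared library (libseqtools.so) and the C function it exports (reverse_complement) are hypothetical stand-ins, not part of any existing tool; only the calling pattern is the point.

    # Minimal FFI sketch with ctypes: call a routine from a compiled shared library
    # in-process, with no temporary files and no subprocess.
    import ctypes

    lib = ctypes.CDLL("./libseqtools.so")            # load the (hypothetical) shared library
    lib.reverse_complement.argtypes = [ctypes.c_char_p, ctypes.c_char_p]
    lib.reverse_complement.restype = ctypes.c_int

    def reverse_complement(seq: str) -> str:
        buf = ctypes.create_string_buffer(len(seq) + 1)   # output buffer owned by Python
        status = lib.reverse_complement(seq.encode("ascii"), buf)
        if status != 0:
            raise RuntimeError("reverse_complement failed")
        return buf.value.decode("ascii")

The result comes back as an ordinary Python string, so it can flow straight into the next step of an analysis instead of through a FASTA file on disk.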
In cases where the current paradigm just doesn't fit, we'd have the option of avoiding it by calling routines directly. Tools like samtools are very successful partly because they have a powerful API that lets programs like pysam call their routines. Now imagine that you could also load adapter and quality trimmers as wrappers around shared libraries. Rather than using Unix pipes or bash scripts to write quality-control pipelines, with every program in the execution chain reading, parsing, and then writing FASTA-formatted files, it could be done once, using object abstractions of the data:

    import sys
    import biolib
    import sickle
    import scythe

    biolib.write_fasta(sickle.trim(read_tmd.seq, read_tmd.qual, qual=20), sys.stdout)

In this artificial example, reads could then be sent directly to aligners rather than to standard out. Working with higher-level models of the data (in this case, a read) allows easier real-time debugging, statistics gathering, and parallelization. Imagine being able to put this entire block in a try statement and have exceptions handled at a higher level. An error could invoke a debugger, and a bioinformatician could inspect the culprit interactively, in real time. This is impossible in the old paradigm (and we've all spent ages using streaming tools to track down such bugs).

Note that I'm not arguing that your average biologist should suddenly start trying to understand position-independent code and compile shared libraries to avoid making system calls in their scripts. Sometimes a system call is the right tool for the job. But bioinformatics software developers should reach for a system call not because it's the only interface, but because it's the best interface for a particular task. Maybe someday we'll even see thin wrappers coming packaged with bioinformatics tools (even if under contrib/ and written by other developers) — I can dream, right?

## FFI in Practice

Here's a real FFI example. I needed to assemble many sets of sequences for a pet project, and I really wanted to avoid needlessly writing FASTA files for these sets of sequences to disk, as this is quite costly. However, most assemblers were designed solely to be interfaced through the command line. The inelegance of writing thousands of files, making thousands of system calls to an assembler, and then reading and parsing the results was not appealing, so I wrote pyfermi, a simple Python interface to Heng Li's Fermi assembler (note that this software is experimental, so use it with caution). First off, I couldn't have done this without Heng's excellent assembler and code, so I owe him a debt of gratitude. The Fermi source has a beautiful example of high-level functions that can be interfaced with relative ease: see the fm6_api_* functions used in example.c. I wrote a few extra C functions in pyfermi (mostly to deal with the void * handling necessary because Python's ctypes can't handle foreign types, as far as I know) and compiled Fermi as a shared library. I was able to do all this in far less time than it would have taken me to go down the route of writing thousands of files, making syscalls, and handling file parsing.

## The Importance of Good Tools

Overall, bioinformaticians need to be more conscious of the design and interfacing of our tools. I strongly believe tools and methods shape how we approach and solve problems. A data programmer only trained in Perl will likely, at some point, wage a messy war trying to produce decent statistical graphics.
Likewise, a statistician trained only in t-tests and ANOVA will see only normally distributed data (and will apply every transformation under the sun to force their data into that shape). I'm hardly the first person to argue that this happens: the idea is known as the law of the instrument. Borrowing a 1964 quote from Abraham Kaplan (and Wikipedia): "I call it the law of the instrument, and it may be formulated as follows: Give a small boy a hammer, and he will find that everything he encounters needs pounding." Early in our bioinformatics careers, we're trained in file formats, writing scripts, and calling large programs via the command line. This becomes a hammer, and all problems start looking like nails. However, the old practices and paradigms are breaking down as the scale of our data and the complexity of our problems increase. We need a new-school bioinformatics with a focus on the bigger picture, so let's start talking about how we do it.
# Density Plot Y Axis

A density plot is a representation of the distribution of a numeric variable: a smoothed version of the histogram, built from a kernel density estimate. The y-axis of a density plot is therefore the estimated probability density function, not a count and not a probability. The probability of falling in a given range is the area under the curve over that range, so it depends on the x-axis as well, and the total area under the curve is always 1. Because of this, density values can legitimately exceed 1 when the data are concentrated in a narrow x range, and as the horizontal spread increases (a larger sigma for a normal curve) the height of the curve decreases. Except for a linear transformation, all normal density functions have the same shape. The peaks of a density plot show where values are concentrated over the interval, which is particularly useful for datasets that do not follow the Gaussian "normal" distribution most researchers have become accustomed to.

Kernel density estimation is a form of convolution, usually with a symmetric kernel such as a Gaussian. Analogous to the binwidth of a histogram, a density plot has a bandwidth parameter that controls the smoothness of the individual kernels and significantly affects the final result. Because the estimate smooths beyond the observed data, the curve generally extends the x-axis past the usual limits of a histogram, and it can show apparent mass at negative values even when the data contain none; setting the "from" argument of R's density(), or the x-axis limits explicitly, avoids this. When a density estimate and a histogram are shown together, it helps to compute the density first and match the histogram's x-axis limits to those of the estimate.
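A minimal sketch, using NumPy and SciPy's gaussian_kde on simulated data, of why the y-axis is a density rather than a probability: for tightly clustered values the curve rises well above 1, yet the area under it still integrates to roughly 1.

    import numpy as np
    from scipy.stats import gaussian_kde

    rng = np.random.default_rng(0)
    x = rng.normal(loc=0.0, scale=0.1, size=1000)    # narrow spread -> tall density curve

    kde = gaussian_kde(x)                            # bandwidth chosen automatically
    grid = np.linspace(x.min() - 0.5, x.max() + 0.5, 512)
    density = kde(grid)

    print(density.max())                             # typically well above 1 for sd = 0.1
    print(np.trapz(density, grid))                   # ~1.0: total area under the curve

Rescaling the same data to a wider range lowers the peak without changing the total area, which is the sense in which the y-axis value by itself carries no probability.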
Density plots and histograms are closely related. A histogram bins the data, with the x-axis showing the bin ranges and the y-axis showing frequency counts; with freq=FALSE in R's hist() (or density=True in matplotlib), the y-axis instead shows probability densities, normalized so that the total area of the bars is 1. This normalization is what makes a histogram directly comparable to a density curve: it converts the count in each bin so that the histogram's y-axis matches the y-axis of an overlaid density line. Equivalently, the frequency density of a bin is its frequency divided by its width, and converting a density back to an expected count often gives an easier interpretation.

When a density plot and a histogram are combined in the same graph, the histogram's y-axis must be on the density (or proportion) scale so the two share a common scale; this requirement applies even when the histogram and density plots sit in different cells of a multicell graph. In ggplot2, the same effect is achieved by mapping aes(y = ..density..) inside geom_histogram() before adding the density curve, and geom_density() draws the estimate as a line with the data values on the x-axis and the density on the y-axis. The smoothness of that line is controlled by the bandwidth, as above.
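A small matplotlib sketch of this overlay, on simulated data: the histogram is drawn on the density scale so that the kernel density curve can share its y-axis.

    import numpy as np
    import matplotlib.pyplot as plt
    from scipy.stats import gaussian_kde

    rng = np.random.default_rng(1)
    x = rng.normal(loc=10, scale=2, size=500)

    fig, ax = plt.subplots()
    ax.hist(x, bins=30, density=True, alpha=0.4,
            label="histogram (density scale)")        # total bar area = 1

    grid = np.linspace(x.min(), x.max(), 400)
    ax.plot(grid, gaussian_kde(x)(grid), label="kernel density estimate")

    ax.set_xlabel("x")
    ax.set_ylabel("density")                          # same units as the KDE curve
    ax.legend()
    plt.show()

With density=True removed, the bars revert to raw counts and no longer line up with the density curve, which is the most common cause of "my density plot doesn't match my histogram" confusion.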
Beyond the density scale itself, most questions about the y-axis of these plots come down to ordinary axis control. Axis limits can be set explicitly (xlim and ylim in base R or matplotlib; scale_x_continuous(breaks = seq(0, 175, 25)) in ggplot2 to place ticks every 25 units rather than every 50), and it is often worth making the range of the axes the same across graphs that will be compared. When the data span several orders of magnitude, a log scale on one or both axes (scale_y_log10 in ggplot2, or passing "log" as the axis type) keeps a few extreme values from compressing the rest of the plot. By default, facet_wrap() in ggplot2 assigns the same y-axis to every density panel; adding a scales argument gives each panel an independent axis, with the caveat that each panel is then scaled only to its own subgroup. A percentage or proportion scale displays values as a share of the total rather than as raw densities or counts, and an axis can be reversed so that zero sits at the top, either by flipping the axis or by negating the plotted values.

When two series with very different scales must share one plot, a second y-axis can be added: plot one series against the left axis and the other against a right-hand axis, rescaling one of them so both fit the plotting area. Related axis controls include the placement of the axis lines themselves (for example, MATLAB's XAxisLocation and YAxisLocation properties can move the axes to the origin) and drawing extra labelled axes with axis() in base R.
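A short matplotlib sketch of the two-axis case, with made-up monthly values: one series is plotted against the left y-axis and the other against a right-hand axis created with twinx().

    import numpy as np
    import matplotlib.pyplot as plt

    months = np.arange(1, 13)
    temperature = 10 + 10 * np.sin((months - 3) / 12 * 2 * np.pi)   # illustrative values
    precipitation = 50 + 40 * np.cos(months / 12 * 2 * np.pi)       # illustrative values

    fig, ax_left = plt.subplots()
    ax_left.plot(months, temperature, color="tab:red")
    ax_left.set_ylabel("temperature (°C)", color="tab:red")
    ax_left.set_xticks(months)                     # one tick per month

    ax_right = ax_left.twinx()                     # right-hand y-axis, shared x-axis
    ax_right.plot(months, precipitation, color="tab:blue")
    ax_right.set_ylabel("precipitation (mm)", color="tab:blue")

    plt.show()

The alternative, rescaling one series onto the other's axis by hand, works too, but a second axis keeps the original units readable for both series.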
How to create a crime heatmap in R - SHARP SIGHT - […] More recently, I recommended learning (and mastering) the 2-density plot. This requirement applies even when the histogram and density plots are in different cells in a multicell graph. Created Date: 1/12/2004 12:04:00 AM. Instead of plotting frequency on the y-axis, we plot the frequency density. violinplot ( x = "Species" , y = "PetalLengthCm" , data = iris , size = 6 ). cholesterol levels, glucose, body mass index) among individuals with and without cardiovascular disease. Also, make the Y-axis on at least one plot the same scale as on the well logs to aid in interpretation. This normalization is chosen so that the total area under the histogram is equal to 1, as we can confirm by looking at the output of the histogram function:. set xyrev Reverses the X and Y axes on a plot set yaxis Specifies where the labeled tick marks will be placed on the Y-axis set yflip Flips the order of the vertical axis set ylab Controls the format of Y-axis tick mark labels set ylabs Gives specific text for Y-axis labels. Then density is manipulated to a maximum plotting value suitable fit to axis, for. Plots: Custom & Additional Axes ! axis() lets us draw new or additional axes on the plot, labeled however we want ! Examples: ! Two different y-axis labels—one on the left and one on the right ! Each x-axis position is a different sentence position, and we want to write an example sentence (or sentence) below the x-axis !. Generic function for plotting of R objects. The resulting plot can be found in Fig. A legend is created by default including the density curves, and we have used the KEYLEGEND statement to customize its position inside the data area. A density plot is a representation of the distribution of a numeric variable. Not relevant when drawing a univariate plot or when shade=False. Trackbacks/Pingbacks. Check out the Wikipedia article on probability density functions. For example, the keyword argument title places a title on top of the bar chart. In the above plot we can see that the labels on x axis,y axis and legend have changed; the title and subtitle have been added and the points are colored, distinguishing the number of cylinders. It also helps to plot the distribution of variables for each category as individual data points. What does the y axis in a kernel density plot mean? [duplicate] Ask Question Asked 7 years, 7 months ago. qqplot produces a QQ plot of two datasets. One of the classic ways of plotting this type of data is as a density plot. In this case, the Y-axis runs from 0 to 1 (or somewhere in between if there are no extreme proportions). Using a Ratio line in the Analytics pane results in the chart reverting to the original algorithm. In case subplots=True, share y axis and set some y axis labels to invisible. Likewise, for the Y axis (dimension), we have bins of equal width w Y = 1. It uses a kernel density estimate to show the probability density function of the variable. 02 KB; Cite. Whenever you want to understand the nature of relationship between two variables, invariably the first choice is the scatterplot. column and row totals), they are often re-ferred to as the Marginal Distributions for Xand Y. Used only when y is a vector containing multiple variables to plot. Is there a way to make it show intervals of 1? Answers: You could explicitly set where you want. But you can change it (giving independent axis) to each density plot by adding one more attribute called scale. 
Example 2: Modify Main Title & Axis Labels of Density Plot. Change the location of the axis lines so that they cross at the origin point (0,0) by setting the XAxisLocation and YAxisLocation properties of the Axes object. Mass (kg) Volume(m^3) Density (kg/m^3) 1. The density ridgeline plot is an alternative to the standard geom_density() function that can be useful for visualizing changes in distributions, of a continuous variable, over time or space. or if you want to play with then do like the following if you are interested: make another plot t 2 in x-axis and L in y-axis and here i guess $$g=4\pi^2\times(\rm slope)$$. A simple right circular cone can be obtained with the following function. Create a standard column chart based on the data in A1:B6. A useful way to summarize genome-wide association data is with a Manhattan plot. The peaks of a Density Plot help display where values are concentrated over the interval. X- and Y-Axes. On a linear scale as the distance in the axis increases the corresponding value also increases linearly. The axis should be named as to avoid any confusion on which axis is the X or Y or the Z-axis for a user. The optional return value h is a graphics handle to the created plot. Two-sample Q-Q plots compare quantiles of two samples (rather than one sample and a theoretical distribution). arange(start = 0,stop = NFFT) # raw index for FFT plot ax. It can cycle through a set of predefined line/marker/color specifications. Krish Naik 19,891 views. A Density Plot visualises the distribution of data over a continuous interval or time period. A fun experiment is to have students find the mass and volume of different numbers of pennies ( which vary in mass density). Used only when y is a vector containing multiple variables to plot. As an example, I’ll use the air temperature and density data that I used to demonstrate linear interpolation. To learn more about bar plots and how to interpret them, learn about bar plots. If we want to plot that data in gnuplot we have to keep track of the current position manually by storing its (x,y) value as variables by. For any data set you are going to graph, you have to decide which of the two variables you are going to put on the x-axis and which one you are going to put on the y-axis. You can change the aspect ratio using the pbaspect function. y=width 15 0. 6) Make a scale on your y-axis above and below each baseline that will cover this value. Stacked bar plot with group by, normalized to 100%. This Y value is a weighted average of all data values that are “near” this grid point. 3D Scatter Plot. A Density Plot visualises the distribution of data over a continuous interval or time period. Displaying Y axis values on both sides of plot area In order to display Y axis values on both sides of the plot area you need to add an additional data series and plot it on the secondary axis. A "reversed fit" flips the usual order of axes, by fitting concentration as a function of measured signal. " You may need to set options for z axis, such as range, zeroaxis, etc. I conclude the graphics section discussing bar graphs, box plots, and kernel density plots using area graphs with transparency. Mass (kg) Volume(m^3) Density (kg/m^3) 1. The value gives the axis that the geom should run along, "x" being the default orientation you would expect for the geom. Introduction. A histogram is basically used to represent data provided in a form of some groups. For example, I often compare the levels of different risk factors (i. 
In a probability density histogram or curve, the larger the numbers on the x axis, the smaller the numbers on the y axis must be to keep the total area at 1. is an estimate of the probability density function. ): Select output form: List model data Create model data file in ASCII format for downloading Plot model data Note 1: The first selected parameter below always will be along the X-axis, the other selections will be along Y-axis. Set XAxisLocation to either 'top', 'bottom', or 'origin'. Note that we don't need to specify x and y separately when plotting using zoo; we can just pass the object returned by zoo() to plot(). Introduction. Let’s customize this further by adding a normal density function curve to the above histogram. semilogy (\*args, \*\*kwargs) Make a plot with log scaling on the y axis. Use pch with points to add points to an existing plot. 5)Scan your data to find the highest density value. This has been a Guide to 3D Plot in Excel. It is a smoothed version of the histogram and is used in the same concept. Your unknown material is one of the substances listed in the table below. The theme() function accepts one of the four element_type() functions mentioned above as arguments. There are lots of examples showing basic chart functionality as well as zoom proxies, dynamic replotting, Mekko charts, trend lines, block plots, log axes, filled line (area) plots, andwell you get the picture. density function. The x axis is the log of the fold change between the two conditions. the 0-value should be at the top and the largest value at the bottom? The way to accomplish this is to use a negative y-scale which the plot is made against (by negating all the data values in the data serie). It can be drawn using geom_point(). " You may need to set options for z axis, such as range, zeroaxis, etc. Using two separate y-axes can solve your scaling problem. A histogram is basically used to represent data provided in a form of some groups. I would like to create a ggplot line plot with two y axis, with one y on the left side and one y on the right side. Write the equation for the relationship between density and temperature for oxygen, in terms of D and 1/T. Bokeh is a fiscally sponsored project of NumFOCUS, a nonprofit dedicated to supporting the open-source scientific computing community. How to perform a scatter plot based on density Learn more about scatterplot, density, scatplot, dscatter MATLAB. 005 1032 10. 2 Loading Data into GGplot and Basic Elements of a Scatterplot. points() plots one or more sets of points. If the center of the plot corresponds to. x_var (optional) : symbol to plot on x-axis or tuple giving symbol and range as (symbol, xmin, xmax) y_var (optional) : symbol to plot on y-axis or tuple giving symbol and range as (symbol, ymin, ymax) If neither x_var nor y_var are given then the free symbols in the expression will be assigned in the order they are sorted. Note: The diagonal Axes are treated differently — by drawing a plot to show the univariate distribution of the data for the variable in that column. For example, let’s plot the cosine function from 2 to 1. Figure 3 shows various formats (e. show() and the x axis’ ticks are plotted in intervals of 5. 005 1032 10. In the Select points tab, select the start and the end points. In the excel 3D surface plot, the 3D rotation needs to be adjusted as per the range of data as it can be difficult to read from the chart if the perspective isn’t right. 
The figure brings together an aesthetic mapping of x and y variables, a grouping aesthetic (country), two geoms (a lineplot and a smoother), a log-transformed y-axis with appropriate tick labels, a faceting variable (continent), and finally axis labels. NULL will default to the top and right side plots only. As a result, we need to compress the plot by scaling the y-axis logarithmically using scale_y_log10. The PP plot is a QQ plot of these transformed values against a uniform distribution. A value of 1 means the domain is divided into 30x30 sectors, with only one streamline crossing each sector. If you are unfamiliar with any of these types of graph, you will find more information about each one (when to use it, its purpose, what it shows, etc.). In the code below, we move the left and bottom spines to the center of the graph by applying set_position('center'), while the right and top spines are hidden by setting their colours to none with set_color('none'). Goal: plot kernel densities for two samples with confidence intervals. Once this has occurred, the scales for the x- and y-axis of the distribution's plotting paper can be constructed and the plotting can commence. It is one of the most simple plots provided by the seaborn library. Therefore, whilst R's function allows you to add a rugplot to an existing plot, the following code takes a sample of n observations from a defined population (Y) and plots them as a rug plot. I would like to use my variable just as it is, without scaling; also, one of the variables has some missing values. Could anyone help me to add the minor log ticks to the X-axis of this plot? Thank you so much! DensityPlot[ 20*y/(0. The next step is to establish the units of measurement along each axis. This is done to allow for the thickness of the overlaid density plot line(s), so the lines do not clip at the bottom. The most frequently used plot for data analysis is undoubtedly the scatterplot. Note: The PROPORTION scale can be used only when you combine a density plot and a histogram together. From the slope of the best-fit line find the density. Then, invoke Matplotlib's customization functions. One R Tip A Day uses a custom R function to plot two or more overlapping density plots on the same graph. A scatter plot pairs up values of two quantitative variables in a data set and displays them as geometric points inside a Cartesian diagram. Create a new graph for data set Sitka, a scatter plot of Time (x-axis) vs size (y-axis), where all the points are colored "green". The following example shows how to align the plot title in layout. One such plot is the density plot. Title for the plot. If add=FALSE, then. That logic applies to discrete distributions. Here is an example applied on a barplot, but the same method works for other chart types. Example: plot percentage count of records by state. This normalization is chosen so that the total area under the histogram is equal to 1, as we can confirm by looking at the output of the histogram function. Figure 1: Basic Kernel Density Plot in R. type="h" plots vertical lines from points to the zero axis (high-density); type="n" means no plotting at all.
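The "two separate y-axes" idea raised above can be sketched with matplotlib's secondary axis. This is a generic illustration with synthetic data, not code from any of the quoted sources.

```python
# Minimal sketch: two series with very different scales on one x-axis,
# one against the left axis and one against a secondary axis on the right.
import numpy as np
import matplotlib.pyplot as plt

x = np.linspace(0, 10, 100)
y1 = np.sin(x)              # order-1 values
y2 = 1000 * np.exp(x / 3)   # values thousands of times larger

fig, ax_left = plt.subplots()
ax_right = ax_left.twinx()   # second y-axis sharing the same x-axis

ax_left.plot(x, y1, color="tab:blue", label="y1 (left axis)")
ax_right.plot(x, y2, color="tab:red", label="y2 (right axis)")

ax_left.set_ylabel("y1", color="tab:blue")
ax_right.set_ylabel("y2", color="tab:red")
ax_right.set_yscale("log")   # log scale keeps the fast-growing series readable
plt.show()
```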
What we need is to draw a second y-axis on the right side of the graph with the appropriate scale. Probability density function at x of the given RV. A violin plot is a visual that traditionally combines a box plot and a kernel density plot. xlab: a title for the x axis (see title). How do I select the y axis format? In @RISK 5. Axes: the X and Y axes (some plots may have a third axis too!). Legend: contains the labels of each plot. Each element of a plot can be manipulated in Matplotlib, as we will see later. If you want to plot the densities instead of the frequencies you can use freq = FALSE as you would when using the hist() command. Box Plot: a box plot is a chart that illustrates groups of numerical data through the use of quartiles. Histograms and Density Plots. If "y", the right-hand side labels will be displayed to the left. The result is shown in figure 4.
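The overlapping kernel density plots for two samples mentioned above can be sketched with SciPy's gaussian_kde and semi-transparent area fills. This is a minimal, self-contained example with synthetic samples, not the R function referred to in the text.

```python
# Minimal sketch: overlaid kernel density estimates for two samples.
import numpy as np
import matplotlib.pyplot as plt
from scipy.stats import gaussian_kde

rng = np.random.default_rng(1)
sample_a = rng.normal(0.0, 1.0, 500)
sample_b = rng.normal(1.5, 0.7, 500)

grid = np.linspace(-4, 5, 300)
fig, ax = plt.subplots()
for name, sample in [("sample A", sample_a), ("sample B", sample_b)]:
    kde = gaussian_kde(sample)                    # kernel density estimate
    ax.fill_between(grid, kde(grid), alpha=0.4, label=name)  # area plot with transparency

ax.set_xlabel("value")
ax.set_ylabel("estimated density")
ax.legend()
plt.show()
```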
# QOTD-Geometry-Find the length of side SB 2019-07-03 | Team PendulumEdu A rectangular plane is cut into four triangles in such a way that the areas of triangles APQ, ASB and BRQ are equal. The length of side BR is 4 m. Find the length of SB. A. $$(1+2\sqrt{5})\,m$$ B. $$(\sqrt{5}-2)\,m$$ C. $$2(\sqrt{5}-1)\,m$$ D. $$2(1+\sqrt{5})\,m$$ Solution Let the length of side SB = x, the length of side AS = y, and the length of side PA = z. Then $$PQ = x + 4$$ and $$QR = y + z$$. According to the question, Area of triangle APQ = Area of triangle ASB = Area of triangle BRQ, i.e. $$\frac{1}{2}z(x+4)= \frac{1}{2}yx = \frac{1}{2}(y+z)\cdot 4$$ $$z(x+4)= 4(y+z) \quad (1)$$ $$z(x+4)= yx \quad (2)$$ Using equation (1), we get $$4y+4z=zx+4z$$ $$z=\frac{4y}{x}$$ Now, substituting the value of z in equation (2), we get $$\frac{4y}{x}(x+4)=xy$$ $$x^{2} - 4x - 16=0$$ $$x=2(1\pm\sqrt{5})$$ Length cannot be negative, therefore $$SB=2(1+\sqrt{5})\,m$$ Hence, (D) is the correct option. This type of question is asked in the Quantitative Aptitude section of different competitive exams like SSC CGL, SSC CHSL, IBPS PO, RRB JE, RRB NTPC, RRC Group D, CAT, etc. Therefore, one can try and practice our free mock tests at PendulumEdu and find out the easiest and quickest way to solve such questions.
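For readers who want to verify the algebra, here is a quick symbolic check of the equal-area conditions (not part of the original solution), using Python's sympy.

```python
# Symbolic check: equate the three triangle areas, eliminate z, solve for x = SB.
import sympy as sp

x, y, z = sp.symbols("x y z", positive=True)

area_APQ = sp.Rational(1, 2) * z * (x + 4)
area_ASB = sp.Rational(1, 2) * y * x
area_BRQ = sp.Rational(1, 2) * (y + z) * 4

z_sol = sp.solve(sp.Eq(area_APQ, area_BRQ), z)[0]   # z = 4*y/x
eq = sp.Eq(area_APQ.subs(z, z_sol), area_ASB)
print(sp.solve(eq, x))   # positive root 2 + 2*sqrt(5), i.e. SB = 2(1 + sqrt(5))
```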
# Advanced Question Help with trajectory planner!!! ##### Member Greetings orbiter-forum community, First, some backstory: During my free time, I occasionally launch orbiter to find some cool 'launch windows' to perform slingshot maneuvers mostly to outer planets using assists from inner planets. I've posted one scenario titled 'The ultimate great grand tour' in the tutorials section of this forum and have many other scenarios which are still just on my PC. I very recently came across the Trajectory optimization tool after reading some posts, so I immediately downloaded the tool to find out whether it can simplify the process of finding launch windows for complex slingshot maneuvers. After reading the manual once, I went ahead to find some launch windows with some constraints that I've applied. I typically try to keep the fuel used as well as the TOF to a minimum; the fuel utilized is normally what it would take to get to venus, then I would sling around wherever else I wanted to go. After coming up with a few trajectories in Trajectory optimization tool, I tried to plan the same in TransX but I was unable to do so due to a couple of reasons, that's why I'm writing this post in hopes of solving these issues. So, the first trajectory I tried to find is a launch date for EVVEJ. Zoomed in image So you can see that the transfer first from earth to venus looks fine, but after that, TOT doesn't display any orbit for venus to venus, then it again shows an orbit for venus to earth then jupiter. Is this some kind of limitation that TOT has? That it can't plan slingshots to the same planet again for gravity assists? While trying to plan this in orbiter, I was able to get from earth to venus with arrival at the specified date (as per TOT), also venus to venus at the specified date (although I had to lose energy relative to the sun instead of gaining, which is absurd), then got stuck when I tried to get to earth. I would have shared the text data but it got lost due to a TOT crash so the image is all I have as of now, but if it's absolutely necessary, I don't mind planning the same trajectory again to post the text data. This just seems to be a general issue (getting to the same planet back) since some other trajectories I planned also had this issue of no orbit shown by TOT. Problem 2: Ok, so to avoid getting back to the same planet, this time I decided to plan EVMEJ. So, again I was able to find a trajectory using TOT, but was not able to plan the same using transX in orbiter. Here is an image of the TOT plan, Zoomed in Text data: Starting optimization... Optimization finished in 504.2767 seconds with message: Average change in trajectory fitness function less than tolerance. Results of the analysis are as follows: The optimal departure from EARTH occurs at 6/12/2015 19:11:52 (C3=15.7723 km^2/s^2) The optimal flyby of VENUS occurs at 9/14/2015 11:34:6 (deltaV=82.9022 km/s, pass The optimal flyby of MARS occurs at 3/25/2016 2:55:36 (deltaV=154.3439 km/s, pass The optimal flyby of EARTH occurs at 1/28/2019 10:54:45 (deltaV=47.7631 km/s, pass The optimal arrival at JUPITER BARYCENTER occurs at 7/3/2021 5:16:30 (arrival velocity=6.1866 km/s) The optimal trip duration is 2212.4199 days. Again, while trying to plan this using TransX, I wasn't able to succeed. I was able to plan Earth to Venus very accurately. But again, getting to mars from venus was not possible using transX as depicted by TOT.
Here is the scenario where I have planned the trajectory from earth to venus, but getting to mars doesn't seem to be possible. Code: BEGIN_DESC END_DESC BEGIN_ENVIRONMENT System Sol Date MJD 57185.6090907301 END_ENVIRONMENT BEGIN_FOCUS Ship GL-01 END_FOCUS BEGIN_CAMERA TARGET GL-01 MODE Cockpit FOV 60.00 END_CAMERA BEGIN_MFD Left TYPE User MODE TransX Ship GL-01 FNumber 4 Int 1 Orbit True Vector 4377295.24512 1395005.12175 4413960.6041 Vector -254.847096252 -126.486384635 292.705407442 Double 3.98600439969e+014 Double 57185.6090904 Handle Earth Handle NULL Handle NULL Select Target 0 Escape Autoplan 0 0 Plan type 0 0 Plan 0 1 Plan 0 0 Plan 0 0 Select Minor 0 None Manoeuvre mode 0 0 Auto-Center™ 0 0 Base Orbit 0 0 1 0 Man. date 1 57185.6090889 Outward vel. 1 0 Ch. plane vel. 1 0 Intercept with 0 0 Orbits to Icept 0 0 Graph projection 0 0 Scale to view 0 0 0 0 Pe Distance 1 7645212 Ej Orientation 1 0 Equatorial view 0 0 Finvars Finish BaseFunction Int 2 Orbit False Handle Sun Handle Earth Handle Venus Select Target 0 Venus Autoplan 0 0 Plan type 0 2 Plan 0 0 Plan 0 0 Plan 0 1 Select Minor 0 None Manoeuvre mode 0 0 Auto-Center™ 0 0 Base Orbit 0 1 1 0 Man. date 1 57185.6086678 Outward vel. 1 0 Ch. plane vel. 1 0 Intercept with 0 0 Orbits to Icept 0 0 Graph projection 0 0 Scale to view 0 0 0 0 1 -2951.38117033 Eject date 1 57185.601335 Outward vel. 1 -2147.21737164 Ch. plane vel. 1 1520.28771854 Finvars Finish BaseFunction Int 4 Orbit True Vector 6020403214.3 -173305917.758 -1302439487.84 Vector -5725.18527136 165.49087746 1191.97810921 Double 3.2485863e+014 Double 57267.0783594 Handle Venus Handle NULL Handle NULL Select Target 0 Escape Autoplan 0 0 Plan type 0 1 Plan 0 0 Plan 0 1 Plan 0 0 Select Minor 0 None Manoeuvre mode 0 0 Auto-Center™ 0 0 Base Orbit 0 0 1 0 Man. date 1 57185.6086708 Outward vel. 1 0 Ch. plane vel. 1 0 Intercept with 0 0 Orbits to Icept 0 0 Graph projection 0 0 Scale to view 0 0 0 0 View Orbit 0 0 Finvars Finish BaseFunction Int 3 Orbit True Vector 106940198834 -5938074268.09 17048642654.4 Vector -10507.1821977 905.806012452 37703.0700697 Double 1.32712764814e+020 Double 57279.2023186 Handle Sun Handle Venus Handle Mars Select Target 0 Mars Autoplan 0 0 Plan type 0 2 Plan 0 0 Plan 0 0 Plan 0 2 Select Minor 0 None Manoeuvre mode 0 0 Auto-Center™ 0 0 Base Orbit 0 0 1 0 Man. date 1 57185.6090889 Outward vel. 1 0 Ch. plane vel. 1 0 Intercept with 0 0 Orbits to Icept 0 0 Graph projection 0 0 Scale to view 0 0 0 0 Velocity. 1 0 Outward angle 1 0.0383972435443 Inc. angle 1 0 Inherit Vel. 
0 0 Eject date 1 57279.2023186 Finvars Finish BaseFunction END_MFD BEGIN_MFD Right TYPE User MODE TransX END_MFD BEGIN_SHIPS ISS:ProjectAlpha_ISS STATUS Orbiting Earth RPOS 6481270.69 -1617261.61 -800244.29 RVEL -1508.387 -7186.677 2320.934 AROT 28.00 0.94 40.81 AFCMODE 7 IDS 0:588 100 1:586 100 2:584 100 3:582 100 4:580 100 NAVFREQ 0 0 XPDR 466 END Mir:Mir STATUS Orbiting Earth RPOS 6329443.03 -125089.84 -2082000.96 RVEL 2416.566 452.870 7338.343 AROT 0.53 -44.47 89.92 AFCMODE 7 IDS 0:540 100 1:542 100 2:544 100 XPDR 482 END Luna-OB1:Wheel STATUS Orbiting Moon RPOS -44004.14 -2237543.73 -18.02 RVEL 1479.820 -29.087 -0.002 AROT 0.00 0.00 -167.96 VROT -0.00 -0.00 10.00 AFCMODE 7 IDS 0:560 100 1:564 100 XPDR 494 END GL-01:DeltaGlider STATUS Landed Earth BASE Cape Canaveral:1 POS -80.6758964 28.5227640 AFCMODE 7 PRPLEVEL 0:1.000000 1:1.000000 NAVFREQ 402 94 0 0 XPDR 0 GEAR 1 1.0000 AAP 0:0 0:0 0:0 END SH-02:ShuttleA STATUS Landed Earth BASE Cape Canaveral:5 POS -80.6745292 28.5197208 AFCMODE 7 PRPLEVEL 0:1.000000 1:1.000000 NAVFREQ 0 0 XPDR 0 PODANGLE 0.0000 0.0000 DOCKSTATE 0 0.0000 AIRLOCK 0 0.0000 GEAR 0 0.0000 END END_SHIPS BEGIN_ExtMFD END So I wanted to know if there's some mistake I'm doing as of now because I'm not able to find one from the manual. Thank you for your time. #### boogabooga ##### Bug Crusher The problem is with your expectations. A multiple inner planet slingshot sequence that makes sense occurs only rarely. The tough part of using TOT for this is that you have to know the sequence. TOT only knows the constraints that you put on it. It does not know to throw out a useless encounter or if some altogether better sequence exists. Your first example is pretty much telling you to go orbit Venus for half a year or so and wait for another launch window to open, because you can't arrive there when you need to. In other word, it's junk. It is not the result that is absurd, it is your constraints. In the second example, notice that each "deltaV" is huge. They should be near zero. You are not doing "slingshot" maneuvers, you are doing useless powered flybys of planets that are out of your way. Also, the radius of the Mars encounter is probably under the surface. Practice running TOT on past known sequences at the proper time, such a Cassini, etc. Also, you can vet your trajectories with this: http://trajbrowser.arc.nasa.gov/traj_browser.php Other than that, you might consider enabling multi-revolution mode see if you need to make multiple orbits. Edit: For curiosity, I was able to find the Venus-Mars leg in TransX. Sure enough, it seems your problem was that you needed to turn off "inherit vel." and enter it in because you were not considering that you needed a (large) propulsive maneuver. Code: BEGIN_DESC Contains the latest simulation state. END_DESC BEGIN_ENVIRONMENT System Sol Date MJD 57185.6397313285 END_ENVIRONMENT BEGIN_FOCUS Ship GL-01 END_FOCUS BEGIN_CAMERA TARGET GL-01 MODE Cockpit FOV 60.00 END_CAMERA BEGIN_MFD Left TYPE User MODE TransX Ship GL-01 FNumber 5 Int 1 Orbit True Vector 3625629.55471 1088178.76915 5124497.87448 Vector -311.284604813 -104.60584818 242.449612954 Double 3.98600439969e+014 Double 57185.6397292 Handle Earth Handle NULL Handle NULL Select Target 0 Escape Autoplan 0 0 Plan type 0 0 Plan 0 1 Plan 0 0 Plan 0 0 Select Minor 0 None Manoeuvre mode 0 0 Auto-Center™ 0 0 Base Orbit 0 0 1 0 Man. date 1 57185.6397292 Outward vel. 1 0 Ch. plane vel. 
1 0 Intercept with 0 0 Orbits to Icept 0 0 Graph projection 0 0 Scale to view 0 0 0 0 Pe Distance 1 7645212 Ej Orientation 1 0 Equatorial view 0 0 Finvars Finish BaseFunction Int 2 Orbit False Handle Sun Handle Earth Handle Venus Select Target 0 Venus Autoplan 0 0 Plan type 0 2 Plan 0 0 Plan 0 0 Plan 0 1 Select Minor 0 None Manoeuvre mode 0 0 Auto-Center™ 0 0 Base Orbit 0 1 1 0 Man. date 1 57185.6396891 Outward vel. 1 0 Ch. plane vel. 1 0 Intercept with 0 0 Orbits to Icept 0 0 Graph projection 0 0 Scale to view 0 0 0 0 1 -2951.38117033 Eject date 1 57185.601335 Outward vel. 1 -2147.21737164 Ch. plane vel. 1 1520.28771854 Finvars Finish BaseFunction Int 4 Orbit True Vector 6020438065.14 -173306460.853 -1302447101 Vector -5725.18423181 165.490665101 1191.97776498 Double 3.2485863e+014 Double 57267.078292 Handle Venus Handle NULL Handle NULL Select Target 0 Escape Autoplan 0 0 Plan type 0 1 Plan 0 0 Plan 0 1 Plan 0 0 Select Minor 0 None Manoeuvre mode 0 0 Auto-Center™ 0 0 Base Orbit 0 0 1 0 Man. date 1 57185.6396984 Outward vel. 1 0 Ch. plane vel. 1 0 Intercept with 0 0 Orbits to Icept 0 0 Graph projection 0 0 Scale to view 0 0 0 0 View Orbit 0 0 Finvars Finish BaseFunction Int 3 Orbit True Vector 106940193197 -5938073192.78 17048659238.4 Vector -10507.200425 905.794681226 37703.0506284 Double 1.32712764814e+020 Double 57279.2023241 Handle Sun Handle Venus Handle Mars Select Target 0 Mars Autoplan 0 0 Plan type 0 2 Plan 0 0 Plan 0 0 Plan 0 2 Select Minor 0 None Manoeuvre mode 0 0 Auto-Center™ 0 0 Base Orbit 0 0 1 0 Man. date 1 57185.6397292 Outward vel. 1 0 Ch. plane vel. 1 0 Intercept with 0 0 Orbits to Icept 0 0 Graph projection 0 0 Scale to view 0 0 0 0 Velocity. 1 10801.25 Outward angle 1 -1.06408535704 Inc. angle 1 -0.297323819394 Inherit Vel. 0 1 Eject date 1 57279.202324 Finvars Finish BaseFunction Int 5 Orbit True Vector 5785654100.11 935325665.716 -380648184.293 Vector -8188.38463088 -1324.13541162 539.291316575 Double 4.28282991638e+013 Double 57464.2982418 Handle Mars Handle NULL Handle NULL Select Target 0 None Autoplan 0 0 Plan type 0 1 Plan 0 0 Plan 0 2 Plan 0 0 Select Minor 0 None Manoeuvre mode 0 0 Auto-Center™ 0 0 Base Orbit 0 0 1 0 Man. date 1 57185.6354005 Outward vel. 1 0 Ch. plane vel. 1 0 Intercept with 0 0 Orbits to Icept 0 0 Graph projection 0 0 Scale to view 0 0 0 0 Draw Base 0 0 Finvars Finish BaseFunction END_MFD BEGIN_MFD Right TYPE User MODE TransX END_MFD BEGIN_SHIPS ISS:ProjectAlpha_ISS STATUS Orbiting Earth RPOS -6591069.13 899519.18 1039540.08 RVEL 655.313 7344.351 -2203.882 AROT 26.34 -6.26 50.44 AFCMODE 7 IDS 0:588 100 1:586 100 2:584 100 3:582 100 4:580 100 NAVFREQ 0 0 XPDR 466 END Mir:Mir STATUS Orbiting Earth RPOS -6182175.71 138551.00 2489960.45 RVEL -2900.797 -447.226 -7158.537 AROT 3.43 -46.76 88.26 AFCMODE 7 IDS 0:540 100 1:542 100 2:544 100 XPDR 482 END Luna-OB1:Wheel STATUS Orbiting Moon RPOS 2209236.46 357513.08 33.33 RVEL -236.389 1461.135 0.041 AROT 0.00 0.00 18.66 VROT 0.00 0.00 10.00 AFCMODE 7 IDS 0:560 100 1:564 100 XPDR 494 END GL-01:DeltaGlider STATUS Landed Earth BASE Cape Canaveral:1 POS -80.6758964 28.5227640 AFCMODE 7 PRPLEVEL 0:1.000000 1:1.000000 NAVFREQ 402 94 0 0 XPDR 0 GEAR 1 1.0000 AAP 0:0 0:0 0:0 END SH-02:ShuttleA STATUS Landed Earth BASE Cape Canaveral:5 POS -80.6745292 28.5197208 AFCMODE 7 PRPLEVEL 0:1.000000 1:1.000000 NAVFREQ 0 0 XPDR 0 PODANGLE 0.0000 0.0000 DOCKSTATE 0 0.0000 AIRLOCK 0 0.0000 GEAR 0 0.0000 END END_SHIPS BEGIN_ExtMFD END Last edited: ##### Member The problem is with your expectations. 
A multiple inner planet slingshot sequence that makes sense occurs only rarely. The tough part of using TOT for this is that you have to know the sequence. TOT only knows the constraints that you put on it. It does not know to throw out a useless encounter or if some altogether better sequence exists. Practice running TOT on past known sequences at the proper time, such a Cassini, etc. I guess you've assumed here that I'm expecting a trajectory from TOT which I have not personally verified as possible or not with the constraints I've applied. As I've said, I already have many scenarios in which I've planned trips to outer planets using slings from venus using transX alone, so I was merely trying to get the same launch windows using TOT using the same sequences which I know very well, exist and work. Your first example is pretty much telling you to go orbit Venus for half a year or so and wait for another launch window to open, because you can't arrive there when you need to. In other word, it's junk. It is not the result that is absurd, it is your constraints. Ok, my constraints are absurd, but I absolutely have no idea what in my constraints could have caused something like that, where I get to another planet, wait for a while for it to spin around the sun, then leave that planet again instead of just slinging from it. I've made a plan similar to EVVEJ again, with more details this time. My constraints applied: (A solution satisfying these constraints does exist, as I've successfully planned it using TransX alone) Text data: Starting optimization... Optimization finished in 384.94 seconds with message: Average change in trajectory fitness function less than tolerance. Results of the analysis are as follows: The optimal departure from EARTH occurs at 1/15/2017 22:27:12 (C3=9.3235 km^2/s^2) The optimal flyby of VENUS occurs at 5/12/2017 5:29:45 (deltaV=15.5265 km/s, pass The optimal flyby of VENUS occurs at 11/12/2017 18:6:36 (deltaV=85.474 km/s, pass The optimal flyby of EARTH occurs at 1/23/2020 7:11:21 (deltaV=0.070545 km/s, pass The optimal arrival at JUPITER BARYCENTER occurs at 9/23/2021 8:27:58 (arrival velocity=10.0693 km/s) The optimal trip duration is 1711.4172 days Now obviously the above text data is absurd, with huge radius passes and high deltaV. Now ticking 'Constrain all flybys to 0 Delta-V' and 'Enable Multi-Rev Mode' may be the solution to this absurdity, but TOT always crashes when I try to obtain a solution with those 2 options ticked, that too after an hour of calculations. This has happened thrice now, I'm pretty sure it'll happen more times, not that anything can be done about that I guess. In the second example, notice that each "deltaV" is huge. They should be near zero. You are not doing "slingshot" maneuvers, you are doing useless powered flybys of planets that are out of your way. Also, the radius of the Mars encounter is probably under the surface. Edit: For curiosity, I was able to find the Venus-Mars leg in TransX. Sure enough, it seems your problem was that you needed to turn off "inherit vel." and enter it in because you were not considering that you needed a (large) propulsive maneuver. About the deltaV being huge, I've already stated that 'Constrain all flybys to 0 Delta-V' might solve that problem, but I still don't see a constraint that would prevent 'under the surface' slingshot solutions, which would again mean a powered slingshot over the surface.
I always avoid powered slingshots to keep fuel utilization to a minimum, normally while planning using transX, if I see that a non powered slingshot trajectory doesn't take me to my destination, I start over again from a different date, that's the reason I didn't consider the powered slingshot option by turning off "inherit vel". Going from EVMEJ doesn't always require a powered slingshot, as seen from this flytandem scenario, and this is the solution I was expecting from TOT, a completely non powered over the surface slingshot. Last edited: #### boogabooga ##### Bug Crusher Okay. Actually, you are right and I found a reasonable solution to your EVVEJ problem: Code: Trajectory Optimization Tool v2 Written by Adam Harden ("Arrowstar"), (C) 2011 ##################################################################### WARNING: Number of bodies in Flight Plan altered: deleting stored Flight Constraints. Starting optimization... Optimization finished in 642.3407 seconds with message: Average change in trajectory fitness function less than tolerance. Results of the analysis are as follows: The optimal departure from EARTH occurs at 8/30/2015 14:27:25 (C3=22.1039 km^2/s^2) The optimal flyby of VENUS occurs at 2/10/2016 2:20:46 (deltaV=1.9668e-009 km/s, pass The optimal flyby of VENUS occurs at 3/6/2017 4:56:40 (deltaV=0.0005487 km/s, pass The optimal flyby of EARTH occurs at 12/31/2018 20:34:37 (deltaV=2.4261e-006 km/s, pass The optimal arrival at JUPITER BARYCENTER occurs at 1/30/2022 5:50:31 (arrival velocity=6.8529 km/s) The optimal trip duration is 2344.641 days. Generating report file...Done! ##################################################################### I did not resort to 'Constrain all flybys to 0 Delta-V'. A low delta V solution should be selected for naturally. Forcing it makes the solver have problems. I think your problem is in the 'Optimization Options' Here are my tips: 1) Error Tolerance 1E-2 IMHO, too precise of a tolerance only increases the runtime. You won't fly so precisely in TransX anyway. 2) Population size 750. You need to cast a finer net, so to speak. 3) Max number of iterations 75. If the algorithm doesn't detect the best solution 'family' early on, it probably isn't going to ever. You should know in 75 if you have something worthwhile. This wll still keep the runtime reasonable with the larger population size. 4)Selection Function @selectionstochunif 5) Cost function weights: 2 7 1 You need to put a higher priority on the delta-V cost weight. You can afford a little higher C3 if all of your slingshots are free. Also, I find that low arrival velocity is not mutually exclusive with an overall low delta-V flight plan. If you give that a low priority, it tends to work itself out anyway. There is an element of chance to genetic algorithms. You won't get the exact same solution twice in a row. Sometimes, it converges on a local min that it happens to detect early-on and it will fail to detect the global min. Run the optimization at least 3 times and see what your best solution is. You can also start to notice families of solutions emerge. ##### Member Okay. Actually, you are right and I found a reasonable solution to your EVVEJ problem: I did not resort to 'Constrain all flybys to 0 Delta-V'. A low delta V solution should be selected for naturally. Forcing it makes the solver have problems. I think your problem is in the 'Optimization Options' Here are my tips: 1) Error Tolerance 1E-2 IMHO, too precise of a tolerance only increases the runtime. 
You won't fly so precisely in TransX anyway. 2) Population size 750. You need to cast a finer net, so to speak. 3) Max number of iterations 75. If the algorithm doesn't detect the best solution 'family' early on, it probably isn't going to ever. You should know in 75 if you have something worthwhile. This wll still keep the runtime reasonable with the larger population size. 4)Selection Function @selectionstochunif 5) Cost function weights: 2 7 1 You need to put a higher priority on the delta-V cost weight. You can afford a little higher C3 if all of your slingshots are free. Also, I find that low arrival velocity is not mutually exclusive with an overall low delta-V flight plan. If you give that a low priority, it tends to work itself out anyway. There is an element of chance to genetic algorithms. You won't get the exact same solution twice in a row. Sometimes, it converges on a local min that it happens to detect early-on and it will fail to detect the global min. Run the optimization at least 3 times and see what your best solution is. You can also start to notice families of solutions emerge. Yes, the solution which you have obtained is a pretty reasonable solution. I was able to obtain the exact same solution using the settings you've mentioned. But then I added 2 constrains (Total TOF being less than 5 years and the earth venus transfer should take less than 185 days) and again it gave me results with high deltaV. Here is an image, Text data: Starting optimization... Optimization finished in 892.0819 seconds with message: Average change in trajectory fitness function less than tolerance. Results of the analysis are as follows: The optimal departure from EARTH occurs at 12/8/2016 11:37:59 (C3=11.788 km^2/s^2) The optimal flyby of VENUS occurs at 5/8/2017 4:58:40 (deltaV=9.2626 km/s, pass The optimal flyby of VENUS occurs at 11/3/2017 23:24:51 (deltaV=40.1392 km/s, pass The optimal flyby of EARTH occurs at 12/26/2018 9:10:7 (deltaV=163.3189 km/s, pass The optimal arrival at JUPITER BARYCENTER occurs at 5/6/2020 13:15:4 (arrival velocity=14.4967 km/s) The optimal trip duration is 1245.0674 days. It seems to me that this tool can definitely produce solutions, but it'll require a lot of tinkering with the 'Optimization options', multiple iterations with the same & different options and a lot of patience of course while it does the calculation. Anyway, my main point of starting this thread is to better understand this tool and I think now I do understand it's advantages and limitations better. BTW, I've also uploaded multiple EVVEJ scenarios here, all of which require fuel only to get to venus and have a lower TOF than 2344 days. Last edited: #### boogabooga ##### Bug Crusher Well, the more constraints you add, the less your solution will "fit" compared to the ideal solution. Makes sense. It seems to me that this tool can definitely produce solutions, but it'll require a lot of tinkering with the 'Optimization options', multiple iterations with the same & different options and a lot of patience of course while it does the calculation. That's how it is with the Genetic Algorithm, which is a sort of random brute-force technique. I believe that a Design-of-Experiment/Response Surface Methodology approach would be better suited to this problem. It would probably give more consistent results. #### fausto ##### FOI SuperMod I found some years ago a site with a full list of slingshots trajectories for many years to come.. I used them to plan an EVEE cruise to jupiter.. 
I don't remember the url... But you can use Google to find them!
### Optimal Chebyshev FIR Filters As we've seen above, the defining characteristic of FIR filters optimal in the Chebyshev sense is that they minimize the maximum frequency-response error-magnitude over the frequency axis. In other terms, an optimal Chebyshev FIR filter is optimal in the minimax sense: the filter coefficients are chosen to minimize the worst-case error (maximum weighted error-magnitude ripple) over all frequencies. This also means it is optimal in the $L_\infty$ sense because, as noted above, the $L_\infty$ norm of a weighted frequency-response error $E(\omega) = W(\omega)\,[H(e^{j\omega}) - D(\omega)]$ is its maximum magnitude over all frequencies: $$\|E\|_\infty = \max_{\omega}|E(\omega)| \qquad (5.32)$$ Thus, we can say that an optimal Chebyshev filter minimizes the $L_\infty$ norm of the (possibly weighted) frequency-response error. The $L_\infty$ norm is also called the uniform norm. While the optimal Chebyshev FIR filter is unique, in principle, there is no guarantee that any particular numerical algorithm can find it. The optimal Chebyshev FIR filter can often be found effectively using the Remez multiple exchange algorithm (typically called the Parks-McClellan algorithm when applied to FIR filter design) [176,224,66]. This was illustrated in §4.6.4 above. The Parks-McClellan/Remez algorithm also appears to be the most efficient known method for designing optimal Chebyshev FIR filters (as compared with, say, linear programming methods using Matlab's linprog as in §3.13). This algorithm is available in Matlab's Signal Processing Toolbox as firpm() (remez() in (Linux) Octave). There is also a version of the Remez exchange algorithm for complex FIR filters. See §4.10.7 below for a few details. The Remez multiple exchange algorithm has its limitations, however. In particular, convergence of the FIR filter coefficients is unlikely for FIR filters longer than a few hundred taps or so. Optimal Chebyshev FIR filters are normally designed to be linear phase [263] so that the desired frequency response can be taken to be real (i.e., first a zero-phase FIR filter is designed). The design of linear-phase FIR filters in the frequency domain can therefore be characterized as real polynomial approximation on the unit circle [229,258]. In optimal Chebyshev filter designs, the error exhibits an equiripple characteristic; that is, if the desired response is $D(\omega)$ and the ripple magnitude is $\delta$, then the frequency response of the optimal FIR filter (in the unweighted case, i.e., $W(\omega)=1$ for all $\omega$) will oscillate between $D(\omega)-\delta$ and $D(\omega)+\delta$ as $\omega$ increases. The powerful alternation theorem characterizes optimal Chebyshev solutions in terms of the alternating error peaks. Essentially, if one finds sufficiently many alternating error peaks for the given FIR filter order, then one has found the unique optimal Chebyshev solution [224]. Another remarkable result is that the Remez multiple exchange algorithm converges monotonically to the unique optimal Chebyshev solution (in the absence of numerical round-off errors). Fine online introductions to the theory and practice of Chebyshev-optimal FIR filter design are given in [32,283]. The window method (§4.5) and Remez-exchange method together span many practical FIR filter design needs, from "quick and dirty" to essentially ideal FIR filters (in terms of conventional specifications).
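A minimal Parks-McClellan (Remez exchange) design can be run with SciPy's signal.remez, the same algorithm family as Matlab's firpm()/Octave's remez() discussed above. The band edges, filter length and weights below are arbitrary example values, not a specification from the text.

```python
# Equiripple (Chebyshev-optimal) lowpass FIR design with scipy.signal.remez.
import numpy as np
from scipy import signal
import matplotlib.pyplot as plt

fs = 1.0                       # normalized sampling rate
numtaps = 51                   # filter length (order + 1)
bands = [0.0, 0.2, 0.25, 0.5]  # passband 0-0.2, stopband 0.25-0.5 (units of fs)
desired = [1.0, 0.0]           # desired gain in each band
weight = [1.0, 10.0]           # weight the stopband error 10x more heavily

h = signal.remez(numtaps, bands, desired, weight=weight, fs=fs)

w, H = signal.freqz(h, worN=2048, fs=fs)
plt.plot(w, 20 * np.log10(np.maximum(np.abs(H), 1e-12)))
plt.xlabel("frequency (cycles/sample)")
plt.ylabel("magnitude (dB)")
plt.title("Equiripple lowpass FIR (Parks-McClellan)")
plt.show()
```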
• entries 8 23 • views 2534 Hobbyist pretending to write game code ## Another change of direction Hi all! In case anybody reads this thing, sorry not to have updated it for so long. As I am easily distracted by random things, I got myself one of those "open" handheld gaming/media devices called the "gp2x" and have been having a lot of fun writing code for it. I released a game called "Vektar" -- an arcade-style abstract 2D shooter, which has been reasonably well received, and will probably continue playing around with the gp2x for a while, since I'm having fun doing it. ## Efficient enough Switching to storing vertices apart from triangles, and a few other tweaks, has made the engine efficient enough for now. It isn't blazing fast but I only need it to be fast enough to not be annoying. Here's a test that looks slightly more planet-like, but there's still a long way to go obviously! ## A rounder cube To make the cube begin to appear more planet-like, I implemented a simple ROAM-type surface subdivision scheme, so it adapts the number of triangles as I zoom in and out. Even though I'm not really concerned with efficiency, the implementation is pretty bloated (stores three vertices per triangle) so I think I'll take a little bit of time to improve the memory use and CPU time somewhat, mostly by moving to a vertex list. ## New stuff Well, that didn't last too long. I lost interest in the "make a whole simple game" project. But I did get inspired by Ysaneya's amazing journal, so I decided instead to pretend that I'm working on a big space game with aliens and spaceships and planets and so on. Since there's no hope that I'd actually release anything within five years or more, I don't think I'll spend too much time worrying about frame rates or memory optimization or whatever -- who knows what graphics hardware will be like five years from now? I imagine nothing will come of it, but making planets looks like too much fun, though Ysaneya's incredible work will be hard to follow. So anyway I got a book on OpenGL and coded up a first little test. It doesn't look much like a planet yet! ## Revised Screen List Ok, here's a new screen list. With this setup, the obvious default path gets right to playing a level by clicking on "Play" from the first screen. 1. INTRO Music and animation of some kind to hold a user's interest - probably the "intro" music should extend over all the non-game screens This screen includes some sort of description of the basics of gameplay. Probably the animation on the page should lead the player's attention to that information. The idea is to give them enough info to start playing. One corner left free to put junk in (like the distributor's logo or whatever) "Loading" progress bar while resources are fetched and precomputation done After loading is complete, the following choices appear: "Options" (goes to OPTIONS) "Quit" (goes to QUIT) "Player Name" (goes to PLAYER NAME) "High Scores" (goes to HIGH SCORES) "Play" The Play button is prominent to encourage its selection The first time through, play goes right to LEVEL INTRODUCTION for an introductory level which is just difficult enough to get the player used to the mechanics and may include some tutorial info if needed. Subsequent launches go to GAME SELECTION2. OPTIONS This presents gameplay options: Full screen vs windowed Sound and music volume More if needed, depending on game mechanics "OK" "Cancel" Returns to the previous screen3. QUIT Simple screen asking for confirmation. 
If a game is in progress, the text should include something like "You can resume this game next time if you wish" "Quit" "Don't Quit" Exits program or returns to previous screen. If a game is in progress it gets saved with the player name (or 'default' if there is no player name set)4. HIGH SCORES Display of the high score list. Personally I couldn't care less about what scores I get on games, but some people are into it. "OK" Goes to previous screen5. GAME SELECTION After the first run through the game, this appears to give various options. "Difficulty" Choose between game difficulty levels (radio button type deal) "New Game" Starts a new game with the chosen difficulty level "Untimed Play" Starts a free-form version of the game at the chosed difficulty level This just gives random levels and does not save anything on quit "Continue Saved Game" This only appears if there is a saved game. It should be the most prominent option if available. "Cancel" (goes to INTRO) All other choices go to LEVEL INTRODUCTION6. PLAYER NAME Prompts the user to enter a player name Text entry box (pre-populated if a recent user name exists) Could get fancy and have a dropdown of all user names, but that probably isn't necessary Button for "OK" Returns to previous screen7. METASTORY INTRODUCTION The "metastory" is the thing that shows progress through the game toward the point where it is "over" (all levels have been played). This screen explains the metastory. "OK" "Don't Show Me This Again" Both go to METASTORY PROGRESS8. METASTORY PROGRESS This shows the current state of the metastory, and includes animations or actions related to progress in the metastory. User might have to make some sort of selection, they might be able to examine things in some detail, or there could even be a separate mini-game associated with the metastory. At least the following options: "Play" (goes to LEVEL SELECTION) "Quit" (goes to QUIT) Unless the game is over, in which case this leads to GAME OVER WIN9. LEVEL SELECTION This is optional. For some types of games it might make sense to let the user choose a level - a particular quest they want to go on, a particular prize they want to go after, a particular maze they want to solve, etc. Maybe a "stage" is comprised of several levels and it doesn't matter which order they are done in, so the user can be given control over what they want to do. Goes to LEVEL INTRODUCTION10. LEVEL INTRODUCTION Level-specific loading and computation, shows the level name and might show other stuff as well. This could be integrated into GAMEPLAY especially if there isn't much to introduce or if the loading time is very short. "Go" goes to GAMEPLAY11. GAMEPLAY Starts with a brief animation saying "Go!", and lets the user play the game. Available commands: "Pause" (goes to GAME PAUSED) "Options" (goes to OPTIONS) "Hint" (performs some specific gameplay highlighting) "Quit" (goes to QUIT) - each user only has one saved game, so there's no need to ask about saving If the level is completed, goes to LEVEL COMPLETE If the player loses, goes to GAME OVER LOSS 12. GAME PAUSED "Continue" (goes back to GAMEPLAY) "Quit" (goes to QUIT)13. GAMEPLAY HINT When new gameplay elements are introduced, in some cases at least it is probably a good idea to explain them. This type of screen should definitely be an overlay over the gameplay screen with a visual indication of what is being discussed "OK" "Don't Show This Again" Both return to GAMEPLAY14. 
LEVEL COMPLETE Some sort of animation and score summary to congratulate the user. "OK" Goes to METASTORY INTRODUCTION or METASTORY PROGRESS15. GAME OVER LOSS Should contain their score and game stats. "Save Score" If there is no user name, first goes to PLAYER NAME Goes to HIGH SCORES with current score highlighted "High Scores" (goes to HIGH SCORES) "Quit" (goes to QUIT)16. GAME OVER WIN Starts with some sort of celebratory message. Should contain their score and game stats. "Save Score" If there is no user name, first goes to PLAYER NAME Goes to HIGH SCORES with current score highlighted "High Scores" (goes to HIGH SCORES) "Quit" (goes to QUIT) ## Game Screens So I thought I'd plan out the various screens that the game should have -- only now do I realize what a truly massive undertaking even a small "complete" game is! Many games have the same sort of flow, so maybe this analysis will be useful to somebody else. Some of these "screens" might actually be dialogs that pop over the previous screen, but they are encapsulated functionality with a large UI so it still makes sense to think of them separately. Other sequences of events are possible and the design needs to be thought through a little more. Note: the rest of this entry has been superseded by the following entry, after making some changes in response to a helpful comment. ## Fun with spheres So I have more or less picked some mechanics for my little puzzle game, and it involves moving balls around (impressive, eh?). Even such a trivial thing has its interesting points though.... I'll have to render a lot of different styles of spheres from arbitrary orientations, and I'd like them to look reasonably nice. Time to crack out the algebra. The first picture is just a rendered heightfield to show that my sphere intersections were working. The second one adds some Phong shading, and the third one gets a little fancier. Beyond that I started playing with some classic Perlin-style effects: Eventually the idea is to streamline this so I can bash out renders of the balls very quickly. Next steps: work on that rendering including oversampling for anti-aliasing, and figure out a way to map textures onto spheres without distorting them (that's a head-scratcher for right now). Beyond that, I downloaded the trial of Photoshop and I played with it a bit. I think I'll probably have to buy it, which is fairly painful, but learning Photoshop is probably good for me anyway. ## Brief Introduction So after years of liking games, I decided to try and make games (like everybody else!). The whole process is interesting to me so I'm going to start by trying to make a "complete" but small game, probably a puzzle game of some kind, just to get experience (though I'll try to make it fun to play). Mostly just research so far; I decided to use the PopCap framework to handle some of the tedious details robustly, so I'm studying that. Also for some reason I'm curious about making the colorful fonts that make good games look so nice. And trying to figure out the best way to make game music. Fun stuff! I'm just a hobbyist so if anybody happens to read this, don't expect too much. I just wanted to make a journal to help with motivation and on the off chance that somebody else finds it interesting.
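The ray-sphere intersection and simple diffuse shading mentioned in the "Fun with spheres" entry above can be sketched roughly as follows. This is not the journal author's code, just a minimal illustration of the underlying algebra.

```python
# Minimal ray-sphere intersection plus a diffuse (Lambert) shading term.
import numpy as np

def ray_sphere(origin, direction, center, radius):
    """Return the smallest positive t with origin + t*direction on the sphere,
    or None if the ray misses.  direction is assumed to be unit length."""
    oc = origin - center
    b = 2.0 * np.dot(direction, oc)
    c = np.dot(oc, oc) - radius * radius
    disc = b * b - 4.0 * c            # quadratic discriminant (a = 1)
    if disc < 0.0:
        return None
    t = (-b - np.sqrt(disc)) / 2.0
    return t if t > 0.0 else None

def diffuse(point, center, light_dir):
    """Simple diffuse term: clamp(N . L, 0, 1)."""
    normal = (point - center) / np.linalg.norm(point - center)
    return max(np.dot(normal, light_dir), 0.0)

origin = np.array([0.0, 0.0, -5.0])
direction = np.array([0.0, 0.0, 1.0])            # unit length
center, radius = np.array([0.0, 0.0, 0.0]), 1.0
light = np.array([1.0, 1.0, -1.0]) / np.sqrt(3)

t = ray_sphere(origin, direction, center, radius)
if t is not None:
    hit = origin + t * direction
    print("hit at", hit, "diffuse =", diffuse(hit, center, light))
```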
Question # How many linear equations are satisfied by x = 2 and y = -3? A Only one B Two C Three D Infinitely many Solution ## The correct option is D: Infinitely many. Through a single point, infinitely many lines can pass: every equation of the form a(x − 2) + b(y + 3) = 0, with a and b not both zero, is satisfied by this pair. ∴ The pair x = 2, y = −3 is a solution of infinitely many linear equations.
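A quick numerical illustration of the claim (not part of the original answer): any choice of coefficients a and b gives a linear equation through the point.

```python
# Every equation a*(x - 2) + b*(y + 3) = 0 is satisfied by x = 2, y = -3.
x, y = 2, -3
for a, b in [(1, 0), (0, 1), (1, 1), (2, -5), (3.5, 7)]:
    print(a, b, a * (x - 2) + b * (y + 3) == 0)   # prints True for every pair
```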
Re: Fuzzy Set Intersection WSiler ([email protected]) Sun, 7 Jun 1998 00:21:27 +0200 (MET DST)
Your posting is a little confusing, since you have not stated what your measures correspond to in terms of the AND and OR operators. To use a little more compact notation, let ZAND = the Zadeh AND, ZOR = the Zadeh OR, LAND = the Lukasiewicz AND and LOR = the Lukasiewicz OR, with A LAND B = max(A+B-1, 0) and A LOR B = min(A+B, 1).
You propose the following measures:
P = min(A(x), B(x)) = A ZAND B
Q = max(A(x), B(x)) = A ZOR B
You continue with
min(A \cap B) = min(P+Q, 1) = P LOR Q
max(A \cap B) = min(A, B) = A ZAND B
We can then state your final result as:
min(A \cap B) = P LAND Q = (A ZAND B) LAND (A ZOR B)
max(A \cap B) = A ZAND B
A = .1 B = .2
P = A ZAND B = .1
Q = A ZOR B = .2
min(A \cap B) = .1 LAND .2 = 0
max(A \cap B) = .1 ZAND .2 = .1
A = .6 B = .8
P = A ZAND B = .6
Q = A ZOR B = .8
min(A \cap B) = .6 LAND .8 = .4
max(A \cap B) = .6 ZAND .8 = .6
yielding the same answers as you obtained. However, the reasoning with which you achieved your operators is quite unclear to me. Some clarification would be appreciated.
William Siler
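A small script (not part of the original email) reproduces the numbers above with the Zadeh and Lukasiewicz operators as defined in the message.

```python
# Zadeh and Lukasiewicz AND/OR, applied to the two worked examples.
def zand(a, b): return min(a, b)             # Zadeh AND
def zor(a, b):  return max(a, b)             # Zadeh OR
def land(a, b): return max(a + b - 1.0, 0)   # Lukasiewicz AND
def lor(a, b):  return min(a + b, 1.0)       # Lukasiewicz OR

for a, b in [(0.1, 0.2), (0.6, 0.8)]:
    p, q = zand(a, b), zor(a, b)
    print(f"A={a} B={b}  min={land(p, q):.1f}  max={zand(a, b):.1f}")
# -> A=0.1 B=0.2  min=0.0  max=0.1
# -> A=0.6 B=0.8  min=0.4  max=0.6
```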
# A structural-chemical explanation of fungal laccase activity ## Abstract Fungal laccases (EC 1.10.3.2) are multi-copper oxidases that oxidize a wide variety of substrates. Despite extensive studies, the molecular basis for their diverse activity is unclear. Notably, there is no current way to rationally predict the activity of a laccase toward a given substrate. Such knowledge would greatly facilitate the rational design of new laccases for technological purposes. We report a study of three datasets of experimental Km values and activities for Trametes versicolor and Cerrena unicolor laccase, using a range of protein modeling techniques. We identify diverse binding modes of the various substrates and confirm an important role of Asp-206 and His-458 (T. versicolor laccase numbering) in guiding substrate recognition. Importantly, we demonstrate that experimental Km values correlate with binding affinities computed by MMGBSA. This confirms the common assumption that the protein-substrate affinity is a major contributor to observed Km. From quantitative structure-activity relations (QSAR) we identify physicochemical properties that correlate with observed Km and activities. In particular, the ionization potential, shape, and binding affinity of the substrate largely determine the enzyme’s Km for the particular substrate. Our results suggest that Km is not just a binding constant but also contains features of the enzymatic activity. In addition, we identify QSAR models with only a few descriptors showing that phenolic substrates employ optimal hydrophobic packing to reach the T1 site, but then require additional electronic properties to engage in the subsequent electron transfer. Our results advance our ability to model laccase activity and lend promise to future rational optimization of laccases toward phenolic substrates. ## Introduction Laccases (EC 1.10.3.2) are multi-copper oxidoreductases that catalyze the one-electron (e) oxidation of diverse substrates and sequentially transfer four electrons to the catalytic copper (Cu) atoms, which are used to reduce O2 to two water molecules1,2,3,4,5. Laccases are found in fungi, plants, bacteria and insects6 and catalyze the oxidation of a wide variety of organic and inorganic substrates including phenols, ketones, phosphates, ascorbate, amines and lignin7,8,9,10,11. Laccases are attractive industrial biocatalysts12,13,14,15 and thus the relationships between their specific structures and associated functions are of major interest, in particular to guide the design of new laccases for tailored purposes7. Fungal laccases contain four catalytic Cu atoms viz. the T1 Cu and the tri-nuclear Cu cluster (T2 Cu, T3α Cu and T3β Cu) at the T2/T3 site3,4,16. The substrates are consecutively one-electron-oxidized at the T1 site near the protein surface and the 4-electron reduction of O2 to water occurs at T2/T3 site that is buried within the protein2,6,16,17. Fungal laccases are extracellular and monomeric glycoproteins with ~520–550 amino acids and a typical weight of ~60–70 kDa in their glycosylated form9,18. These laccases contain an N-terminal signal peptide sequence of 20–22 residues18. Structurally, they consist of three tightly arranged cupredoxin-like domains, each of which possesses β-barrel symmetry3,9,19,20. The T1 Cu is situated in domain 3 close to the protein surface, and T2 and T3 (α and β) Cu atoms are present at the interface of domains 1 and 3 (Fig. 1a)6,21.
Some plant laccases are involved in lignin biosynthesis, whereas in bacteria and fungi they may be involved in lignin degradation6,22. Fungal laccases belonging to the Basidomycota division (white-rot fungi) are of particular importance in this regard6. Two widely studied white-rot fungal laccases are Trametes versicolor (TvL) and Cerrena unicolor laccase (CuL). They are sequentially and structurally similar with 68% sequence identity and a structural root mean square deviation (RMSD) of 0.25 Å, Fig. 1b. Many substrate reactions has been studied for TvL7,11,23,24,25,26,27,28 and also some for CuL29. 2,2′-azino-bis(3-ethylbenzothiazoline-6-sulphonic acid) (ABTS), syringaldazine (SGZ), catechol, dopamine, 2,6-dimethoxyphenol (DMP), vanillic acid, and syringic acid are substrates of TvL with available Km data7,24,25,26,27,28. Sulistyaningdyah et al.23 studied the relative activities of TvL, Myrothecium verrucaria and Trametes sp. Ha-1 laccases on phenols, anilines, and ABTS. In another study, Km values were reported for the activity of CuL toward phenols, acids, ketones, amines and phosphates29. Structure-guided computational design of laccases using both classical and quantum mechanical approaches is a viable and very useful path to discover new proficient laccases for turnover of specific substrates30,31. Quantitative models trained on actual turnover data would in principle offer more predictive accuracy, but depend critically on systematic experimental data on substrate turnover rates for their validation. The TvL and CuL data studied here are the most complete systematic data that we could identify, and we hypothesized that a comparative modeling study might provide new insight into the structure-activity relations of these well-studied enzymes. To investigate whether this is possible, we explored the quantitative relationship between the binding score obtained from standard docking protocols and from more advanced MMGBSA (Molecular Mechanics Generalized Born Surface Area) computations and the reported relative activities and Km values. Quantitative structure-activity relationship (QSAR) models were derived both from the conformations of the free ligands and the protein-bound ligands to identify the dependency of the results on the conformation of the substrates. Both proteins were also studied by Molecular Dynamics (MD) simulations to explore the dynamics of the binding sites and the overall dynamic stability of the enzymes in relation to our quantitative models. ## Materials and Methods ### Sequence analysis and hydrophobicity plots Sequence alignment was performed using the Geneious software, version 10.2.3 (http://www.geneious.com)32. We applied the CLUSTALW alignment option33 with standard parameters. The accession numbers of the used sequences are ALE66001.1 for CuL and Q12718 for TvL. The overall laccase consensus was created by aligning 924 sequences of laccases downloaded from BioCatNet LccED v6.4 (12/01/2017)34. Hydrophobicity was estimated using the Kyte & Doolittle scale35 as calculated using the ProtScale tool of the ExPASy server36. ### Datasets used The laccases are generally thought to follow Michaelis-Menten kinetics, and Km data are thus available as discussed above37. Three datasets (named A, B and C) were used in the present study (Table 1). Dataset A is the smallest and is comprised of 11 well-known, structurally diverse laccase substrates with experimental Km values determined for the commonly studied TvL7,24,28. 
This dataset includes four in-house evaluated compounds37. Dataset B consists of 15 congeneric phenolic substrates and one amine whose relative activities toward TvL were reported by Sulistyaningdyah et al.23. Ligand dataset C comprises 23 substrates with Km values measured for CuL as reported by Polak et al.29. These datasets with their PubChem compound ID38 are presented in Supplementary Tables S1S3. The Km values of the datasets A and C were reported at low pH (~4.5) and the relative activities of dataset B were derived at comparatively high pH (9.0). To take this into account, it was necessary to test whether the protonation state of the protein at variable pH had any effect on the modeled activity, and thus the ligand states were prepared at both low and neutral pH (4.5 and 7.0) using LigPrep39. The Km values (in µM) were first converted into molar units and then expressed as the negative of the 10-base logarithm (pKm), and the relative activity values of dataset B were converted into logarithmic values, in order to compare them to the modeled free energy terms. ### Preparation of laccase-substrate complexes The crystal structure of TvL with code 1GYC was retrieved from the Protein Data Bank (PDB)21,40. A homology model of CuL was built based on 1GYC using the Prime Homology Modeling tool of Schrodinger41. The structure was validated by analyzing the associated Ramachandran plot. The binding states of both these proteins were prepared using the Protein Preparation Wizard in Schrodinger42. Water molecules within 5 Å of the Cu atoms were retained, and all other water molecules were removed. The crystal structures are presumably in the resting oxidized (RO) state of the protein (Supplementary Table S4). This state is thermodynamically stable and plausibly (or together with another fully oxidized catalytic state) involved in electron transfer upon contact with substrates2,4. Another fully oxidized native intermediate (NI) state may be the catalytically relevant active state; in the presence of sufficient reductant this state can convert to the fully reduced state of the protein4. However, when excess reductant is not available, NI slowly converts to the RO state4. The protonation states of TvL and CuL were prepared at neutral pH (7.0) and low pH (4.5). Asp-206 is a crucial residue involved in the substrate binding and may be protonated (Ash) at very low pH. To test the impact of such a protonation on substrate binding, a TvL structure was prepared at pH 4.5 with this residue specifically protonated (Supplementary Table S5). The detailed oxidation state of the T2/T3 coppers will not affect the substrate binding since they are 12 Å or more from the substrate and the T1 Cu. We speculate that the experimental activity may be best reflected in the 3-electron-reduced protein state considering the expected lower half potential of this state. We thus prepared and studied this state, in which T2 and T3 Cu atoms were reduced and T1 Cu was oxidized. This state was also studied for both proteins (TvL and CuL) at pH 7.0, pH 4.5 and pH 4.5 with Ash 206 (Supplementary Table S5) and subjected to MD simulation to identify whether the protein oxidation state affects the dynamics of the substrate binding sites. ### Molecular docking and MMGBSA analysis Molecular docking was performed for all substrates using the obtained protein structures (dataset C substrates were docked to CuL, the others to TvL). 
Five potential binding sites on the proteins were identified using Sitemap43, and a 20-Å grid was generated around the T1 Cu site. Substrates were subsequently docked on this grid using the XP scoring function of Glide44. For each ligand, a maximum of ten output poses were generated. All these calculations were performed for TvL and CuL at pH 7.0 and 4.5. Each substrate-protein complex generated by Glide was subject to MMGBSA analysis41,45 at six different flexibilities of the protein residues around the ligand: 0 Å, 4 Å, 8 Å, 12 Å, 16 Å, and 20 Å. MMGBSA calculates the binding free energy as the difference between the energies of the minimized protein-ligand complex and minimized ligand plus minimized protein. The binding modes of the substrates at the T1 Cu binding pocket were analyzed. The docking scores and MMGBSA binding energies were compared with experimental relative activities and Km values in order to understand the substrate binding quantitatively and structurally. Probably because quantitative experimental data are scarce, this is, as far as the authors know, the first study that quantitatively correlates Km and relative activity values of substrates of laccases with the modeled docking scores and MMGBSA binding affinities. ### QSAR modeling In order to identify the crucial substrate properties that explain experimental activities, QSAR modeling was performed46,47,48,49 using both the free and bound ligand conformations for computing the various descriptors. The free and the bound ligand states were generated by LigPrep39 and MMGBSA41, respectively. The ligands were prepared at pH 7.0 for dataset B and 4.5 for dataset C. To account for the shape, solubility, binding, and e transfer mechanism, three types of descriptors were generated: ADMET (Absorption, Distribution, Metabolism, Excretion and Toxicity), semi-empirical quantum mechanical (SE), and quantum-mechanical (QM) using QikProp50, NDDO (Neglect of Diatomic Differential Overlap)51,52 and Jaguar53,54 of the Schrodinger suite55. In Qikprop, in addition to the ADMET parameters, the ionization potential (IP) and electron affinity (EA) were also calculated using the semi-empirical PM3 method50. We hypothesized that the electronic descriptors of the substrates are important for explaining real laccase activity due to the involved electron transfer from the bound substrate to T1 of the laccase. QM single point energy calculations were carried out using Density Functional Theory (DFT) with the B3LYP functional and the 6–31 G** basis set56. Dataset B contains three anions, which were excluded from B3LYP calculation as the orbital energies are unreliable. Similarly, dataset C contains mainly ions and therefore, these properties were not calculated for them. However, the solvation energy could play a role in defining the enzyme’s Km for these substrates and was thus calculated using QM. Some additional QM parameters were manually calculated using the ligand energies determined by Jaguar, which included IP, EA, electronegativity, hardness, chemical potential and the energy gap between the highest-occupied and lowest unoccupied molecular orbitals (HOMO and LUMO)57. These QM properties plausibly play an important role in determining the electron transfer rate once they are bound to the laccase. Semi-empirical parameters were calculated using the RM1 method52,58. 
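As a rough illustration of the kind of multiple-linear-regression QSAR with leave-one-out validation described above, the sketch below uses scikit-learn rather than Strike, and the descriptor matrix X and pKm values y are synthetic placeholders, not the paper's data.

```python
# Minimal MLR-QSAR sketch with leave-one-out Q^2 (synthetic data).
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import LeaveOneOut, cross_val_predict

rng = np.random.default_rng(42)
n_compounds, n_descriptors = 16, 3          # e.g. IP, shape ratio, MMGBSA dG
X = rng.normal(size=(n_compounds, n_descriptors))
y = 1.5 * X[:, 0] - 0.8 * X[:, 2] + rng.normal(scale=0.3, size=n_compounds)

model = LinearRegression().fit(X, y)
r2 = model.score(X, y)                                   # training R^2

y_loo = cross_val_predict(LinearRegression(), X, y, cv=LeaveOneOut())
q2 = 1.0 - np.sum((y - y_loo) ** 2) / np.sum((y - y.mean()) ** 2)  # LOO Q^2

print(f"R2 = {r2:.2f}, Q2 = {q2:.2f}")
```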
In addition to the above descriptors, the MMGBSA binding energy and the ADMET volume to solvent accessible surface area (SASA) ratio, which measures shape, were also included in the total descriptor set to probe requirements with regard to substrate fit within the binding site. For dataset C, the experimental oxidation potential (Eo) reported by Polak and coworkers29 was included as an independent descriptor, as it may be assumed to be important for the activity of laccase substrates. The total driving force depends on both the substrate and laccase half potentials, as described by the Nernst equation, and the electron transfer rate may be affected by this driving force as well, as seen e.g. from Marcus theory4. It quickly became clear that dataset A is too small and diverse. This was observed during the MMGBSA binding affinity analysis, which revealed a very low R2 of 0.10, a high p-value of 0.34, and a diverse scatter of computed binding affinity vs. pKm. Therefore, this dataset was not used further for QSAR studies. For dataset B (16 compounds) and dataset C (23 compounds), all the compounds were used in the respective training sets. Multiple linear regression QSAR models were developed using Strike, focusing on models with a maximum of three descriptors59. In this way, we developed global, ADMET, QM and SE QSAR models. The global models include descriptors calculated by Jaguar and QikProp, whereas the ADMET, QM and SE models contain descriptors from QikProp, Jaguar, and NDDO, respectively. The quality of the QSAR models was evaluated using the correlation coefficient (R2) and the cross-validated R2 (Q2). Q2 values were derived by randomized leave-one-out analysis as implemented in Strike59. In addition, because Q2 only reflects internal dataset consistency, we also performed an explicit external validation of the predicted trend for some recently published data, as described below. The use of both the free and the bound ligand conformations, the use of electronic QM descriptors to model redox activity, and the use of the MMGBSA binding energies to model Km and relative activity make our study different from conventional QSAR studies applied to protein-ligand interactions. This is because we specifically sought to capture the chemically active conformations that explain the real experimental data for laccases, and these active states most likely relate to the electronic properties of the bound ligand state.

### Molecular dynamics simulations

MD simulations of 12 laccase systems (Supplementary Table S6) were performed for 50 nanoseconds (ns) each, to establish if the dynamics of the substrate binding sites were sensitive to protein type, pH, and protonation of Asp-206. The OPLS 2005 force field was applied60,61,62 and the systems were constructed using the System Builder tool of Desmond63. Each protein was solvated with SPC water in an orthorhombic box. Each box volume was ~500,000 Å³, with on the order of ~47,000 atoms, including ~13,000 water molecules (Supplementary Table S6). The systems were neutralized by adding counter ions (Supplementary Table S6). Initial structure minimization was carried out using a combination of steepest descent and Broyden-Fletcher-Goldfarb-Shanno (BFGS) optimization for a maximum of 2000 iterations. For each protein, a 50 ns MD simulation was performed in the NPT ensemble at 300 K and 1.01325 bar, using the standard multistep protocol of Desmond64.
An integration time step of 2.0 femtoseconds was used for bonded interactions, and energies and trajectory frames were recorded at intervals of 1.2 and 50 picoseconds, respectively. Long-range electrostatic interactions were calculated using Ewald mesh summation65. The Lennard-Jones potential66 was used to determine van der Waals interactions, and an interpolating function was used for electrostatic interactions. A cut-off radius of 9.0 Å was used for short-range Coulomb and van der Waals interactions. Pressure and temperature were kept steady using the Martyna-Tobias-Klein barostat67 and the Nosé-Hoover chain thermostat68, as is the default and commonly applied.

## Results and Discussion

### Sequential and structural comparison of the two laccases

As mentioned in the Methods section, the homology model of CuL was developed using 1GYC, a TvL structure, as template. The structural alignment of the CuL model and the TvL structure (1GYC) produced a low RMSD value of 0.25 Å, showing that the two protein models applied are essentially identical in fold structure (Fig. 1a); MD simulation changes this somewhat, as expected (see below). The applied CuL model was validated by Ramachandran analysis, which showed that 98.5% of the residues are in the core and additionally allowed regions, with 1% in the generously allowed region and 0.5% in the disallowed region of the plot (Supplementary Fig. S1), which is very acceptable. For comparison, the Ramachandran plots of experimental laccase structures (from the PDB) were analyzed40. The laccase structures 1GYC, 2HRG, 2XYB, 1A65 and 1HFU showed 99–100% of residues in the core and additionally allowed regions, up to 0.7% in the generously allowed region and up to 0.2% in the disallowed regions of the plot. Alignment of the TvL and CuL sequences shows that the proteins are 68.3% identical. The copper binding sites in particular align perfectly, as these parts are evolutionarily conserved, but the residues involved in substrate binding are also found in the same sequence regions (Fig. 1b). We estimate that approximately half of the substrate binding residues are conserved. The flexible loop regions produce most of the variability relevant to substrate binding. However, comparison of both sequences to an overall laccase consensus based on 924 sequences reveals that only three of these residues are conserved: His-393 and Pro-394 in CuL and His-458 in TvL (Supplementary Fig. S2). The histidines are also involved in binding of the copper ions, which explains their high conservation across all analyzed sequences. Pro-394, adjacent to His-393, is probably involved in stabilization of the secondary structure leading to correct formation of the copper binding site, explaining why it is highly conserved throughout the laccase enzymes. The hydrophobicity of the enzymes is also very similar (Supplementary Fig. S3a). However, as seen in Supplementary Fig. S3b, nine regions differ by >|1| in their hydrophobicity index: regions 5–9, 26–35, 60–74, 178–185, 200–202, 316–324, 349–355, 423–431, and 495–499 (numbering based on the consensus sequence of TvL and CuL). Three of these regions lie within or close to flexible loops near the substrate binding site (see MD data below): region 178–185 is part of the first loop, region 200–202 is part of a second loop, and region 349–355 is very close to a third loop (Fig. 1b).
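The hydrophobicity comparison above is based on a residue hydropathy index (cf. ref. 35). As a minimal sketch, and under stated assumptions, the snippet below shows how such windowed profiles can be compared to flag regions differing by more than |1|; the window length and the short sequences are illustrative placeholders, not the actual TvL/CuL alignment.

```python
# Minimal sketch: sliding-window Kyte-Doolittle hydropathy profiles and their
# difference. Window length (9) and the toy sequences are assumptions; the real
# analysis used the aligned full-length TvL and CuL sequences (cf. ref. 35).
KD = {'I': 4.5, 'V': 4.2, 'L': 3.8, 'F': 2.8, 'C': 2.5, 'M': 1.9, 'A': 1.8,
      'G': -0.4, 'T': -0.7, 'S': -0.8, 'W': -0.9, 'Y': -1.3, 'P': -1.6,
      'H': -3.2, 'E': -3.5, 'Q': -3.5, 'D': -3.5, 'N': -3.5, 'K': -3.9, 'R': -4.5}

def hydropathy(seq, window=9):
    """Mean Kyte-Doolittle index over a sliding window centered on each residue."""
    half = window // 2
    vals = [KD[a] for a in seq]
    return [sum(vals[i - half:i + half + 1]) / window
            for i in range(half, len(seq) - half)]

tvl_segment = "GPVADLTISCDPN"   # placeholder segment, not the real TvL sequence
cul_segment = "GPVSELTLSCDPN"   # placeholder segment, not the real CuL sequence

diff = [a - b for a, b in zip(hydropathy(tvl_segment), hydropathy(cul_segment))]
divergent = [i for i, d in enumerate(diff) if abs(d) > 1.0]
print("Window positions differing by >|1|:", divergent)
```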
### Diversity of substrate binding conformations within laccases

All the studied ligands were assumed to bind near the T1 Cu, which is known to be the site of substrate binding and oxidation2,4,6,69. According to the theory of electron transfer, the proper orientation of the substrates probably involves a short distance between the donor and the T1 acceptor site to enhance the electron transfer rate4. We have previously suggested that laccase substrates need to have the atom to be oxidized close to T1 in the conformation that reflects experimental turnover, because the electron transfer rate decreases exponentially with the distance between the substrate donor orbitals and the T1-His acceptor orbitals69. Accordingly, the relevant active substrate conformations should be selected for minimal distance to T1, a principle that we consider important to future laccase design69. The electron donor atom of the various substrates varies substantially, and in some substrates, such as ABTS, it is represented by a delocalized electron density over multiple atoms. Larger substrates such as ABTS and SGZ were found to be somewhat solvent-exposed. Most of the substrates interact with the His-458 and Asp-206 (TvL numbering) residues and form hydrogen bonds, salt bridges, or π-π stacking interactions with them (Figs. 2 and 3 and Supplementary Tables S7–S9). It has been reported that the His-458 (TvL numbering) and Asp-206 residues form crucial interactions with laccase substrates, with the Nε H-atom of His-458 plausibly involved in the actual electron transfer. Asp-206 forms stable hydrogen bonds with the substrate to maintain this close-encounter active conformation with minimal electron transfer distance6,69. Because of this, we focused on ligand conformations that involved interactions with these residues and had a short distance between His-458 Nε and the supposed electron donor atoms. We assumed OH was the donor group of all phenolic substrates and analyzed the distances between the closest OH and His-458. For amines, the distance between NH2 and His-458 was calculated, and for other substrates, we performed visual inspection of the docked poses and selected the poses that were closest to the T1 site. For ligand dataset A, which includes phenolic substrates, Glide and MMGBSA both produced relevant conformations that fulfil the distance requirements for electron transfer, with distances <5 Å (except for p-coumaric acid) between the expected electron donor atom and the His-458 Nε H-atom (TvL) (Fig. 2a,b and Supplementary Table S7). Out of the 11 compounds, nine were phenols, and in seven of these, a methoxy group was present ortho to the phenolic OH. Interestingly, we observed that the methoxy O-atom has a tendency to form hydrogen bond interactions with the Nε H-atom of His-458, which is plausibly involved in the electron transfer pathway. In OH-dilignol, the methoxy group of the methoxyphenyl ring formed a hydrogen bond with His-458. In catechol and dopamine, where an OH group is present ortho to the phenolic OH, the hydroxyl groups tended to form hydrogen bonds with Asp-206 and His-458. However, when a methoxy or hydroxyl group was absent from the ortho position of the phenolic OH (p-coumaric acid), the hydrogen bond interaction with His-458 was missing and the distance between the phenolic O-atom and the His-458 Nε H-atom was ~6 Å. We also observed that Asn-264 and Phe-265 are important TvL residues involved in hydrogen bonding and π-π stacking to a variable extent depending on the nature of the substrate.
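The distance screening described above amounts to measuring the shortest separation between a pose's candidate donor oxygens and the His-458 Nε atom and keeping poses below the ~5 Å criterion. The sketch below is a minimal, stand-alone illustration of that measurement; the file name "pose.pdb" and the ligand residue name "LIG" are assumptions, and real Glide/MMGBSA output may use different naming.

```python
# Minimal sketch: shortest ligand-oxygen to His-458 NE2 distance in a docked pose.
# "pose.pdb" and the ligand residue name "LIG" are illustrative assumptions.
import math

def read_atoms(pdb_path):
    """Parse ATOM/HETATM records from a PDB-format file."""
    atoms = []
    with open(pdb_path) as fh:
        for line in fh:
            if line.startswith(("ATOM", "HETATM")):
                atoms.append({
                    "name": line[12:16].strip(),
                    "resname": line[17:20].strip(),
                    "resid": int(line[22:26]),
                    "xyz": (float(line[30:38]), float(line[38:46]), float(line[46:54])),
                })
    return atoms

def dist(a, b):
    return math.sqrt(sum((p - q) ** 2 for p, q in zip(a, b)))

atoms = read_atoms("pose.pdb")
# NE2 is the usual PDB atom name for the His Nε2 nitrogen; residue 458 in TvL numbering.
ne2 = next(a for a in atoms if a["resid"] == 458 and a["name"] == "NE2")
# All ligand oxygens (assumed residue name "LIG") as candidate phenolic donors.
lig_oxygens = [a for a in atoms if a["resname"] == "LIG" and a["name"].startswith("O")]
closest = min(dist(o["xyz"], ne2["xyz"]) for o in lig_oxygens)
print(f"Closest ligand O ... His-458 NE2 distance: {closest:.2f} Å")
```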
Of the 16 compounds in dataset B, 15 are phenols and one is an amine (p-toluidine). In 14 of these compounds, the phenolic OH group formed a hydrogen bond with Asp-206 as the hydrogen bond acceptor (Fig. 2c–f and Supplementary Table S8), strongly suggesting that this is the predominant active conformation of phenolic laccase substrates. We consider this consistently recurring interaction mode to be of general relevance for the real turnover rate catalyzed by the laccases, as discussed further below. The NH2 group of p-toluidine also formed a hydrogen bond with Asp-206. We also found that the Nε H-atom of His-458 acted as a persistent hydrogen bond donor and participated in hydrogen bonding with the hydroxyl groups of catechol, guaiacol, o-cresol, pyrogallol, and p-hydroxybenzoic acid. His-458 also formed a hydrogen bond with the methoxy group of 2,6-dimethoxyphenol. In all these ligand poses, the distance between the phenolic OH and the His-458 Nε H-atom was <5 Å. As a final recurring feature, Phe-265 commonly formed π-π stacking interactions with the phenyl rings of many substrates, including 2,6-dichlorophenol, 2,4-dichlorophenol, hydroquinone, caffeic acid, o-cresol and p-hydroxybenzoic acid. For the CuL substrates of dataset C, we found, interestingly, similar poses, which indicates that the phenolic substrate conformations are generically important. One main difference was that His-454 (which is equivalent to TvL His-458) commonly formed π-π stacking interactions with the phenyl rings of the substrates (Fig. 3 and Supplementary Table S9). Out of the 23 substrates, 11 formed π-π stacking contacts with His-454. However, like TvL, CuL also engaged in hydrogen bonding with the substrates, in this case with the methoxy and phosphate groups of 2-amino-3-methoxybenzoic acid and sodium 1-naphthyl phosphate, respectively. Asp-206 was involved in hydrogen bonding and salt-bridge interactions with the ammonium groups of twelve ligands: N-(1-naphthyl)ethylenediamine, 3-amino-4-hydroxybenzenesulfonic acid, epinephrine, 3-(3,4-dihydroxyphenyl)-L-alanine, norepinephrine, 2-amino-3-methoxybenzoic acid, 2-aminophenol, 3-aminobenzoic acid, 4-aminophenol, 4-aminobenzoic acid, anthranilamide, and anthranilic acid. Asp-206 also participated in hydrogen bonding with the phenolic OH groups of the D-catechin, 3-amino-4-hydroxybenzenesulfonic acid, 2-aminophenol, 4,5-dihydroxy-1,3-benzene-disulfonic acid and 2-methoxyhydroquinone substrates, as we saw for TvL. Furthermore, Thr-168, Asn-264, Leu-265, and Gly-391 consistently formed hydrogen bonds with the substrates.

### Quantitative analysis of the docking scores and activities

In order to determine whether docking scores aid in understanding laccase activity, we attempted to correlate the XP docking scores from Glide with the experimentally reported Km values and relative activities. For all three datasets, we found no significant correlations (Table 2). Glide is well known for producing excellent conformations of bound substrates in proteins44,70,71,72, but it commonly fails to quantitatively rank the relative binding free energies of ligands, and thus this negative result was not unexpected73. Our hypothesis that Km correlates with the binding free energy of the substrate in its active conformation is thus not supported by the Glide scoring, but this could also be due to the known weakness of the scoring functions.
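The correlation tests reported here and in Table 2 are ordinary least-squares fits summarized by R2 and a p-value. A minimal sketch of such a test is shown below (using SciPy purely for illustration); the score and pKm arrays are made-up placeholders, not values from Table 2, and the same analysis applies to the MMGBSA energies discussed next.

```python
# Minimal sketch: R^2 and p-value between a computed score and pK_m.
# The arrays are hypothetical placeholders, not data from this study.
from scipy import stats

docking_scores = [-5.1, -4.3, -6.0, -3.8, -4.9, -5.5]   # hypothetical Glide XP scores
pkm = [3.2, 2.8, 3.9, 2.5, 3.1, 3.6]                    # hypothetical pK_m values

fit = stats.linregress(docking_scores, pkm)
print(f"R^2 = {fit.rvalue**2:.2f}, p = {fit.pvalue:.3f}")
```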
Therefore, we also carried out more advanced and typically more accurate MMGBSA binding energy calculations (in kcal/mol)74, using six different degrees of protein flexibility for the three datasets, and analyzed the correlation with the experimental data (Table 2, Fig. 4, and Supplementary Figs. S4–S9). We performed a comparative analysis of the MMGBSA scores at two pH values, i.e. 4.5 and 7.0. Interestingly, as a general trend, the correlation increased with a decrease in the protein flexibility (i.e. how much of the protein is allowed to be flexible and adjust upon substrate binding). For dataset A, a maximum observed R2 of 0.1 was seen at pH 4.5 when no flexibility was assigned to the protein (Fig. 4a). At pH 7.0, no correlation was observed for dataset A. A good correlation was not expected, as this dataset contains structurally diverse compounds and the Km values were derived by different research groups, which introduces heterogeneity and noise into the dataset. However, the direction of the correlation follows the expected outcome, i.e. as the pKm increases, the MMGBSA score becomes more negative. For dataset B, we found a good correlation at smaller flexibility, with a maximum correlation of R2 = 0.29 at pH 7.0 (0 Å flexibility). The correlation was more pronounced than at pH 4.5 (R2 = 0.16 at 4 Å flexibility), and the direction of the correlation was meaningful. The low R2 at pH 4.5 possibly reflects the fact that the experimental activities were reported at pH 9.0. At pH 7.0 (0 Å flexibility), p-hydroxybenzoic acid was a lone point in the data range; if it was removed, the correlation was lost entirely (R2 = 0.02, Fig. 4b,c). Thus, this correlation depends strongly on a single data point because of uneven coverage of the data range, and we can therefore not put much trust in this observation, although we note that the direction of the correlation is meaningful (stronger binding reflects smaller Km). The experimentally reported relative activity values are plausibly approximately equivalent to kcat/Km under the Michaelis-Menten conditions of an assay, and thus we would expect binding affinity to explain only part of this activity (the remaining part being explained by the electron transfer properties affecting the rate). For dataset C, we found that the binding free energy estimated by MMGBSA and the experimental pKm values correlated significantly (p < 0.05) at lower flexibility distances, with a maximal correlation of R2 = 0.33 at pH 4.5 using 8 Å flexibility (Fig. 4d). Similar correlations were also observed at pH 7.0 using 4 Å (R2 = 0.31) and 8 Å (R2 = 0.30) protein flexibilities. Here, we observed very similar R2 values at pH 4.5 and 7.0, with the actual Km measured at pH 4.5. The pKm values increased (Km decreased) with decreasing binding energy, as expected if smaller Km reflects stronger substrate-enzyme binding. We therefore conclude that the MMGBSA binding free energy is a very valuable descriptor of experimental Km values, but the descriptor is less important for explaining relative activity, which also depends on features that contribute to kcat. The lower data quality may explain some of the poor performance of datasets A and B in Table 2. We also conclude that low flexibility, which maintains the protein geometries close to the crystal structure, explains the experimental data better than computationally relaxed proteins, although the optimal protein relaxation depends on the substrate. Dataset B contains smaller substrates (phenol analogs) and therefore favors little protein relaxation.
In contrast, dataset C includes large compounds such as sodium 1-naphthyl phosphate, N-(1-naphthyl)ethylenediamine, D-catechin and ABTS, and therefore prefers some protein flexibility for proper alignment and removal of steric clashes.

### QSAR models of laccase activity

In order to determine whether laccase activity can be rationally predicted and explained, QSAR modeling was performed for the compounds of datasets B and C, which contained the most data and were each derived from a single laboratory, which is expected to reduce noise; the fact that dataset C is the most well-behaved already partly emerges from the MMGBSA comparison of datasets C and A (both are Km data, but only dataset C shows good correlation). Binding affinity is generally thought to contribute to observed Km values, as confirmed above for dataset C, and thus the MMGBSA binding free energy was included as a descriptor during QSAR modeling. Because the binding poses within the protein may represent essential structural information that differs from the free ligand state, as required for the electron transfer and activity, both free and bound ligand conformations were used for computation of the descriptors. The comparison of QSAR results obtained with free and bound conformations is of technical interest in its own right, as it outlines the importance of generic ligand features vs. features specific to the protein-ligand complex. All the QSAR models developed in the present study are shown in Supplementary Table S10. The descriptors of these models are not normalized; however, the descriptors of the models discussed in the present study (models 1 to 7 below) are normalized. These descriptors are explained in Supplementary Table S11, and scatter plots of the correlation between these descriptors (without normalization) and log (activity) or pKm are shown in Supplementary Figs. S10–S15. The representative descriptors showing high correlation with log (activity) or pKm are shown in Fig. 5. p-Toluidine, which is an amine, was present in dataset B, which otherwise consisted of phenols. Accordingly, it was found to be an outlier in the QSAR modeling, i.e. its inclusion made the dataset too noisy. Therefore, for the global, QM and SE models of dataset B, the number of data points in the training set was n = 12, and for the ADMET models, n = 15. The best global QSAR model using the free ligands of dataset B was (R2 = 0.76; Q2 = 0.57; standard error = 0.21):

$$\log(\mathrm{activity}) = 1.38 + 1.36\,\mathrm{QPlogP_{o/w}} + 0.88\,\mathrm{QPlogS} - 0.97\,\mathrm{ESP_{max}}$$ (1)

In model 1 (Eq. 1), the maximum value of the electrostatic potential energy (ESP max, measured in kcal/mol) was an important descriptor; it correlated inversely with the reported activity of dataset B. In addition, the hydrophobicity of the ligands, as measured by the common octanol-water partition coefficient (QPlogPo/w), and the water solubility (QPlogS) showed positive regression coefficients in the QSAR model. QPlogPo/w had the major effect, with R2 = 0.24 against log (activity), whereas QPlogS showed only a small contribution to activity. The highly active catechol has the highest QPlogS value, and the weakly active pyrogallol has low solubility. The fairly active compounds catechol, hydroquinone, and resorcinol (with two phenolic OH groups) exhibited both hydrophobicity and water solubility, whereas the much less active compound pyrogallol (with three phenolic OH groups) did not.
This suggests that a good laccase substrate should exhibit an optimal balance of hydrophobicity and hydrophilicity. In addition, a 3-descriptor ADMET model was derived (R2 = 0.74; Q2 = 0.50; standard error = 0.30):

$$\log(\mathrm{activity}) = 3.6 - 1.89\,\mathrm{FISA} - 2.64\,\mathrm{glob} + 1.41\,\mathrm{QPlogS}$$ (2)

Model 2 (Eq. 2) includes the hydrophilic solvent accessible surface area (FISA), the globularity of the substrate (glob), and QPlogS as descriptors. FISA showed inverse correlation with the experimental activity. As above, this implies that the phenol substrates need to be optimally hydrophobic to maximize activity. As the globularity of the compounds decreases, activity increases, correlating well with the extended nature of the substrate-binding site. The globularity parameter, which is a surface area to SASA ratio, showed inverse correlation (−0.93) with the Vol/SASA ratio of the substrates. The solubility parameter QPlogS correlated positively with activity, in good agreement with the above observation for the global model. It was surprising and encouraging to us that the experimental activity of the phenols of dataset B can be described so well by a simple 3-parameter model; the features driving laccase activity toward phenolic substrates according to this model (hydrophobicity, solubility, shape and the electrostatic potential energy of the substrates) suggest that the substrates require optimal hydrophobic packing and solubility to reach the T1 site, but then additional favorable electronic properties to engage in the electron transfer. This double requirement (hydrophobic association, but also electronic alignment) may be of importance to future optimization of laccase activity toward phenolic substrates, perhaps even including large phenolic constituents of lignin, although this remains to be investigated further. The above models were obtained using conventional free ligand conformations. As explained above, we also studied the bound ligand conformations separately. The best general 3-descriptor QSAR model for these conformations was (R2 = 0.80; Q2 = 0.59; standard error = 0.20):

$$\log(\mathrm{activity}) = 1.98 + 0.94\,\mathrm{PISA} + 1.00\,\mathrm{EA(eV)} - 1.23\,\mathrm{ESP_{max}}$$ (3)

Model 3 includes ESP max (as in model 1), the electron affinity (EA, measured in eV) and the solvent-exposed π surface contributed by the carbon atoms (PISA), which is also an electronic property of the substrate. ESP max again exhibited inverse correlation with the experimental activity. In contrast, PISA and EA(eV) exhibited positive correlations, indicating that activity increases as the solvent-exposed π surface (carbon atoms and attached hydrogens) and the electron affinity increase. The ADMET model 4 (Eq. 4) showed that in addition to QPlogPo/w and QPlogS, Vol/SASA contributes to activity. Vol/SASA showed a similar inverse correlation with the globularity (glob) as for the free ligand states. The ADMET model equation was (R2 = 0.82; Q2 = 0.67; standard error = 0.24):

$$\log(\mathrm{activity}) = -0.44 + 1.72\,\mathrm{QPlogP_{o/w}} + 1.87\,\mathrm{QPlogS} + 1.91\,\mathrm{Vol/SASA}$$ (4)

We conclude that a slightly better correlation is obtained using descriptors computed for the bound ligands than for the free ligands, but importantly, similar models are achieved in both cases, i.e. our results are not dependent on the approximations made in the ligand conformations.
Furthermore, a simple 3-descriptor model can describe the experimental activity of these phenolic laccase substrates well. Thus, electronic properties of the substrates (ESP max), as well as the hydrophobicity (QPlogPo/w), solubility (QPlogS) and shape (Vol/SASA and glob), largely determine the activity of TvL toward phenolic substrates (in both the free and bound ligand QSAR models). This should be of interest in future optimization of laccases toward phenolic substrates. To understand the basis of laccase activity further, we also developed QSAR models using dataset C with known Km values. For the free ligand states, sodium 1-naphthyl phosphate (Supplementary Table S3), which exhibited a very high Km, was considered an outlier and therefore removed. In the global models, the ionization potential (IP, measured in eV) was identified as the most important parameter (Eq. 5); it showed inverse correlation with pKm. The best models obtained using free ligands and dataset C were:

$$\mathrm{pK_m} = 4.43 - 2.93\,\mathrm{IP(eV)};\ (\mathrm{R^2} = 0.62;\ \mathrm{Q^2} = 0.53;\ \mathrm{standard\ error} = 0.60)$$ (5)

$$\mathrm{pK_m} = 2.69 + 1.99\,\mathrm{accptHB} - 2.57\,\mathrm{IP(eV)} + 1.51\,E_{\mathrm{Solv(PBF)}};\ (\mathrm{R^2} = 0.77;\ \mathrm{Q^2} = 0.66;\ \mathrm{standard\ error} = 0.49)$$ (6)

The solvation energy (ESolv(PBF), computed from the PBF solvent model in kcal/mol) and the number of hydrogen bond acceptors (accptHB) are important parameters for pKm in dataset C (Eq. 6). The solvation energy and the number of hydrogen bond acceptors correlated positively with pKm. The interpretation of the recommended models 5 and 6 is that the laccase substrates "start" with a favorable generic positive pKm common to all phenols, which is then increased further (in model 6) by favorable hydrogen bonds and dehydration from the water phase. In both models 5 and 6, pKm is impaired by the ionization potential, indicating that a large cost of removing the electron impairs the observed Km, i.e. the electron transfer is entangled with the Km, as Km only reflects the catalytically active state. It was again encouraging that the experimental Km of dataset C can be described by simple models. The solvation energy and the number of hydrogen bond acceptors suggest that the substrates require favorable binding at the T1 site; but the importance of the ionization potential suggests that the measured Km, in addition to the binding, also carries information about the actual electron transfer at the T1 site, probably because Km values are not solely interpretable (and separable) as binding affinities, due to the nature of Michaelis-Menten kinetics75. Using the bound ligand conformations of dataset C, a 3-descriptor global model was derived (Eq. 7); the experimental oxidation potential (Eo(expt)), the shape (Vol/SASA) and the hydrophilic surface area (FISA) of the substrates were the important parameters. The best global model equation was (R2 = 0.74; Q2 = 0.60; standard error = 0.53):

$$\mathrm{pK_m} = 3.04 - 1.14\,\mathrm{FISA} - 1.79\,E_{\mathrm{o}}\mathrm{(expt)} + 3.98\,\mathrm{Vol/SASA}$$ (7)

Vol/SASA correlated positively with pKm, whereas Eo(expt) and FISA correlated inversely. The correlation of FISA with pKm was similar to that observed with log (activity) for the dataset B compounds.
This model consistently suggests that the oxidation half potential, shape and hydrophilicity of the substrates contribute to the experimentally measured Km; the latter two can be related to formation of the active poses engaged in electron transfer, whereas the former relates to an activity effect on the measured Km that cannot be disentangled from the binding affinity. Our interpretation of these results is that favorable binding of the substrates at the T1 site of laccase is required for exhibiting a good (low) Km value. In addition, the involvement of the ionization potential and oxidation potential of the substrate indicates that the Km also reflects some information on the electron transfer activity occurring at the T1 site, i.e. the real measured Km represents an active binding state of the substrate rather than just the average binding affinity75. Furthermore, the observation that simple QSAR models describe the experimental Km values of the laccase for diverse substrates with only a few descriptors is very encouraging. These conclusions may be of interest in further studies of enzymatic Km optimization in general and laccase optimization in particular.

### External validation of QSAR models

To test whether our developed models have any predictive value, we compared our model performance against the Km values of four substrates previously obtained in-house using TvL under the same experimental conditions37. These four Km values were analyzed with respect to the developed binding model and QSAR models. The compounds were sinapic acid, ferulic acid, p-coumaric acid and OH-dilignol. The important role of His-458 was clear from this analysis, as already discussed above. His-458 forms a coordination bond with the T1 Cu along with two other residues (His and Cys) and helps the T1 Cu attain a trigonal geometry6. Sinapic acid, ferulic acid and p-coumaric acid are very similar, differing only in the methoxy substitution ortho to the phenolic OH (Fig. 6). The methoxy O-atom acts as a hydrogen bond acceptor and forms a hydrogen bond with the H-atom of Nε on His-458. The distance between His-458 Nε and the phenolic OH of these substrates was <5 Å. However, when this methoxy group is absent, as in p-coumaric acid, the hydrogen bond interaction with His-458 (TvL) is lost and the distance between His-458 Nε and the phenolic OH increases to ~6 Å. Thus, the absence of the methoxy group in p-coumaric acid prevents the substrate from attaining what we suggest is the active conformation required for electron transfer. This finding is in agreement with our suggestion, based on the Km analysis, that specific active conformations with short electron transfer distances are responsible for the real observed turnover in laccases. The pKm values of the four test compounds were predicted using our recommended pKm models developed using the best-quality dataset C (Fig. 7a and Supplementary Table S12). In model 7, the experimental oxidation potential is a descriptor whose values were not available for the in-house compounds, and this model was therefore not used. We found that the predicted and experimental pKm values were in good trend agreement (Fig. 7a and Supplementary Table S12). Considering that the Km values of the in-house compounds were obtained for TvL, this confirms the generality of the models for both the TvL and CuL datasets B and C.
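A minimal sketch of how such predictions follow from Eqs 5 and 6 is given below. The models use normalized descriptors, so each raw value must first be scaled with the training-set mean and standard deviation; the descriptor values and scaling constants shown here are illustrative placeholders, not the values used in the study, and the Km-to-pKm conversion mirrors the one described in the Methods.

```python
# Minimal sketch: predicting pK_m with QSAR models 5 and 6 (Eqs. 5-6).
# Raw descriptor values, training-set means and standard deviations below
# are illustrative placeholders only.
import math

def pkm_from_km_uM(km_uM):
    """pK_m = -log10(K_m in mol/L), with K_m given in micromolar."""
    return -math.log10(km_uM * 1e-6)

def z(x, mean, sd):
    """Normalize a raw descriptor against (assumed) training-set statistics."""
    return (x - mean) / sd

ip_z    = z(8.6, 8.9, 0.4)      # ionization potential (eV), normalized
hba_z   = z(2.0, 2.5, 1.1)      # number of H-bond acceptors, normalized
esolv_z = z(-12.3, -10.0, 3.0)  # PBF solvation energy (kcal/mol), normalized

pkm_model5 = 4.43 - 2.93 * ip_z                                   # Eq. 5
pkm_model6 = 2.69 + 1.99 * hba_z - 2.57 * ip_z + 1.51 * esolv_z   # Eq. 6

print(f"Model 5: {pkm_model5:.2f}, Model 6: {pkm_model6:.2f}, "
      f"pK_m for a hypothetical K_m of 45 uM: {pkm_from_km_uM(45.0):.2f}")
```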
The relatively smaller pKm predicted for p-coumaric acid (which has the lowest experimental pKm) may indicate that the QSAR models are able to differentiate substrates based on pKm. OH-dilignol was structurally more diverse than the other three compounds of the dataset and could therefore plausibly be considered an outlier. In this case, the single-descriptor QSAR model using only the ionization potential best reproduces the experimental relative pKm of the compounds. To critically test our models, we also analyzed another external dataset of 12 substrates with reported Km for a laccase from Trametes villosa (Fig. 7b and Supplementary Fig. S16)76, using our recommended pKm models 5 and 6. All compounds in this dataset except 2,6-dimethylaniline are phenolic and thus suitable for testing our models once the aniline was removed (see Supplementary Fig. S16 for results with the aniline included; the models are clearly only suitable for phenolic substrates with the proper hydrogen bonding, as explained above). We observed good trend agreement between the actual and predicted pKm values, even though the activities were reported for Trametes villosa laccase and the models were developed based on data for CuL, indicating again that the local phenolic alignment near T1 is largely generic. We obtained R2 = 0.90 and 0.83 for predicted vs. observed pKm values using models 5 and 6, respectively. This confirms that our models can be used for broader predictions of phenolic turnover by laccases.

### Protein-state specific molecular dynamics

To understand whether the protein oxidation and protonation state would change the dynamics and structure of the substrate binding cavities of the two proteins, we performed a series of 12 MD simulations at variable pH and oxidation state (see Methods). All trajectories displayed stable RMSD values by the end of the 50 ns simulations (Supplementary Fig. S17), consistent with previous findings for these proteins77. Root mean square fluctuation (RMSF) plots and B factors of the residues were analyzed for the last 20 ns of each trajectory (Fig. 8 and Supplementary Table S13). The RMSF plots revealed high flexibility of some loops and the C-terminal residues, as expected. Importantly, conserved residues involved in interactions with the substrates were stable, with very low RMSF values. Asp-206 is located in the conserved loop region with the sequence Ile-Ser-Cys-Asp-Pro-Asn in both TvL and CuL. The residues of this region formed interactions with the substrates, as reported above. The MD simulations show that this region is relatively stable in all TvL and CuL states, with fluctuations ≤0.5 Å; thus, the importance of Asp-206 in guiding substrate binding, discussed above, does not seem to depend on the dynamics of the proteins and does not change with the protein oxidation or pH state. Asn-264 and Phe-265 form important hydrogen bonding and π-π stacking interactions with substrates in TvL. Asn-264 is conserved in TvL and CuL, whereas Phe-265 is replaced by another hydrophobic residue, Leu-265, in CuL. These residues are located in the loop region from Pro-263 to Asn-275, which forms part of the substrate-binding pocket. Except for two residues (TvL/CuL: Phe/Leu-265 and Val/Thr-268), this loop region is conserved. In TvL, this loop region fluctuated by less than 1.0 Å. However, in CuL, this segment exhibited higher fluctuations, with a maximum RMSF for Gly-269 (3.15 Å) in the RO state of CuL (pH 4.5, Ash-206).
The TvL segment Ile-455 to His-458 and the corresponding CuL region Ile-451 to His-454 form part of the binding cavity, with His-458 (TvL numbering) involved in hydrogen bonding and π-π contacts with substrates and in electron transfer to the T1 Cu. This region was dynamically stable and showed RMSF <1.0 Å in all the studied protein states, indicating that our conclusions are not sensitive to the specific protonation and oxidation state of the proteins. The TvL loop region from Pro-391 to Pro-396 and the corresponding CuL segment Leu-389 to Pro-394 also form an important part of the substrate-binding cavity. These regions displayed RMSF values <1.6 Å. However, the adjacent region Thr-387 to Ala-390 in TvL showed comparatively higher fluctuations (Fig. 8 and Supplementary Table S13). In order to visualize the structural changes, clustering was applied to the last 20 ns of all 12 MD trajectories. The centroid of the most populated cluster was chosen as a representative structure for each MD system. These structures at neutral and low pH were then compared. Considerable differences between low and neutral pH were seen, with RMSD >1.3 Å in both TvL and CuL states (Supplementary Table S14), reflecting a partial change in secondary structural elements and their conformations. This is of interest because experimental structural insight for the same proteins at different pH is not available, although the pH of experimental laccase assays varies substantially. The highest RMSD was observed between TvL RO (pH 7) and TvL RO (pH 4.5, Ash-206), where a major change was observed in the terminal residues, which attained different conformations in the two structures. The region Trp-484 to Pro-489 existed as an α-helix in TvL RO (pH 7), whereas it was a loop in TvL RO (pH 4.5, Ash-206). Similar structural differences were also observed between TvL RO (pH 4.5) and TvL RO (pH 4.5, Ash-206), and when analyzing secondary structure elements that persisted for >70% of the last 20 ns (Supplementary Table S13). Similarly, in the 3-electron-reduced state of TvL (pH 7 and 4.5), a major difference occurred in the region from Pro-481 to Leu-494. This segment was helical at pH 4.5, but more disordered (Lys-482 to Ile-490) at pH 7. The loop region Tyr-152 to Asp-167 attained similar conformations in 3-electron-reduced TvL (at pH 4.5 with both protonated and non-protonated Asp-206), but fluctuated at pH 7 (Fig. 8e). This region also fluctuated in the TvL RO (pH 7 and 4.5) states. In all CuL states, the loop regions Tyr-152 to Asp-166 and Gly-172 to Ala-186 showed high flexibility and attained different conformations (Fig. 8e). We can conclude that the change in the protonation state of the laccases (low vs. neutral pH) affects the dynamics of the proteins, that the three main loops surrounding the T1 site exhibit variable fluctuations (Fig. 8e), and that the C-terminal residues generally show high mobility in all the TvL and CuL systems, as seen before77. However, the RO and 3-electron-reduced states showed similar fluctuations of the residues involved in substrate binding, i.e. the change in the copper oxidation states has no major effect on the protein dynamics of the binding sites, consistent with their buried and conserved nature. Thus, our QSAR results, which depend only on residues close to the substrates, will not depend on the protonation and oxidation state of the proteins.
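For reference, the per-residue RMSF and the corresponding pseudo B-factors used in such analyses can be computed directly from aligned trajectory coordinates, as in the minimal sketch below; the coordinate array is a random placeholder standing in for real, pre-aligned Cα positions extracted from the trajectories.

```python
# Minimal sketch: per-residue RMSF over a trajectory segment, and pseudo
# B-factors via B = (8*pi^2/3) * RMSF^2. `coords` has shape
# (n_frames, n_residues, 3) and is a random placeholder, not real MD data;
# real coordinates must already be aligned to a common reference frame.
import numpy as np

rng = np.random.default_rng(0)
coords = rng.normal(size=(400, 499, 3))  # dummy frames standing in for the last 20 ns

mean_pos = coords.mean(axis=0)                                       # average position per residue
rmsf = np.sqrt(((coords - mean_pos) ** 2).sum(axis=2).mean(axis=0))  # Å per residue
b_factor = (8.0 * np.pi ** 2 / 3.0) * rmsf ** 2                      # Å^2 per residue

print(f"Max RMSF {rmsf.max():.2f} Å at residue index {rmsf.argmax()}")
```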
## Conclusions

In this work, we have explored the molecular determinants of laccase activity using three experimental datasets and a variety of computational chemistry techniques. We explored QSAR properties for both the bound and free conformations of the ligands to test the importance of substrate conformation in explaining the experimental data. As far as we know, this is the first attempt to quantitatively correlate experimental activity data of laccases with molecular descriptors. We hypothesized that the electronic descriptors of the substrates are important for explaining real laccase activity due to the nature of the involved electron transfer from the bound substrate to T1 of the laccase. We find that MMGBSA estimates of the binding free energy correlate with log Km for the largest, most well-behaved dataset. This indicates that Km values at least partly reflect the binding constant, but that, importantly, other features also contribute. Most of the docked phenolic substrates (14) displayed a hydrogen bond between the phenolic OH group and Asp-206 in a deprotonated state. We therefore conclude that this is the predominant active conformation of phenolic laccase substrates, of importance to future optimization of laccase activity toward such substrates. The docked conformations of six ortho-methoxyphenols in TvL showed a hydrogen bond between the methoxy O-atom and His-458. This residue also formed a hydrogen bond with the same OH group in six phenolic substrates, which were also involved in hydrogen bonding with Asp-206. Thus, His-458 partly determines the active conformation of the bound substrates, contributing to the observed Km values. All the docked phenolic substrates of dataset B displayed a <5 Å distance between the phenolic OH and the His-458 Nε H-atom, as His-458 is the plausible electron acceptor at the T1 site. This indicates that the bound phenolic substrates have active conformations suitable for electron transfer. We show that simple QSAR models can describe the experimental activity parameters with good, predictive trend accuracy. We conclude that the phenolic laccase substrates mainly require an optimal shape, good hydrophobic packing and optimal water solubility to reach the T1 site, and then additional electronic features for the electron transfer. Our MD simulations show that changes in the T2/T3 oxidation state and the protonation state of the proteins, while substantially affecting some parts of the proteins, have little effect on the substrate binding site. Accordingly, our models are not sensitive to the protein state, as could be hoped, because the substrate site is relatively far (>12 Å) from the T2/T3 copper site. Our results suggest that laccase substrates require favorable binding at the T1 site in order to achieve a low Km, but that additional electronic properties of the substrates should also be optimized, as they affect Km. This suggests that Km contains some elements of a binding affinity of the substrate-enzyme complex, but also additional features relating to the electron transfer, in agreement with an interpretation of Michaelis-Menten kinetic parameters75. For laccases, this is particularly clear in the sense that only conformations that actively engage in electron transfer by optimal association with T1 manifest and contribute to the observed kinetics, and this explains why Km depends on both the affinity and the electronic features of the substrates. Finally, we tested the predictive capabilities of our recommended models 5 and 6 on a series of additional substrates.
The trend prediction was accurate except for a few cases. The results show that the ortho methoxy group of sinapic acid and ferulic acid is the main cause of their low Km compared to their analog p-coumaric acid, which has a high Km and no ortho methoxy group. This ortho methoxy group forms a hydrogen bond with His-458 of TvL and helps stabilize the identified active conformations, which probably account for real laccase turnover. Our results also show that a model of laccase turnover should largely account for the active substrate conformations rather than details of the T2/T3 site.

## Data Availability Statement

The data required to reproduce the present computational work are included in the file named "Suppinfo.pdf". It includes information about the ligand datasets A, B and C, ligand interaction diagrams, details on the QSAR models, definitions of the model descriptors, RMSF plots from the MD simulations, the Ramachandran plot of the CuL model, the sequence alignment of laccases, hydrophobicity plots of TvL and CuL, scatter plots of experimental log(relative-activity) and pKm versus ΔGbind(MMGBSA), QSAR descriptor correlations with pKm and log(relative-activity), predictions for the external dataset using QSAR models 5 and 6, and RMSD plots.

## References

1. Solomon, E. I., Sundaram, U. M. & Machonkin, T. E. Multicopper Oxidases and Oxygenases. Chem. Rev. 96, 2563–2606 (1996).
2. Solomon, E. I. et al. Copper active sites in biology. Chem. Rev. 114, 3659–853 (2014).
3. Hakulinen, N. & Rouvinen, J. Three-dimensional structures of laccases. Cell. Mol. Life Sci. 72, 857–868 (2015).
4. Jones, S. M. & Solomon, E. I. Electron transfer and reaction mechanism of laccases. Cell. Mol. Life Sci. 72, 869–883 (2015).
5. Dreyer, J. L. Electron transfer in biological systems: An overview. Experientia 40, 653–675 (1984).
6. Sitarz, A. K., Mikkelsen, J. D. & Meyer, A. S. Structure, functionality and tuning up of laccases for lignocellulose and other industrial applications. Crit. Rev. Biotechnol. 36, 70–86 (2016).
7. Dwivedi, U. N., Singh, P., Pandey, V. P. & Kumar, A. Structure-function relationship among bacterial, fungal and plant laccases. J. Mol. Catal. B Enzym. 68, 117–128 (2011).
8. Cañas, A. I. & Camarero, S. Laccases and their natural mediators: Biotechnological tools for sustainable eco-friendly processes. Biotechnol. Adv. 28, 694–705 (2010).
9. Giardina, P. et al. Laccases: A never-ending story. Cell. Mol. Life Sci. 67, 369–385 (2010).
10. Pogni, R., Baratto, M. C., Sinicropi, A. & Basosi, R. Spectroscopic and computational characterization of laccases and their substrate radical intermediates. Cell. Mol. Life Sci. 72, 885–896 (2015).
11. Munk, L., Andersen, M. L. & Meyer, A. S. Direct rate assessment of laccase catalysed radical formation in lignin by electron paramagnetic resonance spectroscopy. Enzyme Microb. Technol. 106, 88–96 (2017).
12. Kunamneni, A. et al. Engineering and applications of fungal laccases for organic synthesis. Microb. Cell Fact. 7, 32, https://doi.org/10.1186/1475-2859-7-32 (2008).
13. Koschorreck, K. et al. Comparative characterization of four laccases from Trametes versicolor concerning phenolic C-C coupling and oxidation of PAHs. Arch. Biochem. Biophys. 474, 213–219 (2008).
14. Pezzella, C., Guarino, L. & Piscitelli, A. How to enjoy laccases. Cell. Mol. Life Sci. 72, 923–940 (2015).
15. Giardina, P. & Sannia, G. Laccases: Old enzymes with a promising future. Cell. Mol. Life Sci. 72, 855–856 (2015).
16. Christopher, L.
P., Yao, B. & Ji, Y. Lignin biodegradation with laccase-mediator systems. Front. Energy Res. 2, 12, https://doi.org/10.3389/fenrg.2014.0001 (2014). 17. 17. Rulísek, L. & Ryde, U. Theoretical studies of the active-site structure, spectroscopic and thermodynamic properties, and reaction mechanism of multicopper oxidases. Coord. Chem. Rev. 257, 445–458 (2013). 18. 18. Smith, M., Thurston, C. F. & Wood, D. A. In Multi-cooper oxidases 201–224, https://doi.org/10.1142/9789812830081_0007 (1997). 19. 19. Murphy, M. E. P., Lindley, P. E. & Adman, E. T. Structural comparison of cupredoxin domains: Domain recycling to construct proteins with novel functions. Protein Sci. 6, 761–770 (1997). 20. 20. Nakamura, K. & Go, N. Function and molecular evolution of multicopper blue proteins. Cell. Mol. Life Sci. 62, 2050–2066 (2005). 21. 21. Piontek, K., Antorini, M. & Choinowski, T. Crystal structure of a laccase from the fungus Trametes versicolor at 1.90-Å resolution containing a full complement of coppers. J. Biol. Chem. 277, 37663–37669 (2002). 22. 22. Sandhu, D. K. & Arora, D. S. Laccase production by Polyporus sanguineus under different nutritional and environmental conditions. Experientia 41, 355–356 (1985). 23. 23. Sulistyaningdyah, W. T., Ogawa, J., Tanaka, H., Maeda, C. & Shimizu, S. Characterization of alkaliphilic laccase activity in the culture supernatant of Myrothecium verrucaria 24G-4 in comparison with bilirubin oxidase. FEMS Microbiol. Lett. 230, 209–214 (2004). 24. 24. Baldrian, P. Fungal laccases-occurrence and properties. FEMS Microbiol. Rev. 30, 215–242 (2006). 25. 25. Stoilova, I., Krastanov, A. & Stanchev, V. Properties of crude laccase from Trametes versicolor produced by solid-substrate fermentation. Adv. Biosci. Biotechnol. 1, 208–215 (2010). 26. 26. Rogalski, J., Wojtas‐Wasilewska, M., Apalovič, R. & Leonowicz, A. Affinity chromatography as a rapid and convenient method for purification of fungal laccases. Biotechnol. Bioeng. 37, 770–777 (1991). 27. 27. Lorenzo, M., Moldes, D., Rodríguez Couto, S. & Sanromán, M. A. Inhibition of laccase activity from Trametes versicolor by heavy metals and organic compounds. Chemosphere 60, 1124–1128 (2005). 28. 28. Frasconi, M., Favero, G., Boer, H., Koivula, A. & Mazzei, F. Kinetic and biochemical properties of high and low redox potential laccases from fungal and plant origin. Biochim. Biophys. Acta 1804, 899–908 (2010). 29. 29. Polak, J. & Jarosz-Wilkolazka, A. Structure/Redox potential relationship of simple organic compounds as potential precursors of dyes for laccase-mediated transformation. Biotechnol. Prog. 28, 93–102 (2012). 30. 30. Giacobelli, V. G. et al. Repurposing designed mutants: a valuable strategy for computer-aided laccase engineering–the case of POXA1b. Catal. Sci. Technol. 7, 515–523 (2017). 31. 31. Santiago, G. et al. Computer-aided laccase engineering: toward biological oxidation of arylamines. ACS Catal. 6, 5415–5423 (2016). 32. 32. Kearse, M. et al. Geneious Basic: An integrated and extendable desktop software platform for the organization and analysis of sequence data. Bioinformatics 28, 1647–1649 (2012). 33. 33. Larkin, M. A. et al. Clustal W and Clustal X version 2.0. Bioinformatics 23, 2947–2948 (2007). 34. 34. Sirim, D., Wagner, F., Wang, L., Schmid, R. D. & Pleiss, J. The Laccase Engineering Database: A classification and analysis system for laccases and related multicopper oxidases. Database 2011, 1–7 (2011). 35. 35. Kyte, J. & Doolittle, R. F. A simple method for displaying the hydropathic character of a protein. J. 
Mol. Biol. 157, 105–132 (1982).
36. Gasteiger, E. et al. In The Proteomics Protocols Handbook 571–607 (Humana Press, 2005).
37. Perna, V., Agger, J. W., Holck, J. & Meyer, A. S. Multiple Reaction Monitoring for quantitative laccase kinetics by LC-MS. Sci. Rep. 8, 8114 (2018).
38. Bolton, E. E., Wang, Y., Thiessen, P. A. & Bryant, S. H. PubChem: Integrated platform of small molecules and biological activities. Annu. Rep. Comput. Chem. 4, 217–241 (2008).
39. Schrödinger Release 2017-4: LigPrep, Schrödinger, LLC, New York, NY, 2017.
40. Berman, H. M. et al. The Protein Data Bank. Nucleic Acids Res. 28, 235–242 (2000).
41. Schrödinger Release 2017-4: Prime, Schrödinger, LLC, New York, NY, 2017.
42. Schrödinger Release 2017-4: Schrödinger Suite 2017-4 Protein Preparation Wizard; Epik, Schrödinger, LLC, New York, NY, 2017; Impact, Schrödinger, LLC, New York, NY, 2017.
43. Halgren, T. A. Identifying and characterizing binding sites and assessing druggability. J. Chem. Inf. Model. 49, 377–389 (2009).
44. Friesner, R. A. et al. Extra precision glide: docking and scoring incorporating a model of hydrophobic enclosure for protein-ligand complexes. J. Med. Chem. 49, 6177–6196 (2006).
45. Bhat, W. W. et al. Molecular characterization of UGT94F2 and UGT86C4, two glycosyltransferases from Picrorhiza kurrooa: Comparative structural insight and evaluation of substrate recognition. PLoS One 8, e73804 (2013).
46. Thomas Leonard, J. & Roy, K. Comparative QSAR modeling of CCR5 receptor binding affinity of substituted 1-(3,3-diphenylpropyl)-piperidinyl amides and ureas. Bioorganic Med. Chem. Lett. 16, 4467–4474 (2006).
47. Roy, K. & Leonard, J. T. QSAR modeling of HIV-1 reverse transcriptase inhibitor 2-amino-6-arylsulfonylbenzonitriles and congeners using molecular connectivity and E-state parameters. Bioorg. Med. Chem. 12, 745–754 (2004).
48. Mehra, R. et al. Pro-apoptotic properties of parthenin analogs: A quantitative structure-activity relationship study. Med. Chem. Res. 22, 2303–2311 (2013).
49. Leonard, J. T. & Roy, K. Classical QSAR modeling of HIV-1 reverse transcriptase inhibitor 2-amino-6-arylsulfonylbenzonitriles and congeners. QSAR Comb. Sci. 23, 23–35 (2004).
50. Schrödinger Release 2017-4: QikProp, Schrödinger, LLC, New York, NY, 2017.
51. Schrödinger Release 2017-4: Maestro, Schrödinger, LLC, New York, NY, 2017.
52. Stewart, J. J. P. Optimization of parameters for semiempirical methods VI: More modifications to the NDDO approximations and re-optimization of parameters. J. Mol. Model. 19, 1–32 (2013).
53. Schrödinger Release 2017-4: Jaguar, Schrödinger, LLC, New York, NY, 2017.
54. Bochevarov, A. D. et al. Jaguar: A high-performance quantum chemistry software program with strengths in life and materials sciences. Int. J. Quantum Chem. 113, 2110–2142 (2013).
55. Small-Molecule Drug Discovery Suite 2017-4, Schrödinger, LLC, New York, NY, 2017.
56. Tirado-Rives, J. & Jorgensen, W. L. Performance of B3LYP density functional methods for a large set of organic molecules. J. Chem. Theory Comput. 4, 297–306 (2008).
57. Zhan, C.-G., Nichols, J. A. & Dixon, D. A. Ionization potential, electron affinity, electronegativity, hardness, and electron excitation energy: molecular properties from density functional theory orbital energies. J. Phys. Chem. A 107, 4184–4195 (2003).
58. Rocha, G. B., Freire, R. O., Simas, A. M. & Stewart, J. J. P. RM1: A reparameterization of AM1 for H, C, N, O, P, S, F, Cl, Br, and I. J.
Comput. Chem. 27, 1101–1111 (2006).
59. Schrödinger Release 2017-4: Strike, Schrödinger, LLC, New York, NY, 2017.
60. Banks, J. L. et al. Integrated Modeling Program, Applied Chemical Theory (IMPACT). J. Comput. Chem. 26, 1752–1780 (2005).
61. Shivakumar, D. et al. Prediction of absolute solvation free energies using molecular dynamics free energy perturbation and the OPLS force field. J. Chem. Theory Comput. 6, 1509–1519 (2010).
62. Wang, L. et al. Accurate and reliable prediction of relative ligand binding potency in prospective drug discovery by way of a modern free-energy calculation protocol and force field. J. Am. Chem. Soc. 137, 2695–2703 (2015).
63. Schrödinger Release 2016-4: Desmond Molecular Dynamics System, D. E. Shaw Research, New York, NY, 2016.
64. Bowers, K. J. et al. Molecular dynamics: Scalable algorithms for molecular dynamics simulations on commodity clusters. In Proceedings of the 2006 ACM/IEEE Conference on Supercomputing - SC '06 84 (2006).
65. Essmann, U. et al. A smooth particle mesh Ewald method. J. Chem. Phys. 103, 8577–8593 (1995).
66. Jones, J. E. On the determination of molecular fields. II. From the equation of state of a gas. Proc. R. Soc. London Ser. A 106, 463–477 (1924).
67. Martyna, G. J., Tobias, D. J. & Klein, M. L. Constant pressure molecular dynamics algorithms. J. Chem. Phys. 101, 4177–4189 (1994).
68. Martyna, G. J., Klein, M. L. & Tuckerman, M. Nosé–Hoover chains: The canonical ensemble via continuous dynamics. J. Chem. Phys. 97, 2635–2643 (1992).
69. Christensen, N. J. & Kepp, K. P. Setting the stage for electron transfer: Molecular basis of ABTS-binding to four laccases from Trametes versicolor at variable pH and protein oxidation state. J. Mol. Catal. B Enzym. 100, 68–77 (2014).
70. Mehra, R. et al. Benzothiazole derivative as a novel Mycobacterium tuberculosis shikimate kinase inhibitor: Identification and elucidation of its allosteric mode of inhibition. J. Chem. Inf. Model. 56, 930–940 (2016).
71. Mehra, R. et al. Discovery of new Mycobacterium tuberculosis proteasome inhibitors using a knowledge-based computational screening approach. Mol. Divers. 19, 1003–1019 (2015).
72. Mehra, R., Sharma, R., Khan, I. A. & Nargotra, A. Identification and optimization of Escherichia coli GlmU inhibitors: An in silico approach with validation thereof. Eur. J. Med. Chem. 92, 78–90 (2015).
73. Mehra, R. et al. Computationally guided identification of novel Mycobacterium tuberculosis GlmU inhibitory leads, their optimization, and in vitro validation. ACS Comb. Sci. 18, 100–116 (2016).
74. Genheden, S. & Ryde, U. The MM/PBSA and MM/GBSA methods to estimate ligand-binding affinities. Expert Opin. Drug Discov. 10, 449–461 (2015).
75. Northrop, D. B. On the Meaning of Km and V/K in Enzyme Kinetics. J. Chem. Educ. 75, 1153–1157 (1998).
76. Tadesse, M. A., D'Annibale, A., Galli, C., Gentili, P. & Sergi, F. An assessment of the relative contributions of redox and steric issues to laccase specificity towards putative substrates. Org. Biomol. Chem. 6, 868 (2008).
77. Christensen, N. J. & Kepp, K. P. Stability Mechanisms of a Thermophilic Laccase Probed by Molecular Dynamics. PLoS One 8, e61985 (2013).

## Acknowledgements

This study was supported by The Danish Council for Independent Research (Project ref. DFF-4184-00355).

## Author information

R.M. performed and analyzed the molecular modeling and cheminformatics studies. J.M. performed and analyzed the sequence alignment. K.P.K. and A.S.M.
planned, supervised and helped to analyze the work. R.M. drafted the manuscript. K.P.K., R.M., A.S.M. and J.M. contributed to the manuscript proofreading.

Correspondence to Anne S. Meyer or Kasper P. Kepp.

## Ethics declarations

### Competing Interests

The authors declare no competing interests.

Publisher's note: Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
# Threshold for Electron Trapping Nonlinearity in Langmuir Waves

## Abstract

We assess when electron trapping nonlinearity is expected to be important in Langmuir waves. The basic criterion is that the inverse of the detrapping rate νd of electrons in the trapping region of velocity space must exceed the bounce period of deeply-trapped electrons, τB. A unitless figure of merit, the “bounce number” NB ≡ 1/(νd τB), encapsulates this condition and defines a trapping threshold amplitude for which NB = 1. The detrapping rate is found for convective loss (transverse and longitudinal) out of a spatially finite Langmuir wave. Simulations of driven waves with a finite transverse profile, using the 2D-2V Vlasov code loki, show trapping nonlinearity increases continuously with NB for transverse loss, and is significant for NB ≳ 1. The detrapping rate due to Coulomb collisions (both electron-electron and electron-ion) is also found, with pitch-angle scattering and parallel drag and diffusion treated in a unified manner. A simple way to combine convective and collisional detrapping is given. Application to underdense plasma conditions in inertial confinement fusion (ICF) targets is presented. The results show that convective transverse loss is usually the most potent detrapping process in a single laser speckle. For typical plasma and laser conditions on the inner laser cones of the National Ignition Facility, local reflectivities are estimated to produce significant trapping effects.

nonlinear Langmuir waves; trapped electrons; laser-plasma interaction; inertial confinement fusion; stimulated Raman scattering

###### pacs: 52.25.Dg, 52.35.Fp, 52.35.Mw, 52.38.Bv, 52.38.-r, 52.57.-z

## I Introduction

The nonlinear behavior of Langmuir waves (LWs) is a much-studied problem in basic plasma physics from the 1950s to the present. In this paper, we focus on nonlinearity due to electron trapping in the LW potential well. This intrinsically kinetic effect has motivated theoretical work such as nonlinear equilibrium or Bernstein-Greene-Kruskal (BGK) modes Bernstein, Greene, and Kruskal (1957), Landau damping reduction O’Neil (1965), nonlinear frequency shift Manheimer and Flynn (1971); Morales and O’Neil (1972); Dewar (1972), and the sideband instability Wharton, Malmberg, and O’Neil (1968); Kruer, Dawson, and Sudan (1969). Important applications of trapping occur in LWs driven by coherent (e.g., laser) light, including the laser plasma accelerator Tajima and Dawson (1979) and stimulated Raman scattering (SRS) Goldman and DuBois (1965); Drake et al. (1974); Kruer (2003). The latter allows the prospect of laser pulse compression to ultra-high amplitudes (the backward Raman amplifier) Malkin, Shvets, and Fisch (1999). In addition, SRS is an important risk to ICF Lindl et al. (2004); Atzeni and Meyer-ter-Vehn (2004), both due to loss of laser energy and the production of energetic (or “hot”) electrons that can pre-heat the fuel. Ignition experiments at the National Ignition Facility (NIF) Moses and Wuest (2005) have shown substantial stimulated Raman backscatter (SRBS) from the inner cones of laser beams Meezan et al. (2010). The current study is prompted primarily by SRS-driven LWs. Much recent work has focused on nonlinear kinetic aspects of SRS, including “inflation” due to Landau damping reduction Vu, DuBois, and Bezzerides (2001); Strozzi et al. (2007); Bénisti et al. (2009, 2010); Ellis et al.
(2012), saturation by sideband instability Brunner and Valeo (2004), and LW self-focusing in multi-D particle-in-cell simulations Yin et al. (2009, 2007); Fahlen et al. (2011), Vlasov simulations Banks et al. (2011), and theory Rose and Yin (2008). One goal is to find reduced descriptions, such as envelope equations, that approximately incorporate kinetic effects Yampolsky and Fisch (2009); Dodin and Fisch (2012); Bénisti, Yampolsky, and Fisch (2012). Our aim is to provide theoretical estimates for when electron trapping nonlinearity is important in LW dynamics. These allow for self-consistency checks - or invalidations - of linear calculations of LW amplitudes. This work is therefore not primarily intended to study nonlinear LW dynamics, although we do present Vlasov simulations to quantify the onset of trapping in the presence of convective transverse loss. We consider a single, quasi-monochromatic wave with electron number density fluctuation , and slowly-varying, unitless amplitude where is the background electron density. We refer to an electron as “trapped” if it is within the phase-space island centered about the phase velocity and bounded by the separatrix in the instantaneous wave amplitude, regardless of how long it has been there. The dielectric response of the plasma depends on the distribution function, and therefore manifests trapping effects only after enough time has passed for the (typically space-averaged) distribution to be distorted. We call such a distribution trapped or flattened, since trapping produces a plateau in the space-averaged distribution centered at . Deeply-trapped electrons have an angular frequency ( defines the plasma frequency in SI units), known as the bounce frequency, corresponding to a bounce period . In our language, an electron is trapped instantaneously, but a distribution becomes trapped over a time . For a process that detraps electrons at a rate , the unitless “bounce number” measures how many bounce orbits a trapped electron completes before being detrapped. Our estimates stem from the assumption that nonlinear trapping effects are significant when is roughly unity. Trapping nonlinearity develops continuously with wave amplitude, and is not an instability with a hard threshold. Vlasov simulations presented in Sec. IV of driven LWs with a finite transverse profile demonstrate this. In addition, transit-time damping calculations Rose (2006) show the reduction in Landau damping varies continuously with and obtains a 2x reduction for . Bounce number estimates are qualitative and demonstrate basic parameter scalings. The quantitative role of trapping depends on the specific application. We consider two detrapping processes: convective loss and Coulomb collisions. For a LW of finite spatial extent, electrons enter and leave the wave from the surrounding plasma (assumed here to be in thermal equilibrium, i.e. Maxwellian). Trapping will only be effective if these electrons complete a bounce orbit before transiting the wave. We find the detrapping rate for both longitudinal end loss, which can be important in finite-domain 1D kinetic simulations, and for transverse side loss in 2D and 3D. To quantify the effect of trapping in a LW with finite transverse extent, we perform 2D-2V simulations with the parallel Vlasov code lokiBanks et al. (2011); Banks and Hittinger (2010) of a LW driven by an external field with a smooth transverse profile. Our results are in qualitative agreement with Sec. IV of Ref. Banks et al., 2011. 
That work considered a free LW excited by a driver of finite duration, while we consider a driver that remains on. We present a unified calculation of collisional detrapping due to electron-ion and electron-electron collisions, including both pitch-angle scattering and parallel slowing down and diffusion. This relies on the fact that (see the Appendix) the distribution in the trapping region can be Fourier decomposed into modes for , and the diffusion rate of mode is proportional to . After a short time, only electrons in the fundamental mode remain trapped. The collisional detrapping rate scales as , since the trapping width in velocity increases with wave amplitude. We discuss two ways to compare the relative importance of detrapping by side loss and collisions, which is complicated by their different scaling with . Our calculations are applied to ICF plasma conditions, particularly LW’s driven by stimulated Raman backscatter (SRBS) on the NIF. Transverse side loss out of laser speckles in a phase-plate-smoothed beam is generally a more effective detrapping process than collisions. The threshold for trapping to overcome side loss decreases with density and increases with temperature, while the collisional threshold decreases with density and slightly increases with temperature. For conditions typical of backscatter on NIF ignition experiments, namely =2 keV and with the critical density for laser light of wavelength 351 nm, a reflectivity of W cm produces linear Langmuir waves above the side loss threshold. Such values are likely to occur in intense speckles. We also show that smoothing by spectral dispersion (SSD) Skupsky et al. (1989) is ineffective at detrapping in NIF-relevant conditions. The paper is organized as follows. Section II provides some general considerations on our detrapping analysis. We present in Sec. III convective loss calculations for both longitudinal (end) and transverse (side) loss. Section IV contains Vlasov simulations with the loki code which study the competition of trapping and side loss. Detrapping by Coulomb collisions is treated in Sec. V. Our results are applied to SRBS in underdense ICF conditions in Sec. VI. We conclude in Sec. VII. The Appendix presents details of our collisional derivation and discusses the validity of our Fokker-Planck model. ## Ii General Considerations This section presents our overall framework for estimating the trapping threshold, and lays out some definitions. Consider the trapped electrons in a LW field, attempting to undergo bounce orbits. There is a time-dependent condition for trapping to distort the distribution significantly, even in the absence of any detrapping process. For instance, if a LW is suddenly excited in a Maxwellian plasma, electrons execute bounce orbits according to what we call the dynamic bounce number NdynB(t)=∫t0dt′τB(t′). (1) The time dependence of allows for a slowly-varying wave amplitude . Vlasov simulations presented in Sec. IV show that trapping starts to significantly affect the dielectric response when . That is, it takes a finite time for the distribution to reflect trapping. The early works of Morales and O’Neil O’Neil (1965); Morales and O’Neil (1972) indicate such behavior, where the damping rate and frequency shift evolve over several bounce periods until approaching steady values as the system reaches a Bernstein-Greene-Kruskal (BGK) state Bernstein, Greene, and Kruskal (1957). 
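Before turning to specific detrapping processes, it may help to make the bounce-number criterion concrete. The short sketch below is ours, not the paper's: it assumes the standard deeply-trapped bounce frequency ωB = ωpe√δN (so τB = 2π/ωB), the definition NB = 1/(νd τB), and Eq. (1) for the dynamic bounce number; all parameter values are purely illustrative.

```python
import math

def bounce_number(delta_n, nu_d):
    """N_B = 1/(nu_d * tau_B), with omega_B = omega_pe*sqrt(delta_n); rates in units of omega_pe."""
    tau_b = 2.0 * math.pi / math.sqrt(delta_n)   # bounce period in units of 1/omega_pe
    return 1.0 / (nu_d * tau_b)

def dynamic_bounce_number(delta_n_of_t, t_end, steps=10_000):
    """Eq. (1): N_dynB(t) = integral of dt'/tau_B(t') for a slowly varying amplitude delta_n(t)."""
    dt = t_end / steps
    return sum(math.sqrt(delta_n_of_t(i * dt)) / (2.0 * math.pi) * dt for i in range(steps))

# Illustrative numbers: delta_n = 0.5% and a detrapping rate of 1e-3 omega_pe
print(bounce_number(5e-3, 1e-3))          # ~11 bounce orbits before detrapping
# Amplitude ramping linearly to 0.5% over 2000 / omega_pe:
print(dynamic_bounce_number(lambda t: 5e-3 * t / 2000.0, 2000.0))
```

The first number exceeding unity indicates that trapping should distort the distribution; the second shows how long a driven wave must be present before the distribution can reflect trapping at all.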
To estimate the threshold for trapping to overcome a detrapping process, we assume the wave has been present long enough that NdynB ≫ 1. The distribution has had enough time to become flattened, to the extent the detrapping process allows. For flattening to occur, an appreciable fraction of trapped electrons must remain so for about a bounce period before being detrapped. We are interested in the number of electrons in the trapping region, and how long they stay there. We define the “trapping region” to extend from vp − vtr to vp + vtr, where 2vtr is the full width of the phase-space trapping island, and utr ≡ vtr/vTe. Throughout this paper, we use

uX ≡ vX/vTe (2)

to denote the scaled velocity for various subscripts X. Let Ntr(t) denote the fraction of electrons in the trapping region at the initial time t = 0 that continuously remain so to some later time t (note Ntr(0) = 1). At t = 0 we take the electron distribution to be Maxwellian. The fact that only some electrons in the trapping region lie within the separatrix (depending on their initial phase in the wave) is not relevant, since all the detrapping processes considered here are insensitive to the electron’s phase in the wave. That is, the rate at which electrons leave the trapping region is independent of phase. The detrapping rate νd is defined by assuming exponential decay for the trapped fraction: Ntr(t) = exp(−νd t). We allow for several independent detrapping processes to occur simultaneously, in that the overall detrapping rate is the sum of the rates νd,i for each ith process considered separately. Since a detrapping process generally does not strictly follow exponential decay, we choose a critical fraction Ntr,c, which obtains for a critical time tc, and let νd = ln(1/Ntr,c)/tc. νd is independent of Ntr,c for exponential decay. We set Ntr,c = 1/2 in what follows. Given the approximate nature of our calculation, further refinement of Ntr,c has little value. In the literature, detrapping processes are sometimes approximated by a 1D kinetic equation with a Bhatnagar-Gross-Krook relaxation (or simply a Krook) operator Bhatnagar, Gross, and Krook (1954):

[∂t + v∂x − (e/me)E∂v] f = νK (n f0/n0 − f). (3)

The linear electron susceptibility for this kinetic equation is

χ(ω,k) = [−Z′(ζ)/(2(kλDe)²)] [1 + i νK Z(ζ)/(√2 k vTe)]⁻¹, (4)

where ζ = (ω + iνK)/(√2 k vTe) and Z is the plasma dispersion function Fried and Conte (1961). The Krook operator relaxes the electron distribution function to an equilibrium f0, and locally conserves number density n. The above operator does not conserve momentum or energy, although it can easily be generalized to do so. In a 1D-1V system, a Krook operator can mimic detrapping by transverse convective loss (a higher space-dimension effect) or Coulomb collisions (a higher velocity-dimension effect), such as in Ref. Rose and Russell, 2001. Any perturbation from f0 decays exponentially at the rate νK, so νd = νK for such an operator. This is especially useful for a detrapping process which has νd independent of wave amplitude; this is the case for convective loss but not for collisions (as shown below). SRS simulations with a 1D Vlasov code and Krook operator, and its suppression of kinetic inflation, are presented in Ref. Strozzi et al., 2011a. In this paper, we do not use a Krook operator to model detrapping, although we do use one in our 2D Vlasov simulations to make them effectively finite in the transverse direction (a purely numerical purpose), and to include collisional LW damping in our application to ICF conditions in Sec. VI. We take the bounce period of all trapped electrons to be τB = 2π/ωB, the result for deeply-trapped electrons. The actual period slowly increases to infinity for electrons near the separatrix.
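The susceptibility in Eq. (4) is easy to evaluate numerically. The sketch below is not from the paper: it assumes the form of Eq. (4) as written above, takes ζ = (ω + iνK)/(√2 k vTe), and uses the identity Z(ζ) = i√π w(ζ) with w the Faddeeva function (scipy.special.wofz). Frequencies and rates are in units of ωpe, so k enters only through kλDe.

```python
import numpy as np
from scipy.special import wofz  # Faddeeva function w(z)

def Z(zeta):
    """Plasma dispersion function of Fried and Conte: Z(zeta) = i*sqrt(pi)*w(zeta)."""
    return 1j * np.sqrt(np.pi) * wofz(zeta)

def chi_krook(omega, k_lde, nu_k=0.0):
    """Eq. (4): electron susceptibility with a number-conserving Krook operator.

    omega and nu_k are in units of omega_pe; k_lde is k*lambda_De, so that
    k*v_Te = k_lde*omega_pe and zeta = (omega + i*nu_k)/(sqrt(2)*k_lde).
    """
    zeta = (omega + 1j * nu_k) / (np.sqrt(2.0) * k_lde)
    z = Z(zeta)
    z_prime = -2.0 * (1.0 + zeta * z)          # Z'(zeta)
    return (-z_prime / (2.0 * k_lde**2)) / (1.0 + 1j * nu_k * z / (np.sqrt(2.0) * k_lde))

# Collisionless case near the Langmuir resonance for k*lambda_De = 0.35:
chi = chi_krook(omega=1.2, k_lde=0.35)
print(abs(1.0 / (1.0 + chi)))   # driven linear response |E_x/E_0|, cf. Eq. (18) below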
We then define the bounce number for process i as

NB,i ≡ 1/(νd,i τB) = [δN/δNi]^pi. (5)

We have expressed NB,i as a ratio of the LW amplitude δN to a “threshold” amplitude δNi, to some power pi. Recall that trapping effects like the Landau damping reduction develop continuously with δN, so the threshold for trapping nonlinearity is not a hard one. Besides the δN dependence of τB, νd,i may also depend on δN in a process-dependent way. For νd,i independent of wave amplitude, which we show below is the case for convective loss, the power pi = 1/2. This is not the case for detrapping by Coulomb collisions, which is shown in Sec. V to have pi = 3/2. The overall detrapping rate νd = Σi νd,i gives an overall bounce number via NB = 1/(νd τB). We also define an overall threshold amplitude δNO such that NB(δN = δNO) = 1; it is not generally true that δNO is simply the sum of the individual δNi.

## Iii Convective Loss: Theory

In a LW of finite spatial extent, electrons remain in the trapping region only until they transit the wave. This detrapping manifests itself by longitudinal loss out of the ends of the wavepacket (the x direction for our field representation), as well as transverse loss out the sides. End loss is found by considering a wavepacket of finite length and infinite transverse extent. We work in the rest frame of the wavepacket, which may differ from the lab frame depending on application. For instance, a free LW propagates at its group velocity, while a LW driven by a driver fixed in the lab frame (such as the ponderomotive drive in SRS) will essentially be at rest. For vtr ≪ vp we can treat all trapped electrons as moving forward at vp. Thus for end loss the trapped fraction is set by the transit time of the wavepacket at vp. To find the end-loss rate, we take Ntr,c = 1/2, which gives the critical time and with it νd,el. The bounce number for end loss then follows from Eq. (5), with exponent 1/2 and a corresponding threshold amplitude. In practical units the threshold can be evaluated with lengths in cm, temperatures in keV, and wavelengths in μm. For transverse side loss, consider a cylindrical wavepacket of transverse diameter L⊥ and infinite longitudinal length. In D total spatial dimensions, the cylinder has a (D−1)-dimensional cross-section. Electrons with a Maxwellian distribution are transiting the cylinder, with unnormalized distribution over the transverse speed u⊥ ≡ v⊥/vTe giving the number of electrons per unit u⊥. The average u⊥ is larger in 3D than in 2D, indicating that detrapping is faster in 3D than in 2D. We find the number of initially trapped electrons that remain so after time t (scaled as t̂ ≡ vTe t/L⊥), by summing the fraction of electrons with a given u⊥ that remain trapped, times the number with that u⊥. All electrons with |u⊥| t̂ > 1 have escaped, so this sets the limits of integration. In 2D, the trapped fraction is 1 − |u⊥| t̂ for |u⊥| t̂ < 1, and the total trapped fraction is

N2Dtr,sl = (2π)^(−1/2) ∫_{−1/t̂}^{1/t̂} du⊥ e^(−u⊥²/2) [1 − |u⊥| t̂] (6)
        = erf[1/(t̂√2)] + (2/π)^(1/2) t̂ (e^(−1/(2t̂²)) − 1). (7)

In 3D we obtain

N3Dtr,sl = ∫_0^{1/t̂} du⊥ u⊥ e^(−u⊥²/2) [1 − (2/π)(arcsin[u⊥ t̂] + u⊥ t̂ (1 − (u⊥ t̂)²)^(1/2))]. (8)

The factor in square brackets is the trapped fraction. The limiting forms are

N2Dtr,sl(t̂ ≪ 1) ≈ 1 − (2/π)^(1/2) t̂, (9)
N2Dtr,sl(t̂ ≫ 1) ≈ 1/[(2π)^(1/2) t̂], (10)
N3Dtr,sl(t̂ ≪ 1) ≈ 1 − (8/π)^(1/2) t̂, (11)
N3Dtr,sl(t̂ ≫ 1) ≈ 1/(8 t̂²). (12)

In both limits the decrease is more rapid in 3D than in 2D. Figure 1 displays the various formulas for Ntr,sl. The resulting detrapping rate, based on Ntr,c = 1/2, is

νd,sl = Ksl vTe/L⊥ (13)

with Ksl = ln 2 / t̂c in (2D, 3D), where t̂c is the scaled time at which the trapped fraction falls to 1/2. As expected, the 3D detrapping rate is faster. The 3D detrapping rate exceeds the 2D one by a larger factor than the ratio of average transverse speeds because the faster electrons leave first, and the relative surplus of electrons in 3D over 2D (proportional to u⊥) increases with transverse speed. A wavepacket with asymmetric (e.g. elliptical) cross-section should have a rate between the 2D and 3D results, with L⊥ taken as the shortest transverse length.
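The trapped fractions in Eqs. (6)–(8) and the resulting Ksl are straightforward to evaluate numerically. The following sketch is ours, not the paper's: it assumes the scaled time t̂ = vTe t/L⊥ implied by the escape condition |u⊥| t̂ > 1, and obtains Ksl = ln 2 / t̂c from the Ntr,c = 1/2 convention adopted above.

```python
import numpy as np
from scipy.special import erf
from scipy.integrate import quad
from scipy.optimize import brentq

def ntr_2d(t):
    """Eq. (7): fraction still trapped after scaled time t = v_Te*time/L_perp, in 2D."""
    return erf(1.0 / (t * np.sqrt(2.0))) + np.sqrt(2.0 / np.pi) * t * (np.exp(-0.5 / t**2) - 1.0)

def ntr_3d(t):
    """Eq. (8): same quantity in 3D, by direct numerical integration."""
    def integrand(u):
        s = u * t
        return u * np.exp(-0.5 * u * u) * (1.0 - (2.0 / np.pi) * (np.arcsin(s) + s * np.sqrt(1.0 - s * s)))
    upper = min(1.0 / t, 8.0)      # the Maxwellian weight is negligible beyond u ~ 8
    val, _ = quad(integrand, 0.0, upper)
    return val

for label, ntr in (("2D", ntr_2d), ("3D", ntr_3d)):
    t_c = brentq(lambda t: ntr(t) - 0.5, 1e-3, 10.0)   # time at which half the trapped electrons are gone
    print(label, "t_c =", round(t_c, 3), " K_sl = ln2/t_c =", round(np.log(2.0) / t_c, 3))
```

The 2D value can be sanity-checked against the small- and large-t̂ limits in Eqs. (9) and (10).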
In a laser beam smoothed with phase plates, elliptical speckles can be produced by certain polarization-smoothing schemes or a non-spherical lens; Langmuir waves driven by SRS in such speckles would also acquire an elliptical cross-section. Comparing the end loss and side loss rates gives (14); the phase velocity entering it is in the wavepacket frame. For the LW to not experience strong Landau damping, we have up well above unity. The ratio of longitudinal to transverse extent depends on the physical situation (laser speckles are discussed in Sec. VI). The bounce number for side loss is analogous to end loss: NB,sl = [δN/δNsl]^(1/2), with exponent 1/2 and threshold amplitude δNsl. In practical units, and for the 3D value of Ksl, δNsl takes a simple numerical form in terms of L⊥ and the plasma conditions.

## Iv Vlasov simulations of convective side loss

In this section, we quantify the competition between convective side loss and electron trapping in a driven Langmuir wave. We use the parallel, 2D-2V Eulerian Vlasov code loki Banks and Hittinger (2010). This code employs a finite-volume method which discretely conserves particle number. The discretization uses a fourth-order accurate approximation for well-resolved features, and smoothly transitions to a third-order upwind method as the size of solution features approaches the grid scale. This construction enables accurate long-time integration by minimizing numerical dissipation, while retaining robustness for nonlinearly generated high frequencies. As a result, the method is not strictly monotone- or positivity-preserving, nor does it eliminate the so-called recurrence problem. This occurs at a recurrence time of roughly 2π/(kΔv), with Δv the velocity grid spacing, when further linear evolution of a sinusoidal perturbation cannot be represented on a given grid. Our simulations are 1D or 2D, with x the longitudinal coordinate as above, and y the transverse coordinate. Only electrons are mobile, there is a fixed, uniform neutralizing background charge, and there is no magnetic field. The total electric field is the sum of the internal (self-consistent) electric field and the external driver field. The driver field is along x, with

Ed = E0 A(t) h(y) cos(k0x − ω0t). (15)

There is no y component to the driver field, which would be needed if the driver were derived from a scalar potential. The temporal envelope A(t) ramps up from zero to unity over a finite ramp time and then stays constant. The transverse profile is

h(y) = cos²(k1y/2) for |k1y| ≤ π, and 0 otherwise. (17)

The numerical aspects of our runs are as follows. The x domain extends for one driver wavelength, with periodic boundaries for fields and particles. The same number of zones in x was used for all runs in this paper, except for two cases in Fig. 2(a). 2D runs had periodic boundaries for fields and particles at the transverse edges of the box. A Krook operator, negligible in the interior and rising rapidly in the boundary region, was used to relax the distribution to the initial Maxwellian near the transverse boundaries. The runs were thus effectively finite in y. We used 11 to 45 zones in y, with more used for wider profiles and to check convergence. The vx and vy grids both extended to the same maximum speed, and the same number of velocity zones was used throughout. The velocity resolution is set by two requirements: the trapping region must be adequately resolved, and recurrence phenomena must not be significant. We found the resolution used was sufficient to give converged results. loki’s advection scheme is designed to mitigate aliasing problems, and we only saw modest effects related to it when comparing runs with different resolutions. The convergence of our numerical results is shown in Fig. 2(a). The black curve is typical: it has a typical ratio of velocity grid spacing to trapping width, which we kept similar by varying the grid with wave amplitude and kλDe. We first present 1D runs with several values of kλDe and driver strength, which are detailed in Table 1.
From linear theory with , where ElinxE0=∣∣∣11+χ∣∣∣=[(1+Reχ)2+(Imχ)2]−1/2. (18) is the linear electron susceptibility from Eq. (4) with , evaluated at the driver and . We chose to give nearly the maximum for a given . For , a linearly resonant exists where ; the maximum then occurs close to this point. No linear resonance exists for , which is called the loss of resonance Rose and Russell (2001). Some still maximizes in this regime. The non-resonant case differs from the resonant one, in that reducing and Landau damping, e.g. by flattening the distribution at the phase velocity by electron trapping or some other means, does not lead to a large enhancement in the Langmuir wave response to an external drive. The term in Eq. (18) keeps finite even if . For the parameters of the run 1D.7a, we find for the full, complex , while setting slightly increases it to . Similar logic applies to kinetic inflation of stimulated Raman scattering. Electron trapping and the resultant Landau damping reduction can greatly increase the scattering at a resonant wavelength. However, scattering at a non-resonant wavelength is not subject to inflation, and can even decrease, due to reducing . Non-resonant SRS can occur in a situation seeded away from resonance Ellis et al. (2012), or if the plasma conditions are such that no resonance exists for any scattered wavelength, namely high and low . Figure 2 presents the results of our 1D runs. Panel (a) shows the time evolution of the amplitude of for , normalized to the linear value from Eq. (18). Early in time () the linear response is achieved, which validates the linear dispersion and properties of loki when using the chosen grid resolution. As time progresses the response increases due to the damping reduction, and then oscillates due to the interplay of the frequency shift and the fixed driver. Similar behavior was seen in Ref. Yampolsky and Fisch, 2009. We plot the results vs. the dynamic bounce number from Eq. (1), using the time-dependent , in the center and right panels. is thus a trapping-based re-scaling of time. The other runs from Table 1 are included as well. The driver strength was chosen in runs 1D.35b, 1D.5a, and 1D.7a to give similar bounce periods. In all cases, the linear response is achieved after a transient period related to driver turn-on, until . After this point the response increases, until the frequency shift develops at . As increases, the enhancement above linear response decreases. This is likely due to the rapid increase of the frequency shift with , as shown by most theoretical calculations, e.g. Ref. Morales and O’Neil, 1972. For , there is a slight enhancement to 1.3x the linear response, followed by a dip to about 0.7x and subsequent oscillation about unity. This lack of significant trapping nonlinearity agrees with the above discussion of the non-resonant regime. From Eq. (13), the 2D side loss rate is , where we have taken , the full-width at half-max of . The side loss bounce number is then NB,sl=Ly25.6λDeδN1/2. (19) Recall that electrons feel the total electric field (drive plus interal), and is an equivalent density fluctuation. Gauss’s law gives , where is the amplitude of the Fourier mode of the on-axis field , and denotes a normalized field. Using the linear response from Eq. (18), we obtain the linear estimate NB,sl=Ly25.6λDe∣∣∣k0λDe1+χ∣∣∣1/2~E1/20. (20) The 2D loki runs are listed in Table 2. All runs used , , and , the same as run 1D.35c. For these values, our linear estimate becomes . The field magnitude is plotted vs. 
the dynamic bounce number found using for the 2D runs in Fig. 3. The black curve is the analogous 1D run 1D.35c. For there is a continuous increase in the response with profile width . This allows us to quantify trapping nonlinearity vs. , which we do in Fig. 4. The abscissa in that figure is the side loss bounce number, , computed with linear response as in Eq. (20). The ordinate is the field enhancement due to trapping, scaled to the same quantity for the 1D run. This is shown at times corresponding to several values of ranging from 0.75 to 2. These times are early enough that the amplitudes have been mostly increasing, with little oscillation due to the frequency shift. The curves agree well, and demonstrate the continuous development of trapping effects with wide profiles. Slightly more than half the 1D trapping effect obtains for , which vindicates our approximate threshold for trapping. The plasma response to a driver with transverse profile differs from the 1D case. This can be seen in the ordinate of Fig. 4 falling below zero for the smallest . There have been several linear calculations of transit-time damping in LWs of finite extent, mostly by integration along particle orbits Short and Simon (1998); Skjæraasen, Robinson, and Melatos (1999). Ref. Short and Simon, 1998 showed that, for a potential with a step-function profile in space, the transit-time damping exceeds that for an infinite plane-wave for , while for it can be less. We adopt the alternative approach of writing the response as a superposition of responses to the Fourier modes comprising the drive. This is particularly convenient for our , which (when periodically repeated) is composed of only two Fourier modes. For simplicity we present the result for periodically repeated, instead of the actual loki profile with compact support over . The compact case would lead to a continuous Fourier transform rather than discrete series, and introduce a line width around the dominant modes. This does not change the qualitative result. Unlike Ref. Short and Simon, 1998, our compact profile is not a step function but smooth, with and continuous at all points (although is not). The drive , made periodic in , is Ed=E04ei(k0x−ω0t)[1+12eik1y+12e−ik1y]+c.c. (21) A standard kinetic calculation, accounting for the fact that has no component and thus does not come from a potential, gives the field at : Elinx(x,t,y=0)=E0|R|cos(k0x−ω0t+α), (22) 2R=11+χ0+1+(1+(k0/k1)2)−1χ+1+χ+. (23) Note that the linear for our . is a real phase. is the collisionless susceptibility for from Eq. 4, which depends only on and . and with . For , we recover the 1D result Eq. (18). Physically, the higher- modes induced by the transverse profile are more Landau damped (as well as being slightly off resonance for the fixed ), which reduces the response. For the parameters of Table 2, we find for where is the value for . We obtain a slight decrease in the linear response for our sharpest profile (), and an insignificant change for wider ones. This is borne out by Fig. 3. The red curve for shows no signs of trapping, and reaches a steady level slightly more than 0.8 times the 1D linear value. The blue curve () shows a slight trapping enhancement, and reaches a steady level slightly above 1.2x linear after about 2 bounce periods. ## V Coulomb Collisions Collisions remove electrons from the trapping region via pitch-angle scattering (from electron-ion and electron-electron collisions) as well as parallel drag and diffusion (from only electron-electron collisions since ). 
We adopt a Fokker-Planck collision operator, and discuss its validity in the Appendix:

∂t f = ν0 (1+Zeff) u⁻³ ∂μ[(1−μ²) ∂μ f] + 2 ν0 u⁻² ∂u(f + u⁻¹ ∂u f), (24)

where θ is the pitch angle between v and the x direction, μ = cos θ, and u ≡ v/vTe. ν0 is a thermal electron-electron collision rate:

ν0 ≡ ωpe lnΛee / (8π NDe). (25)

lnΛee (with ne in cm⁻³ and Te in eV) is the electron-electron Coulomb logarithm appropriate for temperatures above about 10 eV (Ref. Huba, 2007, p. 34). The effective charge state is

Zeff ≡ [Σi fi Zi² / Z̄] (lnΛei / lnΛee), (26)

where the sum runs over ion species, fi is the fraction of the total ion density in species i, Z̄ ≡ Σi fi Zi, and lnΛei is the electron-ion Coulomb logarithm Huba (2007). In section VI we apply our results to Langmuir waves generated by Raman scattering in underdense ICF plasmas, which are typically low-Z. For instance, NIF ignition hohlraum designs currently use a He gas fill (with H/He mixtures contemplated), and plastic ablators (57% H, 42% C atomic fractions). This gives Zeff = 2 for the gas fill when fully-ionized and Zeff ≈ 5 for the plastic. Be and diamond ablators are also being considered. For illustration, we take Zeff = 1 as the lowest reasonable value (fully-ionized H), and use Zeff = 4 (fully-ionized Be) to represent an ablator plasma. It is useful to define a unitless time t̂ (different from the side-loss t̂ used above), which demonstrates some of the basic collisional scaling:

t̂ ≡ νc t / δN, (27)
νc ≡ (π²/16) ν0 up⁻³ (kλDe)² = (π/128) (kλDe)⁵ (ωpe/ω)³ (lnΛee/NDe) ωpe. (28)

Our collisional calculation of the trapped fraction is detailed in the Appendix. The key observation is that the distribution in the trapping region can be decomposed into Fourier modes m = 1, 2, …, and the diffusion rate of mode m is proportional to m². After a short time, only electrons in the m = 1 mode remain trapped, so it suffices to consider just the number in that mode. At t = 0, this is 81% of the total (the other 19% rapidly diffuses out). The upshot is that Ntr,c, the fraction of initially trapped particles remaining in the fundamental mode after time t̂, is

Ntr,c(t̂, Zeff, up) = 0.81 ∫₀^∞ du⊥ u⊥ exp[−u⊥²/2 − D t̂]. (29)

D is given in Eq. (53). Eq. (29) is an implicit, integral equation for t̂ as a function of Ntr, Zeff, and up. We find the “exact” solution by performing the integral numerically, and interpolating for a desired Ntr. We derive an approximate solution, valid for large up, for t̂ in the Appendix. The result is

t̂ ≈ t̂0 + t̂1 up⁻². (30)

t̂0 and t̂1 are both positive and depend only on Zeff and the chosen Ntr, so t̂ decreases with increasing up. Figure 5 plots Ntr,c for several Zeff and up, using the exact results (solid curves) and the approximate form of Eq. (60) (dashed curves). Few electrons remain trapped at large t̂. The approximate forms are quite good, even though up is not that large. Figure 6 displays the relative error between the values of t̂ for Ntr = 1/2 computed two ways. The exact t̂ is found numerically, and the approximate one is from Eq. (30), with Eq. (61) for t̂0 and Eq. (66) for t̂1. The agreement is excellent, within 1% for most of parameter space. The collisional detrapping rate is

νd,c = (νc/δN) ln(1/Ntr)/t̂. (31)

Note that νd,c decreases with increasing wave amplitude, since the trapping width in velocity increases with δN: the larger the wave amplitude, the wider the trapping region extends in velocity, and collisions take longer to remove the electron velocity from this region. Recall that νd,c depends slightly on the choice of Ntr due to the non-exponential decay of Ntr with t̂; as with convective loss we choose Ntr,c = 1/2. The collisional bounce number is

NB,c = [δN/δNc]^(3/2),  δNc = [2π (νc/ωpe) ln(1/Ntr)/t̂]^(2/3). (32)

The amplitude exponent for collisions is 3/2, unlike the convective loss value of 1/2. This stems from the fact that νd for collisions is amplitude-dependent while for convective loss it is not. We now construct the overall bounce number for convective side loss and collisions, as outlined above.
Assuming that separate detrapping processes are independent, and their detrapping rates add, yields N−1B,O=N−1B,sl+N−1B,c=[δNslδN]1/2+[δNcδN]3/2. (33) We define an overall threshold amplitude such that . Eq. (33) gives a cubic equation for : a3−δN1/2sla2−δN3/2c=0. (34) There are two ways to compare the relative importance of side loss and collisions. One is: for which process must the wave amplitude be larger for trapping to be significant ()? The other is: for a given , which process will detrap more effectively? The two views are not equivalent, due to the different dependence of the side loss and collisional detrapping rate on . The first amounts to comparing the thresholds and , which can be computed just from plasma and wave properties without knowing . The ratio of detrapping rates can be written in terms of a critical amplitude : νd,cνd,sl=δNcrδN,δNcr≡ln2^tKslνcωpeL⊥λDe. (35) ## Vi Parameter study for ICF underdense plasmas We now apply our analysis to ICF conditions where stimulated Raman scattering (SRS) can occur, namely the underdense coronal plasma. SRS is a parametric three-wave process where a pump light wave such as a laser (we which label mode 0) decays to a scattered light wave (mode 1) and a Langmuir wave (mode 2). We restrict ourselves to exact backscatter (SRBS; anti-parallel to ), as this generates the largest (smallest ) and thus makes trapping effects more important (small transverse components to have little effect on the phase velocity). Both measurements and simulations with the paraxial-envelope propagation code pf3d Berger et al. (1998) have shown backscatter to be the dominant direction for SRS. With , the phase-matching conditions are and with . We employ the (cold) light-wave dispersion relation for modes 0 and 1, and use the vacuum wavelength . Frequency matching thus requires , with the critical density for mode , and . For specific examples we choose = 351 nm, appropriate for frequency-tripled UV light currently in use on NIF. Specific plasma conditions thought to be typical for SRBS on NIF ignition targets, during early to mid peak laser power, are and keV ( nm) Strozzi et al. (2011b). The scattered wavelength continuously increases during a NIF experiment, consistent with the hohlraum filling to higher density. An important case for this paper is LW’s driven by SRBS in the speckles of a phase-plate-smoothed laser beam Kato et al. (1984). For a laser wavelength and square RPP with optics F-number , the intense speckles have and (see Ref. Garnier and Videau, 2001). A speckled beam is not the only situation where SRS can occur; for instance, there has been recent interest in re-amplification of backscatter by crossing laser beams Kirkwood et al. (2011) and backward Raman amplifiers Yampolsky and Fisch (2011). However, for a single laser beam, experiments at Omega and pf3d simulations show speckle physics, and its modification by beam smoothing, must be accounted for to accurately model SRS Froula et al. (2009, 2010). Experiments have also verified the increase in backscatter with increased gain per speckle length, by changing the laser aperture and thus the effective Froula (). We therefore focus on speckles. On NIF, four laser beams, each smoothed by a phase plate and with an overall square aperture, are grouped into a “quad” which yields an effective square aperture of . We thus use for illustration. 
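To give a feel for the geometry that makes side loss dominate in a speckle, here is a small illustrative calculation. It does not use values from the paper: the speckle dimensions below are the commonly quoted random-phase-plate scalings (transverse width ≈ Fλ0, length ≈ 8F²λ0), and the F-number is an assumed round value for a NIF quad, so the numbers are indicative only.

```python
lambda0 = 0.351e-6                 # 3-omega laser wavelength [m]
F = 8.0                            # assumed effective F-number of a NIF quad
w_speckle = F * lambda0            # transverse speckle width [m]
l_speckle = 8.0 * F**2 * lambda0   # speckle length [m]
print(f"speckle width  ~ {w_speckle * 1e6:.1f} um")
print(f"speckle length ~ {l_speckle * 1e6:.0f} um")
print(f"length/width   ~ {l_speckle / w_speckle:.0f}")
```

Because a speckle is tens of times longer than it is wide, trapped electrons leave through the sides long before they can transit the ends, which is why side loss rather than end loss sets the trapping threshold in a single speckle.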
As the beams of a quad propagate through a target, they can separate from one another, refract, and undergo other effects that change the shape of their effective aperture and speckle pattern. We do not pursue this further here, but it should be born in mind when applying our analysis. Also the ratio is so small that (3D) is small for essentially all speckles of interest. Thus side loss is a more potent detrapping mechanism than end loss, in speckles. To quantify detrapping rates, we consider the threshold amplitudes and . Unlike , depends on and of the Langmuir wave. For a given set of plasma conditions, the choice of is not unique but depends on the application. For SRS developing locally, one can choose the LW corresponding to the largest growth rate for those conditions. Another approach is to consider a single scattered-light frequency as it propagates through a target. We consider only variations induced by spatial profiles and not variations due to temporal plasma evolution Dewandre, Albritton, and Williams (1981) (which is mostly relevant to stimulated Brillouin scattering). In this case, the matching conditions given the local plasma properties dictate how varies. Figure 7 presents the local for SRBS computed in two ways. The black curves are found by phase-matching with a “natural” LW, by which we mean where complex satisfies 1+χ[k2r,ω2c]=0 (36) with real . To find , we set and recover the usual collisionless . We use below as a simple way to include collisional LW damping when Landau damping is negligible. The red curves in Fig. 7 are the which maximizes the local spatial SRBS gain rate in the strong damping limit Strozzi et al. (2008): ∂zlni1(λ1,z)=[−2πremec2
## 15.84 Derived completion for Noetherian rings Let $A$ be a ring and let $I \subset A$ be an ideal. For any $K \in D(A)$ we can consider the derived limit $K' = R\mathop{\mathrm{lim}}\nolimits (K \otimes _ A^\mathbf {L} A/I^ n)$ This is a functor in $K$, see Remark 15.78.10. The system of maps $A \to A/I^ n$ induces a map $K \to K'$ and $K'$ is derived complete with respect to $I$ (Lemma 15.82.13). This “naive” derived completion construction does not agree with the adjoint of Lemma 15.82.10 in general. For example, if $A = \mathbf{Z}_ p \oplus \mathbf{Q}_ p/\mathbf{Z}_ p$ with the second summand an ideal of square zero, $K = A[0]$, and $I = (p)$, then the naive derived completion gives $\mathbf{Z}_ p[0]$, but the construction of Lemma 15.82.10 gives $K^\wedge \cong \mathbf{Z}_ p[1] \oplus \mathbf{Z}_ p[0]$ (computation omitted). Lemma 15.83.2 characterizes when the two functors agree in the case $I$ is generated by a single element. The main goal of this section is the show that the naive derived completion is equal to derived completion if $A$ is Noetherian. Lemma 15.84.1. In Situation 15.82.14. If $A$ is Noetherian, then for every $n$ there exists an $m \geq n$ such that $K_ m^\bullet \to K_ n^\bullet$ factors through the map $K_ m^\bullet \to A/(f_1^ m, \ldots , f_ r^ m)$. In other words, the pro-objects $\{ K_ n^\bullet \}$ and $\{ A/(f_1^ n, \ldots , f_ r^ n)\}$ of $D(A)$ are isomorphic. Proof. Note that the Koszul complexes have length $r$. Thus the dual of Derived Categories, Lemma 13.12.5 implies it suffices to show that for every $p < 0$ and $n \in \mathbf{N}$ there exists an $m \geq n$ such that $H^ p(K_ m^\bullet ) \to H^ p(K_ n^\bullet )$ is zero. Since $A$ is Noetherian, we see that $H^ p(K_ n^\bullet ) = \frac{\mathop{\mathrm{Ker}}(K_ n^ p \to K_ n^{p + 1})}{\mathop{\mathrm{Im}}(K_ n^{p - 1} \to K_ n^ p)}$ is a finite $A$-module. Moreover, the map $K_ m^ p \to K_ n^ p$ is given by a diagonal matrix whose entries are in the ideal $(f_1^{m - n}, \ldots , f_ r^{m - n})$ if $p < 0$ (in fact they are in the $|p|$th power of that ideal). Note that $H^ p(K_ n^\bullet )$ is annihilated by $I = (f_1^ n, \ldots , f_ r^ n)$, see Lemma 15.28.6. Now $I^ t \subset (f_1^{m - n}, \ldots , f_ r^{m - n})$ for $m = n + tr$. Thus by Artin-Rees (Algebra, Lemma 10.50.2) for some $m$ large enough we see that the image of $K_ m^ p \to K_ n^ p$ intersected with $\mathop{\mathrm{Ker}}(K_ n^ p \to K_ n^{p + 1})$ is contained in $I \mathop{\mathrm{Ker}}(K_ n^ p \to K_ n^{p + 1})$. For this $m$ we get the zero map. $\square$ Proposition 15.84.2. Let $A$ be a Noetherian ring. Let $I \subset A$ be an ideal. The functor which sends $K \in D(A)$ to the derived limit $K' = R\mathop{\mathrm{lim}}\nolimits ( K \otimes _ A^\mathbf {L} A/I^ n )$ is the left adjoint to the inclusion functor $D_{comp}(A) \to D(A)$ constructed in Lemma 15.82.10. Proof. Say $(f_1, \ldots , f_ r) = I$ and let $K_ n^\bullet$ be the Koszul complex with respect to $f_1^ n, \ldots , f_ r^ n$. By Lemma 15.82.17 it suffices to prove that $R\mathop{\mathrm{lim}}\nolimits (K \otimes _ A^\mathbf {L} K_ n^\bullet ) = R\mathop{\mathrm{lim}}\nolimits (K \otimes _ A^\mathbf {L} A/(f_1^ n, \ldots , f_ r^ n) ) = R\mathop{\mathrm{lim}}\nolimits (K \otimes _ A^\mathbf {L} A/I^ n ).$ By Lemma 15.84.1 the pro-objects $\{ K_ n^\bullet \}$ and $\{ A/(f_1^ n, \ldots , f_ r^ n)\}$ of $D(A)$ are isomorphic. It is clear that the pro-objects $\{ A/(f_1^ n, \ldots , f_ r^ n)\}$ and $\{ A/I^ n\}$ are isomorphic. 
Thus the map from left to right is an isomorphism by Lemma 15.78.12. $\square$ Lemma 15.84.3. Let $I$ be an ideal of a Noetherian ring $A$. Let $M$ be an $A$-module with derived completion $M^\wedge$. Then there are short exact sequences $0 \to R^1\mathop{\mathrm{lim}}\nolimits \text{Tor}_{i + 1}^ A(M, A/I^ n) \to H^{-i}(M^\wedge ) \to \mathop{\mathrm{lim}}\nolimits \text{Tor}_ i^ A(M, A/I^ n) \to 0$ A similar result holds for $M \in D^-(A)$. Proof. Immediate consequence of Proposition 15.84.2 and Lemma 15.78.4. $\square$ As an application of the proposition above we identify the derived completion in the Noetherian case for pseudo-coherent complexes. Lemma 15.84.4. Let $A$ be a Noetherian ring and $I \subset A$ an ideal. Let $K$ be an object of $D(A)$ such that $H^ n(K)$ a finite $A$-module for all $n \in \mathbf{Z}$. Then the cohomology modules $H^ n(K^\wedge )$ of the derived completion are the $I$-adic completions of the cohomology modules $H^ n(K)$. Proof. The complex $\tau _{\leq m}K$ is pseudo-coherent for all $m$ by Lemma 15.62.18. Thus $\tau _{\leq m}K$ is represented by a bounded above complex $P^\bullet$ of finite free $A$-modules. Then $\tau _{\leq m}K \otimes _ A^\mathbf {L} A/I^ n = P^\bullet /I^ nP^\bullet$. Hence $(\tau _{\leq m}K)^\wedge = R\mathop{\mathrm{lim}}\nolimits P^\bullet /I^ nP^\bullet$ (Proposition 15.84.2) and since the $R\mathop{\mathrm{lim}}\nolimits$ is just given by termwise $\mathop{\mathrm{lim}}\nolimits$ (Lemma 15.78.1) and since $I$-adic completion is an exact functor on finite $A$-modules (Algebra, Lemma 10.96.2) we conclude the result holds for $\tau _{\leq m}K$. Hence the result holds for $K$ as derived completion has finite cohomological dimension, see Lemma 15.82.18. $\square$ Lemma 15.84.5. Let $I$ be an ideal of a Noetherian ring $A$. Let $M$ be a derived complete $A$-module. If $M/IM$ is a finite $A/I$-module, then $M = \mathop{\mathrm{lim}}\nolimits M/I^ nM$ and $M$ is a finite $A^\wedge$-module. Proof. Assume $M/IM$ is finite. Pick $x_1, \ldots , x_ t \in M$ which map to generators of $M/IM$. We obtain a map $A^{\oplus t} \to M$ mapping the $i$th basis vector to $x_ i$. By Proposition 15.84.2 the derived completion of $A$ is $A^\wedge = \mathop{\mathrm{lim}}\nolimits A/I^ n$. As $M$ is derived complete, we see that our map factors through a map $q : (A^\wedge )^{\oplus t} \to M$. The module $\mathop{\mathrm{Coker}}(q)$ is zero by Lemma 15.82.7. Thus $M$ is a finite $A^\wedge$-module. Since $A^\wedge$ is Noetherian and complete with respect to $IA^\wedge$, it follows that $M$ is $I$-adically complete (use Algebra, Lemmas 10.96.5, 10.95.11, and 10.50.2). $\square$ Lemma 15.84.6. Let $I$ be an ideal in a Noetherian ring $A$. 1. If $M$ is a finite $A$-module and $N$ is a flat $A$-module, then the derived $I$-adic completion of $M \otimes _ A N$ is the usual $I$-adic completion of $M \otimes _ A N$. 2. If $M$ is a finite $A$-module and $f \in A$, then the derived $I$-adic completion of $M_ f$ is the usual $I$-adic completion of $M_ f$. Proof. For an $A$-module $M$ denote $M^\wedge$ the derived completion and $\mathop{\mathrm{lim}}\nolimits M/I^ nM$ the usual completion. Assume $M$ is finite. The system $\text{Tor}^ A_ i(M, A/I^ n)$ is pro-zero for $i > 0$, see Lemma 15.27.3. Since $\text{Tor}_ i^ A(M \otimes _ A N, A/I^ n) = \text{Tor}_ i^ A(M, A/I^ n) \otimes _ A N$ as $N$ is flat, the same is true for the system $\text{Tor}^ A_ i(M \otimes _ A N, A/I^ n)$. 
By Lemma 15.84.3 we conclude $R\mathop{\mathrm{lim}}\nolimits (M \otimes _ A N) \otimes _ A^\mathbf {L} A/I^ n$ only has cohomology in degree $0$ given by the usual completion $\mathop{\mathrm{lim}}\nolimits M \otimes _ A N/ I^ n(M \otimes _ A N)$. This proves (1). Part (2) follows from (1) and the fact that $M_ f = M \otimes _ A A_ f$. $\square$ Lemma 15.84.7. Let $I$ be an ideal in a Noetherian ring $A$. Let ${}^\wedge$ denote derived completion with respect to $I$. Let $K \in D^-(A)$. 1. If $M$ is a finite $A$-module, then $(K \otimes _ A^\mathbf {L} M)^\wedge = K^\wedge \otimes _ A^\mathbf {L} M$. 2. If $L \in D(A)$ is pseudo-coherent, then $(K \otimes _ A^\mathbf {L} L)^\wedge = K^\wedge \otimes _ A^\mathbf {L} L$. Proof. Let $L$ be as in (2). We may represent $K$ by a bounded above complex $P^\bullet$ of free $A$-modules. We may represent $L$ by a bounded above complex $F^\bullet$ of finite free $A$-modules. Since $\text{Tot}(P^\bullet \otimes _ A F^\bullet )$ represents $K \otimes _ A^\mathbf {L} L$ we see that $(K \otimes _ A^\mathbf {L} L)^\wedge$ is represented by $\text{Tot}((P^\bullet )^\wedge \otimes _ A F^\bullet )$ where $(P^\bullet )^\wedge$ is the complex whose terms are the usual $=$ derived completions $(P^ n)^\wedge$, see for example Proposition 15.84.2 and Lemma 15.84.6. This proves (2). Part (1) is a special case of (2). $\square$
3-110. This problem is a checkpoint for multiplication of fractions and decimals. It will be referred to as Checkpoint 3. Multiply each pair of fractions or each pair of decimals. Simplify if possible. Homework Help ✎

1. $\frac{3}{4}\cdot\frac{3}{5}$
2. $\frac{9}{20}\cdot\frac{4}{9}$
3. $3\frac{1}{6}\cdot1\frac{1}{3}$
4. $2\frac{1}{4}\cdot3\frac{1}{5}$
5. $3.62\cdot3.4$
6. $0.26\cdot0.0008$
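As a quick reminder of the method (this worked line is ours, not part of the checkpoint), part 1 multiplies numerators and denominators and then checks for common factors to cancel:

$\frac{3}{4}\cdot\frac{3}{5}=\frac{3\cdot 3}{4\cdot 5}=\frac{9}{20}$

Since 9 and 20 share no common factor, the product is already in simplest form.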
HEJ, HU ISSN 1418-7108, Manuscript no.: ANM-000926-A

## Prime tests

Let n be an odd number after the separation of the small factors. First we have to decide if this number is prime or not.

A test based on Fermat's little theorem

FERMAT already proved the following very important theorem:

THEOREM 7 (FERMAT's little theorem). If p is a prime which does not divide a, then a^(p-1) ≡ 1 (mod p).

REMARK. This theorem is a special case of a theorem of EULER, which asserts for relatively prime integers a and m that a^φ(m) ≡ 1 (mod m), where the function φ(m) gives the number of the relatively prime positive integers less than or equal to m.

FERMAT's theorem gives a very effective tool to filter out most of the composite numbers. If e.g. 2^118 mod 119 is not 1, then we know surely that the number 119 is composite.

> 2^118 mod 119;

Though the computing of these powers seems to be difficult, there are very efficient and fast algorithms for this ([1], [8]). In Maple we can use the more efficient modp and power functions instead of the traditional power operator. Thus, we can prove immediately that the 5th Fermat number, 2^32 + 1, is not prime.

> 3^(2^32) mod (2^32+1);
Error, integer too large in context
> modp(power(3,2^32),2^32+1);

Unfortunately, the reverse of the theorem does not hold entirely. So if we get a^(n-1) ≡ 1 (mod n), then in spite of this it is possible for n to be a composite number. Such a number is called a pseudoprime (in base a). The pseudoprimes are fairly rare; analysing the chance of their occurrence compared to the primes, we get at most a few per thousand.

> pseudo:=proc(n)
> local i,s;
> s:=NULL;
> for i from 3 by 2 to n do
>   if not(isprime(i)) and (2^(i-1) mod i)=1 then s:=s,i
>   fi
> od;
> RETURN(s)
> end:
> s1:=pseudo(3001);

Moreover, additional pseudoprimes can be unveiled choosing another base instead of 2, such as 3.

> 3^1386 mod 1387, 1387=ifactor(1387);

There are however such extreme numbers (called Carmichael numbers), which are pseudoprimes in all bases which are relatively prime to all of their divisors. These numbers are very rare; the chance of choosing such a (not very small) number is less than approximately one in a million. The first such number is 561. The Carmichael numbers have at least 3 different prime divisors, so one of their divisors is less than the cube root of the number ([8]). We can search the Carmichael numbers with the following small program, the input parameter of which is a sequence of pseudoprimes.

> carmic:=proc(s)
> local s2,i,n,m,j,carm;
> s2:=NULL;
> for i from 1 to nops([s]) do
>   n:=s[i]: carm:=true: m:=isqrt(n):
>   for j from 3 by 2 to m do
>     if modp(power(j,n-1),n)<>1 and igcd(j,n)=1 then
>       carm:=false fi
>   od;
>   if carm then s2:=s2,n fi
> od;
> RETURN(s2)
> end;
> carmic([s1]);

If a number passes through the filters for the bases 2 and 3, then it is recommended to stop this method. There are some other methods to unveil such composite numbers.

A strong pseudoprime test

Let us assume that a^(n-1) ≡ 1 (mod n), where n-1 = 2^k·m (m is odd), and gcd(a, n) = 1. In this case n is a divisor of a^(n-1) - 1 = (a^((n-1)/2) - 1)(a^((n-1)/2) + 1). If n is a prime number, then it divides exactly one of the factors (else it would divide the difference of the factors too), thus a^((n-1)/2) ≡ 1 (mod n) or a^((n-1)/2) ≡ -1 (mod n). However, if n is not prime, then we have a good chance that some divisor of n divides a^((n-1)/2) - 1, and another divides a^((n-1)/2) + 1. Thus we get for the remainder a^((n-1)/2) mod n a value different from 1 and -1, since n does not divide either factor. We can continue in the same manner with the exponent halved again, as long as it remains even. Eventually, we are able to filter out most of the pseudoprimes.

> n:=1387:
> 2^1386 - 1 = (2^693-1)*(2^693+1);
> modp(power(2,693),n);

The Miller-Rabin test

The following random method is very likely to be the best tool for primality testing. It was composed and analysed by G. L.
MILLER and M. O. RABIN in the late 1970s. Let n be a prime number, and assume a^(n-1) ≡ 1 (mod n), where a is an integer with 1 < a < n. In this case (a^((n-1)/2))² ≡ 1 (mod n). Since n is prime, a number whose square is 1 must itself be 1 or -1, i.e. for a^((n-1)/2) mod n we get 1 or -1. Stepping back similarly we eventually get that, in the remainder sequence a^(n-1) mod n, a^((n-1)/2) mod n, a^((n-1)/4) mod n, …, a^m mod n (where n-1 = 2^k·m with m odd), every term equal to 1 must be followed by 1 or by -1; read towards smaller exponents, the sequence is a run of 1's which either continues to the end or is terminated by a -1. If the number is composite, then this probably does not hold. Thus the algorithm proceeds as follows: we choose a random a with 1 < a < n. We produce the remainder sequence as above. If the sequence starts with 1 and every 1 in it is followed by 1 or -1, then the number is (likely) prime. If a^(n-1) mod n is not 1, or some 1 in the sequence is followed by a value different from 1 and -1, then the number is (surely) not prime. We illustrate the operation of this method to unveil a Carmichael number.

> n:=1729:
> ifactor(n-1);
> j:=3^3:
> seq(2^(2^i*j) mod n,i=0..6);

If the algorithm says that a number is not prime, then this is surely true, and if the answer is prime, then this is not yet certain. M. O. RABIN has proved that, for an arbitrary composite number, the algorithm gives a wrong answer with a probability of at most 1/4 ([8]). So choosing new random bases, after e.g. 20 executions, the probability that the answer is still prime and the number is in spite of this composite is less than 4^(-20), i.e. below one in 10^12. Thus, in practice we can say that this answer is correct (the exact proof of the prime property is discussed in a subsequent subsection). The significance of this method is that we get a reliable answer in relatively short time for very large numbers (containing several hundred digits), too. The isprime function in Maple uses this method too ([3]), for such numbers which seem to be primes with the methods described earlier.
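To experiment with these tests outside Maple, here is a short Python sketch (ours, not part of the manuscript). The first function lists base-2 Fermat pseudoprimes, mirroring the Maple procedure pseudo; the second is the Miller-Rabin strong probable-prime test described above, with the usual 1/4-per-round error bound.

```python
import random

def is_prime_trial(n):
    """Plain trial division; adequate for the small numbers used here."""
    if n < 2:
        return False
    d = 2
    while d * d <= n:
        if n % d == 0:
            return False
        d += 1
    return True

def pseudoprimes_base2(limit):
    """Odd composite n <= limit with 2^(n-1) == 1 (mod n), as in the Maple call pseudo(3001)."""
    return [n for n in range(3, limit + 1, 2)
            if not is_prime_trial(n) and pow(2, n - 1, n) == 1]

def miller_rabin(n, rounds=20):
    """Strong probable-prime test: 'composite' answers are certain, 'prime' answers are probabilistic."""
    if n < 4:
        return n in (2, 3)
    if n % 2 == 0:
        return False
    m, k = n - 1, 0
    while m % 2 == 0:          # write n - 1 = 2^k * m with m odd
        m //= 2
        k += 1
    for _ in range(rounds):
        a = random.randrange(2, n - 1)
        x = pow(a, m, n)
        if x in (1, n - 1):
            continue
        for _ in range(k - 1):
            x = pow(x, 2, n)   # climb the remainder sequence a^(m*2^i) mod n
            if x == n - 1:
                break          # reached -1: this base gives no contradiction
        else:
            return False       # a 1 was produced without passing through -1: composite
    return True

print(pseudoprimes_base2(3001))       # starts 341, 561, 645, 1105, ...
print(miller_rabin(1729), miller_rabin(2**127 - 1))   # Carmichael 1729 is (almost surely) rejected; the Mersenne prime passes
```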
# Gravity never zero Discussion in 'Astronomy, Exobiology, & Cosmology' started by Ivan, Dec 18, 2011. Not open for further replies. 1. ### Robittybob1BannedBanned Messages: 4,199 The more work that is done in increasing the size of the Universe, to the point of making it infinite as some are now saying, the less certain the Big Bang is in my opinion. It is very difficult to expand a walnut sized object into infinity in just 13.7 Billion years. 3. ### EmilValued Senior Member Messages: 2,801 In my opinion, the Big Bang (if indeed it happened) was a local phenomenon in space and time. Space and time was not created by the Big Bang. 5. ### originHeading towards oblivionValued Senior Member Messages: 11,593 Personal opinions based on hunches are not usually the big drivers in science. 7. ### Robittybob1BannedBanned Messages: 4,199 It's the ability to follow up on these hunches. They pay off if you can invest the time and energy into them. 8. ### OnlyMeValued Senior Member Messages: 3,914 Origin, you miss the point by dissecting posts. Every comment becomes a discussion out of context. Theories are no longer theories, once they are proven. Sometimes, a portion of a theory may become proven and elevated to the level of an observed or tested fact, which may or may not also elevate the underlying theory to the same level. The big bang remains an unproven theory. This is all I was saying. Well, that and a comment on the fact that all to often, many posters, myself included at times, talk about "theory" as if it were fact. The fact, that people become psychologically dependent on their belief systems is well known and "proven". This applies in science as well as other areas. At least some portion of the redshift is involved in circular reasoning.... The expansion of the universe is demonstrated by the redshift, which is caused by the expanding universe. That is circular reasoning... Note I did not say it was wrong. It just has not been proven and can be seen as circular reasoning. Most of what we believe about the universe today, will almost certainly change over time, as it has down through the ages. 9. ### OnlyMeValued Senior Member Messages: 3,914 Have you ever wondered why most of the really big shifts and changes in theoretical science come from young men and women just starting out intheir careers? They are the one's with hunches and the time and freedom to explore them. They have not yet invested too much in their theories, to push them aside for new ideas. Messages: 11,593 Like what? 11. ### originHeading towards oblivionValued Senior Member Messages: 11,593 The operative word here is explore the ideas to see if they have merit. I was commenting on the lazy gits who say "in my opinion bla bla bla", they will never take the initative or expend the effort to educate themselves to move beyond the opinion point. 12. ### keith1Guest You and your opinion remain as they always have been--objects withinyour walnut shell analogy. In this respect, your shell was always infinite in size. That was an easy one. 13. ### keith1Guest It remains the most descriptive possibility of the dynamics possible, from the observed results. It must be close to those dynamics, in order to get these observed results. 14. ### keith1Guest You cannot bandy what you cannot prove, nor inflect from observation. 15. ### OnlyMeValued Senior Member Messages: 3,914 The tendency to discuss theory as fact has a long record and is not limited to any individual, or credentials. 
The specific quote that triggered my attempt at reminder of the difference between what is known as a matter of fact and what remains the subject of theory was, I read the above, perhaps incorrectly as implying that we do know with certainty what happened after the initial $10^{-43}$ seconds, of the Big Bang. Theoretically yes. With certainty no. Mark this one up to my current favorite complaint. I read as many research papers as I can these days. I am retired and have the time, most of the time. Most often credible papers make it clear that they are presenting a theory or their theory and often even point out their own flaws. By contrast in discussion groups theories are often described as if they were settled fact, rather than our best explanations. I never really meant anything personal, but an example as you requested, might be general relativity. While it has been very successful and many predictions have been proven, the theory remains a theory and as a whole is yet to be raised beyond the level of theory. The standard model of particle physics has been one of our great theoretical achievements and yet at very high energy levels it begins to return unreasonable results and predictions. The big bang remains a theory with less experimental confirmation than either of the above. And yes it does fit observation well... But then we human beings have always been good at finding believable explanatintions that fit what we see, very nicely, until we discover things work differently than we thought. As I mentioned earlier, I would say I am an agnostic as to what the beginning, if there was a beginning looked like..., way to long before my time. I will say that I believe it is likely that both GR and what we currently know of QM will both look very different in another few hundred years. Even though they are our best descriptions and explanations of what and how we see the world today. 16. ### OnlyMeValued Senior Member Messages: 3,914 It is close enough to explain things as they appear from our frame of reference, but we cannot step outside of the picture to see the whole of what is. History is full of theories and ideas that were the best explanations of their day, which have as a matter of routine been replaced by a better explanation as our understand and knowledge has grown. Just because an explanation is our best explanation now, does not mean it will be in the future or that it is even an accurate description. It is just the best we can do with what things look like, to us, now. 17. ### EmilValued Senior Member Messages: 2,801 You forget one thing. You have to prove that space and time were created with the Big Bang. Show the evidence and the reasoning leading to this hypothesis. 18. ### keith1Guest No I don't. You're the one making the claims. You claim an existing space-time which must be disturbed by a new intruding, expanding space-time field. That evidence would be unobservable--beyond the event horizon. Observable expanding spacetime exists along with, in the same setting, the same set, as-- the other observable forces. I'll venture that much. 19. ### EmilValued Senior Member Messages: 2,801 So you understand that I claim an expanding space-time field ? 20. ### keith1Guest Yeah, yeah, yeah everything is just hunky. Messages: 2,801 lol... 22. ### hansdaValued Senior Member Messages: 2,424 Creation of the universe has to be associated with some well tested and proven theories ; which have no limitaions or less limitations . 
'The theory of creation of our universe' should not contradict any of the existing laws of physics, or else physics is to be rewritten with new laws. If we consider that the BB created the Universe, there still remain many unanswered questions and contradictions with the existing theories. So, I think creation of the universe can be associated with either of two well-established laws of physics. These two laws are: 1) "Mass and energy can neither be created nor destroyed" and 2) GR. The first law has no limitations and is always TRUE. BB-theory would have contradicted this law. Einstein's equation, E = mc^2, supports this law. The second law, GR, predicted the possibility of expansion and contraction of space. This theory can explain the expanding universe, but GR still has some limitations. So, we have to choose between these two laws to be associated with the creation of our Universe.

23. ### origin

No, we don't. It really is rather simple. Evidence shows that the universe is expanding and in the past was concentrated in a tiny volume comprised of only energy. This theory has made predictions, and these predictions have turned out to be accurate. New data continues to support the big bang theory. Has it been proven? Of course not. Are there aspects of the creation of the universe that are a mystery? Of course there are. It may be that we will never know the answers. None of that is really relevant to the robust and well tested theory we call the big bang. There is stuff we don't know about a lot of things. It is OK; only religions and non-science people speak in absolutes. Besides, it would be a rather dull universe if we knew it all!
# pointer question

## Recommended Posts

hi all

This is the problem I'm having at the moment. I have 2 pointers, p1 and p2, and I have a class. p2 is stored within a std::vector. Inside a loop p2 points to the class, then p1 points to p2 and p1 is stored into the vector. On each update p2 points to a different struct. Here's the problem: each p1 in the vector equals the NEW value p2 is pointing at, not the original value it held when it was stored. (I hope this is understandable :) ) So in short, every p2 in the vector is equal to the last p2 stored in the vector. I want to keep it all pointers for speed and memory's sake. Any ideas on how to fix it? Thanks.

Never mind. All fixed :)

Quote: I want to keep it all pointers for speed and memory sake. Any Ideas on how to fix it?

Rule number one of optimization: make it run, then make it run fast. Your priority is to have a program that works, because if you optimize a program that doesn't work then all your optimization efforts will be lost when you have to change the code to make it work. Besides, if your program didn't work in the first place, how did you determine that it needed to be optimized in that particular area?

Quote: Original post by Helo7777 [the problem description above]

No, it's not understandable. You should either explain what you want to do (something which is generally possible in plain English), or you should provide code or pseudocode to explain how you are doing it (something which is quite difficult in plain English). Besides, your terminology is weird: you can't point to a class (only to instances of a class) and you can't have 'every p2' since you have only one p2, the value of which was stored in the vector. However, by attempting to decipher your explanation, it seems that there is a problem with your code. I suspect the following, though I'm not sure:

while (condition) {
    T object_on_the_stack;
    T* pointer = &object_on_the_stack;
    vector.push_back(pointer);
}
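The suspicion in that last reply is the classic aliased or dangling pointer mistake: every stored pointer refers to one object that either keeps being overwritten or dies at the end of each iteration. The thread never shows the real code, so the sketch below is only an illustration of that failure mode and of two common fixes; the type `Widget`, the loop bounds, and the values are made up for the example.

```cpp
// Minimal sketch of the suspected bug and two common fixes (illustrative only).
#include <iostream>
#include <memory>
#include <vector>

struct Widget { int value; };

int main() {
    // Bug: every iteration reuses the same object, so every stored
    // pointer ends up pointing at the last value written.
    Widget scratch{0};
    std::vector<Widget*> bad;
    for (int i = 0; i < 3; ++i) {
        scratch.value = i;
        bad.push_back(&scratch);          // all elements alias one object
    }
    std::cout << "bad:      " << bad[0]->value << ' ' << bad[2]->value << '\n'; // 2 2

    // Fix 1: store copies of the object instead of pointers.
    std::vector<Widget> by_value;
    for (int i = 0; i < 3; ++i) by_value.push_back(Widget{i});

    // Fix 2: if pointers are really needed, give each element its own allocation.
    std::vector<std::unique_ptr<Widget>> owned;
    for (int i = 0; i < 3; ++i) owned.push_back(std::make_unique<Widget>(Widget{i}));

    std::cout << "by_value: " << by_value[0].value << ' ' << by_value[2].value << '\n'; // 0 2
    std::cout << "owned:    " << owned[0]->value << ' ' << owned[2]->value << '\n';     // 0 2
}
```

For a vector of small structs, storing by value is usually both the simplest and the fastest option; the "keep it all pointers for speed" intuition tends to be backwards unless the objects are large or must be shared.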
## Sunday, July 31, 2016 ### July 31, 2016 I'm sorry I'm late with this post, in case you're keeping track so far. Life gets in the way sometimes, and there's just not enough time to do everything I want to do. Anyway, I'm reading Storm of Steel, by Ernst Jünger, and it's fascinating. It's a WWI book by a German soldier who survived the entire war, often in some of the biggest battles for months at a time, and then went on to be a prolific author and die aged 102! I've read some things on WWI before, but this is another level in terms of the crudeness of the conflict. Jünger's book is famous for not editorializing much, and simply telling things as they happen. He doesn't go into the reasons for the conflict, or whether or not it was just, or any of the comments you see in other WW1 literature. He just tells, in honest detail, what the soldiers went through. I'm about halfway through at this point, and I think I'll finish it by midweek. ## Saturday, July 23, 2016 ### Thoughts for July 23, 2016 Walking with PhD buddies on our way to lunch earlier this week, one of us mentioned the various atrocious and tragic events that have happened over the last couple of weeks. One of my buddies, who is from Chile, jokingly remarked: "All of a sudden, Mexico doesn't seem that bad anymore, huh?" We all had a good chuckle, and carried on towards our meal. There's a saying among some comedians, which I think may itself be rooted in a quote from Oscar Wilde, that to be funny one must be telling the truth. Mexico has a myriad problems, from the daily life of its citizens all the way up to macroeconomic issues. But when one looks at international headlines, at least from the comfort of a middle-class PhD student, Mexico really isn't all that bad at all. Europe is turning into a huge 1980's Beirut or, as some others have remarked, modern day Israel. There are now constant attacks by competing islamists against Jews (largely underreported), and ordinary citizens (especially women), not to mention the large-scale attacks like the ones in Nice last week. The Middle East, it seems, has found a way to make its troubles more horrifying and, on top of that, to export them. It doesn't help that Turkey is having an Islamic Revolution of its own, and may finally descend into full islamic theocracy soon. Syria is so bad that it's barely even reported anymore, except as a tally of dead and displaced that goes up every day. As I write this, there is news of a blast in Kabul, Afghanistan, carried out by ISIS militants against a Shia protest. To top everything off, the American election is one huge joke, with a competition between a plastic candidate that will change nothing and a dangerous, narcissistic idiot on the other. So yes, Mexico is quite comfortable for now, from where I'm sitting (big caveat there!). ## Saturday, July 16, 2016 ### Thoughts for the week, July 16 My actual PhD work is coming along slowly but steadily. I've briefly mentioned my general area of study, which is General Relativity(GR) and, in particular, an approach to Quantum Gravity known as Causal Dynamic Triangulations (CDT). I've thought of writing something about this, but I can do no better than the explanation of the subject at The Physics Mill. There's a lot of good stuff over there and Jonah, the author, is much further along this area of study than I am (I do not know him in any world, real or virtual). I can, however, explain a little bit of what I'm trying to do or, at least, what my professor has me working on at present. 
The idea is that, according to a proposed model of spacetime, we can have an elementary "cell" that divides into other spacetime cells, and those cells each divide as well, and so on. Each cell is a tiny, four-dimensional spacetime pyramid called a simplex. The way to keep track of the divisions is to keep track of the vertices, which is a lot easier, since each vertex is just a point. Simplex division in this case follows a specific set of rules: for example, the distance between any two vertices has to be greater than or equal to a minimum length, $$l$$. This is the consequence of insisting that spacetime be quantized, which is the whole point of what CDT is about: we want to get tiny, indivisible chunks of spacetime that, when seen from far away, look like the smooth, continuous spacetime that GR describes. My job, then, is to use computational methods to put a specific model of CDT to the test. The paper that outlines this model, sadly, is behind a paywall, but related papers on the physics arxiv are here and here. Anyway, here is a preliminary result: I know, I know, it's not very impressive—but it took hours and hours of coding! An obvious remark would be that this is only a three dimensional spacetime, since that's what can be plotted in an image. This 3-D version of spacetime is a proxy for the real, 4-D thing that can't be visualized. The radius of the sphere that contains the vertices increases by a magnitude equal to the minimum length $$l$$ at each step, and each step is a quantum "tick" of time. The mechanism by which the points divide is by "mating" with other points according to a specific operation and generating a new point at the new radius. All points mate with all other points, and the resulting points are located on the next sphere. This creates an enormous number of points that can't possibly fit on the surface of the sphere and maintain the condition that the distance between them is at least $$l$$, so points are eliminated one by one until this condition is met (you can see the number of points in each stage at the top of the figure). *   *   * On completely different topics, this has been a tumultuous week. There was the attack in Nice, France, and the attempted military coup in Turkey. I wish I could write about these events, but I'm still overwhelmed by the amount of information to be digested in order to write something worth reading. ## Friday, July 8, 2016 ### Weekly thoughts, July 8: Gun control In case you are interested, the view from Mexico about the gun control situation in the U.S. is, actually, not that trivially aligned with that of the wider world. I am an atypical Mexican in that I follow political and social developments in other countries closely, especially in the anglosphere. There's a common stereotype that Americans know nothing of the outside world, but I can assure you, if you're an American reader, that Mexicans are comparably ignorant about or--perhaps understandably--busy with other things besides world politics. So I can give a detailed picture about what I think, for what it's worth, but only some general remarks about Mexican society. First, I find it scandalous that nothing has been done about the problem at all.  
One would think that even the most rabid gun nut would acknowledge that something has to be done, but they actually double down and insist on "the cost of freedom" or some other platitude, and sometimes simply quote the final four words of the Second Amendment, "...shall not be infringed!", as if that were a knock-down argument for any gun control.  The trivial reply from someone like me would be, "well, then change what your stupid second amendment says!" It just seems incredible that, in the couple of decades that I've been paying attention, nothing has been done at all. Indeed, things have gotten somewhat worse since the assault weapons ban expired in the late 90's: yes, assault weapons account for only a tiny percentage of deaths, as most are due to handguns, but that's no consolation to the people in Aurora or Orlando. Yes, I understand how lobbying works and that the NRA does a lot of it; yes, I know there's a "gun culture" in the US that's different from that in some European countries with widespread gun ownership; yes, I know there are 300 million guns already out there and prohibition will create a black market and all of that; and yes, obviously some components of the problem are due to mental illness and religious terrorism; but still, the answer from Americans is we should do nothing? From what I understand, most Americans actually do want to do something, like basic background checks before gun purchases and closing the gun-show loophole. But these things don't even get discussed at the political level because the gun nuts are very well organized in their lobbying via the NRA. I don't see Americans mobilizing en masse to do some effective lobbying of their own, as this would require a broad spectrum of warring groups in the culture war to cooperate extensively. Perhaps they could rally behind a strong presidential candidate with lots of political capital, but then that's not going to happen in this election. Over 30,000 annual deaths, plus the high-impact mass shootings we see every few months, have simply not been enough to get legislators to grow a pair and do something. This sentiment of bewilderment at the situation is shared by most educated people in Mexico, though many laypeople who read of the latest mass shooting simply shrug it off with "Well, we all know Americans are crazy. So of course one of them would walk into an elementary school and kill 20 kids and their teachers at some point. That's just what Americans do." There is another strand of thought, if we could call it that, that seems to want to have something like the 2nd amendment here in Mexico. Gun ownership by citizens is regulated in theory, with citizens being able to own handguns up to a .38 caliber. Everything else is deemed "for exclusive use of the army." Very few Mexicans go through the hoops necessary to get legal ownership of a gun, though those hoops are purely bureaucratic and have nothing at all to do with training in the use of firearms. In practice, Mexico is flooded with illegal guns, which come mostly from the US and are owned by the cartels. There have been cases of .50 caliber machine guns, grenades, and rocket launchers found in the hands of the narcos; they usually out-gun the police and sometimes even the army. 
Still, ordinary citizens tend to shy away from gun ownership, except for the small but growing group that I mentioned above: these people, usually right-leaning or anarchist, speak of "the people" taking arms and overthrowing the corrupt government, as they perceive that every other thing has been tried already and failed. There is also a strong current of vigilantism that advocates for citizens doing police work, as police in Mexico are generally regarded as worse than the criminals. I don't think these Mexican NRA-types will have anything in the form of political success anytime soon, but I do worry about the vigilantism they inspire. There are already cases of mobs lynching suspected criminals, and there's no way that could get any better if the mobs were armed with guns instead of pitchforks. Politicians, especially at the local levels, have already been subject to attacks usually attributed to organized crime. I can't see how that would get any better if any Joe Schmo (or Juan Pérez, rather) could do the same as well. Domestic violence deaths, suicides, and accidents would obviously increase, too. I'm by no means a pacifist--I'm just squeamish about people self-righteously appointing themselves as "good guys with guns" with no one even writing their name down somewhere. ## Friday, July 1, 2016 ### Weekly Summary: July 1, 2016 I’m done with chapter 3 in Carroll’s book, although I still have to go through the problems at the end. Some of these will be quite tricky, if the problems in previous chapters are any indication. I already got quite a workout from following the explanations in the text, as Carroll leaves lots of details up to the reader. In particular, I spent a couple of days trying to understand the derivation he makes of the Riemann curvature tensor, $$R^{\rho}_{\,\,\sigma \mu \nu}$$. This tensor completely defines the curvature of a surface, and a space is geometrically flat when it equals zero. This tensor is one of the main players in General Relativity, and is defined as $R^{\rho}_{\,\,\sigma \mu \nu} = \partial_{\mu}\Gamma^{\rho}_{\sigma\nu}-\partial_{\nu}\Gamma^{\rho}_{\sigma\mu}+\Gamma^{\rho}_{\lambda\mu}\Gamma^{\lambda}_{\sigma\nu}-\Gamma^{\rho}_{\lambda\nu}\Gamma^{\lambda}_{\sigma\mu} .$ I stole the image here from the Wikipedia article to ilustrate the concept of parallel transport: you start out with a vector at a point A, as in the picture, and keep the vector pointing along the path you follow along the surface (here, northward along a meridian on a sphere). If you then turn towards another path and make your way back to your starting point, you will find that on a curved surface the vector you dragged along has been rotated. This happens in surfaces that are intrinsically curved, and the Riemann tensor is a measure of that curvature. Typically, the strategy to derive $$R^{\rho}_{\,\,\sigma \mu \nu}$$ is to imagine doing the parallel transport along an infinitesimal rectangular area, as is done in the book by Schutz. Carroll, however, goes for a more direct, but also much more abstract derivation: if you have the commutator of a covariant derivative acting on a vector field, you end up with a neat expression, after some work, that includes the Riemann tensor as part of the result: $[\nabla_{\mu},\nabla_{\nu}]V^{\rho} = R^{\rho}_{\,\,\sigma \mu \nu}V^{\sigma} – 2\,T^{\lambda}_{\,\,\,\mu \nu}\nabla_{\lambda}V^{\rho} .$ The term $$T^{\lambda}_{\,\,\,\mu \nu}$$ is the torsion tensor, which was previously introduced in the chapter. 
So what we have is that when taking the (covariant) derivative of a vector field in two different directions, it matters which direction you pick first. This only happens in curved spaces, where you end up with vectors being tilted with respect to the direction they had when they first started out. Carroll does the demonstration of this in three lines, and hand-waves a bit about how to get from one line to the next. So, I took it as practice to reproduce the derivation myself, and found I was successful, without the aid of the text book, after three attempts. Typing the whole thing would be impossible here, but fortunately I did take a picture (click to enlarge): The definitions for the covariant derivatives of vectors and one-forms are in the box in the upper right corner, and the derivation begins on the upper left. On the lower left is just a reminder to myself of the definition of an anti-symmetrized tensor, which I used in the next-to-last line. The whole thing took about half an hour, with constant erasing and starting over at several points—as it should be. A valuable lesson, then, is that following the text and constructing the arguments yourself pays off in valuable practice; you can anchor your results in the steps that the author does show, and work out the entrails of the calculation yourself. As an added bounus, often times the end-of-chapter problems ask you to “fill in the missing steps in the derivation of equation X”, so you can tick problems off your set (indeed, I have been able to check many problems in Schutz’s book off my list this way).
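For readers who want to check the three-line argument without squinting at the photo, here is a condensed version of the same calculation. This is my own write-up from the definitions quoted above, not Carroll's text; how the antisymmetric part of the connection is packaged into the torsion tensor depends on the normalization convention, so the final grouping should be read as schematic.

\begin{align}
\nabla_\nu V^\rho &= \partial_\nu V^\rho + \Gamma^\rho_{\nu\sigma} V^\sigma ,\\
\nabla_\mu \nabla_\nu V^\rho &= \partial_\mu\partial_\nu V^\rho
  + (\partial_\mu \Gamma^\rho_{\nu\sigma}) V^\sigma
  + \Gamma^\rho_{\nu\sigma}\,\partial_\mu V^\sigma
  + \Gamma^\rho_{\mu\lambda}\,\partial_\nu V^\lambda
  + \Gamma^\rho_{\mu\lambda}\Gamma^\lambda_{\nu\sigma} V^\sigma
  - \Gamma^\lambda_{\mu\nu}\,\nabla_\lambda V^\rho ,\\
[\nabla_\mu,\nabla_\nu]V^\rho &=
  \bigl(\partial_\mu \Gamma^\rho_{\nu\sigma} - \partial_\nu \Gamma^\rho_{\mu\sigma}
  + \Gamma^\rho_{\mu\lambda}\Gamma^\lambda_{\nu\sigma}
  - \Gamma^\rho_{\nu\lambda}\Gamma^\lambda_{\mu\sigma}\bigr) V^\sigma
  - \bigl(\Gamma^\lambda_{\mu\nu} - \Gamma^\lambda_{\nu\mu}\bigr)\nabla_\lambda V^\rho\\
  &= R^\rho_{\ \sigma\mu\nu} V^\sigma - 2\,\Gamma^\lambda_{[\mu\nu]}\nabla_\lambda V^\rho .
\end{align}

Antisymmetrizing in $\mu$ and $\nu$ kills the $\partial_\mu\partial_\nu V^\rho$ term and the two $\Gamma\,\partial V$ terms, which is why only the curvature and torsion pieces survive.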
In the past, I did a lot of multi-level modelling with MLwiN 2.02, which I quickly learned to loathe. Back in the late 1990s, MLwiN was perhaps the first ML software that had a somewhat intuitive interface, i.e. it allowed one to build a model by pointing and clicking. Moreover, it printed updated estimates on the screen while cycling merrily through the parameter space. That was sort of cool, as it could take minutes to reach convergence, and without the updating one would never have been sure that the program had not crashed yet. Which it did quite often, even for simple models.

Worse than the bugs was the lack of proper scriptability. Pointing and clicking loses its appeal when you need to run the same model on 12 different datasets, or when you are looking at three variants of the same model and 10 recodes of the same variable. Throw in the desire to semi-automatically re-compile the findings from these exercises into two nice tables for inclusion in LaTeX again and again after finding yet another problem with a model, and you will agree that any piece of software that is not scriptable is pretty useless for scientists. MLwiN's command language was unreliable and woefully underdocumented, and everything was a pain. So I embraced xtmixed when it came along with Stata 9/10, which solved all of these problems.

runmlwin presentation (pdf)

But xtmixed is slow with large datasets and complex models. It relies on quadrature, which is exact but computationally intensive. MLwiN works with approximations of the likelihood function (quick and dirty) or MCMC (strictly speaking a Bayesian approach, but people don't ask too many questions because it tends to be faster than quadrature). Moreover, MLwiN can run a lot of fancy models that xtmixed cannot, because it is a highly specialised program that has been around for a very long time. Enter the good people over at the Centre for Multilevel Modelling at Bristol, who have come up with runmlwin, an ado that essentially makes the functionality of MLwiN available as a Stata command, postestimation analysis and all. Can't wait to see if this works with Linux, wine and my ancient binaries, too.

MLwiN is one of the granddaddies of multi-level modelling software (the other being HLM). Essentially, it is a 1990s-ish looking and sometimes quirky GUI wrapped around an old DOS program (MLn). The one feature that set MLwiN apart in the late 1990s was its point-and-click interface, which allows you to build the equations for a multi-level model in a stepwise fashion. The underlying command language is still slightly confusing and less than well documented, and some of the modern features (such as modelling categorical dependent variables) are implemented as external macros, which does not need to concern you unless something goes horribly wrong, which happens occasionally. That said, MLwiN is reasonably fast, does now incorporate modern MCMC estimators, has an interface with WinBUGS, and can be convinced to do most things you would possibly want to do with it. I bought version 1.10 ca. 1998, and received free upgrades to 2.02 and good support well until 2004/2005 or so. These days, Stata, R and Mplus can all estimate multi-level models, but working with MLwiN may still be worthwhile for you (by the way, you can download the free stata2mlwin addon from UCLA academic technology to export your variables from Stata to MLwiN).
Rather amazingly, MLwiN is now freely available to anyone working in a UK university: just enter your details, including your ac.uk email address, and a few days later they will send you a download link.
# Proving the elastic collision equations

1. May 8, 2009

### Lord Dark

1. The problem statement, all variables and given/known data

Hi everyone, how are you all? I have these two equations and I don't know how to derive them:

v1f = ((m1 - m2)/(m1 + m2)) * v1i + (2*m2/(m1 + m2)) * v2i
v2f = (2*m1/(m1 + m2)) * v1i + ((m2 - m1)/(m1 + m2)) * v2i

2. Relevant equations

m1*v1i + m2*v2i = m1*v1f + m2*v2f (conservation of momentum)
0.5*m1*v1i^2 + 0.5*m2*v2i^2 = 0.5*m1*v1f^2 + 0.5*m2*v2f^2 (conservation of kinetic energy)

3. The attempt at a solution

The book takes m1 and m2 as common factors and then says to divide the kinetic energy equation by the momentum equation to get the results above. But when I divide I only get v2f = v1i + v1f, so could someone help me reach the results above? I don't like memorizing, and I know I'll forget it in the exam.

2. May 8, 2009

### rl.bhat

m1*v1i + m2*v2i = m1*v1f + m2*v2f (conservation of momentum)
0.5*m1*v1i^2 + 0.5*m2*v2i^2 = 0.5*m1*v1f^2 + 0.5*m2*v2f^2 (conservation of kinetic energy)

In both equations collect the terms containing m1 on one side and the terms containing m2 on the other side. Then divide the left-hand sides and the right-hand sides and equate. You will get a relation between v1i, v1f, v2i and v2f. Using this you can find the required result.
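Carrying out rl.bhat's hint explicitly, for reference (standard textbook algebra; the only extra assumption is $v_{1i} \neq v_{1f}$, i.e. a genuine collision took place, so the division step is legitimate):

\begin{align}
m_1(v_{1i}-v_{1f}) &= m_2(v_{2f}-v_{2i}) && \text{(momentum, regrouped)}\\
m_1(v_{1i}^2-v_{1f}^2) &= m_2(v_{2f}^2-v_{2i}^2) && \text{(kinetic energy, regrouped)}\\
\intertext{Dividing the second equation by the first gives}
v_{1i}+v_{1f} &= v_{2i}+v_{2f} \quad\Longrightarrow\quad v_{2f}=v_{1i}+v_{1f}-v_{2i}.\\
\intertext{Substituting back into conservation of momentum,}
m_1 v_{1i}+m_2 v_{2i} &= m_1 v_{1f}+m_2(v_{1i}+v_{1f}-v_{2i}),\\
(m_1-m_2)\,v_{1i}+2m_2\,v_{2i} &= (m_1+m_2)\,v_{1f},\\
v_{1f} &= \frac{m_1-m_2}{m_1+m_2}\,v_{1i}+\frac{2m_2}{m_1+m_2}\,v_{2i},\\
v_{2f} &= \frac{2m_1}{m_1+m_2}\,v_{1i}+\frac{m_2-m_1}{m_1+m_2}\,v_{2i}.
\end{align}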
# Integration – Terms of the form $ax^n$

Integration can be thought of as the inverse of differentiation. In the initial video I introduce you to this concept, explain the notation used, and run through how we apply the following formulae.

Formulae to remember:

• $\displaystyle\int a\,dx = ax + c$
• $\displaystyle\int ax^n\,dx = \frac{ax^{n+1}}{n+1} + c \quad (n \neq -1)$
• $\displaystyle\int \bigl(f(x) + g(x)\bigr)\,dx = \int f(x)\,dx + \int g(x)\,dx$

## Fractional and root type terms

Integrals that contain one term where x is in the denominator can often be rearranged and then integrated. In the video that follows I show you how to do this.

Examples are worked through in the video.
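Since the worked examples live in the video, here is a small illustration of the same technique (my own example, not necessarily one used in the video), including a fractional and a root-type term rewritten as powers before integrating:

\begin{align}
\int \left(6x^2 - \frac{4}{x^3} + \sqrt{x}\right)dx
  &= \int \left(6x^2 - 4x^{-3} + x^{1/2}\right)dx\\
  &= \frac{6x^{3}}{3} - \frac{4x^{-2}}{-2} + \frac{x^{3/2}}{3/2} + c\\
  &= 2x^{3} + \frac{2}{x^{2}} + \frac{2}{3}x^{3/2} + c .
\end{align}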
## Box Plots ### Learning Outcomes • Display data graphically and interpret graphs: stemplots, histograms, and box plots. • Recognize, describe, and calculate the measures of location of data: quartiles and percentiles. Box plots (also called box-and-whisker plots or box-whisker plots) give a good graphical image of the concentration of the data. They also show how far the extreme values are from most of the data. A box plot is constructed from five values: the minimum value, the first quartile, the median, the third quartile, and the maximum value. We use these values to compare how close other data values are to them. To construct a box plot, use a horizontal or vertical number line and a rectangular box. The smallest and largest data values label the endpoints of the axis. The first quartile marks one end of the box and the third quartile marks the other end of the box. Approximately the middle $50$ percent of the data fall inside the box. The “whiskers” extend from the ends of the box to the smallest and largest data values. The median or second quartile can be between the first and third quartiles, or it can be one, or the other, or both. The box plot gives a good, quick picture of the data. ### Note You may encounter box-and-whisker plots that have dots marking outlier values. In those cases, the whiskers are not extending to the minimum and maximum values. Consider, again, this dataset. $1$, $1$, $2$, $2$, $4$, $6$, $6.8$, $7.2$, $8$, $8.3$, $9$, $10$, $10$, $11.5$ The first quartile is two, the median is seven, and the third quartile is nine. The smallest value is one, and the largest value is $11.5$. The following image shows the constructed box plot. ### Note See the calculator instructions on the TI web site. The two whiskers extend from the first quartile to the smallest value and from the third quartile to the largest value. The median is shown with a dashed line. ### Note It is important to start a box plot with a scaled number line. Otherwise the box plot may not be useful. ### Example The following data are the heights of $40$ students in a statistics class. $59$; $60$; $61$; $62$; $62$; $63$; $63$; $64$; $64$; $64$; $65$; $65$; $65$; $65$; $65$; $65$; $65$; $65$; $65$; $66$; $66$; $67$; $67$; $68$; $68$; $69$; $70$; $70$; $70$; $70$; $70$; $71$; $71$; $72$; $72$; $73$; $74$; $74$; $75$; $77$ Construct a box plot with the following properties; the calculator instructions for the minimum and maximum values as well as the quartiles follow the example. • Minimum value = $59$ • Maximum value = $77$ • Q1: First quartile = $64.5$ • Q2: Second quartile or median= $66$ • Q3: Third quartile = $70$ 1. Each quarter has approximately $25$% of the data. 2. The spreads of the four quarters are $64.5 – 59 = 5.5$ (first quarter), $66 – 64.5 = 1.5$ (second quarter), $70 – 66 = 4$ (third quarter), and $77 – 70 = 7$ (fourth quarter). So, the second quarter has the smallest spread and the fourth quarter has the largest spread. 3. Range = maximum value – the minimum value = 77 – 59 = 18 4. Interquartile Range: $IQR$ = $Q_3$ – $Q_1$ = $70 – 64.5 = 5.5$. 5. The interval $59–65$ has more than $25$% of the data so it has more data in it than the interval $66$ through $70$ which has $25$% of the data. 6. The middle $50$% (middle half) of the data has a range of $5.5$ inches. ### USING THE TI-83, 83+, 84, 84+ CALCULATOR To find the minimum, maximum, and quartiles: Enter data into the list editor (Pres STAT 1:EDIT). 
If you need to clear the list, arrow up to the name L1, press CLEAR, and then arrow down. Put the data values into the list L1. Press STAT and arrow to CALC. Press 1:1-VarStats. Enter L1. Press ENTER. Use the down and up arrow keys to scroll. Smallest value = $59$. Largest value = $77$. $Q_1$: First quartile = $64.5$. $Q_2$: Second quartile or median = $66$. $Q_3$: Third quartile = $70$. To construct the box plot: Press 4:Plotsoff. Press ENTER. Arrow down and then use the right arrow key to go to the fifth picture, which is the box plot. Press ENTER. Arrow down to Xlist: Press 2nd 1 for L1 Arrow down to Freq: Press ALPHA. Press 1. Press Zoom. Press 9: ZoomStat. Press TRACE, and use the arrow keys to examine the box plot. ### Try It The following data are the number of pages in $40$ books on a shelf. Construct a box plot using a graphing calculator, and state the interquartile range. $136$; $140$; $178$; $190$; $205$; $215$; $217$; $218$; $232$; $234$; $240$; $255$; $270$; $275$; $290$; $301$; $303$; $315$; $317$; $318$; $326$; $333$; $343$; $349$; $360$; $369$; $377$; $388$; $391$; $392$; $398$; $400$; $402$; $405$; $408$; $422$; $429$; $450$; $475$; $512$ This video explains what descriptive statistics are needed to create a box and whisker plot. For some sets of data, some of the largest value, smallest value, first quartile, median, and third quartile may be the same. For instance, you might have a data set in which the median and the third quartile are the same. In this case, the diagram would not have a dotted line inside the box displaying the median. The right side of the box would display both the third quartile and the median. For example, if the smallest value and the first quartile were both one, the median and the third quartile were both five, and the largest value was seven, the box plot would look like: In this case, at least $25$% of the values are equal to one. Twenty-five percent of the values are between one and five, inclusive. At least $25$% of the values are equal to five. The top $25$% of the values fall between five and seven, inclusive. ### Example Test scores for a college statistics class held during the day are: $99$; $56$; $78$; $55.5$; $32$; $90$; $80$; $81$; $56$; $59$; $45$; $77$; $84.5$; $84$; $70$; $72$; $68$; $32$; $79$; $90$ Test scores for a college statistics class held during the evening are: $98$; $78$; $68$; $83$; $81$; $89$; $88$; $76$; $65$; $45$; $98$; $90$; $80$; $84.5$; $85$; $79$; $78$; $98$; $90$; $79$; $81$; $25.5$ 1. Find the smallest and largest values, the median, and the first and third quartile for the day class. 2. Find the smallest and largest values, the median, and the first and third quartile for the night class. 3. For each data set, what percentage of the data is between the smallest value and the first quartile? the first quartile and the median? the median and the third quartile? the third quartile and the largest value? What percentage of the data is between the first quartile and the largest value? 4. Create a box plot for each set of data. Use one number line for both box plots. 5. Which box plot has the widest spread for the middle $50$% of the data (the data between the first and third quartiles)? What does this mean for that set of data in comparison to the other set of data? ### Try It The following data set shows the heights in inches for the boys in a class of $40$ students. 
$66$; $66$; $67$; $67$; $68$; $68$; $68$; $68$; $68$; $69$; $69$; $69$; $70$; $71$; $72$; $72$; $72$; $73$; $73$; $74$ The following data set shows the heights in inches for the girls in a class of $40$ students. $61$; $61$; $62$; $62$; $63$; $63$; $63$; $65$; $65$; $65$; $66$; $66$; $66$; $67$; $68$; $68$; $68$; $69$; $69$; $69$ Construct a box plot using a graphing calculator for each data set, and state which box plot has the wider spread for the middle $50$% of the data. ### example Graph a box-and-whisker plot for the data values shown. $10$; $10$; $10$; $15$; $35$; $75$; $90$; $95$; $100$; $175$; $420$; $490$; $515$; $515$; $790$ The five numbers used to create a box-and-whisker plot are: • Min: $10$ • $Q_1$: $15$ • Med: $95$ • $Q_3$: $490$ • Max: $790$ The following graph shows the box-and-whisker plot. ### Try It Follow the steps you used to graph a box-and-whisker plot for the data values shown. $0$; $5$; $5$; $15$; $30$; $30$; $45$; $50$; $50$; $60$; $75$; $110$; $140$; $240$; $330$ ## Concept Review Box plots are a type of graph that can help visually organize data. To graph a box plot the following data points must be calculated: the minimum value, the first quartile, the median, the third quartile, and the maximum value. Once the box plot is graphed, you can display and compare distributions of data. ## References Data from West Magazine.
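For readers who would rather script the five-number summary than use the calculator, here is a small sketch in C++. It is my own illustration, using the "median of the lower and upper halves" rule for the quartiles, which is the convention the TI steps above follow; other quartile conventions give slightly different values.

```cpp
// Five-number summary for a box plot: min, Q1, median, Q3, max.
#include <algorithm>
#include <iostream>
#include <vector>

// Median of the sorted half-open range [lo, hi).
double median(const std::vector<double>& v, std::size_t lo, std::size_t hi) {
    std::size_t n = hi - lo;
    std::size_t mid = lo + n / 2;
    return (n % 2 == 1) ? v[mid] : 0.5 * (v[mid - 1] + v[mid]);
}

int main() {
    // The student-height data from the example above (40 values).
    std::vector<double> x = {
        59,60,61,62,62,63,63,64,64,64,65,65,65,65,65,65,65,65,65,66,
        66,67,67,68,68,69,70,70,70,70,70,71,71,72,72,73,74,74,75,77};
    std::sort(x.begin(), x.end());

    std::size_t n = x.size();
    double min = x.front(), max = x.back();
    double med = median(x, 0, n);
    // Lower/upper halves; the middle value is excluded when n is odd.
    double q1 = median(x, 0, n / 2);
    double q3 = median(x, (n + 1) / 2, n);

    std::cout << "min=" << min << " Q1=" << q1 << " median=" << med
              << " Q3=" << q3 << " max=" << max
              << " IQR=" << (q3 - q1) << '\n';
}
```

Running this prints min=59 Q1=64.5 median=66 Q3=70 max=77 IQR=5.5, matching the values obtained from the calculator; the whiskers of the plot then run from 59 to 64.5 and from 70 to 77.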
# Confused about which database to use - nt or nr?

DNAngel, 9 weeks ago:

Hi all, I work with metagenomic and metatranscriptomic data. I know the nt/nr database is for nucleotide reads, and my data is all in nucleotide form (not in protein form, so I wouldn't be using blastp). But when I do a local BLAST, I am wondering how to know whether I should use nt or nr, because with blastp you only use nr for protein nucleotide sequences. For my metatranscriptomic reads I used blastn against the nt database and got a lot of hits, so I felt that was okay. When I use FragGeneScan to extract protein sequences from the RNA reads I use the nr database, and I am wondering if this makes sense or whether I should have used the nr database from the start. I've read a lot on this on NCBI but it is not very clear to me in practice.

Answer (Mensur Dlakic, 9 weeks ago):

> Because with blastp, you only use nr for protein nucleotide sequences.

There is no such thing as protein nucleotide sequences. The sequences are either nucleotide (nt) or protein (nr). blastn is for comparing nucleotide queries to the nt database, and blastp for comparing proteins to the nr database.

Comment: While unrelated to this question, peptide nucleic acids exist: https://en.wikipedia.org/wiki/Peptide_nucleic_acid

Comment: ...or blastx for translated DNA against nr.

Comment: ...or tblastn for protein against translated DNA in nt. I don't think the original question was intended to cover all BLAST flavors and all search permutations.

Answer (9 weeks ago):

"Translated DNA:protein (BLASTX) searches are far far more sensitive than DNA:DNA searches" - Bill Pearson. If you're happy with your blastn results that's good, but blastx against nr would be advisable.

Comment: If you have transcripts (3 potential reading frames) it's rather wasteful to do blastx with its 6 reading frames. It's far more computationally efficient to predict proteins and then do blastp, with no loss in sensitivity. It doesn't really matter if you're querying just a few hundred sequences, but with large sets there will be a huge cost.
# Does it make any sense to prove $0.999\ldots=1$? I have read this post which contains many proofs of $0.999\ldots=1$. ## Background The main motivation of the question was philosophical and not mathematical. If you read the next section of the post then you will see that I have asked for a "meaning" of the symbol $0.999\ldots$ other than defining it to be $1$. Now here is a epistemological problem and this is mainly the problem from which the question arose. Suppose you know that $1$ is a real number. Now I give you a symbol, say $0.999\ldots$ which from now on I will denote as $x$. Now I ask you whether $x$ is a real number. To answer this, if you define $x=1$ then you are already attributing the properties of $1$ to $x$ among which one is it being a real number without proving whether to $x$ we can indeed attribute the properties of $1$. A common response to this question has been to define the symbol $x$ as the limit of the sequence $\left(\displaystyle\sum_{i=1}^n \dfrac{9}{10^i}\right)_{n\in\mathbb{N}}$ and then prove that the limit of this sum is indeed $1$. But again the problem is that you are defining the symbol $x$ to be a real number and hence are assuming a priori that the symbol $x$ denotes a real number. As per the discussion that has been conducted with Simply Beautiful Art let me state my position again in brief, Also let me say that I do not disallow $0.999…$ to be a real number. My impression that if you assume $0.999…$ to be a real number then there is no sense in proving that $0.999…$ is indeed equal to $1$ because either you define it to be $1$ or you prove the equality as a theorem. But if you are going to use the limit definition of $0.999…$ then what you are a priori assuming it to be a real number which is an assumption that I don't allow. What can be allowed is that $0.999…$ is a number (but not necessarily a real number). ## Question My question is, Does it make any sense to prove this equality? Can one give any "meaning" of the symbol $0.999\ldots$ other than defining it to be $1$? • Well...you can define $.999...=\sum \frac 9{10^i}$ and then prove that this sum converges to $1$. I'd say that had non-trivial content, no? – lulu Jan 5 '16 at 15:41 • The point is to show that $0.999\ldots$ is equal to 1. There is meaning to defining infinitely repeating decimals. – Morgan Rodgers Jan 5 '16 at 15:46 • The first answer to the question you link to, with more than 250 upvotes, is likely to be as good an answer as you could possible get here, asking again. – Ethan Bolker Jan 5 '16 at 15:47 • $.999\ldots$ is not defined to be equal to $1$; it already has a definition as soon as decimal notation is defined. When we define decimal notation, we agree that $.a_1 a_2 a_2 \ldots \text{ means } a_1/10 + a_2/100 + a_3/1000 + \cdots$. It turns out that $.999... = 1$, but that's only a consequence of the definition of decimal notation. – littleO Jan 5 '16 at 15:47 • Otherwise, how can you talk about being equal? – Future Jan 5 '16 at 17:33 The OP asked whether one can assign any meaning to the symbol $0.999\ldots$ other than defining it to be $1$. That question cannot be answered without analyzing what informal pre-mathematical meaning is assigned to $0.999\ldots$, prior to interpreting it in a formal mathematical sense. This of course can only be known to the OP himself but judging from the level of the OP's questions the OP seems to be a student and perhaps a freshman; see, e.g., here. 
Now beginning students often informally describe this as "zero, dot, followed by infinitely many $9$s", or something similar. Such a description of course does not refer to any sophisticated number system such as the real numbers, since at this level the students will typically not have been exposed to such mathematical abstractions, involving as they do equivalence classes of Cauchy sequences, Dedekind cuts, and the like. It is also known that at this level, about $80\%$ of the students feel that such an object necessarily falls a little bit short of $1$.

The question is whether such intuitions are necessarily erroneous, or whether they could find a mathematically rigorous interpretation in the context of a suitable number system. An article by R. Ely in this publication in a leading education journal argues that such intuitions are not necessarily mathematically erroneous, because they can find a rigorous implementation in the context of a hyperreal number system, where a number with an infinite tail of $9$s can fall infinitesimally short of $1$, as outlined in a comment by user @GBeau on this page: namely, if $H$ is an infinite hypernatural then $\displaystyle\sum_{n=1}^H \frac{9}{10^n} = 0.999\ldots9$ where the digit $9$ occurs $H$ times. This is of course a terminating infinite string of $9$s, different from the one usually envisioned in real analysis, but it respects student intuitions and can be helpful in learning the calculus, as argued in Ely's fascinating study. The existence of such an interpretation suggests that we indeed do assume that such a string represents a real number when we prove that it necessarily equals $1$.

Note I. If one thinks of the infinite string as being represented by the sequence $0.9, 0.99, 0.999, \ldots$ then one can obtain an alternative interpretation as follows. Instead of taking its limit (which is by definition real-valued), one can take what Terry Tao refers to as its ultralimit, to obtain a number that falls infinitesimally short of $1$. These issues are dealt with in more detail in this recent publication. The challenging philosophical issue here is the idea that there are distinct ways of formalizing infinity in mathematics, and the possibility of an attendant ambiguity of the symbol in question. These issues were dealt with in more detail in this publication in a leading education journal.

Note II. A certain number of objections have been raised by a colleague who wishes to remain anonymous. Given below are the objections together with my responses.

(0) You have not provided a meaningful syntactic representation of $1/3$ in the hyperreals.

Well, $\dfrac13$ is the unending decimal $0.333\ldots$ (indexed by the hypernaturals). If truncated at infinite hypernatural rank $H$ this would produce a hyperrational falling infinitesimally short of a third, similarly to the $0.999\ldots{}$ situation.

(1) Nobody can legitimately disagree that hyperreals can be constructed via the ultraproduct of the reals $\bf{R}$ within $\sf{ZFC}$, which is the mainstream foundation for mathematics.

True, analysis with infinitesimals can be done over the hyperreals, as pointed out by Robinson in 1961. Alternatively, this can be done syntactically in the context of the ordinary real line, following Edward Nelson's approach.
Nelson's approach, called Internal Set Theory $(\sf{IST})$, involves enriching the language of set theory by the introduction of a single-place predicate $\textbf{st}$, as well as three additional axiom schemas governing its interaction with the other set-theoretic axioms. Here $\textbf{st}(x)$ reads "$x$ is standard". (2) Philosophically nobody has provided non-circular ontological arguments justifying $\sf{ZFC}$ (especially with replacement and choice). No logician, whether on Math SE or on Math Overflow or whom I have met, have done anything close to it. This is a much broader issue. It is possible that $\sf{ZFC}$ has serious flaws. Nonetheless it happens to be currently the standard against which much of modern mathematics is tested. This doesn't mean that we must accept it, but it does mean that such philosophical problems are no smaller for the reals than for the hyperreals (especially in view of Nelson's syntactic approach mentioned above). I accept various things such as consistency of $\sf{ZF}$ implying consistency of $\sf{ZFC}$, but consistency is quite irrelevant to soundness besides being necessary. Unless you're happy with just $\prod_1$-soundness. If the sound alternative is predicativism as developed by Sol Feferman and others, then certainly $\sf{ZF}$ is no less problematic than $\sf{ZFC}$. Practically speaking, $\sf{ZF}$ is not enough for some rather standard applications. Consider the following example: it is consistent with $\sf{ZF}$ that there exists a strictly positive real function with vanishing Lebesgue integral; see https://arxiv.org/abs/1705.00493 (3) The construction of the hyperreals is via the ultraproduct of the reals R. If you can construct the hyperreals, then you also can construct $\bf{R}$ and prove the usual second-order real axioms for $\bf{R}$. It would be self-contradictory to say that the properties of $\bf{R}$ (including $0.999... = 1$ suitably interpreted) are not intuitive and then claim that the hyperreals are intuitive. After all, we define an infinitesimal in the hyperreals as a nonzero sequence of reals that converges to zero... I wouldn't argue that the properties of the reals are not intuitive. Rather, what was explored in several articles in the recent literature is the possibility that there may be multiple approaches to interpreting the business with "a tail with an infinite number of $9$s", some of which may be helpful in harnessing student intuitions in a productive direction rather than merely declaring them to be erroneous. Incidentally, your definition of a hyperreal infinitesimal is not quite correct. An important distinction here is between procedures taught in a calculus class and set-theoretical justification (ontology of the entities involved) usually treated in an analysis course. This applies both to the reals and the hyperreals. (4) Let $\bf{R}^\ast$ be the hyperreals and $\varepsilon = 1 - 0.999\ldots$. You claim that $\varepsilon$ is nonzero in a suitable interpretation of $0.999\ldots$ Ignoring the fact that you cannot represent $1/3$ meaningfully in similar decimal form, I now present you another fact that you can't represent $\varepsilon/2$, not to say $\sqrt{\varepsilon}$. Wait, what does the latter even mean in the hyperreals. Can your students figure that out? Are you sure hyperreals are so intuitive now? I am not sure what you mean. Both $1/3$ and $\sqrt{\varepsilon}$ are well-defined in the hyperreals, simply by the transfer principle. 
As far as teaching the set-theoretic justification of the hyperreals in terms of the ultrapower, as I mentioned this belongs in a more advanced course, just like the set-theoretic justification of the reals. In contrast, asymptotic expansion can happily deal with $\sqrt{x}$ for any asymptotic expression $x$ that is non-negative. No trouble at all. $x^{1+x}$ for positive $x$? No problem.

All of these are well-defined over the hyperreals by the transfer principle.

• @Mikhail It might prove helpful to fully cite the mentioned articles so that readers have an immediate clue that they refer to scholarly publications (as opposed to web pages authored by cranks). Also it might prove helpful to establish this scholarly context at the beginning of the answer. This may help to discourage any further downvotes from users who have not invested proper effort to distinguish this fine answer from the many cranky answers that are often given for such FAQs. – Bill Dubuque Jun 9 '17 at 14:22

• I want to emphasize my +1 on this; it's great that this post emphasizes that it's discussing a terminating decimal with infinitely many $9$'s. So many expositions neglect to give any indication that they're talking about something different from a nonterminating decimal, which I think sabotages the education of people who haven't yet conceived just what limits (and similar) are. – Hurkyl Jun 9 '17 at 18:52

• For me, this is the answer that hits the nail on the head, namely that whether or not such a number equals one depends on the axiomatic system in which we are working. Therefore the answer is YES. In a given axiomatic system in which both symbols define elements of a set, one can legitimately ask for a proof that the elements they define are equal, when this is the case. – Daniel Moskovich Jun 16 '17 at 10:50

• What do you mean by "meaningful" here @user21820? (I thought that for a concept to be meaningful, having an instantiation of it was both necessary and sufficient, but since you write, "...but it does not imply that the notion is meaningful and has an instantiation," I presume that you use the terms in a different sense. Feel free to correct me if I am wrong.) – user 170039 Sep 12 '17 at 8:34

• @user21820: If we have some prior notion of "the reals", then we don't know that Z is enough to construct them. Z constructs an object it calls $\mathbb{R}$ that we can identify in any model of Z, but there is no reason whatsoever that that object should be "the reals". It may even be possible that there isn't any model of Z whose $\mathbb{R}$ is "the reals". – Hurkyl Sep 16 '17 at 1:15

We have to agree about what the symbols $$0.99999\dots$$ are supposed to mean. The symbols capture an intuitive idea, but it doesn't have meaning unless we agree on what that meaning is. When you write these symbols down, everyone will agree that what you mean is the following: $$.9 + .09 + .009 + \dots = \frac{9}{10} + \frac{9}{100} + \frac{9}{1000} + \dots = \sum_{i = 1}^\infty \frac{9}{10^i}$$ There is no proof of that -- it is an agreement. If you wrote down this, perhaps similar-looking, string of symbols $$0.00000\dots1$$ then there is no agreement on what you mean. You would seem to be talking about a real number which is smaller than all other real numbers -- an object that doesn't exist. So what string of symbols means what, rigorously, is a matter of agreement.
Usually we elide that fact when it seems intuitive what we mean, but we do not all share the same intuition (or the same knowledge of how to express that intuition), so we must drag it out on occasions like now. Anyway, the content of a proof of $0.9999\dots = 1$ is not that we must agree to define $0.9999\dots$ as $1$. The content is to define $0.9999\dots$ as the sum above, then by deduction show this sum is equal to $1$. • So, we are assuming beforehand that $0.999\ldots$ is a real number, right? – user 170039 Jan 5 '16 at 16:57 • @user170039: Yep. We're assuming it to mean $\sum_{i=0}^\infty \frac{9}{10^i}$, an infinite sum, which is a real number because it converges. – Eli Rose Jan 5 '16 at 17:05 • We aren't assuming. Before we even think about this we define what real numbers are and prove the real number system exists (very analytical and obscure). By the proof and construction of what real numbers are, it comes out that all bounded sets of rational numbers have limits and these limits are always real numbers. {.9, .99,.999, etc.} is one of these bounded sets or rational numbers and $.999\overline9$ is the notation of its limit. And therefore we know $.999\overline9$ is a real number because all limits are real numbers and all bounded sets have limits. – fleablood Jan 14 '16 at 10:53 • @EliRose I believe you meant to start your sum at $i=1$. – GPhys Jan 14 '16 at 11:14 • @user72694: I agree -- this question was marked as duplicate, but the question it's putatively a duplicate of is about proving that $0.999\dots = 1$, not about whether it makes sense to do so. Voted to reopen. – Eli Rose Jan 14 '16 at 18:52 Many of the OP's questions in the comments (both to his own question and to Eli Rose's answer) keep circling back to the question "Are you assuming that $0.999\dots$ is a real number"? The answer is no, we are not assuming it -- it can be proven. More generally, the following theorem can be proven: Let $(a_1,a_2,a_3,\dots)$ be any sequence of numbers where each $a_i$ is chosen from the set $\{0,1,2,\dots,9\}$. Then the sequence $$0.a_1, \space 0.a_1a_2, \space 0.a_1a_2a_3,\dots$$ converges to a unique real number. Again I want to stress that the theorem above is not assumed; it can be proven. The notation $0.999\dots$ denotes the unique real number that is the limit of the sequence $$0.9, \space 0.99, \space 0.999, \space 0.9999\dots$$ This is just an individual instance of the general case considered in the theorem. We know that such a limit exists by the theorem , so there is no need to assume that $0.999\dots$ is a real number. Once we know that $0.999\dots$ is a real number, and that in particular it is the limit of the sequence above, we can observe that this particular sequence converges to $1$. Since the theorem says that the limit of the sequence is unique, that proves that $0.999\dots \space = 1$. • By first proving a general theorem and then introducing a notational convention. – mweiss Jan 6 '16 at 15:30 • (1) Prove the theorem I highlighted in my box. We now know that there is a real number that represents the limit of a certain sequence. – mweiss Jan 7 '16 at 16:07 • (2) Introduce the notational convention that an "infinite decimal" represents the limit of that sequence. – mweiss Jan 7 '16 at 16:08 • (3) Now prove that the limit of the particular sequence under consideration is 1. – mweiss Jan 7 '16 at 16:08 • At no point in this process is anything being assumed to exist. 
– mweiss Jan 7 '16 at 16:08 I understand that this question is more than 1.5 years old, and I presume you now know how the structure of the real numbers is constructed in mathematics and proven to satisfy the second-order completeness axiom, which can then be used to define $0.99\overline9$ and prove that it is equal to $1$. But in the interest of future readers, here is a similar looking question that may give insight into why this question stems from a conceptual misunderstanding: I have seen many proofs of $1+2+\cdots+n = \frac12 n(n+1)$ for natural $n$. Now suppose you know that $\frac12 n(n+1)$ is an integer. Now I give you an expression, say $1+2+\cdots+n$ which from now on I will denote as $x$. Now I ask you whether $x$ is an integer. To answer this, if you define $x = \frac12 n(n+1)$ then you are already attributing the properties of $\frac12 n(n+1)$ to $x$ among which one is it being an integer without proving whether to $x$ we can indeed attribute the properties of $\frac12 n(n+1)$. A natural response to this question is to define the expression $x$ as the sum of all the integers from $1$ to $n$, and then prove that this sum is indeed $\frac12 n(n+1)$. But again the problem is that you are defining the expression $x$ to be an integer and hence are assuming a priori that the expression $x$ denotes an integer. My impression that if you assume $1+2+\cdots+n$ to be an integer then there is no sense in proving that $1+2+\cdots+n$ is indeed equal to $\frac12 n(n+1)$ because either you define it to be $\frac12 n(n+1)$ or you prove the equality as a theorem. But if you use the summation definition of $1+2+\cdots+n$ then you are a priori assuming it to be an integer, an assumption that I don't allow. What can be allowed is that $1+2+\cdots+n$ is a number (but not necessarily an integer). My question is, does it make any sense to prove this equality? Can one give any "meaning" of the expression $1+2+\cdots+n$ other than defining it to be $\frac12 n(n+1)$? I hope it is clear where this goes wrong: • We indeed define the expression "$1+2+\cdots+n$" to have a certain value, in a certain way (using recursion). For this we need to work in a foundational system that can construct the needed recursion and then prove the existence of a function that satisfies the recursion, and then define the value of "$1+2+\cdots+n$" according to that function. It then is non-trivial to prove that this value is $\frac12 n(n+1)$. • We never assume a priori that the expression "$1+2+\cdots+n$" denotes an integer. Defining "$1+2+\cdots+n$" by the summation does not assume that the sum is an integer. As in the first point, the "summation" here is the recursive construction, which we can easily prove yields a function from naturals to integers, and so in particular "$1+2+\cdots+n$" has integer value. • It does not make sense to say "What can be allowed is that $1+2+\cdots+n$ is a number (but not necessarily an integer).". Why? Because even if you have defined what "number" means, how can you 'allow' some arbitrary expression to be a number? As in the first point, we define the value of the expression, and whether or not it is an integer is not up to us. Similarly: • We can define "$0.99\overline9$" to denote the unique real $x$ that is a lowest upper bound for the set $\{0,0.9,0.99,\cdots\}$. You can ask how we define "$\{0,0.9,0.99,\cdots\}$". Exactly the same kind of recursion as in the previous analogy. You can further ask how we define things like "0.99". Again, by some suitable recursion. 
After all, we cannot define decimal notation without recursion. This definition of "$0.99\overline9$" is valid, because after the standard construction of the structure of the reals, we can prove the second-order completeness axiom, which gives us the theorem that there is indeed a unique such $x$... • One might ask whether it is valid to assign values to expressions based on objects that we have proven to exist. We can do so with no qualms if we can uniquely identify each object that we wish to assign to each expression. In logic this is equivalent to asking whether we can extend a first-order theory by a function-symbol (for the value assignment function) if we can prove that there is a $2$-parameter property $P$ such that for every input $i$ from the desired domain there is a unique object $j$ such that $P(i,j)$ is true. This is not only possible (see here for the technical details), but also yields a conservative extension, so we are not making any more philosophical commitment than we already did in using the original system. • If you think that the above definition of "$0.99\overline9$" is not intuitive, here is another one. Define "$0.99\overline9$" as the unique real $x$ that lies in all the intervals $[0,1],[0.9,1.0],[0.99,1.00],\cdots$. After all, any layman that has read about $π$ 'knows' that "$3.14...$" denotes something that lies between $3.14$ and $3.15$ inclusive, and that having more digits will narrow down that interval. It turns out that this 'more intuitive' definition is 'more demanding' than the earlier one, since ignoring the upper endpoints of the sequence of intervals gives the earlier one. If anyone objects that my analogy is inaccurate because "everyone knows what $1+2+\cdots+n$ means", it just shows that they themselves do not know how to precisely define it. Every rigorous definition of "$1+2+\cdots+n$" must invoke recursion. Every rigorous definition of "$0.99\overline9$" that captures the notion of the infinite decimal expansion must likewise invoke recursion. The second definition of "$0.99\overline9$" using narrowing intervals has the pedagogical advantage that it is far easier to understand each decimal as an approximation scheme for reals. A decimal could be understood as a oracle that spits out one digit at a time, each of which puts a more precise bound on the 'real' value. Furthermore, it is natural to consider computable decimals, namely those digit oracles that are programs. One can then see that there is a crucial distinction between decimals and the reals they represent; clearly "$0.99\overline9$" and "$1.00\overline0$" are represented by different digit oracles, and whether they approximate the same 'real' value is a totally separate question. By the way, to explicitly address the notion that the idea that $0.99\overline9 < 1$ can be 'correct' with a different definition of "$0.99\overline9$", note that if it were so, then one would naturally expect to have $0.33\overline3 = 0.99\overline9 \div 3 < 1 \div 3 = \frac13$ since there is exact division of each digit. But then $\frac13$ would not have a decimal representation. How weird... It may be possible to 'fix' this somehow, but any 'fix' is going to be weirder and less intuitive than $0.99\overline9 = 1$. Try it! Finally, mathematicians do not define "$0.99\overline9$" to be $1$, because it is as meaningless as defining it to be $2$. 
However, if you choose to define "$0.99\overline9$" as $1$ then it becomes non-trivial that there is no upper bound for $\{0,0.9,0.99,\cdots\}$ that is smaller than $0.99\overline9$. So whatever way you pick, the fact remains that there is a non-trivial theorem that corresponds to $\sum_{k=1}^n 9·10^{-k} \to 1$ as $n \to \infty$. • Concerning rigorous recursion, see here. – user21820 Sep 8 '17 at 13:31 • You write "By the way, to explicitly address the notion that the idea that $0.99\overline9 < 1$ can be 'correct' with a different definition of "$0.99\overline9$", etc." This is quite correct: one cannot define the unending decimal $0.99\overline9$ in any other way, whether in the reals or the hyperreals. However, it seems to me that you are changing the subject--or moving the goalpost--here. The OP is clearly not referring to the unending decimal $0.99\overline9$ which is certainly uniquely defined and equals $1$, but rather to a broader discussion of what kind of meaning to be assigned... – Mikhail Katz Sep 10 '17 at 14:47 • ... to "a tail of an infinite number of 9s". Given that there is a whole literature devoted to this subject and its usefulness in harnessing students' intuitions in a constructive direction, your stomping the ground a little harder does little to advance the debate. @user170039 – Mikhail Katz Sep 10 '17 at 14:48 • @MikhailKatz: You said in your own answer "Instead of taking its limit (which is by definition real-valued), one can take what Terry Tao refers to as its ultralimit, to obtain a number than falls infinitesimally short of 1.", which explicitly states that there is an interpretation of "0.999..." that makes "0.999... < 1" true. I did not move any goalpost; I explicitly mention that to show that any such alternative interpretation is not intuitive. Nowhere did I say in my answer that the asker believed that, so what you thought (seems to be moving goalpost) is simply false. – user21820 Sep 11 '17 at 11:54 • What's involved here is the distiction between procedure on the one hand and (set-theoretic) ontology, on the other, as I mentioned in the extended version of my answer. I can elaborate if you like. – Mikhail Katz Sep 12 '17 at 7:39 Note that $0.99999\dots$ means $\dfrac 9 {10} + \dfrac 9 {100} + \dfrac 9 {1000} + \dots = \sum \limits _{n=1} ^\infty \dfrac 9 {10^n}$. Now, in your opinion, does it make sense to "define" this series to be $1$? Of course not, because otherwise we may "define" anything to be anything. For instance, we may "define" $\sum \limits _{n=1} ^\infty \dfrac 1 n$ to be $3$; does this look correct to you? Does it make sense to show that the above series converge and that its limit is $1$? Yes, of course, it"s a simple exercise in real analysis. Therefore, it does make sense to prove that $0.99999\dots = 1$. • Surely "we may "define" $\sum \limits _{n=1} ^\infty \dfrac 1 n$ to be $3$". But I don't understand what you wanted to mean when you said "does this look correct to you?" What do you mean by looking "correct" here? – user 170039 Jan 15 '16 at 13:49 • @user170039: No, of course you can't "define" it, because $\sum \limits _{n=1} ^\infty x_n$ is already defined as $\lim \limits _{N \to \infty} \sum \limits _{n=1} ^N x_n$. Given that $\lim \limits _{N \to \infty} \sum \limits _{n=1} \frac 1 n = \infty$, defining $\lim \limits _{N \to \infty} \sum \limits _{n=1} ^\infty \frac 1 n$ to be $3$ will lead to the contradiction $3 = \infty$. 
To conclude: given that $\sum \limits _{n=1} ^\infty \dfrac 9 {10^n}$ already has a meaning as a convergent series, it is equal to $1$, not defineable to be $1$! – Alex M. Jan 15 '16 at 15:31 • I can "define" it. What you are saying is that whether my definition is "consistent" or not. But that has no (at least it seems so to me) relation to the "definition" itself. – user 170039 Jan 15 '16 at 15:39 • Also, why "$0.999\ldots$ means $\dfrac{9}{10}+\dfrac{9}{10}+\dfrac{9}{10}+\ldots=\displaystyle\sum_{n=1}^\infty \dfrac{9}{10^n}$"? – user 170039 Jan 15 '16 at 15:41 As far as I understand the original question, the question has not been answered. Not that the other posts are wrong but they always touch another topic. Actually two questions were asked: Does it make sense to prove $0.999\ldots=1$? Can one give any "meaning" of the symbol $0.999\ldots$ other that defining it to be 1? Regarding the first question: I'll understand it in the way "Why are we interested in this?" Mathematicians are often interested if something is unique in this case the decimal representation. $0.999\ldots=1$ proves by example that the decimal representation is not unique. Now let's consider the second question. Yes we could give it another meaning, bacause we can define anything we want. But if we define it in another way without changing other definitions we obtain a contradiction. Could we change the definitions in a way that $0.999\ldots \neq 1$ and they still are reasonable in a way? I would say probably yes but I'm no expert in this matter. The Wikipedia article about $0.999\ldots$ states following: The equality of 0.999… and 1 is closely related to the absence of nonzero infinitesimals in the real number system, the most commonly used system in mathematical analysis. Some alternative number systems, such as the hyperreals, do contain nonzero infinitesimals. In most such number systems, the standard interpretation of the expression 0.999… makes it equal to 1, but in some of these number systems, the symbol "0.999…" admits other interpretations that contain infinitely many 9s while falling infinitesimally short of 1. You might want to read the wikipedia article in more detail. Can we assign a meaning to $0.999...$? $1=0.999...$ may be written as \begin{align} 1&=\frac{9}{10}\sum_{k=0}^\infty \frac{1}{10^k}\\ &=\left(1-\frac{1}{10}\right) \sum_{k=0}^\infty \frac{1}{10^k}\\ \end{align} This is case $r=\frac{1}{10}$ of the more general $$1=(1-r)\sum_{k=0}^\infty {r}^k, |r|<1$$ The product $(1-p)p^k$ has probabilistic interpretations. For instance, in the context of Nondeterministic finite automata, it is the probability of exiting a state that has self-transition probability $p$ after exactly $k$ self-transitions. Summing for all nonnegative integers represents the probability of exiting after any possible number of self-transitions. Therefore, the question about $.999...=1$ may be rewritten as What is the probability that a state with self-transition probability $p=\frac{1}{10}$ is exited? • Indeed, it might be an infinitesimal probability, as explored in a number of recent articles. Your approach answers the question in the negative only if you assume that there are no infinitesimals in the number system being used. – Mikhail Katz Jun 7 '17 at 7:53 • An infinitesimal probability that it is never exited, right? – Jaume Oliver Lafont Jun 7 '17 at 8:08 • Right. 
If your model of the process is that there is an infinite number (in the sense of Robinson's framework, for instance) of simultaneous tries then indeed there will be a positive infinitesimal probability that it does not exit. – Mikhail Katz Jun 7 '17 at 8:13 • I admit I should read those articles before, but just for intuition, let me ask: is that infinitesimal probability any similar to the probability of picking exactly $\frac{1}{2}$ when choosing randomly a real number in $(0,1)$ under uniform distribution? It is possible, not impossible as picking, say, $\frac{3}{2}$. – Jaume Oliver Lafont Jun 7 '17 at 8:22 • Jaume, very good question. The answer is negative: it is not similar. Rather, it is similar to picking a number in an infinitesimal interval centered on $\frac{1}{2}$. If you are intrigued by these ideas I would recommend Keisler's excellent textbook Elementary Calculus as a first try. – Mikhail Katz Jun 7 '17 at 8:25 We don't simply define $0.9999....$ to be equal to 1. We do a lot of background analysis first which involves constructing the real numbers out of the rational numbers. In very short summary: Consider this basic fact about rational numbers: no matter how close together two rational numbers are we can find a rational number between them (which also means for any value greater than zero, no matter how small, you can find a smaller one between it and zero). This is nice. It means we can get as close as we like to any value using rational numbers. Now consider this monkey wrench: There are many values we can't express (for example: pi and the square root of 2). [This is weird because we can get as close as we like to to these values by the statement above.] And consider this horrible result: Between any two rationals no matter how close we get, there is always one of the inexpressible holes between them! So how can we think of these ... irrational values... Well, a lot of subtle analysis and picayune debating we notice that in the rational numbers we can have an infinite set of rationals, where all the rationals are within a range, but the set needn't have a biggest value. For example: {all the rational numbers less than pi}; all these values are less than 3 1/3 so it's bounded, but there is no precise rational upper bound to this set. (Another such set is all the numbers $.9, .99, .999, .9999$ etc. It's infinite but each term is rational. It has no biggest value. And all values are less than 1.) So the issue was can we come up with a bigger system of numbers than the rationals, where every set will have a precise upper bound? Mathematicians decided they could and it was called the Real numbers[1]. Now here is the definition of the real numbers and how the were created (it's very subtle): Every real number is the least upper bound limit of a bounded set of rational numbers. And every bounded set of rational numbers has a real number as an upper bound. That's the definition of the real numbers. So {.9, .99, .999, .9999,.....} is an bounded infinite set of rational numbers. By definition it has a real number least upper bound limit. We call that real number .999999999999...... Okay, we have defined .99999...... and we DIDN'T define it as 1. So now we can prove that it does equal 1. The general idea is that if .99999......= c < 1 then can find a finite number in {.9, .99, .999, .9999,.....} that is bigger than c. So we were wrong about c being an upper bound of {.9, .99, .999, .9999,.....} So .9999....... $\ge$ 1. 
But 1 is bigger than any of the {.9, .99, .999, .9999,.....}. So .99999....... $\not >$ 1. The only consistent option left is .99999.... = 1. [1] Well, they didn't just wave their hands and declare it. They had to prove constructing such a number system was possible. The proof is ... abstract. And tedious. And abstract.

# ===== 2nd answer: Different approach and philosophy====

The OP has a point that we don't really "need" to prove .999.... = 1. If .999... equals anything at all, showing that it must be 1 is trivially easy. The idea of proving .999.... = 1 misses the point. The point really is how do we know that .999.... equals anything at all. The OP isn't entirely accurate in stating we defined .999... to be 1. We defined .999.... to be something that can be shown to be 1. (Slight difference but a difference with ramifications.) Other comments claim we make an assumption via definitions that .999... is a limit of a bounded sequence of rational numbers and another assumption via definition that limits of bounded sequences of rational numbers are real numbers. These aren't entirely accurate either. The definitions weren't assumptions. They were analysis as to what the real numbers are, and the discoveries came along the way.

Suppose we know nothing about the real numbers. ... well, not nothing, but suppose we have no real sense of them. With or without real numbers we do have to define $0.9999....$ as $\sum_{n = 1}^{\infty} \frac{9}{10^n}$. But we don't really know anything about it. It's possible that it's unbounded and "blows up". (Actually, we can show that can't happen but I don't want to get into that yet.) It's possible that it adds up to an actual number. It could be rational or it could be one of those numbers that can't be written as a rational, like the square root of two or pi. But the natural concern is that it might simply never resolve into anything. To me, the surprising and subtle thing about analyzing the real numbers is the realization that the very nature of the real numbers means that this "simply never resolving into anything" is impossible. Everything in the reals that is bounded resolves to something, and every real is a resolution of something. This really astounded me when I finally wrapped my head around it. I mean it really really surprised me! (Yes, I'm that much a geek.)

Okay, so this leads to the fundamental question: Real Numbers. What the Heck are they? We know that we can measure discrete units via integers, like $n$. And we know we can chop these discrete units into $m$ pieces as $1/m$, and as $m$ can be arbitrarily large, these $1/m$ can be arbitrarily precise, and this collection of ratios, so all the possible $n/m; m \ne 0$, form a system of arbitrarily precise measurements that span every possible range. So these Rationals can measure anything to arbitrary precision. That ought to be enough. But it isn't. Because we know there are always numbers like $\sqrt{2}$ and $\pi$ that can't be written as any Rational = $m/n$ where $m$ and $n$ are integers. So we have gone for a thousand years with a number system that we don't really understand and can't describe. What are these "holes" and what do we know about them?

Enter the Dedekind cut. An example: Let's "cut" the rationals into two sets. Set $A =$ {all the rational numbers $r$ such that $r \le 0$ or $r^2 < 2$} and $B =$ {all the positive rational numbers $s$ such that $s^2 > 2$}.
Several things we note about these sets: 1) they are completely disjoint and neither are empty; 2) all the elements or 1 are smaller than all the elements of the other; 3) A does not have a largest element and B does not have a smallest element; and 4) They are arbitrarily close together. (That is, for any number $e > 0$ no matter how small we can always find an $a \in A$ and a $b \in B$ so that $b - a < e$. Always.) Any "procedure" that can cut the Rational numbers into two sets with those 4 properties is called a "cut" and we can, for sake of vocabulary, refer to such cut as $\overline x$ and the two sets as $A\overline x$ and $B\overline x$. And we can consider the collection of all such cuttings. One thing is about these cuts is that we don't necessarily need to know how to describe them for them to exist. One such cut could be $\overline \omega$ such that $A\overline \omega =$ {all rationals less than some .99...9} and $B\overline \omega =$ {all rationals larger than all .99...9}. [That's not actually true and I pulled a fast one on you. A brownie point to any one who can figure out my deception.] Now the astute reader might have noticed that although these cuts cut the rational in two, the two sets don't always contain all the rationals. Ex: A = {all rationals < 27 1/2} and B = {all rationals > 27 1/2} is a cut. But the rational number 27 1/2 is not in either A nor B. This is okay. The sets don't have to contain all the rationals. But notice, 27 1/2 is the only rational that is not in either set. We can say a cut like that one cuts the rationals "on" a rational number and others cut the rationals "between" rational numbers. There is a correspondence between the rational numbers and the cuts that cut on the rational numbers and we can refer to such a cut as $\overline{1/2}$ as the cut that cuts on 1/2. The other cuts that cut between rational numbers are extra cuts. So... back to the collection of all possible cuts... They form a number system. For two cuts $\overline x$ and $\overline y$ we say $\overline x < \overline y$ if $A\overline x \subset A\overline y$. We note that if we define $\alpha$ = $A\overline x + A\overline y =$ {all rationals that are sums of two rationals one in $A\overline x$ and the other in $A\overline y$} and $\beta$ = {all the rationals that are bigger than all the rationals in $\alpha$}, then $\alpha$ and $\beta$ perform a cut. [Again, I'm pulling a fast one. Another brownie point to whoever finds it. It's the same as the fast one I pulled above.] We define $\overline x + \overline y$ as finding that type of cut. Similarly we define subtraction, multiplication, and division. (Divisions kind of a pain to define but we do it.) So the collection of all cuts form a number system. What's more, the cuts that cut "on" rational numbers behave equivalently as the rational numbers do. These "cuts" are the real numbers. Every real number is a point at which we "cut" the rational numbers into two sets. If we cut "on" a rational number that point is the rational number, if we cut "between" rational numbers that point is some irrational number. That's it. That's what the heck Real Numbers are. ... deep sigh and coffee break.... Okay, WHAT THE #@&! DOES THAT HAVE TO DO WITH .9999....? Notice that we now have discovered (NOT "defined"; NOT "assumed") that every real number cuts the rationals in two. So every bounded sequence of rationals can be fit into the $A\overline x$ set of some cut $\overline x$ and we can find precisely the lowest cut that will do that. 
And that cut point is a precise and unique real number. Thus the FUNDAMENTAL PROPERTY OF REAL NUMBERS!!!! Every real number is a limit of a sequence of rational numbers and every bounded sequence of rational numbers has a real number limit. And this real number is the point that cuts the rational numbers at precisely the "edge" of the sequence.

So let's look at $.9999...... = \sum_{n = 1}^{\infty} \frac{9}{10^n}$ and let's look at this bounded sequence of rational numbers: {.9; .99; .999; etc.} and let's look at the number $1$ which is larger than all the elements of the sequence. So we know: 1) There is a cut $\overline {\omega}$ where $A\overline{\omega}$ = {all the rationals that are less than or equal to some .999...9}; 2) This cut occurs at a real number, $\omega$; 3) $\omega = \lim$ of the sequence {.9; .99; .999; etc.}; 4) .999.... "lingers" between the elements of $A\overline {\omega}$ and the elements of $B\overline{\omega}$. Therefore we can conclude $.999.... = \omega$. And that's it: $.9999..... = \sum 9/10^n = \lim \{.9; .99; .999; ...\}$ is a real number. Now it's trivial to prove $.99999..... = 1$. And there was not a single assumption or circular definition.

• Well, to be subtle... analysis has proven that all bounded sequences of rationals get closer and closer to one and precisely one real number. And analysis has shown an infinite sum of decimals is the limit of the sequence. So we know that the sequence gets closer to exactly one number and we know that what the number is, is .9999... What remains is to show that this number happens to be 1. – fleablood Jan 14 '16 at 18:26
• To say ".9, .99, .9999 gets closer and closer to 1 so if we do it an infinite number of times we say that is 1" is a bit like saying "Well, we put a cat in a box and we don't know if the cat is alive or dead so we say the cat is both". They miss the basic point and mislead into thinking everything is an estimate. – fleablood Jan 14 '16 at 18:29
• Put much simpler: x = 0.999..., 10x = 9.999..., 9x = 9, x = 1. – user117644 Jan 14 '16 at 22:06
• "With or without real numbers we do have to define $0.9999$ as $\displaystyle\sum \dfrac{9}{10^n}$"-why? – user 170039 Jan 15 '16 at 14:11
• One important thing. We never started talking about infinite decimals until AFTER analysts such as Dedekind had defined and proven the reals were such a system with the least upper bound, so that all sequences of rationals that infinitely approach have real limits. It's only AFTER that, that it was determined that infinite decimals were meaningful and well-defined and !*useful*! They were shown to be a method to thoroughly describe the reals. Unfortunately, they became a magic mantra to elementary school teachers and befuddled students. – fleablood Jan 15 '16 at 19:01

My answer is not essentially different from some of the ones presented above, except for some notation. My apologies beforehand. Let $X = \{0, 1, 2, 3, \cdots, 9\}^{\mathbb{N}}$ ($\mathbb{N} = \{1, 2, \dots\}$) where $X$ is equipped with the product topology. $X$ is a compact (metrizable) space, the space of "decimals". Define a "canonical" continuous surjective map $\varphi: X \rightarrow [0, 1]$ by $\{r_k\}_{k \in \mathbb{N}} \mapsto \sum_{k=1}^\infty r_k/10^k$. The map $\varphi$ is not injective since, for instance, $\varphi(5,0,0,0,\ldots) = \varphi(4,9,9,9,\ldots)$.
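To make that last non-injectivity claim concrete, the standard geometric-series computation behind it is
$$\varphi(4,9,9,9,\ldots)=\frac{4}{10}+\sum_{k=2}^{\infty}\frac{9}{10^{k}}=\frac{4}{10}+\frac{9/100}{1-1/10}=\frac{4}{10}+\frac{1}{10}=\frac{1}{2}=\varphi(5,0,0,0,\ldots),$$
which is exactly the $0.99\overline9 = 1$ phenomenon shifted one digit to the right.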
### No 'Form-Based Filters' Icon in Form Navigation Bar

I have a standalone form connected to a database with the navigation bar inserted into the form itself. However when I open the standalone form and use it, there is no icon for the 'Form-Based Filters' function which I use heavily. If I open my database file, and open the Form listed within that file, I have the options from the Form Navigation Toolbar in the window. I attached two images below. Is this a bug / intentional / and any fix or explanation?
# flatspin

flatspin is a GPU-accelerated simulator for systems of interacting nanomagnet spins arranged on a 2D lattice, also known as Artificial Spin Ice (ASI). flatspin can simulate the dynamics of large ASI systems with thousands of interacting elements. flatspin is written in Python and uses OpenCL for GPU acceleration. flatspin comes with extra bells and whistles for analysis and visualization. Some example ASI systems are shown below:

flatspin is open-source software. You are free to modify and distribute the source-code under the GPLv3 license. flatspin is developed and maintained by an interdisciplinary group of researchers at NTNU.

If you use flatspin in any work or publication, we kindly ask you to cite: “flatspin: A Large-Scale Artificial Spin Ice Simulator”, Phys. Rev. B 106, 064408 (2022).

```
@article{Flatspin2022,
  title = {flatspin: A large-scale artificial spin ice simulator},
  author = {Jensen, Johannes H. and Str\o{}mberg, Anders and Lykkeb\o{}, Odd Rune and Penty, Arthur and Leliaert, Jonathan and Sj\"alander, Magnus and Folven, Erik and Tufte, Gunnar},
  journal = {Phys. Rev. B},
  volume = {106},
  issue = {6},
  pages = {064408},
  numpages = {17},
  year = {2022},
  month = {Aug},
  publisher = {American Physical Society},
  doi = {10.1103/PhysRevB.106.064408},
```
# zbMATH — the first resource for mathematics ## Gillespie, James Compute Distance To: Author ID: gillespie.james Published as: Gillespie, James Homepage: https://phobos.ramapo.edu/~jgillesp/ External Links: MGP · Wikidata Documents Indexed: 25 Publications since 2004 Reviewing Activity: 1 Review #### Co-Authors 20 single-authored 2 Estrada, Sergio 1 Bravo, Daniel 1 Hovey, Mark A. 1 Odabaşı, Sinem all top 5 #### Serials 3 Communications in Algebra 3 Journal of Pure and Applied Algebra 3 Homology, Homotopy and Applications 2 Transactions of the American Mathematical Society 2 Journal of Homotopy and Related Structures 1 Mathematical Proceedings of the Cambridge Philosophical Society 1 Rocky Mountain Journal of Mathematics 1 Advances in Mathematics 1 Annali di Matematica Pura ed Applicata. Serie Quarta 1 Bulletin of the London Mathematical Society 1 Fundamenta Mathematicae 1 Journal of Algebra 1 Mathematische Zeitschrift 1 Proceedings of the Edinburgh Mathematical Society. Series II 1 Proceedings of the Royal Society of Edinburgh. Section A. Mathematics 1 Frontiers of Mathematics in China #### Fields 22 Category theory; homological algebra (18-XX) 14 Algebraic topology (55-XX) 5 Associative rings and algebras (16-XX) 2 Algebraic geometry (14-XX) 1 Commutative algebra (13-XX) #### Citations contained in zbMATH 24 Publications have been cited 407 times in 181 Documents Cited by Year The flat model structure on $$\mathbf{Ch}(R)$$. Zbl 1056.55011 Gillespie, James 2004 Model structures on modules over Ding-Chen rings. Zbl 1231.16005 Gillespie, James 2010 Kaplansky classes and derived categories. Zbl 1134.55016 Gillespie, James 2007 Model structures on exact categories. Zbl 1315.18019 Gillespie, James 2011 Cotorsion pairs and degreewise homological model structures. Zbl 1140.18011 Gillespie, James 2008 Gorenstein complexes and recollements from cotorsion pairs. Zbl 1343.18013 Gillespie, James 2016 The flat model structure on complexes of sheaves. Zbl 1094.55016 Gillespie, James 2006 Gorenstein model structures and generalized derived categories. Zbl 1230.18008 Gillespie, James; Hovey, Mark 2010 Absolutely clean, level, and Gorenstein AC-injective complexes. Zbl 1346.18021 Bravo, Daniel; Gillespie, James 2016 How to construct a Hovey triple from two cotorsion pairs. Zbl 1316.18012 Gillespie, James 2015 Hereditary abelian model categories. Zbl 1372.18001 Gillespie, James 2016 On Ding injective, Ding projective and Ding flat modules and complexes. Zbl 1443.16006 Gillespie, James 2017 The flat stable module category of a coherent ring. Zbl 1392.16012 Gillespie, James 2017 Exact model structures and recollements. Zbl 1386.18041 Gillespie, James 2016 The derived category with respect to a generator. Zbl 1342.18021 Gillespie, James 2016 The homotopy category of $$N$$-complexes is a homotopy category. Zbl 1310.18006 Gillespie, James 2015 The projective stable category of a coherent scheme. Zbl 1423.18053 2019 Models for homotopy categories of injectives and Gorenstein injectives. Zbl 1373.18007 Gillespie, James 2017 Pure exact structures and the pure derived category of a scheme. Zbl 1396.18006 Estrada, Sergio; Gillespie, James; Odabaşi, Sinem 2017 Models for mock homotopy categories of projectives. Zbl 1346.14041 Gillespie, James 2016 Duality pairs and stable module categories. Zbl 1409.13034 Gillespie, James 2019 AC-Gorenstein rings and their stable module categories. Zbl 1437.16010 Gillespie, James 2019 On the homotopy category of AC-injective complexes. 
Zbl 1397.18035 Gillespie, James 2017 Gorenstein AC-projective complexes. Zbl 1408.18034 Gillespie, James 2018
#### Cited by 132 Authors
29 Liu, Zhong-kui 19 Gillespie, James 17 Estrada, Sergio 16 Yang, Gang 14 Liang, Li 14 Yang, Xiaoyan 11 Di, Zhenxing 8 Šťovíček, Jan 7 Ding, Nanqing 7 Hu, Jiangsheng 7 Lu, Bo 6 Hafezi, Rasool 6 Iacob, Alina C. 6 Wang, Zhanping 6 Zhang, Xiaoxiang 5 Chen, Wenjing 5 Guil Asensio, Pedro A. 5 Odabaşı, Sinem 5 Pérez, Marco A. 4 Asadollahi, Javad J. 4 Bahiraei, Payam 4 Bazzoni, Silvana 4 Geng, Yuxian 4 Ren, Wei 4 Thompson, Peder 4 Trlifaj, Jan jun. 4 Winther Christensen, Lars 4 Xu, Aimin 3 Gao, Zenghui 3 Holm, Henrik 3 Huang, Zhaoyong 3 Li, Zhiwei 3 Positselski, Leonid Efimovich 3 Torrecillas Jover, Blas 3 Wang, Junpeng 3 Yang, Chunhua 3 Zhao, Tiwei 2 Alonso Tarrío, Leovigildo M. 2 Becerril, Víctor 2 Chen, Jianlong 2 Cortés-Izurdiaga, Manuel 2 Dalezios, Georgios 2 Di Brino, Gennaro 2 Enochs, Edgar E. 2 Fu, Xianhui 2 Groth, Moritz 2 Hosseini, Esmaeil 2 Jeremías López, Ana 2 Jørgensen, Peter Bjørn 2 Mao, Lixin 2 Mendoza, Octavio 2 Pérez Rodríguez, Marta 2 Pištalo, Damjan 2 Poncin, Norbert 2 Prest, Mike 2 Ren, Wei 2 Santiago, Valente 2 Šaroch, Jan 2 Vahed, Razieh 2 Vale Gonsalves, María J.
2 Virili, Simone 2 Zhang, Chunxia 2 Zhang, Dongdong 2 Zhao, Renyu 2 Zhongkui, Liu 2 Zhu, Haiyan 1 Asgharzadeh, Mohsen 1 Bahlekeh, Abdolnaser 1 Becker, Hanno 1 Bravo, Daniel 1 Bravo, Diego 1 Cheng, Haixia 1 Crivei, Septimiu 1 Da, Xuanshang 1 Dehghanpour, Tahereh 1 Divaani-Aazar, Kamran 1 Du, Ruijuan 1 Eshraghi, Hossein 1 Hassoun, Souheila 1 Herbera, Dolors 1 Herzog, Ivo 1 Hörmann, Fritz 1 Hovey, Mark A. 1 Hu, Kui 1 Jenda, Overtoun M. G. 1 Jiang, Qinghua 1 Li, Yunxia 1 Liang, Li 1 Lim, Jung Wook 1 Liu, Haiyu 1 Liu, Yifu 1 Liu, Yu 1 Makkai, Michael 1 Nakamura, Tsutomu 1 Nakaoka, Hiroyuki 1 Neeman, Amnon 1 Nematbakhsh, Ali 1 Nuiten, Joost Jakob 1 Peng, Jie 1 Rada, Juan ...and 32 more Authors all top 5 #### Cited in 47 Serials 35 Communications in Algebra 17 Journal of Algebra and its Applications 15 Journal of Algebra 14 Journal of Pure and Applied Algebra 12 Advances in Mathematics 6 Glasgow Mathematical Journal 5 Algebra Colloquium 5 Acta Mathematica Sinica. English Series 5 Journal of Homotopy and Related Structures 4 Rocky Mountain Journal of Mathematics 4 Proceedings of the American Mathematical Society 4 Frontiers of Mathematics in China 3 Proceedings of the Edinburgh Mathematical Society. Series II 3 Rendiconti del Seminario Matematico della Università di Padova 3 Bulletin of the Iranian Mathematical Society 3 Selecta Mathematica. New Series 3 Journal of the Australian Mathematical Society 3 Bulletin of the Malaysian Mathematical Sciences Society. Second Series 2 Quaestiones Mathematicae 2 Transactions of the American Mathematical Society 2 Bulletin of the Korean Mathematical Society 2 Forum Mathematicum 2 Applied Categorical Structures 2 Turkish Journal of Mathematics 1 Bulletin of the Australian Mathematical Society 1 Mathematical Proceedings of the Cambridge Philosophical Society 1 Annali di Matematica Pura ed Applicata. Serie Quarta 1 Archiv der Mathematik 1 Bulletin of the London Mathematical Society 1 Fundamenta Mathematicae 1 Journal of the Korean Mathematical Society 1 Journal für die Reine und Angewandte Mathematik 1 Mathematische Nachrichten 1 Mathematische Zeitschrift 1 Pacific Journal of Mathematics 1 Chinese Annals of Mathematics. Series B 1 Applied Mathematics. Series B (English Edition) 1 Vietnam Journal of Mathematics 1 Taiwanese Journal of Mathematics 1 Algebras and Representation Theory 1 Annals of Mathematics. Second Series 1 Communications of the Korean Mathematical Society 1 Comptes Rendus. Mathématique. Académie des Sciences, Paris 1 Mediterranean Journal of Mathematics 1 Science China. Mathematics 1 Kyoto Journal of Mathematics 1 Arabian Journal of Mathematics all top 5 #### Cited in 10 Fields 151 Category theory; homological algebra (18-XX) 100 Associative rings and algebras (16-XX) 40 Algebraic topology (55-XX) 22 Commutative algebra (13-XX) 11 Algebraic geometry (14-XX) 3 Mathematical logic and foundations (03-XX) 1 Order, lattices, ordered algebraic structures (06-XX) 1 Group theory and generalizations (20-XX) 1 Several complex variables and analytic spaces (32-XX) 1 Partial differential equations (35-XX) #### Wikidata Timeline The data are displayed as stored in Wikidata under a Creative Commons CC0 License. Updates and corrections should be made in Wikidata.
# I don't know how to write formulas for integration, power series, etc. in Math.SE [duplicate] I want to ask questions from calculus but I do not know how to write the formulas; it causes me lots of problems. I do not know to how write power series as there is no option in my keyboard. Can anybody guide me on how to write integration and power series? I don't know how to write the big equation of geometry. Can anybody help me, or give me some idea, so that I can more easily ask questions about calculus? • Here is a brief tutorial about MathJax, MathJax basic tutorial and quick reference Aug 7, 2017 at 15:31 • Have a look around for Questions (on the main site, Math.SE) about calculus, power series, etc. If you right-click on a formula, and choose the submenu Show Math as ... TeX Commands, a smallish text popup will show you the exact $\LaTeX$ syntax for that expression (which you can cut, paste, and edit to get your desired effect). Aug 7, 2017 at 18:25 • Click the edit link of a post that contains the formulas and formats you need to see the source code of the text. Press cancle to cancle the edit. And use the context manue as hardmath has described Aug 15, 2017 at 21:47 General math formulas require dollar signs around them. Single for inline and double for displayed (the below are displayed) $1+2=3$ renders as: $1+2=3$ $$1+2=3$$ renders as: $$1+2=3$$ # If you want integration: Use \int for regular integrals. It displays as $$\int$$ If you want bounds, use \int_{a}^{b}. It displays as $$\int_a^b$$ To add stuff inside the integral, I recommend the format \int_{a}^{b} f(x)~dx. It displays as $$\int_{a}^{b} f(x)~dx$$ For double, triple, or quadruple integrals, everything above applies and use \iint,\iiint,\iiiint, which respectively displays as $$\iint,\iiint,\iiiint$$ For a closed line integral, use \oint, which displays as $$\oint$$ # For a sum, please use \sum, not \Sigma. The difference is shown below: $$\text{\sum}:~\sum\qquad\text{\Sigma}:~\Sigma$$ To add bounds, the same procedure from above applies. For example, use \sum_{n=1}^{5} n^2 to get $$\text{\sum}:~\sum_{n=1}^{5} n^2\qquad\text{\Sigma}:~\Sigma_{n=1}^{5} n^2$$ If one of your bounds happens to be infinity or negative infinity, use \infty or -\infty. For example, \sum_{n=0}^{\infty} to get $$\sum_{n=0}^{\infty}$$ And use \int_{-\infty}^{\infty} to get $$\int_{-\infty}^{\infty}$$ • The difference between \sum and \Sigma is not only what is shown in this posting: $$\sum_{k=0}^n a_k \text{ versus } \Sigma_{k=0}^n a_k$$ The first of these is \sum; the second is \Sigma. The point is that besides the size and shape, there is the issue of positions of subscripts and superscript. $\qquad$ Aug 8, 2017 at 2:41 • Note that {curly braces} are needed only when more than one object is to be included. Thus in \sum_{k=0}^n or \sum_{k=0}^\infty one needs the braces in {k=0} but one need not write \sum_{k=0}^{n} or \sum_{k=0}^{\infty}. But \sum_{k=0}^{+\infty} is rendered as $\displaystyle \sum_{k=0}^{+\infty}$ whereas \sum_{k=0}^+\infty appears as $\displaystyle \sum_{k=0}^+\infty. \qquad$ Aug 8, 2017 at 2:45 • @MichaelHardy To the second comment, some people get confused as to whether or not something counts as "one object", and since both ways work in that situation, I thought it better to mention the method that always works. 
Aug 8, 2017 at 11:54 • thanks a lot @ simply beautiful art Aug 8, 2017 at 12:44 • Nice list, @SimplyBeautifulArt But there's nowhere you've mentioned that the expressions only render in Mathjax when surrounded on each side by a dollar sign (or two). Aug 11, 2017 at 23:34 • @amWhy True. I kind of assumed the OP knew generally how MathJax work, given his/her main site content, and was only asking about the integral and sum stuff. Aug 11, 2017 at 23:51 • Fair enough; but for others encountering this post, particularly new users who are at Calc II level, I can never hurt to mention it. My comment mainly was meant as a compliment regarding your helpful post. Aug 11, 2017 at 23:54
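Since the question specifically mentions power series, one more sample in the same style: \sum_{n=0}^{\infty} a_n(x-c)^n displays as $$\sum_{n=0}^{\infty} a_n(x-c)^n$$ and \sum_{n=0}^{\infty} \frac{x^n}{n!} = e^x displays as $$\sum_{n=0}^{\infty} \frac{x^n}{n!} = e^x$$ (each surrounded by dollar signs, single for inline and double for displayed, as explained above).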
Tags: crypto aes-ecb
Rating:

TUCTF 2018: AESential Lesson
=============================

## Description

Thought I'd give you an essential lesson to how you shouldn't get input for AES in ECB mode.

nc 18.218.238.95 12345

The server runs the python file redacted.py:

```python
#!/usr/bin/env python2
from Crypto.Cipher import AES
from select import select
import sys

INTRO = """
Lol. You think you can steal my flag? I'll even encrypt your input for you, but you can't get my secrets!
"""

flag = "REDACTED" # TODO Redact this
key = "REDACTED" # TODO Redact this

if __name__ == '__main__':
    padc = 'REDACTED' #TODO Redact this
    assert (len(flag) == 32) and (len(key) == 32)
    cipher = AES.new(key, AES.MODE_ECB)
    sys.stdout.write(INTRO)
    sys.stdout.flush()
    while True:
        try:
            sys.stdout.flush()
            rlist, _, _ = select([sys.stdin], [], [])
            inp = ''
            if rlist:
                plaintext = inp + flag
                l = len(plaintext)
                padl = (l // 32 + 1)*32 if l % 32 != 0 else 0
        except KeyboardInterrupt:
            exit(0)
```

## Solution

### Analysis of the code

* First of all we see that the challenge is based on AES in ECB mode. It is highly insecure when encrypting over multiple blocks. One of the reasons is that we can differentiate texts based on their ciphertext. This means that if we encrypt:

      block 1,      block 2,      block 3
      aaaa....aaaa, bbbb....bbbb, aaaa....aaaa

  the ciphertext of block 1 will be equal to the one of block 3.
* In the code the flag size is fixed at 32 characters, the exact size of a block.
* We can concatenate text on the left side of the flag before encryption.
* There is padding, and it is handled by adding the character padc to the right side of the text until the text is a multiple of 32 characters (a block size).

### The attack

The attack is a chosen plaintext attack. There are two steps needed to retrieve the flag:

* Find the value of padc
* Char by char padding attack on the flag

#### Find the value of padc

In the next descriptions we rename padc as c. When sending one single character "a" to the server, there will be 2 blocks of the form:

    "a" + flag[0:31]   and   "}" + ccccccc..cccccc

So if we send "}" + ccccc...cccccc + "a", there will be a 3rd block on the left with the exact same value as the rightmost block. Then it's trivial to determine the value of c: we send every possible ascii character that could be c (max 256 requests). If the ciphertext of the leftmost block is the same as the one of the rightmost block, it means that it is the correct padding character! The code below achieves this result:

```python
for i in range(64,256):
    char = chr(i)
    text = "}" + char * 31
    send_text = text + "a"
    conn.sendline(send_text)
    conn.recvline()
    code = conn.recvline()
    c1, c2, c3 = code[:64], code[64:128], code[128:192]
    if(c1 == c3):
        break
    conn.recvline()
```

The padding character was _

#### Char by char padding attack on the flag

Now that we know padc, we will try to retrieve a character of the flag. This is the same idea as before, by trying to leak one unknown character at a time in the rightmost block. We put the same text in the leftmost block, and compare whether the 2 ciphertexts are equal (max 256 requests / tries per unknown character). Let's show one example to find the character before the "}" of the flag. If we send t+"}"+cccc....cccc+"aa" (c is the padding character), we will have these blocks:

    block 1,              block 2,            block 3
    t+"}"+cccc....cccc,   "aa" + flag[0:30],  flag[30]+"}"+ccccccccccccc

Then we will just have to test all possible ascii characters for t and compare the ciphertexts of block 1 and block 3 to see if they are equal.
Afterward we iterate to the next unknown character. The code below implements this attack:

```python
# find the flag char by char
flag = ""
for i in range(31):
    for j in range(32,127):
        char = chr(j)
        text = char + flag + paddingChar * (31-i)
        send_text = text + "a" + "a"*i
        conn.recvuntil(":")
        conn.sendline(send_text)
        conn.recvline()
        code = conn.recvline()
        c1, c2, c3 = code[:64], code[64:128], code[128:192]
        if(c1 == c3):
            flag = char + flag
            print(flag)
            break
        conn.recvline()

print("The flag is: " + flag)
```

### Full Code

The full code is available [here](https://github.com/ctf-epfl/writeups/blob/master/tuctf18/AESential%20Lesson/flag.py).

### Flag

The flag is: TUCTF{A3S_3CB_1S_VULN3R4BL3!!!!}
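As a closing sanity check of the ECB property the attack exploits, here is a small standalone sketch (not part of the original exploit; it assumes the PyCryptodome package, and the key and plaintext are arbitrary placeholders, not the challenge values):

```python
# Identical plaintext blocks encrypt to identical ciphertext blocks under ECB.
from Crypto.Cipher import AES

key = b"0" * 32                          # arbitrary 32-byte (AES-256) key
cipher = AES.new(key, AES.MODE_ECB)

pt = b"A" * 16 + b"B" * 16 + b"A" * 16   # blocks 1 and 3 are identical
ct = cipher.encrypt(pt)

print(ct[0:16] == ct[32:48])             # True: ECB leaks the repetition
print(ct[0:16] == ct[16:32])             # False: different plaintext block
```

This is the same leak that lets the attack above compare the leftmost and rightmost ciphertext chunks.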
31.10.2017 | Issue 9/2018

# On the number of inequivalent Gabidulin codes

Journal: Designs, Codes and Cryptography > Issue 9/2018
Authors: Kai-Uwe Schmidt, Yue Zhou
Important notes: Communicated by G. Lunardon.

## Abstract

Maximum rank-distance (MRD) codes are extremal codes in the space of $$m\times n$$ matrices over a finite field, equipped with the rank metric. Up to generalizations, the classical examples of such codes were constructed in the 1970s and are today known as Gabidulin codes. Motivated by several recent approaches to construct MRD codes that are inequivalent to Gabidulin codes, we study the equivalence issue for Gabidulin codes themselves. This shows in particular that the family of Gabidulin codes already contains a huge subset of MRD codes that are pairwise inequivalent, provided that $$2\leqslant m\leqslant n-2$$.
# An integrating digital voltmeter measures

This question was previously asked in UJVNL AE EE 2016 Official Paper

1. True average value
2. rms value
3. Peak value
4. None of the above

## Answer (Detailed Solution Below)

Option 1 : True average value

## Detailed Solution

Concept:

Integrating type digital voltmeter (DVM): It measures the voltage in the digital domain. It is possible to do so using voltage-to-frequency conversion. The reference voltage can be varied to change the duration of the pulse train (or, in other words, to change the frequency of the pulses). This voltmeter employs an integration technique that uses a voltage-to-frequency conversion. The heart of this technique is the operational amplifier acting as an integrator. The output voltage of the integrator is $$E_o=\frac{E_i}{RC}\times t$$

The basic block diagram of a typical integrating type of DVM (figure not reproduced here) works as follows:

• The input voltage is fed to the integrator and its output is sensed by a level detector.
• The discrete signal from the level detector is matched with the clock oscillator signal to generate a train of pulses.
• A time base selector is used to select the duty ratio of the pulses required.
• Combined, we get the duty cycle along with its duration, which is used to trigger the BCD counter circuit.
• The count is read on the LCD display.

Explanation:

The voltage measurement is proportional to frequency for a fixed reference value. Since it reads a digital value from the converted actual analog voltage signal, the average value of the voltage signal is used for comparison. Thus, the DVM measures the true average value of the input voltage over a fixed measuring period.
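A rough worked illustration of why the reading tracks the average (the numeric values below are made up for illustration and are not part of the original question): over a fixed measuring period $$T$$ the integrator output is $$E_o=\frac{1}{RC}\int_{0}^{T}e_i(t)\,dt=\frac{\bar{e}_i\,T}{RC},$$ so with RC = 1 ms, T = 10 ms and an input whose average value is 2 V, the integrator ends the period at E_o = 20 V regardless of any ripple on the input; the count derived from it therefore reflects the true average value rather than the peak or rms value.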
All Questions 11 views Mean and Standard deviation The filling machine used by a dairy company to fill 1kg containers of yoghurt produces output which follows a normal distribution with mean 1030g (slightly more than 1kg) and standard deviation 20g. ... 6 views Negative binomial mixed effect model for repeated measures with R - prediction and plotting I have a dataset to analyze in which a response was recorded at the ends of months 1,3,4,5,6 in 187 patients. All patients had the responses recorded in each week, and all patients started a treatment ... 5 views Performing t test, one-way ANOVA from mean and SD using SPSS I have the following summarized data: Group 1: n=44, mean 12.1, SD 2.0 Group 2: n=56, mean 13.2, SD 2.5 Group 3: n=42, mean 12.7, SD 2.3 Is it possible to perform the t test comparing means of group ... 5 views Interpretation of variance component In a report, I came across the following table and unfortunately there is no description for it. I have very limited knowledge of statistics. Could someone kindly let me know what the interpretation ... 8 views What is the area under the normal curve that describes the probability that more that 43 households have a gas stove HW 7.4 random variable will be approximated using the normal distribution 5 views Random Coefficient Negative Binomial Model I have a crash count data and i want to build a random coefficient negative binomial model in R. The dependent variable will be the crash counts and covariates will be Lane width, AADT, shoulder width ... 5 views What does it mean for a function to be “contained in” a confidence band? So I have this problem that asks me to generate 100 random samples from a standard normal, and I did. It asked me to find a 95% confidence band and I did, using the Python code ... 2 views Jackknife estimator for coefficient of variation I have one of the estimator of coefficient of variation and i am using jackknife method to reduce bias. I am running R code to do the simulation for this study. The result i am getting is not what i ... 12 views Getting the Estimate of ref Level in Multiple Regression with Dummy Variable in R I have this regression model in R using dummy variables ... 13 views Locally weighted regression VS kernel linear regression? I am trying to make it clear the relationship of the listed three methods. According to my understanding kernel regression means : the weight vector W lies in the space spanned by training data. ... 10 views What are the odds of having the same Lawyer make a mistake in two wills involving the same person? [on hold] A lawyer made mistakes in my will and were caught while I was alive to have it fixed and then a loved one died and it turns out the very same lawyer made a mistake in his will that the deceased can't ... 9 views Why do we take the absolute value of weight of evidence when computing the information value? I was looking at the explanation for Information Value calculation in STATISTICA and I find it a bit confusing: ... 8 views Can I use deviance to compare the fit of a model to different datasets? I'm using R's nls to fit different datasets to the same model. I've read that using R-squared is usually not correct for ... 10 views What are the odds of having a 2 windows shot- one facing the west one year and another structure window the following? [on hold] It is a small rural community with fields between our house and a hedgerow between us and the neighbors to the north, and different neighbors across the street with a field. It was always assumed ... 
10 views Forecasting daily visits using arima and external regressors I have daily visitors data for last 10 years, i want to do some basic tests like which is the busiest day , which is the busiest month, busiest week etc.i used auto.arima and xreg function to find out ... 10 views Forward sequential feature selection improving classifier performance? I was in a bit of a conversation with a co-worker about using forward selection. My training data is on order of ~6,000 w/ dimensionality of 1,200, and testing data on order of ~3,000. Currently, I'm ... 32 views Height of a normal distribution bell curve For a normal distribution bell-shaped curve, one would have thought that the height should have an ideal value. Knowing this value may be one quick indicator to check if the data is normally ... 4 views How does pruning and joining work in SPADE How can we generate frequent item sets from a sequential data using CSPADE algorithm? How does pruning and joining work in this algorithm? I am new to data mining So, request to explain clearly from ... 9 views How does the “summary” function that is run on a principal fitted components function work? I'm playing around with the ldr package in R, which comes with Big Mac data. If I run this code (which is similar to one of the examples): ... 4 views Matching two populations that have different disease status that have undergone the same intervention? If there are two populations, D+ and D-, where D is disease state, and both populations have undergone an intervention I, what would be the best way to design that study? Given that this is ... 18 views Fitting an envelope to x-y data in R I have some data that I'd like to create an envelope for it. As in the picture, the data is always positive and I'm looking for something that will capture my crudely hand drawn red line. It should ... 9 views Correction of data using a correlation Suppose I have measured the outcome variable A using a (psychophysical) test that determines the ability of a subject to discriminate between two stimuli with a certain difference (the variable X). It ... 4 views When testing autoregressive conditional heteroskedasticity with GARCH do you need to include the ind. variables? I have seen GARCH specified both ways... including the independent variables and excluding them. In the latter, only the ARCH and GARCH term remain in the specified regression equation. For testing ... 14 views box-cox transformation altered my anova result I have a non normal data and because of that i have decided to transform that data using Box-cox transformation. Even though the transformation worked really well, it changed my anova result. Here ... 6 views What is obtained from the product of a probability and a log probability ratio? I'm looking at the commonly used artificial neural network model that has nodes and connections. Quick refresher A connection has a source and target node, and a weight. The output of the source ... 10 views Different variance estimator for multilevel ordinal logistic regression in Stata and R I estimate a multilevel ordinal logistic regression models in Stata and R, and receive different estimators for the variance and the covariance of the latent variable of the higher level. Among other ... 3 views Multilevel model with multiple level 2 variables I am estimating a model with performance as the outcome, three level 1 predictors (age, sex, race) and three level 2 predictors (SchPopulation, SchSize, TeachingHrs). Level 2 predictors are the same ... 
6 views Calculating Marginal Data Density for VAR Model I am currently estimating Bayesian vector autoregressive (BVAR) models and I would like to do model comparison with Bayes factors. I have read about the Gelfand-Day method, the Geweke (1999) modified ... 4 views Significance test likelihood to detect a mean difference, and sufficient power Here is my question: I need help on part D. This is my answer but I think it's wrong. Any suggestions are very appreciated. 10 views Any reason to report a Chi Square Test of Independence on a 2×2 table when I'm already reporting a 95% CI on the odds ratio? I'm analyzing a 2×2 contingency table, and am going to a report a 95% CI on the Odds Ratio. Is there any point in prefacing this with a Chi Square Test of Independence? I understand that if the Odds ... 11 views Piecewise-constant density estimation I came across the term "piecewise-constant density estimation" in a paper and haven't been able to find a definition for it online or in my textbook resources. No example was given in the paper ... 9 views R and fisher test in each row in a data frame [on hold] ![enter image description here][1] I basically have this dataframe and I would like to do a fisher test on it to see which of thoses elements are present more ... 5 views How can i combine the three variables into one single measure in SPSS? [on hold] How can i combine the three items into one single measure that indicates whether the respondent has taken part in at least one of the three activities treating it as a single dimension repertoire ? ... 11 views How to specify reference category for binary independent variables in multinomial logistic regression in SPSS I am using multinomial logistic regression in SPSS 20, with a DV with three ordinal categories, the last specified as the reference category. I have a mix of binary and ordinal IVs. I have no trouble ... 9 views skip -9999 and perform multiple regression [on hold] I want to do the multiple regression where SMDI is my dependent variable and other are my independent variable. I have following dataset to do the multiple regression: SMDI ET PRCP ET_Ch ... 20 views Is the bias of a coin a latent variable or a parameter? Consider the standard Bayesian estimation problem in which the bias $p$ of a coin is picked uniformly at random from $[0, 1]$, the coin is tossed a few times, and $p$ is then estimated from the ... 17 views SAS NLMIXED proc and LOGISTIC proc results different Consider a dataset $Z$ with $S\in \{0,1\}$ as binary response variable and 2 predictors $\{x_1, x_2\}$. ... 15 views Is it ok to have a unit root within an independent variable? The Dickey-Fuller and ADF tests testing for a unit root in variables are very sensitive. Some econometricians have personally indicated to me that in some cases it may be acceptable to model a ... 24 views Kernel methods in machine learning? I am beginning to tackle geostatistics problems where I tried to apply kriging(gaussian processes) to interpolate demographical water drop. According to my understanding, kernel methods are something ... 2 views Influence function for inequality index using ordinal data I wonder whether it is possible to derive influence function (explained for example here: Influence functions and OLS) for an inequality index designed for ordinal (non-continuous) data in this ... 4 views nntool data normalizing [on hold] I have some concerns related to the use of nntool in MATLAB toolbox. I have found that nntool by default normalizes the inputs to the range [-1 1]. 
how to changes these values both input and output ... 15 views 12 views deseasonalizing multiple series (more than 200 variables) I'm trying to produce deseasonalization for multiple series using x-12 ARIMA (as an alternative, if you can manage, you also could provide an idea with other methods, such as x-13 ARIMA). The thing is ... 6 views generating a sequence of indicators based on variable boundaries in JAGS Suppose I have a vector of indices 1:N, and data y[1], ..., y[N]. I have three variable center points in ... 21 views PDF and return on stock [on hold] i thought the answer is just 1 -P(x=<15) = 0.5 (1dp) http://i612.photobucket.com/albums/tt201/Ivanreyes/Screen%20Shot%202015-03-27%20at%206.33.13%20am.png 118 views Jelly Donut Puzzle - What Can Statistics Say About These Donuts [on hold] I give you a box of 12 donuts. You randomly remove 5. They are all jelly donuts. What is the probability that there is another jelly donut in the box? 17 views Equation in light of a given variance [on hold] The total profit of a firm is given by Profit = Total Revenue – Total Cost Suppose that Total Revenue = 120Q and Total Cost = 20 + 50Q where Q, the quantity sold, is a (approximately) normal random ...
Question Paper: Heat & Mass Transfer : Question Paper Dec 2013 - Mechanical Engineering (Semester 6) | Visveswaraya Technological University (VTU)

## Heat & Mass Transfer - Dec 2013
### Mechanical Engg. (Semester 6)
TOTAL MARKS: 100
TOTAL TIME: 3 HOURS
(1) Question 1 is compulsory. (2) Attempt any four from the remaining questions. (3) Assume data wherever required. (4) Figures to the right indicate full marks.

1 (a) Explain briefly the mechanisms of conduction, convection and radiation heat transfer.(6 marks)
1 (b) With sketches, write down the mathematical representation of three commonly used types of boundary conditions for the one-dimensional heat equation in rectangular coordinates.(8 marks)
1 (c) A plate of thickness 'L', whose one side is insulated and the other side is maintained at a temperature T1, is exchanging heat by convection with the surroundings at a temperature T2, with atmospheric air being the outside medium. Write the mathematical formulation for one-dimensional, steady-state heat transfer without heat generation.(6 marks)
2 (a) An electric cable of 10 mm diameter is to be laid in the atmosphere at 20°C. The estimated surface temperature of the cable due to heat generation is 65°C. Find the maximum percentage increase in heat dissipation when the wire is insulated with rubber having K=0.155 W/mK; take h=8.5 W/m²K.(6 marks)
2 (b) Differentiate between the effectiveness and efficiency of fins.(4 marks)
2 (c) In order to reduce the thermal resistance at the surface of a vertical plane wall (50×50 cm), 100 pin fins (1 cm diameter, 10 cm long) are attached. If the pin fins are made of copper having a thermal conductivity of 15 W/mK, calculate the decrease in the thermal resistance. Also calculate the consequent increase in heat transfer rate from the wall if it is maintained at a temperature of 200°C and the surroundings are at 30°C.(10 marks)
3 (a) Show that the temperature distribution in a body during Newtonian heating or cooling is given by $$\frac{T-T_{0}}{T_{1}-T_{0}}=\frac{\theta}{\theta_{1}}=\exp\left(\frac{-hA_{s}t}{\rho CV}\right).$$(6 marks)
3 (b) Steel ball bearings (K=50 W/mK, α=0.3×10⁻⁵ m²/s), 40 mm in diameter, are heated to a temperature of 650°C and then quenched in an oil bath at 50°C, where the heat transfer coefficient is estimated to be 300 W/m²K. Calculate: i) The time required for a bearing to reach 200°C; ii) The total amount of heat removed from a bearing during this time; and iii) The instantaneous heat transfer rate from a bearing when it is first immersed in the oil bath and when it reaches 200°C.(14 marks)
4 (a) With reference to fluid flow over a flat plate, discuss the concepts of velocity boundary layer and thermal boundary layer, with necessary sketches.(5 marks)
4 (b) The exact expression for the local Nusselt number for laminar flow along a surface is given by $$Nu_{x}=\frac{h_{x}x}{k}=0.332\,Re_{x}^{1/2}\,Pr^{1/3}.$$ Show that the average heat transfer coefficient from x=0 to x=L over the length 'L' of the surface is given by $2h_{L}$, where $h_{L}$ is the local heat transfer coefficient at x=L.(5 marks)
4 (c) A vertical plate 15 cm high and 10 cm wide is maintained at 140°C. Calculate the maximum heat dissipation rate from both sides of the plate to air at 20°C. The radiation heat transfer coefficient is 9.0 W/m²K.
For air at 80°C, take r=21.09 × 10-6m2/sec, Pr=0.692, Kf=0.03 W/mK.(10 marks) 5 (a) Explain the physical significance of i) Nusselt number ii) Groshoff number.(4 marks) 5 (b) Air at 2 atm and 200°C is heated as it flows at a velocity of 12m/sec through a tube with a diameter of 3 cm. A constant heat flux condition is maintained at the wall and the wall temperature is 20°C above air temperature all along the length of the tube. Calculate : i) The heat transfer per unit length of tube. ii) The increase in bulk temperature of air over a 4m length of the tube. take the following properties of air Pr=0.681.μ=2.57×10-5kg/ms, K=0.0386 W/mK and Cp=1.025 kJ/kg K. (10 marks) 5 (C) Obtain a relationship between drag coefficient c and heat transfer coefficient h for the flow over a flat plate.(6 marks) 6 (a) Derive an expression for LMTD of a counter flow heart exchanger. State the assumptions made.(8 marks) 6 (b) What is meant by the term fouling factor? How do you determine it?(4 marks) 6 (c) Engine oil is to be cooled from 80°C to 5°C by using a single pass counter flow , concentric-tube heat exchanger with cooling water available at 20°C. Water flows inside a tube with an internal dia of 2.5cm with a flow rate of 0.08 kg/s and oil flows through the annulus at a rate of 0.16kg/s. The heat transfer coefficient for the water side and oil side are respectively hw1000 W/m2°C and hoil 80W/m2C. The fouling factors is Fw 0.00018m2°C/W on both sides and the tube wall resistance in negligible. Calculate the tube length required.(8 marks) 7 (a) Sketch a pool boiling curve for water and explain briefly the various regimes in boiling heat transfer.(6 marks) 7 (b) Define mass transfer coefficient.(2 marks) 7 (c) A 12 cm outside diameter and 2m long tube is used in a big condenser to condense the steam at 0.4 bar. Estimate the unit surface conductance. i)in vertical position ; ii) in horizontal position. Also find the amount of condense formed per hour per hour in both the cases. The saturation temperature of the steam=74.5°C. Average wall temperature=50°C. the properties of water film at average temperature of $$\frac{75.4+50}{2}=62.7°C$$ are given below ρ =982.2 kg/m3, hf=24800kJ/kg,K=0.65 W/mK, μ=0.47×10-3kg/ms. (12 marks) 8 (a) State and prove Wien's displacement law of radiation.(6 marks) 8 (b) The temperature of a black surface 0.2m2 in area is 540°C calculate: i)The total rate of energy emission iii) The wavelength of maximum monochromatic emissive power. (6 marks) 8 (c) Derive an expression for a radiation shape factor and show that it is function of geometry only.(8 marks)
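The lumped-capacitance relation quoted in question 3 (a) is all that question 3 (b) needs. Purely as an illustration added here (it is not part of the original paper), and using the property values exactly as printed in the question (including the diffusivity, which may well be a misprint), a quick R sketch of the calculation could look like this:

```r
# Q3(b): lumped-capacitance estimate, using the values as printed in the question.
# theta/theta_1 = exp(-h*As*t/(rho*C*V)); for a sphere V/As = D/6 and rho*C = k/alpha.
k     <- 50        # thermal conductivity, W/mK
alpha <- 0.3e-5    # thermal diffusivity as printed, m^2/s (possibly a misprint)
h     <- 300       # surface heat transfer coefficient, W/m^2K
D     <- 0.040     # bearing diameter, m
T_inf <- 50; T_i <- 650; T_target <- 200   # oil, initial and target temperatures, deg C

L_c  <- D / 6                   # characteristic length V/As of a sphere, m
Bi   <- h * L_c / k             # Biot number; lumped analysis is reasonable for Bi < 0.1
rhoC <- k / alpha               # volumetric heat capacity rho*C, J/(m^3 K)

t_reach <- -(rhoC * L_c / h) * log((T_target - T_inf) / (T_i - T_inf))  # i) time, s

V  <- pi * D^3 / 6; As <- pi * D^2
Q_total   <- rhoC * V * (T_i - T_target)   # ii) heat removed per bearing, J
q_initial <- h * As * (T_i - T_inf)        # iii) rate at first immersion, W
q_final   <- h * As * (T_target - T_inf)   # iii) rate on reaching 200 deg C, W
c(Bi = Bi, t_s = t_reach, Q_J = Q_total, q0_W = q_initial, qt_W = q_final)
```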
Corpus is an R text processing package with full support for international text (Unicode). It includes functions for reading data from newline-delimited JSON files, for normalizing and tokenizing text, for searching for term occurrences, and for computing term occurrence frequencies (including n-grams). Corpus does not provide any language models, part-of-speech tagging, topic models, or word vectors, but it can be used in conjunction with other packages that provide these features.

## Installation

### Stable version

Corpus is available on CRAN. To install the latest released version, run the following command in R:

```r
install.packages("corpus")
```

### Development version

To install the latest development version, run the following:

```r
tmp <- tempfile()
system2("git", c("clone", "--recursive",
                 shQuote("https://github.com/patperry/r-corpus.git"), shQuote(tmp)))
devtools::install(tmp)
```

Note that corpus uses a git submodule, so you cannot use devtools::install_github.

## Usage

Here's how to get the most common non-punctuation, non-stop-word terms in The Federalist Papers:

```
> term_stats(federalist, drop = stopwords_en, drop_punct = TRUE)
   term         count support
1  government     825      85
2  state          787      85
3  people         612      85
4  one            544      85
5  new            324      85
6  york           151      85
7  publius         85      85
8  may            812      84
9  states         845      82
10 power          606      82
11 must           446      81
12 can            464      78
13 every          350      77
14 part           226      77
15 constitution   462      76
16 might          322      76
17 general        255      76
18 time           249      76
19 great          291      74
20 public         282      74
⋮  (8631 rows total)
```

Here's how to find all instances of tokens that stem to "power":

```
> text_locate(federalist, "power", stemmer = "en")
   text before instance after
1  1 …ay hazard a diminution of the power , emolument,\nand consequence …
2  1 …s. So numerous indeed and so\n powerful are the causes which serve to…
3  1 … of a temper fond of despotic power and\nhostile to the principle…
4  2 …der to vest it with requisite powers . It is well worthy\nof consid…
5  2 …head of each the same kind of powers which they are advised to\npl…
6  2 …\nwithout having been awed by power , or influenced by any passion…
7  3 …ment, vested with sufficient\n powers for all general and national …
8  3 … of nations towards all these powers , and to me it\nappears eviden…
9  3 …he wrong themselves, nor want power or\ninclination to prevent or…
10 3 …it will also be more in their power to\naccommodate and settle th…
11 3 …cy of little consideration or power .\n\nIn the year 1685, the sta…
12 3 …ain, or Britain, or any other POWERFUL nation?\n\nPUBLIUS.\n
13 4 … our advancement in union, in power and\nconsequence by land and …
14 4 …t can apply the resources and power of the whole to the\ndefense …
15 4 …\ncombining and directing the powers and resources of the whole, w…
16 5 …h tend to beget and\nincrease power in one part and to impede its…
17 6 … description are the love of\n power or the desire of pre-eminence…
18 6 …nd dominion--the jealousy of\n power , or the desire of equality an…
19 6 …rest of this enterprising and powerful monarch, he\nprecipitated Eng…
20 6 …rprising a passion as that of power or glory? Have there not\nbee…
⋮  (912 rows total)
```

Here's how to get a term frequency matrix of all 1-, 2-, 3-, 4-, and 5-grams:

```
> system.time(x <- term_matrix(federalist, ngrams = 1:5))
   user  system elapsed
  2.781   0.123   2.906
```

This computation uses only a single CPU, yet it still completes in under three seconds. For a more complete introduction to the package, see the getting started guide and the other articles at corpustext.com.
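Not part of the original README: a small follow-up showing one way to poke at the matrix computed above, assuming (as the name suggests) that term_matrix() returns a documents-by-terms sparse matrix. Ordinary column sums then give total n-gram counts.

```r
# Follow-up sketch (not in the README): overall n-gram counts from the term matrix,
# assuming term_matrix() returns a documents-by-terms sparse matrix.
library(corpus)
x <- term_matrix(federalist, ngrams = 1:5)
counts <- Matrix::colSums(x)                  # total occurrences of each n-gram
head(sort(counts, decreasing = TRUE), 10)     # the ten most frequent n-grams overall
```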
## Citation

Cite corpus with the following BibTeX entry:

```
@Manual{,
  title = {corpus: Text Corpus Analysis},
  author = {Patrick O. Perry},
  year = {2017},
  note = {R package version 0.9.4},
  url = {http://corpustext.com},
}
```

## Contributing

The project maintainer welcomes contributions in the form of feature requests, bug reports, comments, unit tests, vignettes, or other code. If you'd like to contribute, either

- fork the repository and submit a pull request (note the nonstandard instructions for building from source);
- or contact the maintainer via e-mail.

This project is released with a Contributor Code of Conduct, and if you choose to contribute, you must adhere to its terms.

## Acknowledgments

The API and feature set for corpus draw inspiration from quanteda, developed by Ken Benoit and collaborators; stringr, developed by Hadley Wickham; and tidytext, developed by Julia Silge and David Robinson.
## The Annals of Probability

### The Rate of Escape of Random Walk

William E. Pruitt

#### Abstract

Let $\{X_k\}$ be an i.i.d. sequence and define $S_n = X_1 + \cdots + X_n$. The problem is to determine for a given sequence $\{\beta_n\}$ whether $P\{|S_n| \leq \beta_n \ \mathrm{i.o.}\}$ is 0 or 1. A history of the problem is given along with two new results for the case when $P\{X_1 \geq 0\} = 1$: (a) an integral test that solves the problem in case the summands satisfy Feller's condition for stochastic compactness of the appropriately normalized sums, and (b) necessary and sufficient conditions for a sequence $\{\beta_n\}$ to exist such that $\liminf S_n/\beta_n = 1$ a.s.

#### Article information

- Source: Ann. Probab., Volume 18, Number 4 (1990), 1417-1461.
- Dates: First available in Project Euclid, 19 April 2007.
- Permanent link: https://projecteuclid.org/euclid.aop/1176990626
- Digital Object Identifier: doi:10.1214/aop/1176990626
- Mathematical Reviews number (MathSciNet): MR1071803
- Zentralblatt MATH identifier: 0715.60087
- Subjects: Primary 60J15; Secondary 60F15 (Strong theorems)

#### Citation

Pruitt, William E. The Rate of Escape of Random Walk. Ann. Probab. 18 (1990), no. 4, 1417-1461. doi:10.1214/aop/1176990626. https://projecteuclid.org/euclid.aop/1176990626
NASA Spinoff ### Originating Technology/NASA Contribution In order for NASA astronauts to explore the solar system, they will need to travel not just as pioneers but as settlers, learning to live off the land. Current mission needs have NASA scientists exploring ways to extract oxygen from the lunar soil and potable water from human wastes. One of the basic goals, however, will be for pioneering space travelers to learn to grow and manage their own crops. This requires the development of space-age greenhouses where astronaut farmers can experiment with harvesting large-scale food crops. ### Originating Technology/NASA Contribution Beginning in 1968, NASA began researching garments to help astronauts stay cool. The Agency designed the Apollo space suits to use battery-powered pumps to circulate cool water through channels in the inner layers of the garments. This led to commercial cooling vests for patients with heat control disorders (first featured in Spinoff 1979) and for workers in heat stress occupations (featured in Spinoff 1982). ### Originating Technology/NASA Contribution Since designing the first space suits in the 1950s, NASA has been interested in developing materials to keep astronauts comfortable and cool. In order to protect an astronaut from the extreme temperatures in space, engineers at Johnson Space Center created liquid-cooled garments that run water in small channels throughout the suit in what is called an active control system. However, in the 1980s, NASA began to investigate passive control strategies—fabric that could control temperature without pumped liquids—building on work by the U.S. Air Force. ### Originating Technology/NASA Contribution Johnson Space Center, NASA’s center for the design of systems for human space flight, began developing high-resolution visual displays in the 1990s for telepresence, which uses virtual reality technology to immerse an operator into the environment of a robot in another location. Telepresence is used by several industries when virtual immersion in an environment is a safer option, including remote training exercises and virtual prototyping, as well as remote monitoring of hazardous environments. Microdisplay panels, the tiny screens that comprise the visual displays for telepresence, are also used in some electronic viewfinders for digital video and still cameras. Originating Technology/NASA Contribution Recently, NASA’s Stardust mission used a block of aerogel to catch high-speed comet particles and specks of interstellar dust without damaging them, by slowing down the particles from their high velocity with minimal heating or other effects that would cause their physical alteration. This amazing accomplishment, bringing space particles back to Earth, was made possible by the equally amazing properties of aerogel. Due to its extremely light weight and often translucent appearance, aerogel is often called solid smoke. Barely denser than air, this smoky material weighs virtually nothing. In fact, it holds the world record for being the world’s lightest solid—one of 15 records granted it by Guinness World Records. It is truly an amazing substance: able to hold up under temperatures of 3,000 °F. Aerogels have unsurpassed thermal insulation values (providing three times more insulation than the best fiberglass), as well as astounding sound and shock absorption characteristics. 
As a class, aerogels, composed of silicon dioxide and 99.8 percent air, have the highest thermal insulation value, the highest specific surface area, the lowest density, the lowest speed of sound, the lowest refractive index, and the lowest dielectric constant of all solid materials. They are also extremely fragile. Similar in chemical structure to glass, though 1,000 times less dense, they are often prone to breaking when handled—seemingly their only drawback—aside from their cost. Invented nearly 80 years ago, aerogels are typically hard-to-handle and costly to manufacture by traditional means. For these reasons, the commercial industry found it difficult to manufacture products incorporating the material. However, a small business partnered with NASA to develop a flexible aerogel concept and a revolutionary manufacturing method that cut production time and costs, while also solving the handling problems associated with aerogel-based insulation products. These robust, flexible forms of aerogel can now be manufactured into blankets, thin sheets, beads, and molded parts. James Fesmire, senior principal investigator at Kennedy Space Center’s Cryogenics Test Laboratory, and one of the key inventors of this new technology, says of the advancements, “This aerogel blanket insulation is not only the world’s best insulator, but, combined with its favorable environmental and mechanical characteristics, also opens the door to many new design possibilities for buildings, cars, electrical power, and many industrial process systems.” Partnership Aspen Aerogels Inc., of Northborough, Massachusetts, an independent company spun off from Aspen Systems Inc., rose to the challenge of creating a robust, flexible form of aerogel by working with NASA through a Small Business Innovation Research (SBIR) contract with Kennedy. That contract led to further partnerships for the development of thermal insulation materials, manufacturing processes, and new test methods. This collaboration over many years was a pivotal part of the founding of NASA’s Cryogenics Test Laboratory. Aspen responded to NASA’s need for a flexible, durable, easy-to-use aerogel system for cryogenic insulation for space shuttle launch applications. For NASA, the final product of this low thermal conductivity system was useful in applications such as launch vehicles, space shuttle upgrades, and life support equipment. The company has since used the same manufacturing process developed under the SBIR to expand its product offerings into the more commercial realms, making aerogel available for the first time as a material that can be handled and installed just like standard insulation. The development process culminated in an “R&D 100” award for Aspen Aerogels and Kennedy in 2003. According to Fesmire, “This flexible aerogel insulation idea originated 16 years ago. The problem was to make the world’s best insulation material in an easy-to-use form at an affordable price. All these goals have now been achieved through many years of dedicated work.” Product Outcome Based on its work with NASA, Aspen has developed three different lines of aerogel products: Cryogel, Spaceloft, and Pyrogel. Its work has also infused back into the Space Program, as Kennedy is an important customer. Cryogel is a flexible insulation, with or without integral vapor barrier, for sub-ambient temperature and cryogenic pipelines, vessels, and equipment.
It comes as flexible aerogel blanket insulation engineered to deliver maximum thermal protection with minimal weight and thickness and zero water vapor permeability. Its unique properties—extremely low thermal conductivity, superior flexibility, compression resistance, hydrophobicity, and ease of use—make it an ideal thermal protection for cryogenic applications. Spaceloft also comes in flexible blanket form and is easy to use. (It can be cut using conventional textile cutting tools, including scissors, electric scissors, and razor knives.) It is designed to meet the demanding requirements of industrial, commercial, and residential applications. Spaceloft is a proven, effective insulator in the oil and gas industries, building and construction, aerospace, automotive, cold chain, and other industries requiring maximum thermal protection within tight space and weight constraints. Spaceloft is used for low-pressure steam pipes, vessels, and equipment; sub-sea pipelines, hot pipes, vessels, and equipment; footwear and outdoor apparel. Other applications include tents, insulation for interior wall renovation, mobile home exteriors, tractor heat shielding, bus heat shielding, hot water pipes, and solar panels. Pyrogel is used in medium-to-high pressure steam pipes, vessels, and equipment; aerospace and defense applications; fire barriers; welding blankets; footwear and outdoor apparel. Applications have included insulating an entire polycarbonate plant, a reactor exterior, high- altitude boots, water and gas piping, tubing bundles, yacht exhausts, large vessels, exhaust ducts, ships’ boilers, and underground steam lines. The insulation has been proven to be an effective underfoot barrier to extreme cold in the form of insoles for climbers on Mt. Everest, where their light weight and flexibility are also prized. They have even been tested as insoles for ultramarathoners—runners who jog past the 26.2 mile standard marathon distance and sometimes up to 100 miles at a time—who prize the material for its light weight and excellent heat- insulating properties. It is not just industry and the commercial realm that are benefiting from Aspen’s products, though. The work has come full circle, and Aspen is a regular provider of aerogel insulation to NASA, where the material is used on many diverse projects, for space shuttle applications, interplanetary propulsion, and life support equipment. On the space shuttle, it is used as an insulation on the external tank vent’s quick-disconnect valve, which releases at liftoff and reaches temperatures of -400 °F. It is also found on the shuttle launch pad’s fuel cell systems. At Stennis Space Center’s E-3 engine test stand, the blankets are used on the liquid oxygen lines. In the laboratory, NASA scientists are working to incorporate insulating Aspen aerogels with new polymer materials to create a new category of materials and to create composite foam fillers. Sponsored by NASA’s Space Operations Mission Directorate, engineers are experimenting with the products to create an insulating material that could replace poured and molded foams for a plethora of applications, including in test facilities, on launch pads, and even on spacecraft. Cryogel™, Spaceloft™, and Pyrogel™ are trademarks of Aspen Aerogels Inc. 
Originating Technology/NASA Contribution In addition to the mammoth engineering challenge posed by launching a cargo-laden craft into space for a long-distance mission, keeping the crews safe and healthy for these extended periods of time in space poses further challenges, problems for which NASA scientists are constantly seeking new answers. Obstacles include maintaining long-term food supplies, ensuring access to clean air and potable water, and developing efficient means of waste disposal—all with the constraints of being in a spacecraft thousands of miles from Earth, and getting farther every minute. NASA continues to overcome these hurdles, though, and is in the process of designing increasingly efficient life support systems to make life aboard the International Space Station sustainable for laboratory crews, and creating systems for use on future lunar laboratories and the upcoming long trip to Mars. Ideal life support systems for these closed environments would take up very little space, consume very little power, and require limited crew intervention—these much-needed components would virtually disappear while doing their important jobs. One NASA experiment into creating a low-profile life support system involved living ecosystems in contained environments. Dubbed the Controlled Ecological Life Support Systems (CELSS) these contained systems attempted to address the basic needs of crews, meet stringent payload and power usage restrictions, and minimize space occupancy by developing living, regenerative ecosystems that would take care of themselves and their inhabitants—recreating Earth-like conditions. Years later, what began as an experiment with different methods of bioregenerative life support for extended-duration, human-crewed space flight, has evolved into one of the most widespread NASA spinoffs of all time. Partnership In the 1980s, Baltimore-based Martin Marietta Corporation worked with NASA to test the use of certain strains of microalgae as a food supply, oxygen source, and a catalyst for waste disposal as part of the CELSS experiments. The plan was for the microalgae to become part of the life support system on long-duration flights, taking on a plethora of tasks with minimal space, energy, and maintenance requirements. During this research, the scientists discovered many things about the microalgae, realizing ultimately that its properties were valuable to people not only in space, but here on Earth, as a nutritional supplement. The scientists, fueled by these discoveries, spun off from Martin Marietta, and in 1985, formed Martek Biosciences Corporation, in Columbia, Maryland. Product Outcome Now, after two decades of continued research on the same microalgae studied for use in long-duration space flight, Martek has developed into a major player in the nutrition field, with over 500 employees and annual revenue of more than $270 million. The reach of the company’s space-developed product, though, is what is most impressive. Martek’s main products, life’sDHA and life’sARA, both of which trace directly back to the original NASA CELSS work, can be found in over 90 percent of the infant formulas sold in the United States, and are added to the infant formulas sold in over 65 additional countries. With such widespread use, the company estimates that over 24 million babies worldwide have consumed its nutritional additives. 
Outside of the infant formula market, Martek’s commercial partners include General Mills Inc., Yoplait USA Inc., Odwalla Inc., Kellogg Company, and Dean Foods Company’s WhiteWave Foods division (makers of the Silk, Horizon Organic, and Rachel’s brands). Why would so many people consume these products? The primary ingredient is one of the building blocks of health: a fatty acid found in human breast milk, known to improve brain function and visual development, which recent studies have indicated plays a significant role in heart health. It is only introduced to the body through dietary sources, so supplements containing it are in high demand. The primary discovery Martek made while exploring properties of microalgae for use in long-duration space flights was identifying Crypthecodinium cohnii, a strain of algae that produces docosahexaenoic acid (DHA) naturally and in high quantities. Using the same principles, the company also patented a method for developing another fatty acid that plays a key role in infant health, arachidonic acid (ARA). It extracts this fatty acid from the fungus Mortierella alpina. DHA is an omega-3 fatty acid, naturally found in the body, which plays a key role in infant development and adult health. Most abundant in the brain, eyes, and heart, it is integral in learning ability, mental development, visual acuity, and in the prevention and management of cardiovascular disease. Approximately 60 percent of the brain is composed of structural fat (the gray matter), of which nearly half is composed of DHA. As such, it is an essential building block for early brain development, as well as a key structural element in maintaining healthy brain functioning through all stages of life. It is especially important in infancy, though, when the most rapid brain growth occurs—the human brain nearly triples in size during the first year of life. Breast milk, which is generally two-thirds fat, is a chief source of DHA for children, both a testament to the body’s need for this substance and an argument for sustainable sources that can be added to infant formula. Studies have shown that adults, too, need DHA for healthy brain functioning, and that the important chemical is delivered through the diet. DHA is also a key component in the structural fat that makes up the eye, and is vital for visual development and ocular health. The retina, for example, contains a high concentration of DHA, which the body forms from nutritious fats in the diet. With heart tissue, the U.S. Food and Drug Administration has found supporting evidence that DHA consumption may reduce the risk of coronary heart disease. This important compound, previously only found in human breast milk, and with undeniable nutritional value, is now available throughout the world. It is one example of how NASA research intended to sustain life in space has found its way back to Earth, where it is improving the lives of people everywhere. life’sDHA™ and life’sARA™ are trademarks of Martek Biosciences Corporation. Silk®, Horizon Organic®, and Rachel’s® are registered trademarks of the WhiteWave Foods Company. Originating Technology/NASA Contribution Developed by Jonathan Lee, a structural materials engineer at Marshall Space Flight Center, and PoShou Chen, a scientist with Huntsville, Alabama-based Morgan Research Corporation, MSFC-398 is a high-strength aluminum alloy able to operate at high temperatures.
The invention was conceived through a program with the Federal government and a major automobile manufacturer called the Partnership for Next Generation Vehicles. While the success of MSFC-398 can partly be attributed to its strength and resistance to wear, another key aspect is the manufacturing process: the metal is capable of being produced in high volumes at low cost, making it attractive to commercial markets. Since its premiere, the high-strength aluminum alloy has received several accolades, including being named Marshall’s “Invention of the Year” in 2003, receiving the Society of Automotive Engineering’s “Environmental Excellence in Transportation Award” in 2004, the Southeast Regional Federal Laboratory Consortium “Excellence in Technology Transfer Award” in 2005, and the National Federal Laboratory Consortium’s “Excellence in Technology Transfer Award” in 2006. Realizing the potential commercial applicability of MSFC-398, Marshall introduced it for public licensing in 2001. The alloy’s subsequent success is particularly apparent in its widespread application in commercial marine products. Partnership A worldwide leader in the design, development, and distribution of a wide variety of land and water vehicles, including outboard motors, Bombardier Recreational Products (BRP) Inc., came across a description of the NASA alloy and was immediately intrigued. The Canada-based company decided to meet with NASA in April 2001, to explore how the technology could strengthen its products. BRP and NASA identified an application for high-performance outboard engine pistons. Prototype production started in July, and the Boats and Outboard Engines Division of BRP, based in Sturtevant, Wisconsin, signed the licensing agreement exactly 1 year later. “Having a proper mixture of the alloy’s composition with the correct heat treatment process are two crucial steps to create this alloy for high-temperature applications,” said Lee. “The team at Bombardier worked hard with the casting vendor and NASA inventors to perfect the casting of pistons, learn and repeat the process, and bring its product to market. Chen and I are honored to see something we invented being used in a commercial product in a very rapid pace. We still have to pinch ourselves occasionally to realize that BRP’s commercialization effort for this alloy has become a reality. It’s happened so quickly.” “The usual cycle for developing this type of technology, from the research stage to the development phase, and finally into a commercial product phase may take several years and more than a $1 million investment,” Lee said. In this case, it occurred in fewer than 4 years and at a fraction of that cost. BRP also applauded NASA for its prompt assistance. “The demands of the outboard engine are more significant than any other engine NASA had ever encountered,” claims Denis Morin, the company’s vice president of engineering, outboard engines. “The team from NASA was on the fast track, learned all the intricacies, and delivered an outstanding product.” BRP incorporated the alloy pistons into a brand new mid-power outboard motor that the company affirms is “years beyond carbureted two-stroke, four-stroke, or even direct injection” engines. Product Outcome While a four-stroke engine generally runs cleaner and quieter than its two-stroke counterpart, it lacks the power and dependability; and the two-stroke engine, which generally contains 200 fewer parts than a comparable four-stroke motor, literally has fewer things that can go wrong.
Evinrude E-TEC is a line of two-stroke motors that maintain the power and dependability of a two-stroke with the refinement of a four-stroke. The Evinrude E-TEC is also the first outboard motor that will not require oil changes, winterization, spring tune-ups, or scheduled maintenance for 3 years of normal recreational use. It incorporates the NASA alloy into its pistons, significantly improving durability at high temperatures while also making the engine quieter, cleaner, and more efficient. The E-TEC features a low-friction design completely free from belts, powerhead gears, cams, and mechanical oil pumps; a “sure-start” ignition system that prevents spark plug fouling and does not require priming or choking; and speed-adjusting failsafe electronics that keep it running even if a boat’s battery dies. A central computer controls the outboard engine’s single injector, which is completely sealed to prevent air from entering the fuel system and thus minimizes evaporative emissions. Furthermore, the E-TEC auto-lubing oil system eliminates the process of having to mix oil with fuel, while complete combustion precludes virtually any oil from escaping into the environment. When programmed to operate on specially designed oil, the E-TEC uses approximately 50 percent less oil than a traditional direct injection system and 75 percent less than a traditional two-stroke engine. Additionally, when compared to a four-stroke engine, the E-TEC creates 80 percent less carbon monoxide while idle. As an added bonus for fishermen, the new piston design also reduces the slapping sound usually made when pistons slide up and down in the engine’s cylinder, a sure sign to fish that someone is coming for them with a worm on a hook. Ranging from 40-horsepower (hp) models to 300-hp models, Evinrude E-TEC engines won the prestigious “2003 Innovation Award” from the National Marine Manufacturers Association at the annual Miami International Boat Show, and are the only marine engines to have ever received the U.S. Environmental Protection Agency’s “Clean Air Technology Excellence Award.” E-TEC also received a testimonial from an individual who put the engine to an incredible test in the most unusual of conditions: While BRP often hears from boaters who depend on its engines in tropical, warm, and temperate climates, the company had heard about an individual from the small Alaskan village of Koyukuk who runs the Yukon River with his Evinrude just about everyday, from break-up of the iced-over body of water to freeze-up during the long Alaskan winter. The nearest “sizable” town is 400 miles upstream from Koyukuk, so the turbid and turgid river serves as the only “highway” on which to acquire goods, tools, and groceries. That’s a pretty good vote of confidence. Evinrude® is a registered trademark, and E-TEC™ is a trademark of Bombardier Recreational Products Inc. Originating Technology/NASA Contribution Timeless, beautiful, and haunting images: A delicate blue marble floating in the black sea of space; a brilliant white astronaut suit, visor glowing gold, the entire Earth as a backdrop; the Moon looming large and ghostly, pockmarked with sharp craters, a diaphanous grey on deep black. Photographs from space illustrating the planet on which we live, the space surrounding it, and the precarious voyages into it by our fellow humans are among the most tangible products of the Space Program. These images have become touchstones of successive generations, as the voyages into space have illuminated the space in which we live. 
In 1962, Walter Schirra blasted off in a Mercury rocket to become the fifth American in space, bringing with him the first Hasselblad camera to leave the Earth’s atmosphere, recently purchased from a camera shop near Johnson Space Center in Houston—but not the last. The camera, a Hasselblad 500C, was a standard consumer unit that Schirra had stripped to bare metal and painted black in order to minimize reflections. Once in space, he documented the wonder and awe-inspiring beauty around him, and brought the images back for us to share. The Hasselblad 500C cameras were used on this and the last Project Mercury mission in 1963. They continued to be used throughout the Gemini space flights in 1965 and 1966. Since then, a number of different camera models have been put to use, but the images taken with the boxy, black Hasselblads have remained true classics. Noted for the amazing sharpness of the photos, the Hasselblads stood up to the rigors of operating in space, facing temperatures from -65 °C to over 120 °C in the sun. Many shots have become historic treasures: the first spacewalk during the Gemini IV mission in 1965; the first venture to another celestial body during Apollo VIII, including the iconic “Earthrise” photograph; and the first landing on the surface of the Moon during Apollo XI. These pictures were published around the world, and have become some of the most recognizable and powerful photographs known. Several different models of Hasselblad cameras have been taken into space, often modified in one way or another to ease use in cramped conditions and while wearing space suits, such as replacing the reflex mirror with an eye-level finder. Partnership Victor Hasselblad AB, of Gothenburg, Sweden, has enjoyed a very long-lived collaboration with NASA. Working primarily with Johnson, the last four decades have seen a frequent exchange of ideas between Hasselblad and NASA via faxes, telephone calls, and meetings both in Sweden and the United States. Initially, most meetings were held at Hasselblad headquarters in Gothenburg, to be as close to the core activities as possible. Since then, collaboration with NASA has allowed what was once a very small company in international terms to achieve worldwide recognition. Hasselblad’s operations now include centers in Parsippany, New Jersey; and Redmond, Washington; as well as France and Denmark. One direct development of this partnership, the 553ELS, is the space version of the 553ELX model, available commercially for years. This camera has adopted several key features and improvements, such as: the fixation of the mirror mechanism was moved from the rear plate to the side walls; aluminum plating replaced the standard black leatherette as the outer covering; the standard 5-pole contact was replaced by a special 7-pole contact equipped with a bayonet locking device; and the battery cover was equipped with a hinge. These changes resulted in increased durability and reliability, and the ELS model has seen frequent use in the shuttle program. Hasselblad incorporated and refined other modifications by NASA technicians into new models, such as a 70mm magazine developed to meet Space Program needs. Camera modifications included new materials and lubricants to cope with the vacuum conditions outside the spacecraft, and often improved reliability and durability of the cameras.
In addition, technicians modified camera electronics to meet NASA’s special demands for handling and function, reconstructing lenses and adding large tabs to the focusing and aperture rings to ease handling with the large gloves of an astronaut suit in zero gravity. Product Outcome For over four decades, Hasselblad has supplied camera equipment to the NASA Space Program, and Hasselblad cameras still take on average between 1,500 and 2,000 photographs on each space shuttle mission. Just as the remarkable pictures on the surface of the Moon defined an era, the fine pictures of astronauts at work in and around the shuttles and International Space Station (ISS) have helped define the latest era of man’s continued exploration of the universe around us. Likewise, the commercial line of Hasselblad cameras continues to incorporate lessons learned from these voyages. Consumer models have enjoyed such refinements as the revised fixation of the mirror mechanism—the Hasselblad 503CW still features the space-influenced improved mirror mechanism—a design change that gave far better stability for the mirror assembly, and an enlarged exposure button, similar to the one designed for the space models. In October 2001, the Space Shuttle Discovery, in addition to transporting modules to the ISS, carried a new Hasselblad space camera: a focal-plane shutter camera based on the standard commercial version (203FE) equipped with data imprinting along the edge of the film frame, enabling the recording of time and picture number for each exposure. Since the computers onboard have full control over the position of the shuttle, identification of the exact location captured in a frame has become much easier. Now that NASA is returning to the Moon and is also looking on to Mars for the next stage of exploration, it is without doubt that Hasselblad cameras will be along to document the voyages for those of us remaining on Earth. The relationship that began in a camera shop in Houston, blossomed on the Moon, and matured on the space shuttle, now prepares to reach new heights. As one more small step for a man and giant leap for mankind approaches, we anxiously await the photographs. Originating Technology/NASA Contribution NASA uses 3-D immersive photography and video for guiding space robots, in the space shuttle and International Space Station programs, cryogenic wind tunnels, and for remote docking of spacecraft. It allows researchers to view situations with the same spatial awareness they would have if they were present. With this type of photography, viewers virtually enter the panoramic image and can interact with the environment by panning, looking in different directions, and zooming in on anything in the 360-degree field of view that is of interest. As the perspective changes, the viewer feels as if he or she is actually looking around the scene, which enhances situational awareness and provides a high level of functionality for viewing, capturing, and analyzing visual data. Partnership A Small Business Innovation Research (SBIR) contract through Langley Research Center helped Interactive Pictures Corporation (IPC), of Knoxville, Tennessee, create an innovative imaging technology. This technology is a video imaging process that allows real-time control of live video data and can provide users with interactive, panoramic 360° views. 
In 1993, the year that the first IPIX camera entered the market, it also received an “R&D 100” award, a prestigious honor given by R&D magazine for significant contributions to the scientific community. The camera system can see in multiple directions, provide up to four simultaneous views, each with its own tilt, rotation, and magnification, yet it has no moving parts, is noiseless, and can respond faster than the human eye. In addition, it eliminates the distortion caused by a fisheye lens, and provides a clear, flat view of each perspective. In 1995, an inventor named Ford Oxaal showed the company a technology he had developed which gives users the ability to combine two or more images, whether fisheye or rectilinear, into a single, navigable spherical image. Oxaal convinced IPIX to commercialize this useful showcasing technology, and combined with the advent of the World Wide Web, IPIX was able to execute a successful initial public offering. The company has changed names at several points along the way. It started out as Telerobotics International, but changed its name to Omniview in 1995 after Oxaal showed his spherical media technology. In 1998, it became Interactive Pictures Corporation, and then later, Internet Pictures Corporation, and finally, IPIX Corporation. In 2007, Minds-Eye-View Inc., founded by Oxaal in 1989 and based in Cohoes, New York, purchased most of the operating assets of IPIX and is now in the process of taking the company and the technology to the entertainment industry. Oxaal is currently president and CEO. Applications now include what Oxaal calls “homeland reconnaissance,” wherein critical infrastructure and public facilities are documented with spherical media; military reconnaissance; real estate and product showcasing; security and surveillance; and soon, interactive Webcasts. Product Outcome Through the NASA SBIR work, IPIX has created two 3-D immersive photography suites: a still image program and a video complement. The IPIX package is a convenient and powerful documentation and site management tool. It is compatible with many off-the-shelf digital cameras and the final pictures are viewable in any immersive viewing formats, giving users a handful of benefits, including ease of use and the ability to capture and save an entire spherical environment with just two shots. The two images are fused together with no discernable seam, and the viewer can navigate throughout the picture from a fixed location. This is particularly helpful for virtual tours and has been widely embraced by the real estate, hotel, and automobile industries. IPIX’s immersive video suite also offers many benefits. Users can count on immersive video to capture and save digital representations of entire environments, while providing multiple simultaneous views with a single camera and no moving parts. From within the immersive video view, users can electronically pan, tilt, and zoom, while the camera remains motionless. The system also provides wide, complete coverage, with no blind spots, and the files can be transmitted efficiently over networks, even over existing, commercial IP-based platforms. Both of these camera systems can be employed in virtually any situation where immersive views are needed. 
They have been used in casinos, airports, rail systems, parking garages, schools, banks, stores, gas stations, automobile dealerships, amusement parks, hotels, homes for sale or rent, cruise ships, warehouses, power plants, incarceration facilities, theaters, stadiums, shopping centers, military facilities, government centers, assisted living centers, hospitals, gated communities, multi-tenant complexes, manufacturing plants, museums, hospitals, office buildings, colleges and universities, courts, and convention centers, to name just a few. Potential applications, however, are limitless. In 2004, IPIX security cameras were chosen for surveillance of the 2004 Democratic National Convention in Boston and the 2004 Republican National Convention in New York. That same year, the technology was used for surveillance at the 30th G8 Summit at Sea Island, Georgia, and during the President’s second inaugural parade in Washington, DC. More recently, the technology has been used to secure everything from the CircusCircus Las Vegas Hotel and Casino to Meade High School at Fort George G. Meade, Maryland, to the Mt. Pleasant, Illinois, City Hall. The technology isn’t only applicable to safety and surveillance uses, though. It is a popular complement to real estate and hotel Web sites, where visitors can take virtual tours of properties online. Originating Technology/NASA Contribution A space shuttle and a competitive swimmer have a lot more in common than people might realize: Among other forces, both have to contend with the slowing influence of drag. NASA’s Aeronautics Research Mission Directorate focuses primarily on improving flight efficiency and generally on fluid dynamics, especially the forces of pressure and viscous drag, which are the same for bodies moving through air as for bodies moving through water. Viscous drag is the force of friction that slows down a moving object through a substance, like air or water. NASA uses wind tunnels for fluid dynamics research, studying the forces of friction in gasses and liquids. Pressure forces, according to Langley Research Center’s Stephen Wilkinson, “dictate the optimal shape and performance of an airplane or other aero/hydro-dynamic body.” In both high-speed flight and swimming, says Wilkinson, a thin boundary layer of reduced velocity fluid surrounds the moving body; this layer is about 2 centimeters thick for a swimmer. Partnership In spite of some initial skepticism, Los Angeles-based SpeedoUSA asked NASA to help design a swimsuit with reduced drag, shortly after the 2004 Olympics. According to Stuart Isaac, senior vice president of Team Sales and Sports Marketing, “People would look at us and say ‘this isn’t rocket science’ and we began to think, ‘well, actually, maybe it is.’” While most people would not associate space travel with swimwear, rocket science is exactly what SpeedoUSA decided to try. The manufacturer sought a partnership with NASA because of the Agency’s expertise in the field of fluid dynamics and in the area of combating drag. A 2004 computational fluid dynamics study conducted by Speedo’s Aqualab research and development unit determined that the viscous drag on a swimmer is about 25 percent of the total retarding force. In competitive swimming, where every hundredth of a second counts, the best possible reduction in drag is crucially important. 
Researchers began flat plate testing of fabrics, using a small wind tunnel developed for earlier research on low-speed viscous drag reduction, and Wilkinson collaborated over the next few years with Speedo’s Aqualab to design what Speedo now considers the most efficient swimsuit yet: the LZR Racer. Surface drag testing was performed with the help of Langley, and additional water flume testing and computational fluid dynamics were performed with guidance from the University of Otago (New Zealand) and ANSYS Inc., a computer-aided engineering firm. “Speedo had the materials in mind [for the LZR Racer],” explains Isaac, “but we did not know how they would perform in surface friction drag testing, which is where we enlisted the help of NASA.” The manufacturer says the fabric, which Speedo calls LZR Pulse, is not only efficient at reducing drag, but it also repels water and is extremely lightweight. Speedo tested about 100 materials and material coatings before settling on LZR Pulse. NASA and Speedo performed tests on traditionally sewn seams, ultrasonically welded seams, and the fabric alone, which gave Speedo a baseline for reducing drag caused by seams and helped them identify problem areas. NASA wind tunnel results helped Speedo “create a bonding system that eliminates seams and reduces drag,” according to Isaac. The Speedo LZR Racer is the first fully bonded, full-body swimsuit with ultrasonically welded seams. Instead of sewing overlapping pieces of fabric together, Speedo actually fused the edges ultrasonically, reducing drag by 6 percent. “The ultrasonically welded seams have just slightly more drag than the fabric alone,” Isaac explains. NASA results also showed that a low-profile zipper ultrasonically bonded (not sewn) into the fabric and hidden inside the suit generated 8 percent less drag in wind tunnel tests than a standard zipper. Low-profile seams and zippers were a crucial component in the LZR Racer because the suit consists of multiple connecting fabric pieces—instead of just a few sewn pieces such as found in traditional suits—that provide extra compression for maximum efficiency. Product Outcome The LZR Racer reduces skin friction drag 24 percent more than the Fastskin, the previous Speedo racing suit fabric; and according to the manufacturer, the LZR Racer uses a Hydro Form Compression System to grip the body like a corset. Speedo experts say this compression helps the swimmers maintain the best form possible and enables them to swim longer and faster since they are using less energy to maintain form. The compression alone improves efficiency up to 5 percent, according to the manufacturer. Olympic swimmer Katie Hoff, one of the American athletes wearing the suit in 2008 competitions, said that the tight suit helps a swimmer move more quickly through the water, because it “compresses [the] whole body so that [it’s] really streamlined.” Athletes from the French, Australian, and British Olympic teams all participated in testing the new Speedo racing suits. Similar in style to a wetsuit, the LZR Racer can cover all or part of the legs, depending on personal preference and event. A swimmer can choose a full-body suit that covers the entire torso and extends to the ankles, or can opt for a suit with shorter legs above the knees. The more skin the LZR Racer covers, the more potential it has to reduce skin friction drag. The research seems to have paid off; in March 2008, athletes wearing the LZR Racer broke 13 world records. 
Speedo®, LZR Pulse®, LZR Racer®, and FastSkin® are registered trademarks of Speedo Holdings B.V. The U.S. Government does not endorse any commercial product, process, or activity identified on this web site.
# Question #9628d

Apr 10, 2016

Combining the relationships of each of the other gas laws, $\left(\frac{PV}{T}\right)_{1} = \left(\frac{PV}{T}\right)_{2}$ is applicable in all situations.
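The answer above only states the combined gas law, so a short worked example may help; the numbers here are invented for illustration and are not part of the original answer. For two states of the same gas sample,

$$\frac{P_1 V_1}{T_1} = \frac{P_2 V_2}{T_2} \quad\Longrightarrow\quad V_2 = V_1\cdot\frac{P_1}{P_2}\cdot\frac{T_2}{T_1}.$$

For instance, a sample occupying $V_1 = 2.0\ \text{L}$ at $P_1 = 1.0\ \text{atm}$ and $T_1 = 300\ \text{K}$, compressed to $P_2 = 2.0\ \text{atm}$ while warmed to $T_2 = 330\ \text{K}$, ends up at $V_2 = 2.0 \times \tfrac{1.0}{2.0} \times \tfrac{330}{300} = 1.1\ \text{L}$.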
# Determine the area bounded by the curves $f(x)=2x^3-3x^2+9x$ and $g(x)=x^3-2x^2-3x$

Determine the area bounded by the curves: $$f(x)=2x^3-3x^2+9x \\ g(x)=x^3-2x^2-3x$$ The correct answer is 25. How can I find it?

- Could you clarify the question? We usually talk about area of regions, not of functions. Are you looking for the area of some region that is bounded by the graphs of those functions? – MJD May 30 '12 at 18:36
- Sorry, the correct question is: determine the area bounded by the curves f(x)=2x^3−3x^2+9x and g(x)=x^3−2x^2−3x – Alfredo May 30 '12 at 18:40
- @Alfredo please edit your question accordingly – Belgi May 30 '12 at 18:40
- The correct answer is 25. But how can I find it? – Alfredo May 30 '12 at 18:44
- You need more information. $f$ and $g$ do not enclose any bounded area. – copper.hat May 30 '12 at 18:46

Assuming that the limits of the interval $a<b \in\mathbb{R}$ are given, the fundamental theorem of calculus says that the "area" below the curve $f(x)$ between $a$ and $b$ equals $F(b)-F(a)$, where $F'(x)=f(x)$ for all $x\in[a,b]$ (in case such an $F$ exists). Gladly, our $f$ and $g$ are polynomials, so it is very easy to find an anti-derivative (an indefinite integral) for each. So, we can find the anti-derivatives of $f(x)$ and $g(x)$ and evaluate the difference at $a$ and $b$. If we let $S$ be the area below $f(x)$ and above $g(x)$, we need to calculate the area below $f(x)$ and above the $X$ axis minus the area below $g(x)$ and above the $X$ axis: $$S = F(x)-G(x)\Big|_{x=a}^{x=b}$$ Notice that $\int \big(f(x)+g(x)\big) =\int f(x)+ \int g(x)$, so you can integrate each monomial by itself. Now, for $a\in\mathbb{R}$, $(a\cdot x^n)' = a\cdot n \cdot x^{n-1} \Rightarrow \int a\cdot x^{n-1} = a\,\frac{x^n}{n}$. – Amihai Zivan May 30 '12 at 19:41
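A quick check of copper.hat's point (added here; it is not part of the original thread): the two cubics intersect only once, so by themselves they cannot enclose a bounded region. Indeed,

$$f(x)-g(x) = (2x^3-3x^2+9x)-(x^3-2x^2-3x) = x^3-x^2+12x = x\left(x^2-x+12\right),$$

and the quadratic factor has discriminant $1-48<0$, so the only real intersection is at $x=0$. Any finite answer such as 25 must therefore rely on additional information, for example explicitly given limits of integration.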
## sasogeek

prove by induction that $$\large 1 \times 2 + 2 \times 3 + \cdots + n(n+1) = \frac{1}{3}n(n+1)(n+2)$$

1. Mimi_x3: well, where are you stuck? i assume you should know the steps: n=k, then n=k+1
2. Mimi_x3: Assume $$n=k$$ is true: $1\cdot2+2\cdot3+\cdots+k(k+1) = \frac{k}{3}(k+1)(k+2)$. Prove $$n=k+1$$: $1\cdot2+2\cdot3+\cdots+k(k+1)+(k+1)(k+2) = \frac{(k+1)(k+2)(k+3)}{3}$, i.e. $\frac{k(k+1)(k+2)}{3} + (k+1)(k+2) = \frac{(k+1)(k+2)(k+3)}{3}$. Now, you can prove RHS = LHS.
3. sasogeek: ok i understand and have worked up to the point $$\large 1\cdot 2 + 2\cdot 3 + \cdots + (k+1)(k+2) = 1\cdot 2 + 2\cdot 3 + \cdots + k(k+1) + (k+1)(k+2)$$ $$\large = \frac{1}{3}k(k+1)(k+2)+(k+1)(k+2)$$ then what?
4. for the left side: k(k+1)(k+2)/3 + (k+1)(k+2) = (k+1)(k+2)(k/3 + 1) = (k+1)(k+2)(k+3)/3, the same as the right side
5. Mimi_x3: Well, you prove as I said above. Prove the LHS = RHS: $(k+1)(k+2)\left[ \frac{k}{3}+1\right]$. It should be straightforward now.
6. Mimi_x3: Just some algebra and you're done!
7. sasogeek: it's still blurry but i'll try to get it in a bit :)
8. Mimi_x3: Well, where are you stuck?
9. Mimi_x3: All you have to do here is prove that the LHS, that is $\frac{k(k+1)(k+2)}{3} + (k+1)(k+2)$, is equal to the RHS, $\frac{(k+1)(k+2)(k+3)}{3}$.
10. sasogeek: ohh, thanks :) normal algebra takes off from there i see :) makes sense now xD
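Spelling out the last bit of algebra for completeness (this step is added here; it is not a post from the original thread):

$$\frac{k(k+1)(k+2)}{3} + (k+1)(k+2) = (k+1)(k+2)\left(\frac{k}{3}+1\right) = \frac{(k+1)(k+2)(k+3)}{3},$$

which is the claimed formula with $n=k+1$. Together with the base case $1\cdot 2 = \tfrac{1}{3}\cdot 1\cdot 2\cdot 3 = 2$, the induction is complete.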
I. Marx vs. Smith and food banks When Heinz produces too many Bagel Bites, or Kellogg produces too many Pop-Tarts, or whatever, these mammoth food-processing companies can donate their surplus food to Feeding America, a national food bank. Feeding America then distributes these corporate donations to local food banks throughout the country. What’s the economically optimal way to allocate the donations across the country? Option one is what you might call “full communism.” Under full communism, Feeding America collects the food donations and then top-down tells individual food banks what endowments they will be receiving, based on Feeding America’s own calculation of which food banks need what. Prior to 2005, this was indeed what occurred: food was distributed by centralized assignment. Full communism! The problem was one of distributed versus centralized knowledge. While Feeding America had very good knowledge of poverty rates around the country, and thus could measure need in different areas, it was not as good at dealing with idiosyncratic local issues. Food banks in Idaho don’t need a truckload of potatoes, for example, and Feeding America might fail to take this into account. Or maybe the Chicago regional food bank just this week received a large direct donation of peanut butter from a local food drive, and then Feeding America comes along and says that it has two tons of peanut butter that it is sending to Chicago. To an economist, this problem screams of the Hayekian knowledge problem. Even a benevolent central planner will be hard-pressed to efficiently allocate resources in a society since it is simply too difficult for a centralized system to collect information on all local variation in needs, preferences, and abilities. This knowledge problem leads to option two: market capitalism. Unlike poorly informed central planners, the decentralized price system – i.e., the free market – can (often but not always) do an extremely good job of aggregating local information to efficiently allocate scarce resources. This result is known as the First Welfare Theorem. Such a system was created for Feeding America with the help of four Chicago Booth economists in 2005. Instead of centralized allocation, food banks were given fake money – with needier food banks being given more – and allowed to bid for different types of food in online auctions. Prices are thus determined by supply and demand. At midnight each day all of the (fake) money spent that day is redistributed, according to the same formula as the initial allocation. Accordingly, any food bank which does not bid today will have more money to bid with tomorrow. Under this system, the Chicago food bank does not have to bid on peanut butter if it has just received a large peanut butter donation from another source. The Idaho food bank, in turn, can skip on bidding for potatoes and bid for extra peanut butter at a lower price. It’s win-win-win. By all accounts, the system has worked brilliantly. Food banks are happier with their allocations; donations have gone up as donors have more confidence that their donations will actually be used. Chalk one up for economic theory. II. MV=PY, information frictions, and food banks This is all pretty neat, but here’s the really interesting question: what is optimal monetary policy for the food bank economy? Remember that food banks are bidding for peanut butter or cereal or mini pizzas with units of fake money. 
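To make the nightly recycling rule concrete, here is a toy sketch in R (added here, with made-up numbers; it is not from the original post): every unit of scrip spent during the day is handed back out according to the same need-based weights used for the initial allocation, so a bank that sits out today can bid more tomorrow.

```r
# Toy sketch of the nightly scrip recycling described above (illustrative numbers only).
need_weight <- c(chicago = 0.5, idaho = 0.2, small_bank = 0.3)  # fixed need-based shares
balance     <- c(chicago = 500, idaho = 200, small_bank = 300)  # current fake-money balances
spent_today <- c(chicago = 120, idaho = 0,   small_bank = 80)   # winning bids paid today

# Money leaves the winners' accounts, then the whole spent pool is redistributed
# by the need weights; the total stock of scrip stays constant.
balance_tomorrow <- balance - spent_today + sum(spent_today) * need_weight
balance_tomorrow
#>    chicago      idaho small_bank
#>        480        240        280
```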
Feeding America has to decide if and how the fake money supply should grow over time, and how to allocate new units of fake money. That’s monetary policy! Here’s the problem for Feeding America when thinking about optimal monetary policy. Feeding America wants to ensure that changes in prices are informative for food banks when they bid. In the words of one of the Booth economists who helped design the system: “Suppose I am a small food bank; I really want a truckload of cereal. I haven’t bid on cereal for, like, a year and a half, so I’m not really sure I should be paying for it. But what you can do on the website, you basically click a link and when you click that link it says: This is what the history of prices is for cereal over the last 5 years. And what we wanted to do is set up a system whereby by observing that history of prices, it gave you a reasonable instinct for what you should be bidding.” That is, food banks face information frictions: individual food banks are not completely aware of economic conditions and only occasionally update their knowledge of the state of the world. This is because obtaining such information is time-consuming and costly. Relating this to our question of optimal monetary policy for the food bank economy: How should the fake money supply be set, taking into consideration this friction? Obviously, if Feeding America were to randomly double the supply of (fake) money, then all prices would double, and this would be confusing for food banks. A food bank might go online to bid for peanut butter, see that the price has doubled, and mistakenly think that demand specifically for peanut butter has surged. This “monetary misperception” would distort decision making: the food bank wants peanut butter, but might bid for a cheaper good like chicken noodle soup, thinking that peanut butter is really scarce at the moment. Clearly, random variation in the money supply is not a good idea. More generally, how should Feeding America set the money supply? One natural idea is to copy what real-world central banks do: target inflation. The Fed targets something like 2% inflation. But, if the price of a box of pasta and other foods were to rise 2% per year, that might be confusing for food banks, so let’s suppose a 0% inflation target instead. It turns out inflation targeting is not a good idea! In the presence of the information frictions described above, inflation targeting will only sow confusion. Here’s why. As I go through this, keep in the back of your mind: if households and firms in the real-world macroeconomy face similar information frictions, then – and this is the punchline of this entire post – perhaps inflation targeting is a bad idea in the real world as well. III. Monetary misperceptions I demonstrate the following argument rigorously in a formal mathematical model in a paper, “Monetary Misperceptions: Optimal Monetary Policy under Incomplete Information,” using a microfounded Lucas Islands model. The intuition for why inflation targeting is problematic is as follows. Suppose the total quantity of all donations doubles. You’re a food bank and go to bid on cheerios, and find that there are twice as many boxes of cheerios available today as yesterday. You’re going to want to bid at a price something like half as much as yesterday. Every other food bank looking at every other item will have the same thought. Aggregate inflation thus would be something like -50%, as all prices would drop by half. 
As a result, under inflation targeting, the money supply would simultaneously have to double to keep inflation at zero. But this would be confusing: Seeing the quantity of cheerios double but the price remain the same, you won’t be able to tell if the price has remained the same because (a) the central bank has doubled the money supply, or (b) demand specifically for cheerios has jumped up quite a bit. It’s a signal extraction problem, and rationally you’re going to put some weight on both of these possibilities. However, only the first possibility actually occurred. This problem leads to all sorts of monetary misperceptions, as money supply growth creates confusion, hence the title of my paper. Inflation targeting, in this case, is very suboptimal. Price level variation provides useful information to agents.

IV. Optimal monetary policy
As I work out formally in the paper, optimal policy is instead something close to a nominal income (NGDP) target. Under log utility, it is exactly a nominal income target. (I’ve written about nominal income targeting before more critically here.) Nominal income targeting in this case means that the money supply should not respond to aggregate supply shocks. In the context of our food banks, this result means that the money supply should not be altered in response to an increase or decrease in aggregate donations. Instead, if the total quantity of all donations doubles, then the price level should be allowed to fall by (roughly) half. This policy prevents the confusion described above. Restating, the intuition is this. Under optimal policy, the aggregate price level acts as a coordination mechanism, analogous to the way that relative prices convey useful information to agents about the relative scarcity of different goods. When total donations double, the aggregate price level signals that aggregate output is less scarce by halving. It turns out that nominal income targeting is only exactly optimal (as opposed to approximately optimal) under some special conditions. I’ll save that discussion for another post though. Feeding America, by the way, does not target constant inflation. They instead target “zero inflation for a given good if demand and supply conditions are unchanged.” This alternative is a move in the direction of a nominal income target.

V. Real-world macroeconomic implications
I want to claim that the information frictions facing food banks also apply to the real economy, and as a result, the Federal Reserve and other central banks should consider adopting a nominal income target. Let me tell a story to illustrate the point. Consider the owner of an isolated bakery. Suppose one day, all of the customers seen by the baker spend twice as much money as the customers from the day before. The baker has two options. She can interpret this increased demand as customers having come to appreciate the superior quality of her baked goods, and thus increase her production to match the new demand. Alternatively, she could interpret this increased spending as evidence that there is simply more money in the economy as a whole, and that she should merely increase her prices proportionally to account for inflation. Economic agents confounding these two effects is the source of economic booms and busts, according to this model. This is exactly analogous to the problem faced by food banks trying to decide how much to bid at auction.
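The inference problem can be put in miniature. Here is a toy sketch in the spirit of the Lucas Islands setup (the variances are made up for illustration, not taken from the paper): under normality, the best guess of the good-specific component of a price move is a variance-weighted fraction of what you observe, so the noisier the money supply, the less informative any individual price.

# An observed price change is the sum of two unobserved shocks:
# m, aggregate (money-driven) inflation, and z, a good-specific demand shift.
def perceived_relative_demand(observed_price_change, var_m, var_z):
    weight = var_z / (var_m + var_z)   # share of the move attributed to real scarcity
    return weight * observed_price_change

# Peanut butter is 10% more expensive. If money-driven and good-specific moves
# are historically equally volatile, only half the jump is read as real scarcity.
print(perceived_relative_demand(0.10, var_m=0.01, var_z=0.01))    # 0.05

# With a predictable money supply, the same price move is read almost
# entirely as a genuine scarcity signal -- prices stay informative.
print(perceived_relative_demand(0.10, var_m=0.0001, var_z=0.01))  # ~0.099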
To the extent that these frictions are quantitatively important in the real world, central banks like the Fed and ECB should consider moving away from their inflation targeting regimes and toward something like a nominal income target, as Feeding America has.

VI. Summing up
Nominal income targeting has recently enjoyed a surge in popularity among academic monetary economists, so the fact that this result aligns with that intuition is pretty interesting. To sum up, I’ll use a metaphor from Selgin (1997). Consider listening to a symphony on the radio. Randomly turning the volume knob up and down merely detracts from the musical performance (random variation in the price level is not useful). But, the changing volume of the orchestra players themselves, from quieter to louder and back down again, is an integral part of the performance (the price level should adjust with natural variations in the supply of food donations). The changing volume of the orchestra should not be smoothed out to maintain a constant volume (constant inflation is not optimal). Central banks may want to consider allowing the orchestra to do its job, and reconsider inflation targeting as a strategy.

Behavioral economists have a concept called loss aversion. It’s almost always described something like this: “Loss aversion implies that one who loses $100 will lose more satisfaction than another person will gain satisfaction from a $100 windfall.” – Wikipedia, as of December 2015
Sounds eminently reasonable, right? Some might say so reasonable, in fact, that it’s crazy that those darn neoclassical economists don’t incorporate such an obvious, fundamental fact about human nature in their models. It is crazy – because it’s not true! The pop definition of loss aversion given above – that ‘losses hurt more than equivalent size gains’ – is precisely the concept of diminishing marginal utility (DMU) that is boringly standard in price theory. Loss aversion is, in fact, a distinct and (perhaps) useful concept. But somewhat obnoxiously, behavioral economists, particularly in their popular writings, have a tendency to conflate it with DMU in a way that makes the concept seem far more intuitive than it is, and in the process wrongly makes standard price theory look bad. I’m not just cherry-picking a bad Wikipedia edit. I name names at the bottom of this post, listing where behavioral economists – Thaler, Kahneman, Sunstein, Dubner, etc. – have (often!) given the same misleading definition. It’s wrong! Loss aversion is about reference dependence. To restate, what I’m claiming is this:
1. Behavioral economists use an incorrect definition of loss aversion when writing for popular audiences
2. This incorrect definition is in fact the property of DMU that is assumed in all of neoclassical economics
3. DMU is much more intuitive than the real definition of loss aversion, and so by using a false definition of loss aversion behavioral economists make neoclassical economics look unnecessarily bad and behavioral economics look misleadingly good
Let me walk through the difference between DMU and loss aversion painstakingly slowly:

Diminishing marginal utility
“Diminishing marginal utility” is the idea that the more you have of something, the less you get out of having a little bit more of it. For example: If you own nothing but $1,000 and the clothes on your back, and I then give you $100,000, that is going to give you a heck of a lot more extra happiness than if you had $100 million and I gave you $100,000.
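As a quick numerical check on that intuition, take log utility (my choice here purely for illustration; any concave utility function makes the same point):

from math import log

def utility(wealth):
    return log(wealth)   # concave: each extra dollar adds less utility than the last

# The same $100,000 windfall at two very different starting wealths
print(utility(101_000) - utility(1_000))            # ~4.62: life-changing
print(utility(100_100_000) - utility(100_000_000))  # ~0.001: barely noticeable

# The corollary drawn next: starting from 10k, an 8k loss hurts
# more than an 8k gain helps.
print(utility(10_000) - utility(2_000))    # ~1.61
print(utility(18_000) - utility(10_000))   # ~0.59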
An important corollary follows immediately from this: losses hurt more than gains! I made a super high quality illustration to depict this: What we have here is a graph of your utility as a function of your wealth under extremely standard (i.e., non-behavioral) assumptions. The fact that the line flattens out as you get to higher wealth levels is the property of DMU. We can also see that equivalently sized losses hurt more than gains. As you go from 10k wealth to 2k wealth (middle green line to bottom green line), your utility falls by more than the amount your utility rises if you go from 10k wealth to 18k wealth (middle green to top green lines), despite the change in wealth being the same 8k in both directions. Standard economics will always assume DMU, thus capturing exactly the intuition of the idea described in the above Wikipedia definition of loss aversion. More mathematically – and I’m going to breeze through this – if your utility is purely a function of your wealth, Utility=U(W), then we assume that U'(W)>0 but U''(W)<0, i.e. your utility function is concave. With these assumptions, the result that U(W+ε)-U(W) < U(W)-U(W-ε) follows from taking a Taylor expansion. See proof attached below.

Loss aversion
Loss aversion is a consequence of reference dependence and is an entirely different beast. The mathematical formulation was first made in Tversky and Kahneman (1991). In words, loss aversion says this: Suppose you have nothing but the clothes you’re wearing and $10,000 in your pocket, and then another $10,000 appears in your pocket out of nowhere. Your level of utility/happiness will now be some quantity given your wealth of $20,000. Now consider a situation where you only own your clothes and the $30,000 in your pocket. Suppose suddenly $10,000 in your pocket disappears. Your total wealth is $20,000 – that is, exactly the same as the prior situation. Loss aversion predicts that in this situation, your level of utility will be lower than in the first situation, despite the fact that in both situations your wealth is exactly $20,000, because you lost money to get there. Perhaps this concept of loss aversion is reasonable in some situations. It doesn’t seem crazy to think that people don’t like to lose things they had before. But this concept is entirely different from the idea that ‘people dislike losses more than they like gains’ which sloppy behavioral economists go around blathering about. It’s about reference dependence! Your utility depends on your reference point: did you start with higher or lower wealth than you currently have? In their academic papers, behavioral economists are very clear on the distinction. The use of math in formal economic models imposes precision. But when writing for a popular audience in the less-precise language of English – see below for examples – the same economists slip into using an incorrect definition of loss aversion.

Conclusion
So, please, don’t go around claiming that behavioral economists are incorporating some brilliant newfound insight that people hate losses more than they like gains. We’ve known about this in price theory since Alfred Marshall’s 1890 Principles of Economics.

Addendum
It’s kind of silly for me to write this post without naming names. Here we go:
1. Richard Thaler, one of the founding fathers of behavioral economics, in his 2015 bestseller, Misbehaving:
2. Richard Thaler, in the 2008 bestseller, Nudge:
3. Cass Sunstein (Oct. 2015), Harvard law and behavioral economics professor:
4.
Daniel Kahneman, Nobel Prize-winning behavioral economist, in his 2011 bestseller, Thinking Fast and Slow: 5. Stephen Dubner (Nov. 2005): 6. New York Times (Dec. 2013): 7.The Economist (Feb. 2015): I should note that Tversky and Kahneman in their original paper describing loss aversion are admirably clear in their usage of the concept: the title of their QJE paper is Loss Aversion in Riskless Choice: A Reference-Dependent Model, explicitly highlighting the notion of reference dependence. References Until very recently – see last month’s WSJ survey of economists – the FOMC was widely expected to raise the target federal funds rate this week at their September meeting. Whether or not the Fed should be raising rates is a question that has received much attention from a variety of angles. What I want to do in this post is answer that question from a very specific angle: the perspective of a New Keynesian economist. Why the New Keynesian perspective? There is certainly a lot to fault in the New Keynesian model (see e.g. Josh Hendrickson). However, the New Keynesian framework dominates the Fed and other central banks across the world. If we take the New Keynesian approach seriously, we can see what policymakers should be doing according to their own preferred framework. The punch line is that the Fed raising rates now is the exact opposite of what the New Keynesian model of a liquidity trap recommends. If you’re a New Keynesian, this is the critical moment in monetary policy. For New Keynesians, the zero lower bound can cause a recession, but need not result in a deep depression, as long as the central bank credibly promises to create an economic boom after the zero lower bound (ZLB) ceases to be binding. That promise of future growth is sufficient to prevent a depression. If the central bank instead promises to return to business as normal as soon as the ZLB stops binding, the result is a deep depression while the economy is trapped at the ZLB, like we saw in 2008 and continue to see in Europe today. The Fed appears poised to validate earlier expectations that it would indeed return to business as normal. If the New Keynesian model is accurate, this is extremely important. By not creating a boom today, the Fed is destroying any credibility it has for the next time we hit the ZLB (which will almost certainly occur during the next recession). It won’t credibly be able to promise to create a boom after the recession ends, since everyone will remember that it did not do so after the 2008 recession. The result, according to New Keynesian theory, will be another depression. I. The theory: an overview of the New Keynesian liquidity trap I have attached at the bottom of this post a reference sheet going into more detail on Eggertsson and Woodford (2003), the definitive paper on the New Keynesian liquidity trap. Here, I summarize at a high level –skip to section II if you are familiar with the model. A. The NK model without a ZLB Let’s start by sketching the standard NK model without a zero lower bound, and then see how including the ZLB changes optimal monetary policy. The basic canonical New Keynesian model of the economy has no zero lower bound on interest rates and thus no liquidity traps (in the NK context, a liquidity trap is defined as a period when the nominal interest rate is constrained at zero). Households earn income through labor and use that income to buy a variety of consumption goods and consume them to receive utility. 
Firms, which have some monopoly power, hire labor and sell goods to maximize their profits. Each period, a random selection of firms are not allowed to change their prices (Calvo price stickiness). With this setup, the optimal monetary policy is to have the central bank manipulate the nominal interest rate such that the real interest rate matches the “natural interest rate,” which is the interest rate which would prevail in the absence of economic frictions. The intuition is that by matching the actual interest rate to the “natural” one, the central bank causes the economy to behave as if there are no frictions, which is desirable. In our basic environment without a ZLB, a policy of targeting zero percent inflation via a Taylor rule for the interest rate exactly achieves the goal of matching the real rate to the natural rate. Thus optimal monetary policy results in no inflation, no recessions, and everyone’s the happiest that they could possibly be. B. The NK liquidity trap The New Keynesian model of a liquidity trap is exactly the same as the model described above, with one single additional equation: the nominal interest rate must always be greater than or equal to zero. This small change has significant consequences. Whereas before zero inflation targeting made everyone happy, now such a policy can cause a severe depression. The problem is that sometimes the interest rate should be less than zero, and the ZLB can prevent it from getting there. As in the canonical model without a ZLB, optimal monetary policy would still have the central bank match the real interest rate to the natural interest rate. Now that we have a zero lower bound, however, if the central bank targets zero inflation, then the real interest rate won’t be able to match the natural interest rate if the natural interest rate ever falls below zero! And that, in one run-on sentence, is the New Keynesian liquidity trap. Optimal policy is no longer zero inflation. The new optimal policy rule is considerably more complex and I refer you to the attached reference sheet for full details. But the essence of the idea is quite intuitive: If the economy ever gets stuck at the ZLB, the central bank must promise that as soon as the ZLB is no longer binding it will create inflation and an economic boom. The intuition behind this idea is that the promise of a future boom increases the inflation expectations of forward-looking households and firms. These increased inflation expectations reduce the real interest rate today. This in turn encourages consumption today, diminishing the depth of the recession today. All this effect today despite the fact that the boom won’t occur until perhaps far into the future! Expectations are important, indeed they are the essence of monetary policy. C. An illustration of optimal policy Eggertsson (2008) illustrates this principle nicely in the following simulation. Suppose the natural rate is below the ZLB for 15 quarters. The dashed line shows the response of the economy to a zero-inflation target, and the solid line the response to the optimal policy described above. Under optimal policy (solid line), we see in the first panel that the interest rate is kept at zero even after period 15 when the ZLB ceases to bind. As a result, we see in panels two and three that the depth of the recession is reduced to almost zero under policy; there is no massive deflation; and there’s a nice juicy boom after the liquidity trap ends. 
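Setting the simulation aside for a moment, the constraint driving all of this fits in a few lines of toy arithmetic – my own illustration, not the Eggertsson–Woodford model itself, and the 2% inflation promise at the end is just a placeholder:

def deliverable_real_rate(natural_rate, expected_inflation=0.0):
    # The bank would like to set the nominal rate so the real rate equals the
    # natural rate, but the nominal rate cannot go below zero.
    nominal = max(natural_rate + expected_inflation, 0.0)
    return nominal - expected_inflation

# Under a credible 0% inflation target, the real rate cannot follow a negative natural rate.
for natural in (0.03, 0.01, -0.02):
    real = deliverable_real_rate(natural)
    print(f"natural {natural:+.0%} -> deliverable real rate {real:+.0%} (gap {real - natural:+.0%})")

# The escape route in the text: promise inflation after the trap, which drags
# the real rate down today even with the nominal rate stuck at zero.
print(deliverable_real_rate(-0.02, expected_inflation=0.02))   # -2%, matching the natural rate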
In contrast, under the dashed line – which you can sort of think of as closer to the Fed’s current history-independent policy – there is deflation and economic disaster.

II. We’re leaving the liquidity trap; where’s our boom?
To be completely fair, we cannot yet say that the Fed has failed to follow its own model. We first must show that the ZLB only recently has ceased or will cease to be binding. Otherwise, a defender of the Fed could argue that the lower bound could have ceased to bind years ago, and the Fed has already held rates low for an extended period. The problem for showing this is that estimating the natural interest rate is extremely challenging, as famously argued by Milton Friedman (1968). That said, several different models using varied estimation methodologies all point to the economy still being on the cusp of the ZLB, and thus the thesis of this post: the Fed is acting in serious error. Consider, most tellingly, the New York Fed’s own model! The NY Fed's medium-scale DSGE model is at its core the exact same as the basic canonical NK model described above, with a lot of bells and whistles grafted on. The calibrated model takes in a whole jumble of data – real GDP, financial market prices, consumption, the kitchen sink, forecast inflation, etc. – and spits out economic forecasts. It can also tell us what it thinks the natural interest rate is. From the perspective of the New York Fed DSGE team, the economy is only just exiting the ZLB: Barsky et al. (2014) of the Chicago Fed perform a similar exercise with their own DSGE model and come to the same conclusion: Instead of using a microfounded DSGE model, John Williams and Thomas Laubach, president of the Federal Reserve Bank of San Francisco and director of monetary affairs of the Board of Governors respectively, use a reduced form model estimated using a Kalman filter. Their model shows that the natural rate is in fact still below its lower bound (in green): David Beckworth has a cruder but more transparent regression model here and also finds that the economy remains on the cusp of the ZLB (in blue): If anyone knows of any alternative estimates, I’d love to hear in the comments. With this fact established, we have worked through the entire argument. To summarize:
1. The Fed thinks about the world through a New Keynesian lens
2. The New Keynesian model of a liquidity trap says that to prevent a depression, the central bank must keep rates low even after the ZLB stops being binding, in order to create an economic boom
3. The economy is only just now coming off the ZLB
4. Therefore, a good New Keynesian should support keeping rates at zero.
5. So: why is the Fed about to raise rates?!

III. What’s the strongest possible counterargument?
I intend to conclude all future posts by considering the strongest possible counterarguments to my own. In this case, I see only two interesting critiques:

A. The NK model is junk
This argument is something I have a lot of sympathy for. Nonetheless, it is not a very useful point, for two reasons. First, the NK model is the preferred model of Fed economists. As mentioned in the introduction, this is a useful exercise as the Fed’s actions should be consistent with its method of thought. Or, its method of thought must change. Second, other models give fairly similar results. Consider the more monetarist model of Auerbach and Obstfeld (2005) where the central bank’s instrument is the money supply instead of the interest rate (I again attach my notes on the paper below).
Instead of prescribing that the Fed hold interest rates lower for longer as in Eggertsson and Woodford, Auerbach and Obstfeld’s cash-in-advance model shows that to defeat a liquidity trap the Fed should promise a one-time permanent level expansion of the money supply. That is, the expansion must not be temporary: the Fed must continue to be “expansionary” even after the ZLB has ceased to be binding by keeping the money supply expanded. This is not dissimilar in spirit to Eggertsson and Woodford’s recommendation that the Fed continue to be “expansionary” even after the ZLB ceases to bind by keeping the nominal rate at zero. B. The ZLB ceased to bind a long time ago The second possible argument against my above indictment of the Fed is the argument that the natural rate has long since crossed the ZLB threshold and therefore the FOMC has targeted a zero interest rate for a sufficiently long time. This is no doubt the strongest argument a New Keynesian Fed economist could make for raising rates now. That said, I am not convinced, partly because of the model estimations shown above. More convincing to me is the fact that we have not seen the boom that would accompany interest rates being below their natural rate. Inflation has been quite low and growth has certainly not boomed. Ideally we’d have some sort of market measure of the natural rate (e.g. a prediction market). As a bit of an aside, as David Beckworth forcefully argues, it’s a scandal that the Fed Board does not publish its own estimates of the natural rate. Such data would help settle this point. I’ll end things there. The New Keynesian model currently dominates macroeconomics, and its implications for whether or not the Fed should be raising rates in September are a resounding no. If you’re an economist who finds value in the New Keynesian perspective, I’d be extremely curious to hear why you support raising rates in September if you do – or, if not, why you’re not speaking up more loudly. References I comment on Josh Hendrickson's interesting post. While it certainly is hard for me to believe that the natural rate of interest could be negative, it's difficult to find a satisfying alternative explanation for the sustained output gap of the past seven years coexisting with the federal funds rate at the zero lower bound plus positive inflation. JP Koning makes the case that even if Greece were to leave the Eurozone and institute a new currency (call it the New Drachma), Athens would still not have independent monetary policy: if households and firms continue to post prices in Euros rather than New Drachmas, Greek monetary policy would not be able to affect the Greek economy. As JP explains: “Consider what happens if the euro remains the economy's preferred accounting unit, even as Greek drachmas begin to circulate as a medium of exchange. No matter how low the drachma exchange rate goes, there can be no drachma-induced improvement in competitiveness. After all, if olive oil producers accept payment in drachmas but continue to price their goods in euros, then a lower drachma will have no effect on Greek olive oil prices, the competitiveness of Greek oil vis-à-vis , say, Turkish oil, remaining unchanged. 
If a Greek computer programmer continues to price their services in euros, the number of drachmas required to hire him or her will have skyrocketed, but the programmer's euro price will have remained on par with a Finnish programmer's wage.” Thus, if the New Drachma is not adopted as the dominant unit of account, Greece would still be at the mercy of the ECB, and worse, now without any voice in ECB decision-making. I think this story is largely correct, but I want to throw out a counterpoint for discussion, which perhaps demonstrates that leaving the Eurozone could benefit Greece. Currency reform and rewriting of debt contracts One of the most important actions a government takes when it institutes a new currency or a currency reform is to legally redenominate all old contracts (issued under domestic law) in the new currency. In particular: debt therefore becomes automatically priced in the new currency. In American history, this occurred during Franklin Roosevelt’s 1933 “currency reform”, when the dollar was devalued relative to gold and gold clauses in existing contracts were invalidated. To quote from Amity Shlaes’ “The Forgotten Man: A New History of the Great Depression”: “Next Roosevelt set to work invalidating gold clauses in contracts. Since the previous century, gold clauses had been written into both government bond and private contracts between individual businessmen. The clauses committed signatories to paying not merely in dollars but in gold dollars. The boilerplate phrase was that the obligation would be “payable in principal and interest in United States gold coin of the present standard of value.” The phrase “the present standard” referred, or so many believed, to the moment at which the contract had been signed. The line also referred to gold, not paper, just as it said. This was a way of ensuring that, even if a government did inflate, an individual must still honor his original contract. Gold clause bonds had historically sold at a premium, which functioned as a kind of meter of people’s expectation of inflation. In order to fund World War I, for instance, Washington had resorted to gold clause bonds, backing Liberty Bonds sold to the public with gold. Now, in the spring of 1933, upon the orders of Roosevelt, the Treasury was making clear that it would cease to honor its own gold clauses. This also threw into jeopardy gold clauses in private contracts between individuals. The notion would be tested in the Supreme Court later; meanwhile, bond and contract holders had to accept the de facto devaluation of their assets. The deflation had hurt borrowers, and now this inflationary act was a primitive revenge. To end the gold clause was an act of social redistribution, a$200 billion transfer of wealth from creditor to debtor, a victory for the populists.” [Chapter 5] Unfortunately I can’t find a citation right now, but I believe Argentina did the same thing when it replaced the austral with the peso; and that this relabeling almost always occurs during currency reforms. Thus after a currency reform, the price of existing debt, at the very least, would be in the new currency. Debt: the most important nominal friction? And there’s a good argument to be made that the most important “sticky” price is the price of debt. Selgin’s “Less Than Zero”, Sheedy (2014), and Mian and Sufi's new book make this argument. Debt contracts are almost always both (a) fixed in nominal, not real, terms and (b) not contingent on aggregate economic conditions. 
In perfectly complete markets, on the other hand, we would expect debt contracts to be state-contingent. Contracts would be written in such a way that (perhaps by tracking an inflation index and some index of real economic conditions) if inflation or economic growth increases, borrowers would pay more back to their lender; and if inflation or economic growth went down, borrowers would pay less. Both borrowers and lenders would ex ante prefer this type of arrangement, but transaction costs make such contracts prohibitively expensive. For more intuition on this see Chapter III of Less Than Zero and the introduction to Sheedy’s paper. As for why this nominal friction may be more important than the traditional nominal frictions that economists worry about – that is, sticky prices and sticky wages – I would again suggest a look at Sheedy’s paper where he calibrates his model and finds that the central bank should care 90% about this nominal debt “stickiness” and 10% about traditional price stickiness. However, the relative importance of these two categories of frictions is very much still an open question. If non-state contingent debt is indeed the most important nominal friction, then perhaps if Greece were to rewrite existing debt contracts when instituting a New Drachma, the new Greek central bank would have enough control over the nominal economy to pull Greece out of its depression. (Of course, after the switch over to the New Drachma, Greek households and firms could – unless further legislation prevented them – write *new* contracts denominated in euros. JP’s Latin America pricing hysteresis example would seem to suggest that this is very possible.)

In short
To summarize, JP writes, “As long as a significant portion of Greek prices are expressed in euros, Greece’s monetary policy will continue to be decided in Frankfurt, not Athens.” While true, it is at least conceivable that a government-mandated relabeling of existing debt contracts (as has occurred historically during currency reforms) could ensure that debt prices, which are perhaps the most important prices, are no longer expressed in euros but instead in New Drachma.

Summary:
1. NGDP growth is equal to real GDP growth plus inflation. Thus, under NGDP targeting, if the potential real growth rate of the economy changes, then the full-employment inflation rate changes.
2. New Keynesians advocate that the Fed adjust the NGDP target one for one with changes in potential GDP. However, this rule would be extremely problematic for market monetarists.
3. Most importantly, it is simply not possible to estimate potential GDP in real time: an accurate structural model will never be built.
4. Further: such a policy would give the Fed huge amounts of discretion; unanchor long term expectations, especially under level targeting; and be especially problematic if technological growth rapidly accelerates as some predict.
I want to discuss a problem that I see with nominal GDP targeting: structural growth slowdowns. This problem isn’t exactly a novel insight, but it is an issue with which I think the market monetarist community has not grappled enough.

I. A hypothetical example
Remember that nominal GDP growth (in the limit) is equal to inflation plus real GDP growth. Consider a hypothetical economy where market monetarism has triumphed, and the Fed maintains a target path for NGDP growing annually at 5% (perhaps even with the help of an NGDP futures market).
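The arithmetic behind the whole critique is just this identity rearranged; a couple of lines make the hypothetical below (and the Japan example that follows it) concrete – the numbers are from the post, the code is mine:

def steady_state_inflation(ngdp_target, potential_growth):
    # NGDP growth = real growth + inflation, so steady-state inflation is
    # whatever is left over once potential growth is subtracted from the target.
    return ngdp_target - potential_growth

print(steady_state_inflation(0.05, 0.03))    # 2% inflation while potential growth is 3%
print(steady_state_inflation(0.05, 0.02))    # 3% inflation once potential slips to 2%
print(steady_state_inflation(0.095, 0.00))   # ~9.5% inflation: the Japan thought experiment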
The economy has been humming along at 3% RGDP growth, which is the potential growth rate, and 2% inflation for (say) a decade or two. Everything is hunky dory. But then – the potential growth rate of the economy drops to 2% due to structural (i.e., supply side) factors, and potential growth will be at this rate for the foreseeable future. Perhaps there has been a large drop in the birth rate, shrinking the labor force. Perhaps a newly elected government has just pushed through a smorgasbord of measures that reduce the incentive to work and to invest in capital. Perhaps, most plausibly (and worrisomely!) of all, the rate of innovation has simply dropped significantly. In this market monetarist fantasy world, the Fed maintains the 5% NGDP path. But maintaining 5% NGDP growth with potential real GDP growth at 2% means 3% steady state inflation! Not good. And we can imagine even more dramatic cases.

II. Historical examples
Skip this section if you’re convinced that the above scenario is plausible. Say a time machine transports Scott Sumner back to 1980 Tokyo: a chance to prevent Japan’s Lost Decade! Bank of Japan officials are quickly convinced to adopt an NGDP target of 9.5%, the rationale behind this specific number being that the average real growth in the 1960s and 70s was 7.5%, plus a 2% implicit inflation target. Thirty years later, trend real GDP in Japan is around 0.0%, by Sumner’s (offhand) estimation, and I don’t doubt it. Had the BOJ maintained the 9.5% NGDP target in this alternate timeline, Japan would be seeing something like 9.5% inflation today. Counterfactuals are hard: of course much else would have changed had the BOJ been implementing NGDPLT for over 30 years, perhaps including the trend rate of growth. But to a first approximation, the inflation rate would certainly be approaching 10%. Or, take China today. China saw five years of double-digit real growth in the mid-2000s, and not because the economy was overheating. I.e., the 12.5% and 14% growth in real incomes in China in 2006 and 2007 were representative of the true structural growth rate of the Chinese economy at the time. To be conservative, consider the 9.4% growth rate average over the decade, which includes the meltdown in 2008-9 and a slowdown in the earlier part of the decade. Today, growth is close to 7%, and before the decade is up it very well could have a 5 handle. If the People’s Bank had adopted NGDP targeting at the start of the millennium with a 9.4% real growth rate in mind, inflation in China today would be more than 2 percentage points higher than what the PBOC desired when it first set the NGDP target! That’s not at all trivial, and would only become a more severe issue as the Chinese economy finishes converging with the developed world and growth slows still further. This isn’t only a problem for countries playing catch-up to the technological frontier. France has had a declining structural growth rate for the past 30 years, at first principally because of declining labor hours/poor labor market policies and then compounded by slowing productivity and population growth. The mess that is Russia has surely had a highly variable structural growth rate since the end of the Cold War. The United States today, very debatably, seems to be undergoing at least some kind of significant structural change in economic growth as well, though perhaps not as drastic. Source: Margaret Jacobson, “Behind the Slowdown of Potential GDP”

III.
Possible solutions to the problem of changing structural growth There are really only two possible solutions to this problem for a central bank to adopt. First, you can accept the higher inflation, and pray to the Solow residual gods that the technological growth rate doesn’t drop further and push steady state inflation even higher. I find this solution completely unacceptable. Higher long term inflation is simply never a good thing; but even if you don’t feel that strongly, you at least should feel extremely nervous about risking the possibility of extremely high steady state inflation. Second, you can allow the central bank to periodically adjust the NGDP target rate (or target path) to adjust for perceived changes to the structural growth rate. For example, in the original hypothetical, the Fed would simply change its NGDP target path to grow at 4% instead of 5% as previously so that real income grows at 2% and inflation continues at 2%. This second solution, I think, is probably what Michael Woodford, Brad DeLong, Paul Krugman, and other non-monetarist backers of NGDP targeting would support. Indeed, Woodford writes in his Jackson Hole paper, “It is surely true – and not just in the special model of Eggertsson and Woodford – that if consensus could be reached about the path of potential output, it would be desirable in principle to adjust the target path for nominal GDP to account for variations over time in the growth of potential.” (p. 46-7) Miles Kimball notes the same argument: in the New Keynesian framework, an NGDP target rate should be adjusted for changes in potential. However – here’s the kicker – allowing the Fed to change its NGDP target is extremely problematic for some of the core beliefs held by market monetarists. (Market monetarism as a school of thought is about more than merely just NGDP targeting – see Christensen (2011) – contra some.) Let me walk through a list of these issues now; by the end, I hope it will be clear why I think that Scott Sumner and others have not discussed this issue enough. IVa. The Fed shouldn’t need a structural model For the Fed to be able to change its NGDP target to match the changing structural growth rate of the economy, it needs a structural model that describes how the economy behaves. This is the practical issue facing NGDP targeting (level or rate). However, the quest for an accurate structural model of the macroeconomy is an impossible pipe dream: the economy is simply too complex. There is no reason to think that the Fed’s structural model could do a good job predicting technological progress. And under NGDP targeting, the Fed would be entirely dependent on that structural model. Ironically, two of Scott Sumner’s big papers on futures market targeting are titled, “Velocity Futures Markets: Does the Fed Need a Structural Model?” with Aaron Jackson (their answer: no), and “Let a Thousand Models Bloom: The Advantages of Making the FOMC a Truly 'Open Market'”. In these, Sumner makes the case for tying monetary policy to a prediction market, and in this way having the Fed adopt the market consensus model of the economy as its model of the economy, instead of using an internal structural model. Since the price mechanism is, in general, extremely good at aggregating disperse information, this model would outperform anything internally developed by our friends at the Federal Reserve Board. 
If the Fed had to rely on an internal structural model to adjust the NGDP target to match structural shifts in potential growth, this elegance would be completely lost! But it’s more than just a loss in elegance: it’s a huge roadblock to effective monetary policymaking, since the accuracy of said model would be highly questionable.

IVb. Rules are better than discretion
Old Monetarists always strongly preferred a monetary policy based on well-defined rules rather than discretion. This is for all the now-familiar reasons: the time-inconsistency problem; preventing political interference; creating accountability for the Fed; etc. Market monetarists are no different in championing rule-based monetary policy. Giving the Fed the ability to modify its NGDP target is simply an absurd amount of discretionary power. It’s one thing to give the FOMC the ability to decide how to best achieve its target, whether that be 2% inflation or 5% NGDP. It’s another matter entirely to allow it to change that NGDP target at will. It removes all semblance of accountability, as the Fed could simply move the goalposts whenever it misses; and of course it entirely recreates the time inconsistency problem.

IVc. Expectations need to be anchored
Closely related to the above is the idea that monetary policy needs to anchor nominal expectations, perhaps especially at the zero lower bound. Monetary policy in the current period can never be separated from expectations about future policy. For example, if Janet Yellen is going to mail trillion dollar coins to every American a year from now, I am – and hopefully you are too – going to spend all of my or your dollars ASAP. Because of this, one of the key necessary conditions for stable monetary policy is the anchoring of expectations for future policy. Giving the Fed the power to discretionarily change its NGDP target wrecks this anchor completely! Say the Fed tells me today that it’s targeting a 5% NGDP level path, and I go take out a 30-year mortgage under the expectation that my nominal income (which remember is equal to NGDP in aggregate) will be 5% higher year after year after year. This is important as my ability to pay my mortgage, which is fixed in nominal terms, is dependent on my nominal income. But then Janet Yellen turns around and tells me tomorrow, “Joke’s on you pal! We’re switching to a 4% level target.” It’s simply harder for risk-averse consumers and firms to plan for the future when there’s so much possible variation in future monetary policy.

IVd. Level targeting exacerbates this issue
Further, level targeting exacerbates this entire issue. The push for level targeting over growth rate targeting is at least as important to market monetarism as the push for NGDP targeting over inflation targeting, for precisely the reasoning described above. To keep expectations on track, and thus not hinder firms and households trying to make decisions about the future, the central bank needs to make up for past mistakes, i.e. level target. However, level targeting has issues even beyond those that rate targeting has, when the central bank has the ability to change the growth rate. In particular: what happens if the Fed misses the level target one year, and decides at the start of the next to change its target growth rate for the level path? For instance, say the Fed had adopted a 5% NGDP level target in 2005, which it maintained successfully in 2006 and 2007. Then, say, a massive crisis hits in 2008, and the Fed misses its target for say three years running.
By 2011, it looks like the structural growth rate of the economy has also slowed. Now, agents in the economy have to wonder: is the Fed going to try to return to its 5% NGDP path? Or is it going to shift down to a 4.5% path and not go back all the way? And will that new path have as a base year 2011? Or will it be 2008? (Note: I am aware that had the Fed been implementing NGDPLT in 2008 the crisis would have been much less severe, perhaps not even a recession! The above is for illustration.) (Also, I thank Joe Mihm for this point.) IVe. This problem for NGDP targeting is analogous to the velocity instability problem for Friedman’s k-percent rule Finally, I want to make an analogy that hopefully emphasizes why I think this issue is so serious. Milton Friedman long advocated that the Fed adopt a rule whereby it would have promised to keep the money supply (M2, for Friedman) growing at a steady rate of perhaps 3%. Recalling the equation of exchange, MV = PY, we can see that when velocity is constant, the k-percent rule is equivalent to NGDP targeting! In fact, velocity used to be quite stable: Source: FRED For the decade and a half or two after 1963 when Friedman and Schwartz published A Monetary History, the rule probably would have worked brilliantly. But between high inflation and financial innovation in the late 70s and 80s, the stable relationship between velocity, income, and interest rates began to break down, and the k-percent rule would have been a disaster. This is because velocity – sort of the inverse of real, income-adjusted money demand – is a structural, real variable that depends on the technology of the economy and household preferences. The journals of the 1980s are somewhat famously a graveyard of structural velocity models attempting to find a universal model that could accurately explain past movements in velocity and accurately predict future movements. It was a hopeless task: the economy is simply too complex. (I link twice to the same Hayek essay for a reason.) Hence the title of the Sumner and Jackson paper already referenced above. Today, instead of hopelessly modeling money demand, we have economists engaged in the even more hopeless task of attempting to develop a structural model for the entire economy. Even today, when the supply side of the economy really changes very little year-to-year, we don’t do that good of a job at it. And (this is the kicker) what happens if the predictability of the structural growth rate breaks down to the same extent that the predictability of velocity broke down in the 1980s? What if, instead of the structural growth rate only changing a handful of basis points each year, we have year-to-year swings in the potential growth rate on the order of whole percentage points? I.e., one year the structural growth is 3%, but the next year it’s 5%, and the year after that it’s 2.5%? I know that at this point I’m probably losing anybody that has bothered to read this far, but I think this scenario is entirely more likely than most people might expect. Rapidly accelerating technological progress in the next couple of decades as we reach the “back half of the chessboard”, or even an intelligence explosion, could very well result in an extremely high structural growth rate that swings violently year to year. However, it is hard to argue either for or against the techno-utopian vision I describe and link to above, since trying to estimate the future of productivity growth is really not much more than speculation. 
That said, it does seem to me that there are very persuasive arguments that growth will rapidly accelerate in the next couple of decades. I would point those interested in a more full-throated defense of this position to the work of Robin Hanson, Erik Brynjolfsson and Andrew McAfee, Nick Bostrom, and Eliezer Yudkowsky. If you accept the possibility that we could indeed see rapidly accelerating technological change, an “adaptable NGDP target” would essentially force the future Janet Yellen to engage in an ultimately hopeless attempt to predict the path of the structural growth rate and to chase after it. I think it’s clear why this would be a disaster. V. An anticipation of some responses Before I close this out, let me anticipate four possible responses. 1. NGDP variability is more important than inflation variability Nick Rowe makes this argument here and Sumner also does sort of here. Ultimately, I think this is a good point, because of the problem of incomplete financial markets described by Koenig (2013) and Sheedy (2014): debt is priced in fixed nominal terms, and thus ability to repay is dependent on nominal incomes. Nevertheless, just because NGDP targeting has other good things going for it does not resolve the fact that if the potential growth rate changes, the long run inflation rate would be higher. This is welfare-reducing for all the standard reasons. Because of this, it seems to me that there’s not really a good way of determining whether NGDP level targeting or price level targeting is more optimal, and it’s certainly not the case that NGDPLT is the monetary policy regime to end all other monetary policy regimes. 2. Target NGDP per capita instead! You might argue that if the most significant reason that the structural growth rate could fluctuate is changing population growth, then the Fed should just target NGDP per capita. Indeed, Scott Sumner has often mentioned that he actually would prefer an NGDP per capita target. To be frank, I think this is an even worse idea! This would require the Fed to have a long term structural model of demographics, which is just a terrible prospect to imagine. 3. Target nominal wages/nominal labor compensation/etc. instead! Sumner has also often suggested that perhaps nominal aggregate wage targeting would be superior to targeting NGDP, but that it would be too politically controversial. Funnily enough, the basic New Keynesian model with wage stickiness instead of price stickiness (and no zero lower bound) would recommend the same thing. I don’t think this solves the issue. Take the neoclassical growth or Solow model with Cobb-Douglas technology and preferences and no population growth. On the balanced growth path, the growth rate of wages = the potential growth rate of the economy = the growth rate of technology. For a more generalized production function and preferences, wages and output still grow at the same rate. In other words, the growth rate of real wages parallels that of the potential growth rate of the economy. So this doesn’t appear to solve anything, as it would still require a structural model. 4. Set up a prediction market for the structural growth rate! I don’t even know if this would work well with Sumner’s proposal. But perhaps it would. In that case, my response is… stay tuned for my critique of market monetarism, part two: why handing policymaking over to prediction markets is a terrible idea. VI. 
In conclusion
The concerns I outline above have driven me from an evangelist for NGDP level targeting to someone extremely skeptical that any central banking policy can maintain monetary equilibrium. The idea of optimal policy under NGDP targeting necessitating a structural model of the economy disturbs me, for a successful such model – as Sumner persuasively argues – will never be built. The prospect that NGDP targeting might collapse in the face of rapidly accelerating technological growth worries me, since it does seem to me that this very well could occur. And even setting aside the techno-utopianism, the historical examples described above, such as Japan in the 1980s, demonstrate that we have seen very large shifts in the structural growth rate in actual real-world economies. I want to support NGDPLT: it is probably superior to price level or inflation targeting anyway, because of the incomplete markets issue. But unless there is a solution to this critique that I am missing, I am not sure that NGDP targeting is a sustainable policy for the long term, let alone the end of monetary history.

I found an interesting 1970 AER paper that adds land to the Solow model in continuous time and verifies the result, discussed last week, that as the rate of return on capital approaches the growth rate of the economy the price of land will approach infinity. The paper is a mess – the notation is disgusting, and doing continuous time instead of discrete time adds nothing but pain. The bottom line is this. The rate of return on land and capital should be equal in equilibrium (factor price equalization). If the interest rate – i.e., the rate of return on capital – is low, then the rate of return on land must be lower. First, the rate of return on land and capital should be equal on the balanced growth path (equation 6). Where K is capital, L is land, P is the price of land in terms of goods, and g is the rate of exogenous economic growth, we have the equilibrium condition: F_K = F_L/P + (∂P/∂t)/P. Translated to English: the marginal product of capital must equal the marginal product of land plus the rate of increase in the price of land. Perhaps even better: the interest rate on capital equals the rent from land plus the capital gains from land. Or, one last rephrasing: the return on capital equals the return on land, where part of the return on land is price appreciation. From this, an equilibrium relationship between effective capital (k = K/N) and effective land (l = PL/N) can be derived with a little algebra (equation 8): l = F_L / [(F_K - g)*β], where beta is a constant (the ratio of initial effective labor to land). This is the key result. If the rate of return on capital is equal to the growth rate, effective land goes to infinity: dividing by zero. And thus the price of land goes to infinity, as the supply of land is perfectly inelastic, i.e. fixed. To repeat what I wrote last week: we don’t see the price of land going to infinity, which would seem to be a challenge for secular stagnationists.

Some thoughts on Eggertsson and Mehrotra (2014), the first formalization of the “secular stagnation” thesis. Nothing innovative here, I just wanted to collect my thoughts all in one place.

Model overview
First, a brief review of Eggertsson and Mehrotra’s model for easy reference. (Simon Wren-Lewis has a short summary of the math.) The paper describes a three-period overlapping generations model, where the middle generation receives an endowment (or, in an extension, labors for an income).
The young and old generations do not receive incomes; the young borrow from the middle generation, and the old use money saved from their time in the middle generation. The amount the young can borrow is constrained because of a purely exogenous “debt limit”. The key result is that if this debt constraint (exogenously) drops (a “deleveraging shock”), then the demand for loans drops, forcing the natural rate of interest to permanently fall, potentially to permanently below zero. Once a price level and downward nominal wage rigidity are introduced, we can then have a permanent zero lower bound situation where the natural rate is permanently and unattainably negative – secular stagnation, by definition. This causes output to be permanently below potential. Now, various thoughts, from more to less interesting:

1. Lack of capital
This model does not include capital. I suspect a model with capital and a negative interest rate would have negative or zero investment, whereas in the economy today we of course have positive net investment. The authors do note they want to include capital in the next iteration of the model.

2. Lack of land
There is also no land in this model. Of course in modern times land is not typically included as a factor in the production function. Solow once joked, “If God had meant there to be more than two factors of production, he would have made it easier for us to draw three-dimensional diagrams.” But Nick Rowe, I think, makes a good case that in a model attempting to analyze permanently negative interest rates, land must be included. The argument goes like this: think of land as an asset like any other, where the price of land equals the present discounted value of the future returns to land. It can be shown that as the interest rate approaches the growth rate of the economy, the value of the land goes to infinity. Back in the real world, of course, we have not seen land prices go to infinity. So perhaps adding land to this model would prevent us from having secular stagnation without the price of land blowing up. Section three of this Stefan Homburg (2014) paper discusses this further, and Homburg models the result more formally here. Another interesting post from Rowe here, and comments from Matt Rognlie here. (Side note: by the same logic, perhaps a fall in the natural rate explains the housing “bubble” of the last decade?)

3. Debt limit as exogenous
The debt limit is purely exogenous. It seems likely that there would be important and interesting general equilibrium effects if it were endogenized. There is not much to say on this point, but it’s very important.

4. OLG modelling instead of representative agent
This model uses OLG as its basic framework instead of a representative agent. Importantly, this is different from the last decade and a half of research on the liquidity trap (Krugman 1998, Eggertsson and Woodford 2003, Auerbach and Obstfeld 2005) which all used representative agent models. In these models, in the long-run steady state the natural rate is determined by the discount factor, which forces the long-run natural rate to be positive. Thus, the economy can only be in a liquidity trap (ZLB) situation temporarily. It’s only in this OLG environment that we can have a permanently negative natural rate. That seems very interesting to me – what else might we be missing by using the representative agent model? (…Probably not much.)
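For what it’s worth, the arithmetic behind point 2 is just the standard growing-perpetuity formula (my sketch, not Rowe’s or Homburg’s exact derivation): if land pays a rent $R_t$ that grows with the economy at rate $g$ and is discounted at the interest rate $r$, then $P_t = \sum_{s=1}^{\infty} \frac{R_t(1+g)^s}{(1+r)^s} = \frac{R_t(1+g)}{r-g}$ for $r>g$, which blows up as $r$ falls toward $g$.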
Turning away from mathematical formalization, I wonder if one way we could think about this is: what if the natural rate was expected to remain at the ZLB for a period longer than the remainder of a person’s life (say >60 years)? Would that create some kind of a trap situation? Conclusion Overall, I’m simply not convinced that this is a useful model. The idea that the natural rate could be permanently negative simply seems extremely unlikely. Also, the lack of inclusion of land seems to be a big oversight. Update: Josh Hendrickson makes the interesting point that adding money to the economy (with a fixed nominal return of 0%), the Eggertsson-Mehrota does not hold. In 2008, Christina and David Romer published an interesting paper demonstrating that FOMC members are useless at forecasting economic conditions compared to the Board of Governors staff, and presented some evidence that mistaken FOMC economic forecasts were correlated with monetary policy shocks. I’ve updated their work with another decade of data, and find that while the FOMC remained bad at forecasting over the extended period, the poor forecasting was not correlated with monetary policy shocks. First, some background. Background Before every FOMC meeting, the staff at the Board of Governors produces the Greenbook, an in-depth analysis of current domestic and international economic conditions and, importantly for us, forecasts of all kinds of economic indicators a year or two out. The Greenbook is only released to the public with a major lag, so the last data we have is from 2007. The FOMC members – the governors and regional bank presidents – prepare consensus economic forecasts twice a year, usually February and July, as part of the Monetary Policy Report they must submit to Congress. (Since October 2007, FOMC members have prepared projections at four FOMC meetings per year. That data, from the end of 2007, is not included in my dataset here, but I’ll probably put it in when I update it in the future as more recent Greenbooks are released.) Summary of Romer and Romer (2008) The Romers took around 20 years of data from these two sources, from 1979 to 2001, and compared FOMC forecasts to staff forecasts. They estimate a regression of the form Where X is the realized value of the variable (e.g. actual GDP growth in year t+1), S is the staff’s projection of the variable (e.g. the staff’s projected GDP growth next year), and P is the FOMC’s projection of the variable (e.g. the FOMC’s projected GDP growth next year). They find “not just that FOMC members fail to add information, but that their efforts to do so are counterproductive.” Policymakers were no good at forecasting over this period. They then ask if the mistaken forecasts cause the FOMC to make monetary policy errors that cause monetary policy shocks. The two use their own Romer and Romer (2004) measure, which I’ve updated here, as the measure of monetary policy shocks. They then estimate the regression Where M is the measure of shocks, and P and S are as before. They only ran this regression from 1979 through 1996, as that was the latest the measure of shocks went up to in the 2004 paper. They find that, “The estimates suggest that forecast differences may be one source of monetary shocks… An FOMC forecast of inflation one percentage point higher than the staff forecast is associated with an unusual rise in the federal funds rate of approximately 30 basis points.” That seemed like a very interesting result to me when I first read this paper. 
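The two regression equations were displayed as images in the original post and are missing here; a plausible reconstruction from the surrounding definitions (see Romer and Romer (2008) for the exact specification) is $X_t = \alpha + \beta S_t + \gamma P_t + \varepsilon_t$ for the forecast-evaluation regression – a $\gamma$ near zero means the FOMC projection adds nothing beyond the staff projection – and $M_t = a + b S_t + c P_t + e_t$ for the shock regression, where a positive $c$ is the sense in which above-staff FOMC forecasts line up with unusually tight policy.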
Could bad monetary policymaking be explained by the hubris of policymakers who thought they could forecast economic conditions better than the staff? It turns out, after I updated the data, this result does not hold. Updating the data I followed the same methodology as when I updated Romer and Romer (2004): first replicating the data to ensure I had the correct method before collecting the new data and updating. The data is from 1979 through 2007, and all my work is available here and here. I find, first, that policymakers remained quite poor economic forecasters. Here is the updated version of Table 1 from the paper, with the old values for comparison: The coefficient on the FOMC forecast for inflation and unemployment is still right around zero, indicating that FOMC forecasts for these two variables contain no useful information. However, it appears that once we extend the monetary policy shock regression from 1996 to 2007, the second result – that forecast differences are a source of monetary policy shocks – does not hold. Here is the updated version of Table 2 from the paper, again with old values for comparison: When the Romers published their paper, the R-squared on the regression of monetary shocks over all three variables was 0.17. This wasn’t exactly the strongest correlation, but for the social sciences it’s not bad, especially considering that monetary shock measure is fairly ad hoc. As we can see in the updated regression, the R-squared is down to 0.05 with the extended data. This is just too small to be labeled significant. Thus, unfortunately, this result does not appear to hold. I’ve updated the Romer and Romer (2004) series of monetary policy shocks. The main takeaway is this graph of monetary policy shocks by month, since 1969, where the gray bars indicate recession: When the two published their paper, they only had access to date up through 1996, since Fed Greenbooks – upon which the series is based – are released with a large lag. I’ve updated it through 2007, the latest available, and will update it again next month when the 2008 Greenbooks are released. The two interesting points in the new data are 1. The negative policy shock before and during the 2001 recession 2.  The negative policy shock in 2007 before the Great Recession Below I’ll go into the more technical notes of how this measure is constructed and my methodology, but the graph and the two points above are the main takeaway. How is the R&R measure constructed? First, the Romers derive a series of intended changes in the federal funds rate. (This is easy starting in the 1990s, since the FOMC began announcing when it wanted to change the FFR; before that, the two had to trawl through meeting minutes to figure it out.) They then use the Fed’s internal Greenbook forecasts of inflation and real growth to control the intended FFR series for monetary policy actions taken in response to information about future economic developments, specifically RGDP growth, inflation, and unemployment. In other words, they regress the change in the intended FFR around forecast dates on RGDP growth, inflation and unemployment. Then, as they put it, “Residuals from this regression show changes in the intended funds rate not taken in response to information about future economic developments. 
The resulting series for monetary shocks should be relatively free of both endogenous and anticipatory actions.” The equation they estimate is:

$\Delta ff_m = \alpha + \beta\, ffb_m + \sum_{i=-1}^{2}\gamma_i\, \Delta\tilde{y}_{mi} + \sum_{i=-1}^{2} \lambda_i \left(\Delta\tilde{y}_{mi} - \Delta\tilde{y}_{m-1,i}\right) + \sum_{i=-1}^{2} \phi_i\, \tilde{\pi}_{mi} + \sum_{i=-1}^{2} \theta_i \left(\tilde{\pi}_{mi} - \tilde{\pi}_{m-1,i}\right) + \rho\, \tilde{u}_{m0} + \epsilon_m$

• Δff is the change in the intended FFR around meeting m
• ffb is the level of the target FFR before the change in meeting m (included to capture any mean-reversion tendency)
• π, Δy, and u are the forecasts of inflation, real output growth, and the unemployment rate; note that both the current forecast and the change since the last meeting are used
• The i subscripts refer to the horizon of the forecast: -1 is the previous quarter, 0 the current quarter, 1 the next quarter, 2 the quarter after that
• All relative to the date of the forecast corresponding to meeting m; i.e. if the meeting is in early July 1980 and the forecast is in late June 1980, the contemporaneous forecast is for the second quarter of 1980

The Romers show in their paper that, by this measure, negative monetary policy shocks have large and significant effects on output and the price level. It is worth noting the limitations of this measure. It is based on the federal funds rate instrument, which is not a very good indicator of the stance of monetary policy. Additionally, if the FOMC changes its target FFR between meetings, any shock associated with that decision would not be captured by this measure.

Results

First, I replicated the Romer and Romer (2004) results to confirm I had the correct method. Then I collected the new data in Excel and ran the regression specified above in MATLAB. The data is available here and here (though there might have been errors when uploading to Google Drive). The residuals are shown above in graph form; it is an updated version of figure 1a in Romer and Romer (2004). The coefficients and statistics on the regression are (this is an updated version of table 1 in the original paper): Last, for clarity, I have the monetary policy shock measure below with the top five extremes removed. This makes some periods more clear, especially the 2007 shock. Again, I will update this next month when the 2008 Greenbooks are released. It should be very interesting to see how large a negative shock there was in 2008. Update: I've updated this last graph with 2008 data. Interestingly, the 2008 shock is not exceptionally large.
# janmr blog

## Comparing Rational Numbers Without Overflow

18 May 2014

Say you want to know if the inequality a/b < c/d is true (a, b, c, d are all assumed to be positive integers). Of course, one could just multiply both sides with b·d, obtaining the equivalent a·d < c·b, and we were done. But if our number representation allowed only numbers up to a certain size, say 32 bit unsigned integers, the multiplication could overflow. Of course, double-precision could be used to do the multiplication anyway, but this post will present a different method.

The method effectively computes the continued fraction representation of each fraction simultaneously, but stops as soon as they differ. It is also the algorithm used for comparisons in the Boost C++ library Rational.

We start by doing the (integer) division on each side of the inequality to obtain the representation a/b = q1 + r1/b and c/d = q2 + r2/d with q1, q2 the quotients and r1, r2 the remainders (0 ≤ r1 < b, 0 ≤ r2 < d). Now if q1 < q2 we have a/b < c/d (using properties of the floor function). Analogously, if q1 > q2 we get a/b > c/d. In either case, we are done.

When q1 = q2 we have transformed the truth value of the first inequality into that of r1/b < r2/d with 0 ≤ r1 < b and 0 ≤ r2 < d. Now if r1 = 0 or r2 = 0 the truth value of the inequality is easily determined. For r1, r2 > 0 we get d/r2 < b/r1, effectively by flipping the fractions and reversing the inequality sign (obtained by multiplying each side of the inequality by b·d/(r1·r2)). We now recursively apply the same approach until the quotients differ or one or both of the remainders is zero.

Let us finish with a small example: so yes, .
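As a rough illustration of the method above (my own sketch, not taken from the post or from Boost's actual source; the function name rational_less and the 32-bit unsigned operands are my own choices), here is a small C++ version of the overflow-free comparison:

#include <cstdint>
#include <iostream>

// Returns true if a/b < c/d for positive integers, without ever forming a*d or c*b.
// Mirrors the continued-fraction comparison described in the post: compare the
// integer quotients, and if they agree, flip the remainders and repeat.
bool rational_less(std::uint32_t a, std::uint32_t b,
                   std::uint32_t c, std::uint32_t d) {
  while (true) {
    const std::uint32_t q1 = a / b, r1 = a % b;  // a/b = q1 + r1/b
    const std::uint32_t q2 = c / d, r2 = c % d;  // c/d = q2 + r2/d
    if (q1 != q2) return q1 < q2;                // quotients differ: decided
    if (r1 == 0 || r2 == 0) return r1 == 0 && r2 != 0;  // a zero remainder decides it
    // r1/b < r2/d  <=>  d/r2 < b/r1  (flip both fractions, reverse the sign)
    const std::uint32_t na = d, nb = r2, nc = b, nd = r1;
    a = na; b = nb; c = nc; d = nd;
  }
}

int main() {
  std::cout << std::boolalpha;
  std::cout << rational_less(1, 3, 2, 5) << "\n";  // true:  1/3 < 2/5
  std::cout << rational_less(3, 7, 2, 5) << "\n";  // false: 3/7 > 2/5
  // Operands whose cross-products would overflow 32-bit arithmetic:
  std::cout << rational_less(4000000000u, 4000000001u,
                             3999999999u, 4000000000u) << "\n";  // false
  return 0;
}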
## Area of the region

Find the area of the region inside the circle $$r = 24\cos\theta$$ and to the right of the vertical line $$r = 6\sec\theta$$.

• The curves intersect where $24\cos\theta = 6\sec\theta$, i.e. $\cos^2\theta = 1/4$, so $\cos\theta = 1/2$ and $\theta = \pm 60^\circ$ ($\pm\pi/3$). Using these limits in the formula for the area between two polar curves,

$$A = \int_{-\pi/3}^{\pi/3} \tfrac{1}{2}\left(r_1^2 - r_2^2\right)\,d\theta, \qquad r_1 = 24\cos\theta,\; r_2 = 6\sec\theta.$$
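Carrying the integral through (my own working, not part of the original answer) gives a closed form:

$$
\begin{aligned}
A &= \int_{-\pi/3}^{\pi/3} \left(288\cos^2\theta - 18\sec^2\theta\right)d\theta
   = \Bigl[\,288\Bigl(\tfrac{\theta}{2} + \tfrac{\sin 2\theta}{4}\Bigr) - 18\tan\theta\,\Bigr]_{-\pi/3}^{\pi/3} \\
  &= 2\Bigl[\,288\Bigl(\tfrac{\pi}{6} + \tfrac{\sqrt{3}}{8}\Bigr) - 18\sqrt{3}\,\Bigr]
   = 96\pi + 36\sqrt{3} \approx 364.
\end{aligned}
$$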
# ConsumptionEdit ## Consumption is:Edit what households, not businesses, do. This involves expenditure on goods and services, such as petrol, shopping and a new fridge. However, consumption can be split down further into durable and non-durable purchases. • Durable purchases are those that are going to last a long time - often years! These include items like cars, cookers, and televisions. Fridges, cookers, washing machines etc. are all durable goods. • Non-durable purchases are those that only last for a short period of time and are 'used up'. A good example of this is going to the shops to buy food and drink. The Importance of Consumption Every time you purchase food at the drive-thru or pull out your debit or credit card to buy something, you are adding to consumption. Consumption is one of the bigger concepts in economics and is extremely important because it helps determine the growth and success of the economy. Businesses can open up and offer all kinds of great products, but if we don't purchase or consume their products, they won't stay in business very long! If they don't stay in business, many of us won't have jobs or the income to buy goods and services. Consumption can be defined in different ways, but is best described as the final purchase of goods and services by individuals. The purchase of a new pair of shoes, a hamburger at the fast food restaurant or services, like getting your house cleaned, are all examples of consumption. It is also often referred to as consumer spending. Many topics in economics explore how the income of families and individuals affects consumption and spending habits. ## DeterminantsEdit The level of consumption in the economy is determined by: (C) → Cost of Credit (A) → Asset Levels (D) → Disposable income and Distribution of income (E) → Expectations (T) → Taxation (S) → Supply (shortages) ### 1) Cost of CreditEdit If credit becomes difficult, mainly through expense of interest rates, some households may postpone their credit financed purchases. There will be a reduction in consumption until circumstances change, i.e. accumulate more savings, or a fall in interest rates. on the other hand if accessing credit becomes less expensive household will increase their consumption ### 2) AssetsEdit Most households appear to have target levels of assets/wealth at each stage of their life cycle. If assets fall unexpectedly, households will increase their saving and reduce consumption. This works in reverse for situations like a sudden increase in wealth. ### 3) Disposable IncomeEdit Disposable income (income is often expressed as 'Y') is income after taxes. It is the amount of total income that can be spent with reasonable freedom by the household. Thus, disposable income is total income minus taxes (and sometimes also regarded as including other fixed payments, such as mortgage repayments). It is that income which can be 'disposed of' with near freedom. ### 4) ExpectationsEdit Individuals' attitudes to the functionality of the economy effects the level of aggregate expenditure. For example, households increase purchases of consumer durables if they believe interest rates will remain low or job security improves, etc. Expectations are the source of both business and household economic indicators. Strictly speaking, an expectation is a leading economic indicator, since it predicts changes in an economy and changes occur before corresponding changes in the economy. A leading indicator is an economic statistic that suggests transactions in the future. 
For example, building permits suggest construction activity in the near term, and hence the hiring of construction workers and purchases of building materials. Purchases of raw materials by manufacturing industries ordinarily suggest a level of likely production. Increases in either suggest increased activity; decreases suggest decreases in near-term activity. Stock prices fit this category because high prices for corporate stocks create the impression of wealth that spurs consumption. A coinciding indicator ordinarily indicates activity at the time. It is often a defining characteristic of the economy such as payroll or sales volume. A lagging indicator is a statistic that ordinarily follows economic changes. Unemployment rates are a prime example; decisions by most employers to hire workers follow increases in activity, and the parallel decision to lay off workers follows decreases in activity.

### 5) Taxation

A change in the level of taxation on income (income tax) will reduce the amount of disposable income available. Because of this, C could fall. However, if an equal or greater sum were given out in benefits to households, particularly to the unemployed, then consumption could even rise. It is important to note that an increase in taxation will not necessarily cause a contraction in consumption. Further, if taxation and benefits were used to redistribute income/wealth from richer to poorer households, consumption might rise. This is because less wealthy households are more likely to spend a greater proportion of their disposable income than extremely rich individuals.

• Consumption varies between about 40-60% of total expenditure, depending on what type of economy you are in and the stage of the economic cycle it is currently in.

## The Keynesian Consumption Function

C = a + bY

a = Autonomous Consumption
b = Marginal Propensity to Consume

Autonomous Consumption is the amount of income someone has to spend to survive (the level of absolute need). The MPC is the fraction/proportion of the remaining income that is spent on consumption. In reality it represents how much of your income you are willing to spend on goods and services, as opposed to saving it for difficult times in the future or for retirement. The MPC is affected by consumer confidence and by interest rates, as they affect the rate of return on savings.

${\displaystyle MPC={\Delta C \over \Delta Y}}$

### Calculations

Example 1
C = 50 + 0.5(Y)
At equilibrium, C = Y:
Y = 50 + 0.5(Y)
Y - 0.5(Y) = 50
0.5(Y) = 50
Y = 100

Example 2
C = 50 + 0.9(Y)
At equilibrium, C = Y:
Y = 50 + 0.9(Y)
Y - 0.9(Y) = 50
0.1(Y) = 50
Y = 500

## Note:

Saving (S) is not part of A.E. because it is a leakage (we analyse it because it helps us determine consumption and its impact on Y levels). The savings function (S = Y - C) can be derived from the consumption function (c.f.):

S = -a + (1 - b)Y, where (1 - b) is the Marginal Propensity to Save (MPS)
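Both worked examples follow the same pattern. Assuming the equilibrium condition C = Y used above, the general solution can be written once and reused:

$$ Y = a + bY \;\Longrightarrow\; Y(1-b) = a \;\Longrightarrow\; Y = \frac{a}{1-b}, $$

so Example 1 gives Y = 50/(1 − 0.5) = 100 and Example 2 gives Y = 50/(1 − 0.9) = 500, matching the step-by-step calculations.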
# Two-dimensional analysis of the Bose-Einstein correlations in $e^+ e^-$ annihilation at the Z$^0$ peak

Abstract : The study of the directional dependence of two-particle correlations in the hadronic decays of the $Z^0$ boson is performed using the data collected by the DELPHI experiment in the 1992--1995 running periods. The comparison between the transverse, $R_{\perp}$, and longitudinal, $R_{\parallel}$, correlation radii confirms the string model prediction that the transverse correlation length is smaller than the longitudinal one, with the measured values of $R_{\perp}=0.53\pm 0.08\,\mathrm{fm}$ and $R_{\parallel}=0.85\pm 0.08\,\mathrm{fm}$, for selected $Z^0\rightarrow q\bar{q}$ events.

Document type : Journal articles

http://hal.in2p3.fr/in2p3-00003905

### Identifiers

• HAL Id : in2p3-00003905, version 1

### Citation

P. Abreu, W. Adam, T. Adye, P. Adzic, Z. Albrecht, et al.. Two-dimensional analysis of the Bose-Einstein correlations in $e^+ e^-$ annihilation at the Z$^0$ peak. Physics Letters B, Elsevier, 2000, 471, pp.460-470. ⟨in2p3-00003905⟩
# The solution of the inequality −4 < 5 − 3x ≤ 17 in interval notation, with the solution sketched on the real number line

Precalculus: Mathematics for Calculus, 6th Edition, Stewart et al., Cengage Learning, ISBN 9780840068071. Chapter 1, Problem 11T.

(a) To determine

## Answer to Problem 11T

The solution of the inequality −4 < 5 − 3x ≤ 17 is the interval [−4, 3).

### Explanation of Solution

Given: The inequality is −4 < 5 − 3x ≤ 17.

Calculation:

Section 1: Subtracting the same quantity from each side gives an equivalent inequality (A ≤ B ⟺ A − C ≤ B − C). The solution of the given inequality is all x-values that satisfy both −4 < 5 − 3x and 5 − 3x ≤ 17, so subtract 5 throughout:

−4 − 5 < −3x ≤ 17 − 5
−9 < −3x ≤ 12

Multiplying each side by a negative quantity, here (−1/3), reverses the direction of the inequality:

12/(−3) ≤ x < (−9)/(−3)
−4 ≤ x < 3

The set of solutions consists of all x-values from x = −4 to x = 3, including the endpoint −4 because the inequality is satisfied there, so the interval notation is [−4, 3).

Thus, the solution of the inequality −4 < 5 − 3x ≤ 17 is the interval [−4, 3).

Section 2: The solution from Section 1 is [−4, 3), so the graph of the inequality is shown in Figure (1), which includes all x-values from x = −4 to x = 3, closed at −4 and open at 3.

(b) To determine

## Answer to Problem 11T

The solution of the inequality x(x − 1)(x + 2) > 0 is the interval (−2, 0) ∪ (1, ∞).

### Explanation of Solution

Given: The inequality is x(x − 1)(x + 2) > 0.

Calculation:

Section 1: The factors of the left-hand side are x, (x − 1) and (x + 2), and these are zero when x = −2, 0, 1. These values of x divide the real line into the intervals (−∞, −2), (−2, 0), (0, 1) and (1, ∞). Now make a table indicating the sign of each factor on each interval:

                   x < −2   −2 < x < 0   0 < x < 1   x > 1
x                    −          −           +         +
(x − 1)              −          −           −         +
(x + 2)              −          +           +         +
x(x − 1)(x + 2)      −          +           −         +

From the sign table, the inequality is satisfied on the intervals (−2, 0) and (1, ∞), and the endpoints of the intervals do not satisfy the inequality.

Thus, the solution of the inequality x(x − 1)(x + 2) > 0 is the interval (−2, 0) ∪ (1, ∞).

Section 2: The solution from Section 1 is (−2, 0) ∪ (1, ∞), so the graph (Figure (2)) shows all x-values from x = −2 to x = 0 and all values after x = 1; the endpoints do not satisfy the inequality, so they are shown as open on the real number line.

(c) To determine

## Answer to Problem 11T

The solution of the inequality |x − 4| < 3 is the interval (1, 7).

### Explanation of Solution

Given: The inequality is |x − 4| < 3.

Calculation:

Section 1: The inequality |x − 4| < 3 is equivalent to

−3 < x − 4 < 3

The solution is all x-values that satisfy both −3 < x − 4 and x − 4 < 3, so add 4 to each side:

−3 + 4 < x < 3 + 4
1 < x < 7

Thus, the solution of the inequality |x − 4| < 3 is the interval (1, 7).

Section 2: The solution from Section 1 is (1, 7), so the graph (Figure (3)) shows all x-values from x = 1 to x = 7; the endpoints do not satisfy the inequality, so the interval is open at both ends on the real number line.

(d) To determine

## Answer to Problem 11T

The solution of the inequality (2x − 3)/(x + 1) ≤ 1 is the interval (−1, 4].

### Explanation of Solution

Given: The inequality is (2x − 3)/(x + 1) ≤ 1.

Calculation:

Section 1: First move all terms to the left-hand side of the inequality, then simplify:

(2x − 3)/(x + 1) − 1 ≤ 0
(2x − 3 − (x + 1))/(x + 1) ≤ 0
(x − 4)/(x + 1) ≤ 0

The factor of the numerator is (x − 4) and of the denominator is (x + 1), and these are zero when x = −1, 4. These values of x divide the real line into the intervals (−∞, −1), (−1, 4) and (4, ∞). Now make a table indicating the sign of each factor on each interval:

                   x < −1    x = −1    −1 < x < 4   x = 4   x > 4
(x − 4)              −          −           −         0       +
(x + 1)              −          0           +         +       +
(x − 4)/(x + 1)      +      undefined       −         0       +

From the sign table, the inequality is satisfied on the interval (−1, 4); the endpoint −1 does not satisfy the inequality, while the endpoint 4 does.

Thus, the solution of the inequality (2x − 3)/(x + 1) ≤ 1 is the interval (−1, 4].

Section 2: The solution from Section 1 is (−1, 4], so the graph (Figure (4)) shows all x-values from x = −1 to x = 4; the endpoint −1 does not satisfy the inequality (open) while 4 does (closed) on the real number line.
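As a quick sanity check on the four intervals above (my own addition, not part of the textbook solution), the following small C++ program evaluates each inequality at sample points inside and outside the claimed solution sets:

#include <cmath>
#include <iostream>

int main() {
  std::cout << std::boolalpha;

  // (a) -4 < 5 - 3x <= 17, claimed solution [-4, 3)
  auto a = [](double x) { return -4 < 5 - 3 * x && 5 - 3 * x <= 17; };
  std::cout << "(a) x=-4: " << a(-4) << "  x=2.9: " << a(2.9)
            << "  x=3: " << a(3) << "  x=-4.1: " << a(-4.1) << "\n";

  // (b) x(x-1)(x+2) > 0, claimed solution (-2,0) U (1,inf)
  auto b = [](double x) { return x * (x - 1) * (x + 2) > 0; };
  std::cout << "(b) x=-1: " << b(-1) << "  x=0.5: " << b(0.5)
            << "  x=2: " << b(2) << "\n";

  // (c) |x-4| < 3, claimed solution (1,7)
  auto c = [](double x) { return std::abs(x - 4) < 3; };
  std::cout << "(c) x=1: " << c(1) << "  x=4: " << c(4)
            << "  x=7: " << c(7) << "\n";

  // (d) (2x-3)/(x+1) <= 1, claimed solution (-1,4]
  auto d = [](double x) { return (2 * x - 3) / (x + 1) <= 1; };
  std::cout << "(d) x=0: " << d(0) << "  x=4: " << d(4)
            << "  x=5: " << d(5) << "\n";
  return 0;
}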
## Q. 7.EX.3 Bridged Tee Circuit in State-Variable Form

Determine the state-space equations for the circuit shown in Fig. 2.25.

## Verified Solution

In order to write the equations in the state-variable form (i.e., a set of simultaneous first-order differential equations), we select the capacitor voltages $v_1 \text{ and } v_2$ as the state elements (i.e., $x = [v_1 v_2 ]^T$ ) and $v_i$ as the input (i.e., $u = v_i$ ). Here $v_1 = v_2, v_2 = v_1 − v_3,$ and still $v_1 = v_i.$ Thus $v_1 = v_i, v_2 = v_1 ,$ and $v_3 = v_i − v_2 .$

In terms of $v_1$ and $v_2$ , Eq. (2.34) is

$\frac{v_1 − v_i}{R_1} + \frac{v_1 − (v_i − v_2 )}{R_2} + C_1 \frac{dv_1}{dt} = 0.$

$-\frac{v_{1}-v_{2}}{R_{1}}+\frac{v_{2}-v_{3}}{R_{2}}+C_{1} \frac{d v_{2}}{d t}=0 ,$           (2.34)

Rearranging this equation into standard form, we get

$\frac{dv_1}{dt} = − \frac{1}{C_1} \left( \frac{1}{R_1} + \frac{1}{R_2} \right) v_1 − \frac{1}{C_1} \left( \frac{1}{R_2} \right) v_2 + \frac{1}{C_1} \left( \frac{1}{R_1} + \frac{1}{R_2} \right) v_i.$                  (7.6)

In terms of $v_1$ and $v_2$, Eq. (2.35) is

$\frac{v_i − v_2 − v_1}{R_2} + C_2 \frac{d}{dt}(v_i − v_2 − v_i) = 0.$

$\frac{v_{3}-v_{2}}{R_{2}}+C_{2} \frac{d\left(v_{3}-v_{1}\right)}{d t}=0$             (2.35)

In standard form, the equation is

$\frac{dv_2}{dt} = − \frac{v_1}{C_2R_2} − \frac{v_2}{C_2R_2} + \frac{v_i}{C_2R_2}.$            (7.7)

Equations (2.34)–(2.35) are entirely equivalent to the state-variable form, Eqs. (7.6) and (7.7), in describing the circuit. The standard matrix definitions are

$\pmb F = \left[\begin{matrix} -\frac{1}{C_1}\left(\frac{1}{R_1}+ \frac{1}{R_2} \right) & -\frac{1}{C_1} \left( \frac{1}{R_2}\right)\\ -\frac{1}{C_2R_2} & -\frac{1}{C_2R_2} \end{matrix} \right] , \\ \pmb G = \left[\begin{matrix} \frac{1}{C_1}\left( \frac{1}{R_1} + \frac{1}{R_2}\right) \\ \frac{1}{C_2R_2} \end{matrix} \right] , \\ \pmb H = \left[\begin{matrix} 0 & -1 \end{matrix} \right] , \ \ \ J =1 .$
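To make the final matrices concrete, here is a small C++ sketch (my own illustration; the worked example leaves the components symbolic, so the values R1 = R2 = 1 kΩ and C1 = C2 = 1 µF are arbitrary choices) that evaluates the F, G, H, J matrices given at the end of the solution:

#include <cstdio>

int main() {
  // Hypothetical component values; the example above is purely symbolic.
  const double R1 = 1e3, R2 = 1e3;    // ohms
  const double C1 = 1e-6, C2 = 1e-6;  // farads

  // State-space matrices from the solution: state x = [v1 v2]^T, input u = vi.
  const double F[2][2] = {
      {-(1.0 / C1) * (1.0 / R1 + 1.0 / R2), -(1.0 / C1) * (1.0 / R2)},
      {-1.0 / (C2 * R2),                    -1.0 / (C2 * R2)}};
  const double G[2] = {(1.0 / C1) * (1.0 / R1 + 1.0 / R2), 1.0 / (C2 * R2)};
  const double H[2] = {0.0, -1.0};
  const double J = 1.0;

  std::printf("F = [%g %g; %g %g]\n", F[0][0], F[0][1], F[1][0], F[1][1]);
  std::printf("G = [%g; %g]\n", G[0], G[1]);
  std::printf("H = [%g %g], J = %g\n", H[0], H[1], J);
  return 0;
}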
Equiangular polygon Example equiangular polygons Direct Indirect Skew A rectangle, <4>, is a convex direct equiangular polygon, containing four 90° internal angles. A concave indirect equiangular polygon, <6-2>, like this hexagon, counterclockwise, has five left turns and one right turn, like this tetromino. A skew polygon has equal angles off a plane, like this skew octagon alternating red and blue edges on a cube. Direct Indirect Counter-turned A multi-turning equiangular polygon can be direct, like this octagon, <8/2>, has 8 90° turns, totaling 720°. A concave indirect equiangular polygon, <5-2>, counterclockwise has 4 left turns and one right turn. (-1.2.4.3.2)60° An indirect equiangular hexagon, <6-6>90° with 3 left turns, 3 right turns, totaling 0°. In Euclidean geometry, an equiangular polygon is a polygon whose vertex angles are equal. If the lengths of the sides are also equal (that is, if it is also equilateral) then it is a regular polygon. Isogonal polygons are equiangular polygons which alternate two edge lengths. For clarity, a planar equiangular polygon can be called direct or indirect. A direct equiangular polygon has all angles turning in the same direction in a plane and can include multiple turns. Convex equiangular polygons are always direct. An indirect equiangular polygon can include angles turning right or left in any combination. A skew equiangular polygon may be isogonal, but can't be considered direct since it is nonplanar. A spirolateral nθ is a special case of an equiangular polygon with a set of n integer edge lengths repeating sequence until returning to the start, with vertex internal angles θ. Construction This convex direct equiangular hexagon, <6>, is bounded by 6 lines with 60° angle between. Each line can be moved perpendicular to its direction. This concave indirect equiangular hexagon, <6-2>, is also bounded by 6 lines with 90° angle between, each line moved independently, moving vertices as new intersections. An equiangular polygon can be constructed from a regular polygon or regular star polygon where edges are extended as infinite lines. Each edges can be independently moved perpendicular to the line's direction. Vertices represent the intersection point between pairs of neighboring line. Each moved line adjusts its edge-length and the lengths of its two neighboring edges.[1] If edges are reduced to zero length, the polygon becomes degenerate, or if reduced to negative lengths, this will reverse the internal and external angles. For an even-sided direct equiangular polygon, with internal angles θ°, moving alternate edges can invert all vertices into supplementary angles, 180-θ°. Odd-sided direct equiangular polygons can only be partially inverted, leaving a mixture of supplementary angles. Every equiangular polygon can be adjusted in proportions by this construction and still preserve equiangular status. Equiangular polygon theorem For a convex equiangular p-gon, each internal angle is 180(1-2/p)°; this is the equiangular polygon theorem. For a direct equiangular p/q star polygon, density q, each internal angle is 180(1-2q/p)°, with 1<2q<p. For w=gcd(p,q)>1, this represents a w-wound (p/w)/(q/w) star polygon, which is degenerate for the regular case. A concave indirect equiangular (pr+pl)-gon, with pr right turn vertices and pl left turn vertices, will have internal angles of 180(1-2/|pr-pl|))°, regardless of their sequence. 
An indirect star equiangular (pr+pl)-gon, with pr right turn vertices and pl left turn vertices and q total turns, will have internal angles of 180(1-2q/|pr-pl|))°, regardless of their sequence. An equiangular polygon with the same number of right and left turns has zero total turns, and has no constraints on its angles. Notation Every direct equiangular p-gon can be given a notation <p> or <p/q>, like regular polygons {p} and regular star polygons {p/q}, containing p vertices, and stars having density q. Convex equiangular p-gons <p> have internal angles 180(1-2/p)°, while direct star equiangular polygons, <p/q>, have internal angles 180(1-2q/p)°. A concave indirect equiangular p-gon can be given the notation <p-2c>, with c counter-turn vertices. For example, <6-2> is a hexagon with 90° internal angles of the difference, <4>, 1 counter-turned vertex. A multiturn indirect equilateral p-gon can be given the notation <p-2c/q> with c counter turn vertices, and q total turns. An equiangular polygon <p-p> is a p-gon with undefined internal angles θ, but can be expressed explicitly as <p-p>θ. Other properties Viviani's theorem holds for equiangular polygons:[2] The sum of distances from an interior point to the sides of an equiangular polygon does not depend on the location of the point, and is that polygon's invariant. A cyclic polygon is equiangular if and only if the alternate sides are equal (that is, sides 1, 3, 5, ... are equal and sides 2, 4, ... are equal). Thus if n is odd, a cyclic polygon is equiangular if and only if it is regular.[3] For prime p, every integer-sided equiangular p-gon is regular. Moreover, every integer-sided equiangular pk-gon has p-fold rotational symmetry.[4] An ordered set of side lengths ${\displaystyle (a_{1},\dots ,a_{n})}$  gives rise to an equiangular n-gon if and only if either of two equivalent conditions holds for the polynomial ${\displaystyle a_{1}+a_{2}x+\cdots +a_{n-1}x^{n-2}+a_{n}x^{n-1}:}$  it equals zero at the complex value ${\displaystyle e^{2\pi i/n};}$  it is divisible by ${\displaystyle x^{2}-2x\cos(2\pi /n)+1.}$ [5] Direct equiangular polygons by sides Direct equiangular polygons can be regular, isogonal, or lower symmetries. Examples for <p/q> are grouped into sections by p and subgrouped by density q. Equiangular triangles Equiangular triangles must be convex and have 60° internal angles. It is an equilateral triangle and a regular triangle, <3>={3}. The only degree of freedom is edge-length. A rectangle dissected into a 2×3 array of squares[6] Direct equiangular quadrilaterals have 90° internal angles. The only equiangular quadrilaterals are rectangles, <4>, and squares, {4}. An equiangular quadrilateral with integer side lengths may be tiled by unit squares.[6] Equiangular pentagons Direct equiangular pentagons, <5> and <5/2>, have 108° and 36° internal angles respectively. 108° internal angle from an equiangular pentagon, <5> Equiangular pentagons can can be regular, have bilateral symmetry, or no symmetry. 36° internal angles from an equiangular pentagram, <5/2> Equiangular hexagons An equilateral hexagon with 1:2 edge length ratios, with equilateral triangles.[6] This is spirolateral 2120°. Direct equiangular hexagons, <6> and <6/2>, have 120° and 60° internal angles respectively. 
120° internal angles of an equiangular hexagon, <6> An equiangular hexagon with integer side lengths may be tiled by unit equilateral triangles.[6] 60° internal angles of an equiangular double-wound triangle, <6/2> Equiangular heptagons Direct equiangular heptagons, <7>, <7/2>, and <7/3> have 128 4/7°, 77 1/7° and 25 5/7° internal angles respectively. 128.57° internal angles of an equiangular heptagon, <7> 77.14° internal angles of an equiangular heptagram, <7/2> 25.71° internal angles of an equiangular heptagram, <7/3> Equiangular octagons Direct equiangular octagons, <8>, <8/2> and <8/3>, have 135°, 90° and 45° internal angles respectively. 135° internal angles from an equiangular octagon, <8> 90° internal angles from an equiangular double-wound square, <8/2> 45° internal angles from an equiangular octagram, <8/3> Equiangular enneagons Direct equiangular enneagons, <9>, <9/2>, <9/3>, and <9/4> have 140°, 100°, 60° and 20° internal angles respectively. 140° internal angles from an equiangular enneagon <9> 100° internal angles from an equiangular enneagram, <9/2> 60° internal angles from an equiangular triple-wound triangle, <9/3> 20° internal angles from an equiangular enneagram, <9/4> Equiangular decagons Direct equiangular decagons, <10>, <10/2>, <10/3>, <10/4>, have 144°, 108°, 72° and 36° internal angles respectively. 144° internal angles from an equiangular decagon <10> 108° internal angles from an equiangular double-wound pentagon <10/2> 72° internal angles from an equiangular decagram <10/3> 36° internal angles from an equiangular double-wound pentagram <10/4> Equiangular hendecagons Direct equiangular hendecagons, <11>, <11/2>, <11/3>, <11/4>, and <11/5> have 147 3/11°, 114 6/11°, 81 9/11°, 49 1/11°, and 16 4/11° internal angles respectively. 147° internal angles from an equiangular hendecagon, <11> 114° internal angles from an equiangular hendecagram, <11/2> 81° internal angles from an equiangular hendecagram, <11/3> 49° internal angles from an equiangular hendecagram, <11/4> 16° internal angles from an equiangular hendecagram, <11/5> Equiangular dodecagons Direct equiangular dodecagons, <12>, <12/2>, <12/3>, <12/4>, and <12/5> have 150°, 120°, 90°, 60°, and 30° internal angles respectively. 150° internal angles from an equiangular dodecagon, <12> Convex solutions with integer edge lengths may be tiled by pattern blocks, squares, equilateral triangles, and 30° rhombi.[6] 120° internal angles from an equiangular double-wound hexagon, <12/2> 90° internal angles from an equiangular triple-wound square, <12/3> 60° internal angles from an equiangular quadruple-wound triangle, <12/4> 30° internal angles from an equiangular dodecagram, <12/5> Direct equiangular tetradecagons, <14>, <14/2>, <14/3>, <14/4>, and <14/5>, <14/6>, have 154 2/7°, 128 4/7°, 102 6/7°, 77 1/7°, 51 3/7° and 25 5/7° internal angles respectively. 154.28° internal angles from an equiangular tetradecagon, <14> 128.57° internal angles from an equiangular double-wound regular heptagon, <14/2> 102.85° internal angles from an equiangular tetradecagram, <14/3> 77.14° internal angles from an equiangular double-wound heptagram <14/4> 51.43° internal angles from an equiangular tetradecagram, <14/5> 25.71° internal angles from an equiangular double-wound heptagram, <14/6> Direct equiangular pentadecagons, <15>, <15/2>, <15/3>, <15/4>, <15/5>, <15/6>, and <15/7>, have 156°, 132°, 108°, 84°, 60° and 12° internal angles respectively. 
156° internal angles from an equiangular pentadecagon, <15> 132° internal angles from an equiangular pentadecagram, <15/2> 108° internal angles from an equiangular triple-wound pentagon, <15/3> 84° internal angles from an equiangular pentadecagram, <15/4> 60° internal angles from an equiangular 5-wound triangle, <15/5> 36° internal angles from an equiangular triple-wound pentagram, <15/6> 12° internal angles from an equiangular pentadecagram, <15/7> Direct equiangular hexadecagons, <16>, <16/2>, <16/3>, <16/4>, <16/5>, <16/6>, and <16/7>, have 157.5°, 135°, 112.5°, 90°, 67.5° 45° and 22.5° internal angles respectively. 157.5° internal angles from an equiangular hexadecagon, <16> 135° internal angles from an equiangular double-wound octagon, <16/2> 112.5° internal angles from an equiangular hexadecagram, <16/3> 90° internal angles from an equiangular 4-wound square, <16/4> 67.5° internal angles from an equiangular hexadecagram, <16/5> 45° internal angles from an equiangular double-wound regular octagram, <16/6> 22.5° internal angles from an equiangular hexadecagram, <16/7> Direct equiangular octadecagons, <18}, <18/2>, <18/3>, <18/4>, <18/5>, <18/6>, <18/7>, and <18/8>, have 160°, 140°, 120°, 100°, 80°, 60°, 40° and 20° internal angles respectively. 160° internal angles from an equiangular octadecagon, <18> 140° internal angles from an equiangular double-wound enneagon, <18/2> 120° internal angles of an equiangular 3-wound hexagon <18/3> 100° internal angles of an equiangular double-wound enneagram <18/4> 80° internal angles of an equiangular octadecagram {18/5} 60° internal angles of an equiangular 6-wound triangle <18/6> 40° internal angles of an equiangular octadecagram <18/7> 20° internal angles of an equiangular double-wound enneagram <18/8> Equiangular icosagons Direct equiangular icosagon, <20>, <20/3>, <20/4>, <20/5>, <20/6>, <20/7>, and <20/9>, have 162°, 126°, 108°, 90°, 72°, 54° and 18° internal angles respectively. 162° internal angles from an equiangular icosagon, <20> 144° internal angles from an equiangular double-wound decagon, <20/2> 126° internal angles from an equiangular icosagram, <20/3> 108° internal angles from an equiangular 4-wound pentagon, <20/4> 90° internal angles from an equiangular 5-wound square, <20/5> 72° internal angles from an equiangular double-wound decagram, <20/6> 54° internal angles from an equiangular icosagram, <20/7> 36° internal angles from an equiangular quadruple-wound pentagram, <20/8> 18° internal angles from an equiangular icosagram, <20/9>
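All the internal angles quoted in the lists above come from the equiangular polygon theorem, 180(1 − 2q/p)° for a direct equiangular polygon <p/q> (with q = 1 for the convex case <p>). A tiny C++ check (my own addition, not part of the article) reproduces a few of them:

#include <cstdio>

// Internal angle of a direct equiangular polygon <p/q>, in degrees.
double internal_angle(int p, int q) {
  return 180.0 * (1.0 - 2.0 * q / p);
}

int main() {
  std::printf("<6>    : %g\n", internal_angle(6, 1));   // 120
  std::printf("<8/3>  : %g\n", internal_angle(8, 3));   // 45
  std::printf("<12/5> : %g\n", internal_angle(12, 5));  // 30
  std::printf("<20/9> : %g\n", internal_angle(20, 9));  // 18
  std::printf("<7/2>  : %g\n", internal_angle(7, 2));   // about 77.14
  return 0;
}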
## Second-Order Linear Equations A second-order linear differential equation has the form $\frac{d^2 y}{dt^2} + A_1(t)\frac{dy}{dt} + A_2(t)y = f(t)$, where $A_1(t)$, $A_2(t)$, and $f(t)$ are continuous functions. ### Learning Objectives Distinguish homogeneous and nonhomogeneous second-order linear differential equations ### Key Takeaways #### Key Points • Linear differential equations are of the form $L [y(t)] = f(t)$, where $L_n(y) \equiv \frac{d^n y}{dt^n} + A_1(t)\frac{d^{n-1}y}{dt^{n-1}} + \cdots + A_{n-1}(t)\frac{dy}{dt} + A_n(t)y$. • When $f(t)=0$, the equations are called homogeneous linear differential equations. (Otherwise, the equations are called nonhomogeneous equations). • Linear differential equations are differential equations that have solutions which can be added together to form other solutions. #### Key Terms • linear: having the form of a line; straight • differential equation: an equation involving the derivatives of a function Linear differential equations are of the form $Ly = f$, where the differential operator $L$ is a linear operator, $y$ is the unknown function (such as a function of time $y(t)$), and the right hand side $f$ is a given function of the same nature as $y$ (called the source term). For a function dependent on time, we may write the equation more expressly as $L y(t) = f(t)$ and, even more precisely, by bracketing $L [y(t)] = f(t)$. The linear operator $L$ may be considered to be of the form: $\displaystyle{L_n(y) \equiv \frac{d^n y}{dt^n} + A_1(t)\frac{d^{n-1}y}{dt^{n-1}} + \cdots + A_{n-1}(t)\frac{dy}{dt} + A_n(t)y}$ The linearity condition on $L$ rules out operations such as taking the square of the derivative of $y$, but permits, for example, taking the second derivative of $y$. It is convenient to rewrite this equation in an operator form: $\displaystyle{L_n(y) \equiv \left[\,D^n + A_{1}(t)D^{n-1} + \cdots + A_{n-1}(t) D + A_n(t)\right] y}$ where $D$ is the differential operator $\frac{d}{dt}$ (i.e. $Dy = y'$, $D^2y = y''$, $\cdots$) that is involved. ### Second-Order Linear Differential Equations A second-order linear differential equation has the form: $\displaystyle{\frac{d^2 y}{dt^2} + A_1(t)\frac{dy}{dt} + A_2(t)y = f(t)}$ where $A_1(t)$, $A_2(t)$, and $f(t)$ are continuous functions. When $f(t)=0$, the equations are called homogeneous second-order linear differential equations. (Otherwise, the equations are called nonhomogeneous equations.) Simple Pendulum: A simple pendulum, under the conditions of no damping and small amplitude, is described by a equation of motion which is a second-order linear differential equation. ## Nonhomogeneous Linear Equations Nonhomogeneous second-order linear equation are of the the form: $\frac{d^2 y}{dt^2} + A_1(t)\frac{dy}{dt} + A_2(t)y = f(t)$, where $f(t)$ is nonzero. ### Learning Objectives Identify when a second-order linear differential equation can be solved analytically ### Key Takeaways #### Key Points • Examples of homogeneous or nonhomogeneous second-order linear differential equation can be found in many different disciplines such as physics, economics, and engineering. • In simple cases, for example, where the coefficients $A_1(t)$ and $A_2(t)$ are constants, the equation can be analytically solved. In general, the solution of the differential equation can only be obtained numerically. • Linear differential equations are differential equations that have solutions which can be added together to form other solutions. 
#### Key Terms • linearity: a relationship between several quantities which can be considered proportional and expressed in terms of linear algebra; any mathematical property of a relationship, operation, or function that is analogous to such proportionality, satisfying additivity and homogeneity In the previous atom, we learned that a second-order linear differential equation has the form: $\displaystyle{\frac{d^2 y}{dt^2} + A_1(t)\frac{dy}{dt} + A_2(t)y = f(t)}$ where $A_1(t)$, $A_2(t)$, and $f(t)$ are continuous functions. When $f(t)=0$, the equations are called homogeneous second-order linear differential equations. Otherwise, the equations are called nonhomogeneous equations. Examples of homogeneous or nonhomogeneous second-order linear differential equation can be found in many different disciplines such as physics, economics, and engineering. Heat Transfer: Phenomena such as heat transfer can be described using nonhomogeneous second-order linear differential equations. In simple cases, for example, where the coefficients $A_1(t)$ and $A_2(t)$ are constants, the equation can be analytically solved. (Either the method of undetermined coefficients or the method of variation of parameters can be adopted.) In general, the solution of the differential equation can only be obtained numerically. However, there is a very important property of the linear differential equation, which can be useful in finding solutions. ### Linearity Linear differential equations are differential equations that have solutions which can be added together to form other solutions. If $y_1(t)$ and $y_2(t)$ are both solutions of the second-order linear differential equation provided above and replicated here: $\displaystyle{\frac{d^2 y}{dt^2} + A_1(t)\frac{dy}{dt} + A_2(t)y = f(t)}$ then any arbitrary linear combination of $y_1(t)$ and $y_2(t)$ —that is, $y(x) = c_1y_1(t) + c_2 y_2(t)$ for constants $c_1$ and $c_2$—is also a solution of that differential equation. This can be confirmed by substituting $y(x) = c_1y_1(t) + c_2 y_2(t)$ into the equation and using the fact that both $y_1(t)$ and $y_2(t)$ are solutions of the equation. ## Applications of Second-Order Differential Equations A second-order linear differential equation can be commonly found in physics, economics, and engineering. ### Learning Objectives Identify problems that require solution of nonhomogeneous and homogeneous second-order linear differential equations ### Key Takeaways #### Key Points • An ideal spring with a spring constant $k$ is described by the simple harmonic oscillation, whose equation of motion is given in the form of a homogeneous second-order linear differential equation: $m \frac{\mathrm{d}^2x}{\mathrm{d}t^2} + k x = 0$. • Adding the damping term in the equation of motion, the equation of motion is given as $\frac{\mathrm{d}^2x}{\mathrm{d}t^2} + 2\zeta\omega_0\frac{\mathrm{d}x}{\mathrm{d}t} + \omega_0^{\,2} x = 0$. • Adding the external force term to the damped harmonic oscillator, we get an nonhomogeneous second-order linear differential equation:$\frac{\mathrm{d}^2x}{\mathrm{d}t^2} + 2\zeta\omega_0\frac{\mathrm{d}x}{\mathrm{d}t} + \omega_0^2 x = \frac{F(t)}{m}$. 
#### Key Terms • damping: the reduction in the magnitude of oscillations by the dissipation of energy • harmonic oscillator: a system which, when displaced from its equilibrium position, experiences a restoring force proportional to the displacement according to Hooke’s law, where $k$ is a positive constant Examples of homogeneous or nonhomogeneous second-order linear differential equation can be found in many different disciplines, such as physics, economics, and engineering. In this atom, we will learn about the harmonic oscillator, which is one of the simplest yet most important mechanical system in physics. ### Harmonic oscillator In classical mechanics, a harmonic oscillator is a system that, when displaced from its equilibrium position, experiences a restoring force, $F$, proportional to the displacement, $x$: $\vec F = -k \vec x \,$, where $k$ is a positive constant. The system under consideration could be an object attached to a spring, a pendulum, etc. Electronic circuits such as RLC circuits are also described by similar equations. ### Simple harmonic oscillation If $F$ is the only force acting on the system, the system is called a simple harmonic oscillator, and it undergoes simple harmonic motion: sinusoidal oscillations about the equilibrium point, with a constant amplitude and a constant frequency. The equation of motion is given as: $\displaystyle{F = m a = m \frac{\mathrm{d}^2x}{\mathrm{d}t^2} = -k x}$ Therefore, we end up with a homogeneous second-order linear differential equation: $\displaystyle{m \frac{\mathrm{d}^2x}{\mathrm{d}t^2} + k x = 0}$ Note that the function $x(t) = A\cos\left( \omega_0 t+\phi\right)$ satisfies the equation where $\omega_0 = \sqrt{\frac{k}{m}} = \frac{2\pi}{T}$. $\omega_0$ is called angular velocity, and the constants $A$ and $\phi$ are determined from initial conditions of the motion. ### Damped harmonic oscillator In real oscillators, friction (or damping) slows the motion of the system. In many vibrating systems the frictional force $Ff$ can be modeled as being proportional to the velocity v of the object: $Ff = −cv$, where $c$ is called the viscous damping coefficient. Including this additional term, the equation of motion is given as: $\displaystyle{\frac{\mathrm{d}^2x}{\mathrm{d}t^2} + 2\zeta\omega_0\frac{\mathrm{d}x}{\mathrm{d}t} + \omega_0^{\,2} x = 0}$ where $\zeta = \frac{c}{2 \sqrt{mk}}$ is called the “damping ratio.” Damped Harmonic Oscillators: A solution of damped harmonic oscillator. Curves in different colors show various responses depending on the damping ratio. Driven harmonic oscillator: Driven harmonic oscillators are damped oscillators further affected by an externally applied force $F(t)$. Newton’s 2nd law ($F=ma$) takes the form: $\displaystyle{F(t)-kx-c\frac{\mathrm{d}x}{\mathrm{d}t}=m\frac{\mathrm{d}^2x}{\mathrm{d}t^2}}$ It is usually rewritten into the form: $\displaystyle{\frac{\mathrm{d}^2x}{\mathrm{d}t^2} + 2\zeta\omega_0\frac{\mathrm{d}x}{\mathrm{d}t} + \omega_0^2 x = \frac{F(t)}{m}}$ which is a nonhomogeneous second-order linear differential equation. ## Series Solutions The power series method is used to seek a power series solution to certain differential equations. ### Learning Objectives Identify the steps and describe the application of the power series method ### Key Takeaways #### Key Points • The power series method calls for the construction of a power series solution $f=\sum_{k=0}^\infty A_kz^k$ for a linear differential equation $f''+{a_1(z)\over a_2(z)}f'+{a_0(z)\over a_2(z)}f=0$. 
• The method assumes a power series with unknown coefficients, then substitutes that solution into the differential equation to find a recurrence relation for the coefficients.
• The Hermite differential equation $f''-2zf'+\lambda f=0;\;\lambda=1$ has the following power series solution: $f=A_0 \left(1+{-1\over 2}z^2+{-1 \over 8}z^4+{-7 \over 240}z^6+\cdots\right) + A_1\left(z+{1\over 6}z^3+{1 \over 24}z^5+{1 \over 112}z^7+\cdots\right)$.

#### Key Terms

• recurrence relation: an equation that recursively defines a sequence; each term of the sequence is defined as a function of the preceding terms
• analytic functions: a function that is locally given by a convergent power series

The power series method is used to seek a power series solution to certain differential equations. In general, such a solution assumes a power series with unknown coefficients, then substitutes that solution into the differential equation to find a recurrence relation for the coefficients.

Maclaurin Power Series of an Exponential Function: The exponential function (in blue), and the sum of the first $n+1$ terms of its Maclaurin power series (in red).

Using power series, a linear differential equation of a general form may be solved.

### Method

Consider the second-order linear differential equation: $a_2(z)f''(z)+a_1(z)f'(z)+a_0(z)f(z)=0$

Suppose $a_2$ is nonzero for all $z$. Then we can divide throughout to obtain: $\displaystyle{f''+{a_1(z)\over a_2(z)}f'+{a_0(z)\over a_2(z)}f=0}$

Suppose further that $\frac{a_1}{a_2}$ and $\frac{a_0}{a_2}$ are analytic functions. The power series method calls for the construction of a power series solution: $\displaystyle{f= \sum_{k=0}^\infty A_kz^k}$

After substituting the power series form, a recurrence relation for $A_k$ is obtained, which can be used to reconstruct $f$.

### Example

Let us look at the case known as the Hermite differential equation: $f''-2zf'+\lambda f=0\quad (\lambda=1)$

We can try to construct a series solution: $\displaystyle{f= \sum_{k=0}^\infty A_kz^k \ f'= \sum_{k=0}^\infty kA_kz^{k-1} \ f''= \sum_{k=0}^\infty k(k-1)A_kz^{k-2}}$

substituting these in the differential equation: \begin{align} & {} \sum_{k=0}^\infty k(k-1)A_kz^{k-2}-2z \sum_{k=0}^\infty kA_kz^{k-1}+ \sum_{k=0}^\infty A_kz^k=0 \\ & = \sum_{k=0}^\infty k(k-1)A_kz^{k-2}- \sum_{k=0}^\infty 2kA_kz^k+ \sum_{k=0}^\infty A_kz^k \end{align}

making a shift on the first sum: \begin{align} & = \sum_{k+2=0}^\infty (k+2)((k+2)-1)A_{k+2}z^{(k+2)-2}- \sum_{k=0}^\infty 2kA_kz^k+ \sum_{k=0}^\infty A_kz^k \\ & = \sum_{k=0}^\infty (k+2)(k+1)A_{k+2}z^k- \sum_{k=0}^\infty 2kA_kz^k+ \sum_{k=0}^\infty A_kz^k \\ & = \sum_{k=0}^\infty \left((k+2)(k+1)A_{k+2}+(-2k+1)A_k \right)z^k \end{align}

If this series is a solution, then all these coefficients must be zero, so: $(k+2)(k+1)A_{k+2}+(-2k+1)A_k=0$

We can rearrange this to get a recurrence relation for $A_{k+2}$: $\displaystyle{A_{k+2}={\frac{(2k-1)}{(k+2)(k+1)}A_k}}$

Now, we have: $\displaystyle{A_2 = {\frac{-1}{(2)(1)}}A_0={\frac{-1}{2}}A_0, A_3 = {\frac{1}{(3)(2)}} A_1={\frac{1}{6}}A_1}$

and all coefficients with larger indices can be similarly obtained using the recurrence relation. The series solution is: $\displaystyle{f=A_0 \left(1+\frac{-1}{2}z^2+\frac{-1}{8}z^4+\frac{-7}{240}z^6+ \cdots \right)\\ \,\quad + A_1 \left(z+\frac{1}{6}z^3+\frac{1}{24}z^5+\frac{1}{112}z^7+ \cdots \right)}$
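A quick way to check the recurrence $A_{k+2}={\frac{(2k-1)}{(k+2)(k+1)}}A_k$ and the coefficients listed in the series solution is to generate them numerically; the following small C++ sketch (my own addition, taking the free constants as $A_0 = A_1 = 1$) does exactly that:

#include <cstdio>

int main() {
  const int kMax = 8;            // generate A_0 .. A_7
  double A[kMax] = {1.0, 1.0};   // A_0 = A_1 = 1 (free constants of the general solution)

  // Recurrence from the Hermite example with lambda = 1:
  //   (k+2)(k+1) A_{k+2} + (-2k + 1) A_k = 0
  //   =>  A_{k+2} = (2k - 1) / ((k+2)(k+1)) * A_k
  for (int k = 0; k + 2 < kMax; ++k) {
    A[k + 2] = (2.0 * k - 1.0) / ((k + 2.0) * (k + 1.0)) * A[k];
  }

  for (int k = 0; k < kMax; ++k) {
    std::printf("A_%d = %+.6f\n", k, A[k]);
  }
  // Expected, matching the text: A_2 = -1/2, A_4 = -1/8, A_6 = -7/240,
  //                              A_3 = 1/6,  A_5 = 1/24, A_7 = 1/112.
  return 0;
}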
# Add decoration at end of all chapters automatically The accepted answer of this question got me half of the answer I am looking for. I would like to implement this end of chapter decoration automatically for all of my (many) chapters. The MWE is based on the solution provided by touhami: \documentclass{book} \usepackage{kantlipsum} \begin{document} \chapter{Chapter one} \kant[1-3] %\decoration \ifdim\dimexpr\pagegoal-\pagetotal-\baselineskip\relax>.05\textheight \begin{center} \rule{3cm}{0.5pt} \end{center} \fi \chapter{Chapter two} \kant[1-3] \end{document} Only in this example this yields the decoration only at the end of chapter 1, not of chapter 2 - and not because af the \relax value, I believe. I would like all of my chapters, but excluding the frontmatter and the back matter, to have this decoration. Do I really need to enter this or a custom macro at the end of every chapter manually? This code will add a decoration at the end of each chapter. \makedecoration is executed before each \chapter, decorating the previous chapter, and also at the end of the document to decorate the last chapter if backmater is absent. Works also with unnumbered chapters if it is followed by a numbered one. Frontmatter material such as \tableofcontents or a chapter will not be decorated; nor the back matter. \documentclass{book} \usepackage{kantlipsum} \usepackage{etoolbox}% needed <<<<<<<<<<<<< \newcommand{\insertdecoratation}{% define the decoration \begin{center} \rule{3cm}{0.5pt} \end{center}} \newcommand{\makedecoration}{% \ifnum\value{chapter}>1%% decorate from chapter 2 to before last \ifdim\dimexpr\pagegoal-\pagetotal-\baselineskip\relax>.05\textheight% \insertdecoratation% \fi\fi% \ifnum\value{chapter}=1%decorate chapter 1 \ifdim\dimexpr\pagegoal-\pagetotal-\baselineskip\relax>.05\textheight% \insertdecoratation% \fi\fi% } \makeatletter \renewcommand\mainmatter{% \pretocmd{\chapter}{\makedecoration}{}{}% decorate previous chapter \AtEndDocument{\makedecoration}% decorate the last chapter if no back matter \cleardoublepage \@mainmattertrue \pagenumbering{arabic}} \renewcommand\backmatter{% \makedecoration % decorate last chapter of mainmatter \renewcommand{\makedecoration}{}% now do nothing \if@openright \cleardoublepage \else \clearpage \fi \@mainmatterfalse} \makeatother %%************************************************ \begin{document} \frontmatter \tableofcontents \chapter{Introduction} \kant[1-2] \mainmatter \chapter{Chapter One} \kant[1-3] \chapter{Chapter Two} \kant[1-3] \chapter{Chapter Three} \kant[1-3] \chapter*{Chapter Four} \kant[1-2] % BACK MATTER \backmatter \chapter{Appendix} \kant[1-2] \end{document} • Not tried, but please do. Does this work reliably if chapters are pulled in with \include{...}? (I think it should, but would like that confirmed.) Jul 5 at 18:25 • @barbara beeton It works with \input{...} but not with include{...} because the latter adds a \clearpage before and after the \input. Jul 5 at 19:18 • Thanks for checking. That's useful to know, and could be a significant restriction (timewise) on a long book with many chapters. Jul 5 at 21:36
# What are the four possible constitutional isomers for the molecular formula C3H5Br? Jan 9, 2016 Here are the steps I would take to figure them out. #### Explanation: 1. Decide how many ring or double bonds are in the molecule. 2. Draw all the possible acyclic isomers. 3. Draw all the possible cyclic isomers. 1. Rings or double bonds The molecular formula is $\text{C"_3"H"_5"Br}$. Replacing the $\text{Br}$ with an equivalent $\text{H}$ gives the formula ${\text{C"_3"H}}_{6}$. The formula of an alkane with 3 carbons is ${\text{C"_3"H}}_{8}$, so the compound is missing two $\text{H}$ atoms. $\text{C"_3"H"_5"Br}$ must contain a double bond or a ring. 2. Acyclic isomers (a) Draw all possible alkenes with 3 carbon atoms. (b) Add a bromine atom in every possible location. These isomers are 3-bromopropene, 2-bromopropene, and 1-bromopropene. 3. Cyclic isomers (a) Draw all possible cycloalkanes with 3 carbon atoms. (b) Add a Br atom in every possible location. This isomer is bromocyclopropane. And we have our four isomers.
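The first step above (deciding how many rings or double bonds the formula allows) can also be phrased with the usual degree-of-unsaturation formula; a quick check (my own addition) for C3H5Br, counting the halogen like a hydrogen:

$$\text{DoU} = \frac{2C + 2 - H - X}{2} = \frac{2(3) + 2 - 5 - 1}{2} = 1,$$

one degree of unsaturation, i.e. exactly one ring or one double bond, in agreement with the reasoning in step 1.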
Drake SpatialAcceleration< T > Class Template Reference This class is used to represent a spatial acceleration that combines rotational (angular acceleration) and translational (linear acceleration) components. More... #include <drake/multibody/multibody_tree/math/spatial_acceleration.h> Inheritance diagram for SpatialAcceleration< T >: Collaboration diagram for SpatialAcceleration< T >: ## Public Member Functions SpatialAcceleration () Default constructor. More... SpatialAcceleration (const Eigen::Ref< const Vector3< T >> &alpha, const Eigen::Ref< const Vector3< T >> &a) SpatialAcceleration constructor from an angular acceleration alpha and a linear acceleration a. More... template<typename Derived > SpatialAcceleration (const Eigen::MatrixBase< Derived > &A) SpatialAcceleration constructor from an Eigen expression that represents a six-dimensional vector. More... SpatialAcceleration< T > & ShiftInPlace (const Vector3< T > &p_PoBo_E, const Vector3< T > &w_WP_E) In-place shift of this spatial acceleration A_WP of a frame P into the spatial acceleration A_WPb of a frame Pb which is an offset frame rigidly aligned with P, but with its origin shifted to a point Bo by an offset p_PoBo. More... SpatialAcceleration< T > Shift (const Vector3< T > &p_PoBo_E, const Vector3< T > &w_WP_E) const Shifts this spatial acceleration A_WP of a frame P into the spatial acceleration A_WPb of a frame Pb which is an offset frame rigidly aligned with P, but with its origin shifted to a point Bo by an offset p_PoBo. More... SpatialAcceleration< T > ComposeWithMovingFrameAcceleration (const Vector3< T > &p_PoBo_E, const Vector3< T > &w_WP_E, const SpatialVelocity< T > &V_PB_E, const SpatialAcceleration< T > &A_PB_E) const This method composes this spatial acceleration A_WP of a frame P measured in a frame W, with that of a third frame B moving in P with spatial acceleration A_PB. More... Implements CopyConstructible, CopyAssignable, MoveConstructible, MoveAssignable SpatialAcceleration (const SpatialAcceleration &)=default SpatialAccelerationoperator= (const SpatialAcceleration &)=default SpatialAcceleration (SpatialAcceleration &&)=default SpatialAccelerationoperator= (SpatialAcceleration &&)=default Public Member Functions inherited from SpatialVector< SpatialAcceleration, T > SpatialVector () Default constructor. More... SpatialVector (const Eigen::Ref< const Vector3< T >> &w, const Eigen::Ref< const Vector3< T >> &v) SpatialVector constructor from an rotational component w and a linear component v. More... SpatialVector (const Eigen::MatrixBase< OtherDerived > &V) SpatialVector constructor from an Eigen expression that represents a six-dimensional vector. More... int size () const The total size of the concatenation of the angular and linear components. More... const T & operator[] (int i) const T & operator[] (int i) const Vector3< T > & rotational () const Vector3< T > & rotational () const Vector3< T > & translational () const Vector3< T > & translational () const T * data () const Returns a (const) bare pointer to the underlying data. More... T * mutable_data () Returns a (mutable) bare pointer to the underlying data. More... bool IsApprox (const SpatialQuantity &other, double tolerance=Eigen::NumTraits< T >::epsilon()) const Compares this spatial vector to the provided spatial vector other within a specified precision. More... void SetNaN () Sets all entries in this SpatialVector to NaN. More... SpatialQuantitySetZero () Sets both rotational and translational components of this SpatialVector to zero. 
More... CoeffsEigenTypeget_coeffs () Returns a reference to the underlying storage. More... const CoeffsEigenTypeget_coeffs () const Returns a constant reference to the underlying storage. More... SpatialVector (const SpatialVector &)=default SpatialVector (SpatialVector &&)=default SpatialVectoroperator= (const SpatialVector &)=default SpatialVectoroperator= (SpatialVector &&)=default Public Types inherited from SpatialVector< SpatialAcceleration, T > enum Sizes for spatial quantities and its components in three dimensions. More... using SpatialQuantity = SpatialAcceleration< T > The more specialized spatial vector class templated on the scalar type T. More... typedef Vector6< T > CoeffsEigenType The type of the underlying in-memory representation using an Eigen vector. More... typedef T ScalarType Static Public Member Functions inherited from SpatialVector< SpatialAcceleration, T > static SpatialQuantity Zero () Factory to create a zero SpatialVector, i.e. More... ## Detailed Description ### template<typename T> class drake::multibody::SpatialAcceleration< T > This class is used to represent a spatial acceleration that combines rotational (angular acceleration) and translational (linear acceleration) components. While a SpatialVelocity V_XY represents the motion of a "moving frame" Y measured with respect to a "measured-in" frame X, the SpatialAcceleration A_XY represents the rate of change of this spatial velocity V_XY in frame X. That is $$^XA^Y = \frac{^Xd}{dt}\,{^XV^Y}$$ where $$\frac{^Xd}{dt}$$ denotes the time derivative taken in frame X. That is, to compute an acceleration we need to specify in what frame the time derivative is taken, see [Mitiguy 2016, §6.1] for a more in depth discussion on this. Time derivatives can be taken in different frames, and they transform according to the "Transport Theorem", which is in Drake is implemented in drake::math::ConvertTimeDerivativeToOtherFrame(). In source code comments we write A_XY = DtX(V_XY), where DtX() is the operator that takes the time derivative in the X frame. By convention, and unless otherwise stated, we assume that the frame in which the time derivative is taken is the "measured-in" frame, i.e. the time derivative used in A_XY is in frame X by default (i.e. DtX()). To perform numerical computations, we need to specify an "expressed-in" frame E (which may be distinct from either X or Y), so that components can be expressed as real numbers. Only the vector values are stored in a SpatialAcceleration object; the frames must be understood from context and it is the responsibility of the user to keep track of them. That is best accomplished through disciplined notation. In source code we use monogram notation where capital A is used to designate a spatial acceleration quantity. The same monogram notation rules for SpatialVelocity are also used for SpatialAcceleration. That is, the spatial acceleration of a frame Y measured in X and expressed in E is denoted with A_XY_E. For a more detailed introduction on spatial vectors and the monogram notation please refer to section Spatial Vectors. [Mitiguy 2016] Mitiguy, P., 2016. Advanced Dynamics & Motion Simulation. Template Parameters T The underlying scalar type. Must be a valid Eigen scalar. ## Constructor & Destructor Documentation SpatialAcceleration ( const SpatialAcceleration< T > & ) default SpatialAcceleration ( SpatialAcceleration< T > && ) default SpatialAcceleration ( ) inline Default constructor. 
In Release builds the elements of the newly constructed spatial acceleration are left uninitialized, resulting in a zero-cost operation. However, in Debug builds those entries are set to NaN so that operations using this uninitialized spatial acceleration fail fast, allowing fast bug detection.

SpatialAcceleration (const Eigen::Ref< const Vector3< T >> & alpha, const Eigen::Ref< const Vector3< T >> & a) [inline]

SpatialAcceleration constructor from an angular acceleration alpha and a linear acceleration a.

SpatialAcceleration (const Eigen::MatrixBase< Derived > & A) [inline explicit]

SpatialAcceleration constructor from an Eigen expression that represents a six-dimensional vector. Under the hood, spatial accelerations are 6-element quantities that are pairs of ordinary 3-vectors. Elements 0-2 constitute the angular acceleration component while elements 3-5 constitute the translational acceleration. The argument A in this constructor is the concatenation of the rotational 3D component followed by the translational 3D component. This constructor will assert that the size of A is six (6), at compile time for fixed-size Eigen expressions and at run time for dynamic-size Eigen expressions.

## Member Function Documentation

SpatialAcceleration< T > ComposeWithMovingFrameAcceleration (const Vector3< T > & p_PoBo_E, const Vector3< T > & w_WP_E, const SpatialVelocity< T > & V_PB_E, const SpatialAcceleration< T > & A_PB_E) const [inline]

This method composes this spatial acceleration A_WP of a frame P measured in a frame W, with that of a third frame B moving in P with spatial acceleration A_PB. The result is the spatial acceleration A_WB of frame B measured in W. At the instant in which the accelerations are composed, frame B is located with its origin Bo at p_PoBo from P's origin Po. See SpatialVelocity::ComposeWithMovingFrameVelocity() for the composition of SpatialVelocity quantities.

Note: This method is the extension of the Shift() operator, which computes the spatial acceleration of frame P shifted to Bo as if frame B moved rigidly with P, that is, for when V_PB and A_PB are both zero. In other words, the results from Shift() equal the results from this method when V_PB and A_PB are both zero.

Parameters:
[in] p_PoBo_E Shift vector from P's origin to B's origin, expressed in frame E. The "from" point Po must be the point whose acceleration is currently represented in this spatial acceleration, and E must be the same expressed-in frame as for this spatial acceleration.
[in] w_WP_E Angular velocity of frame P measured in frame W and expressed in frame E.
[in] V_PB_E The spatial velocity of a third frame B in motion with respect to P, expressed in the same frame E as this spatial acceleration.
[in] A_PB_E The spatial acceleration of a third frame B in motion with respect to P, expressed in the same frame E as this spatial acceleration.

Return values: A_WB_E The spatial acceleration of frame B in W, expressed in frame E.

### Derivation

The spatial velocity of frame B in W can be obtained by composing V_WP with V_PB: V_WB = V_WPb + V_PB = V_WP.Shift(p_PoBo) + V_PB (1) This operation can be performed with the SpatialVelocity method ComposeWithMovingFrameVelocity().

#### Translational acceleration component

The translational velocity v_WBo of point Bo in W corresponds to the translational component in Eq. (1): v_WBo = v_WPo + w_WP x p_PoBo + v_PBo (2) Therefore, for the translational acceleration we have: a_WBo = DtW(v_WBo) = DtW(v_WPo + w_WP x p_PoBo + v_PBo) = DtW(v_WPo) + DtW(w_WP x p_PoBo) + DtW(v_PBo) = a_WPo + DtW(w_WP) x p_PoBo + w_WP x DtW(p_PoBo) + DtW(v_PBo) = a_WPo + alpha_WP x p_PoBo + w_WP x DtW(p_PoBo) + DtW(v_PBo) (3) with a_WPo = DtW(v_WPo) and alpha_WP = DtW(w_WP) by definition. The term DtW(p_PoBo) in Eq. (3) is obtained by converting the vector time derivative from DtW() to DtP(), see drake::math::ConvertTimeDerivativeToOtherFrame(): DtW(p_PoBo) = DtP(p_PoBo) + w_WP x p_PoBo = v_PBo + w_WP x p_PoBo (4) since v_PBo = DtP(p_PoBo) by definition. Similarly, the term DtW(v_PBo) in Eq. (3) is also obtained by converting the time derivative from DtW() to DtP(): DtW(v_PBo) = DtP(v_PBo) + w_WP x v_PBo = a_PBo + w_WP x v_PBo (5) with a_PBo = DtP(v_PBo) by definition. Using Eqs. (4) and (5) in Eq. (3) yields for the translational acceleration: a_WBo = a_WPo + alpha_WP x p_PoBo + w_WP x (v_PBo + w_WP x p_PoBo) + a_PBo + w_WP x v_PBo and finally, by grouping terms together: a_WBo = a_WPo + alpha_WP x p_PoBo + w_WP x w_WP x p_PoBo + 2 * w_WP x v_PBo + a_PBo (6) which includes the effect of the angular acceleration of P in W, alpha_WP x p_PoBo; the centrifugal acceleration, w_WP x w_WP x p_PoBo; the Coriolis acceleration, 2 * w_WP x v_PBo, due to the motion of Bo in P; and the additional acceleration of Bo in P, a_PBo.

#### Rotational acceleration component

The rotational velocity w_WB of frame B in W corresponds to the rotational component in Eq. (1): w_WB = w_WP + w_PB (7) Therefore, the rotational acceleration of B in W corresponds to: alpha_WB = DtW(w_WB) = DtW(w_WP) + DtW(w_PB) = alpha_WP + DtW(w_PB) (8) where the last term in Eq. (8) can be converted to a time derivative in P as: DtW(w_PB) = DtP(w_PB) + w_WP x w_PB = alpha_PB + w_WP x w_PB (9) where alpha_PB = DtP(w_PB) by definition. Thus, the final expression for alpha_WB is obtained by using Eq. (9) in Eq. (8): alpha_WB = alpha_WP + alpha_PB + w_WP x w_PB (10) Equation (10) shows that angular accelerations cannot simply be added the way angular velocities can: there is an additional term w_WP x w_PB.

#### The spatial acceleration

The rotational and translational components of the spatial acceleration are given by Eqs. (10) and (6) respectively: A_WB.rotational() = alpha_WB = {alpha_WP} + alpha_PB + w_WP x w_PB and A_WB.translational() = a_WBo = {a_WPo + alpha_WP x p_PoBo + w_WP x w_WP x p_PoBo} + 2 * w_WP x v_PBo + a_PBo, where we have placed within curly brackets {} all the terms that also appear in the Shift() operation, which is equivalent to this method when V_PB and A_PB are both zero. In the equations above, alpha_WP = A_WP.rotational() and a_WPo = A_WP.translational(). As usual, for computation, all quantities above must be expressed in a common frame E; we add an _E suffix to each symbol to indicate that.

SpatialAcceleration & operator= (SpatialAcceleration< T > &&) = default

SpatialAcceleration & operator= (const SpatialAcceleration< T > &) = default

SpatialAcceleration< T > Shift (const Vector3< T > & p_PoBo_E, const Vector3< T > & w_WP_E) const [inline]

Shifts this spatial acceleration A_WP of a frame P into the spatial acceleration A_WPb of a frame Pb which is an offset frame rigidly aligned with P, but with its origin shifted to a point Bo by an offset p_PoBo. Frame Pb is instantaneously moving together with frame P as if rigidly attached to it.
As an example of application, this operation can be used to compute A_WPb where P is a frame on a rigid body and Bo is another point on that same body. Therefore P and Pb move together with the spatial velocity V_PPb being zero at all times. This is an alternate signature for shifting a spatial acceleration that does not change the original object. See ShiftInPlace() for more information and a description of the arguments.

SpatialAcceleration< T > & ShiftInPlace (const Vector3< T > & p_PoBo_E, const Vector3< T > & w_WP_E) [inline]

In-place shift of this spatial acceleration A_WP of a frame P into the spatial acceleration A_WPb of a frame Pb which is an offset frame rigidly aligned with P, but with its origin shifted to a point Bo by an offset p_PoBo. Frame Pb is instantaneously moving together with frame P as if rigidly attached to it. As an example of application, this operation can be used to compute A_WPb where P is a frame on a rigid body and Bo is another point on that same body. Therefore P and Pb move together with the spatial velocity V_PPb being zero at all times.

The shift operation modifies this spatial acceleration A_WP_E of a frame P measured in a frame W and expressed in a frame E, to become A_WPb_E, representing the acceleration of a frame Pb that results from shifting frame P to point Bo, which instantaneously moves together with frame P. This requires adjusting the linear acceleration component to account for:

1. the angular acceleration alpha_WP of frame P in W.
2. the centrifugal acceleration due to the angular velocity w_WP of frame P in W.

We are given the vector from the origin Po of frame P to point Bo, which becomes the origin of the shifted frame Pb, as the position vector p_PoBo_E expressed in the same frame E as this spatial acceleration. The operation performed, in coordinate-free form, is: alpha_WPb = alpha_WP, i.e. the angular acceleration is unchanged, and a_WBo = a_WPo + alpha_WP x p_PoBo + w_WP x w_WP x p_PoBo, where alpha and a represent the angular and linear acceleration components respectively. See the notes at the end of this documentation for a detailed derivation. For computation, all quantities above must be expressed in a common frame E; we add an _E suffix to each symbol to indicate that. This operation is performed in place, modifying the original object.

Parameters:
[in] p_PoBo_E Shift vector from the origin Po of frame P to point Bo, expressed in frame E. The "from" frame P must be the frame whose acceleration is currently represented in this spatial acceleration, and E must be the same expressed-in frame as for this spatial acceleration.
[in] w_WP_E Angular velocity of frame P measured in frame W and expressed in frame E.

Returns: A reference to this spatial acceleration, which is now A_WPb_E, that is, the spatial acceleration of frame Pb, still measured in frame W and expressed in frame E. See Shift() to compute the shifted spatial acceleration without modifying this original object.

### Derivation

#### Translational acceleration component

Recall that frame Pb is an offset frame rigidly aligned with P, but with its origin shifted to a point Bo by an offset p_PoBo. Frame Pb is instantaneously moving together with frame P as if rigidly attached to it.
The translational velocity v_WPb of frame Pb's origin, point Bo, in W can be obtained by the shift operation as: v_WPb = v_WPo + w_WP x p_PoBo (1) Therefore, for the translational acceleration we have: a_WBo = DtW(v_WPb) = DtW(v_WPo + w_WP x p_PoBo) = DtW(v_WPo) + DtW(w_WP x p_PoBo) = a_WPo + DtW(w_WP) x p_PoBo + w_WP x DtW(p_PoBo) = a_WPo + alpha_WP x p_PoBo + w_WP x DtW(p_PoBo) (2) with a_WPo = DtW(v_WPo) and alpha_WP = DtW(w_WP) by definition. The last term in Eq. (2) is obtained by converting the vector time derivative from DtW() to DtP(), see drake::math::ConvertTimeDerivativeToOtherFrame(): DtW(p_PoBo) = DtP(p_PoBo) + w_WP x p_PoBo = w_WP x p_PoBo (3) since v_PBo = DtP(p_PoBo) = 0 because the position of point Bo is fixed in frame P. Using Eq. (3) in Eq. (2) finally yields for the translational acceleration: a_WBo = a_WPo + alpha_WP x p_PoBo + w_WP x w_WP x p_PoBo (4)

#### Rotational acceleration component

The rotational velocity of frame Pb simply equals that of frame P since they are moving together in rigid motion; therefore w_WPb = w_WP. From this, the rotational acceleration of frame Pb in W is obtained as: alpha_WPb = DtW(w_WPb) = DtW(w_WP) = alpha_WP (5) which should be immediately obvious considering that frame Pb rotates together with frame P. With the rotational, Eq. (5), and translational, Eq. (4), components of acceleration derived above, we can write for A_WPb: A_WPb.rotational() = alpha_WPb = alpha_WP and A_WPb.translational() = a_WBo = a_WPo + alpha_WP x p_PoBo + w_WP x w_WP x p_PoBo, with alpha_WP = A_WP.rotational() and a_WPo = A_WP.translational(). As usual, for computation, all quantities above must be expressed in a common frame E; we add an _E suffix to each symbol to indicate that.

The documentation for this class was generated from the file drake/multibody/multibody_tree/math/spatial_acceleration.h.
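To make the shift and composition operations above concrete, here is a minimal usage sketch assembled from the constructors and method signatures documented on this page. It is illustrative only: the acceleration header path and the drake::multibody namespace are taken from the listing above, the SpatialVelocity header path and its (w, v) constructor are assumed to parallel the SpatialVector base class shown earlier, and the numeric values are made up.

```cpp
#include <iostream>
#include <Eigen/Dense>
// Header path as listed at the top of this page; the spatial_velocity.h path is assumed analogous.
#include <drake/multibody/multibody_tree/math/spatial_acceleration.h>
#include <drake/multibody/multibody_tree/math/spatial_velocity.h>

int main() {
  using drake::multibody::SpatialAcceleration;
  using drake::multibody::SpatialVelocity;
  using Eigen::Vector3d;

  // A_WP: angular acceleration alpha_WP and linear acceleration a_WPo of frame P
  // measured in W, both expressed in a common frame E.
  const Vector3d alpha_WP(0.0, 0.0, 1.0);  // rad/s^2
  const Vector3d a_WPo(1.0, 0.0, 0.0);     // m/s^2
  const SpatialAcceleration<double> A_WP(alpha_WP, a_WPo);

  const Vector3d w_WP(0.0, 0.0, 2.0);      // angular velocity of P in W, rad/s
  const Vector3d p_PoBo(0.5, 0.0, 0.0);    // offset from Po to Bo, m

  // Rigid shift to the offset frame Pb with origin Bo:
  //   alpha_WPb = alpha_WP
  //   a_WBo     = a_WPo + alpha_WP x p_PoBo + w_WP x (w_WP x p_PoBo)
  const SpatialAcceleration<double> A_WPb = A_WP.Shift(p_PoBo, w_WP);
  std::cout << "A_WPb rotational:    " << A_WPb.rotational().transpose() << "\n";
  std::cout << "A_WPb translational: " << A_WPb.translational().transpose() << "\n";

  // Composition with a frame B that additionally moves within P (nonzero V_PB and A_PB).
  const Vector3d zero3 = Vector3d::Zero();
  const SpatialVelocity<double> V_PB(Vector3d(0.0, 0.0, 0.5), Vector3d(0.1, 0.0, 0.0));
  const SpatialAcceleration<double> A_PB(zero3, Vector3d(0.0, 0.2, 0.0));
  const SpatialAcceleration<double> A_WB =
      A_WP.ComposeWithMovingFrameAcceleration(p_PoBo, w_WP, V_PB, A_PB);
  std::cout << "A_WB rotational:     " << A_WB.rotational().transpose() << "\n";
  std::cout << "A_WB translational:  " << A_WB.translational().transpose() << "\n";
  return 0;
}
```

When V_PB and A_PB are both zero, the composed result A_WB reduces to the shifted quantity A_WPb, matching the Note in the method documentation above.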
# American Institute of Mathematical Sciences

ISSN: 1531-3492, eISSN: 1553-524X

## Discrete and Continuous Dynamical Systems - B

February 2022, Volume 27, Issue 2

2022, 27(2): 619-638. doi: 10.3934/dcdsb.2021058
Abstract: The transmission of production-limiting diseases on farms, such as Neosporosis and Johne's disease, has brought huge losses worldwide due to reproductive failure. This paper aims to provide a modeling framework for controlling the disease and investigating the spread dynamics of Neospora caninum-infected dairy cattle as a case study. In particular, a dynamic model for production-limiting disease transmission on the farm is proposed. It incorporates the vertical and horizontal transmission routes and two vaccines. The threshold parameter, the basic reproduction number $\mathcal{R}_0$, is derived and qualitatively used to explore the stability of the equilibria. Global stability of the disease-free and endemic equilibria is investigated using the comparison theorem or geometric approach. On the case study of Neospora caninum-infected dairy in Switzerland, sensitivity analysis of all involved parameters with respect to the basic reproduction number $\mathcal{R}_0$ has been performed. Through Pontryagin's maximum principle, the optimal control problem is discussed to determine the optimal vaccination coverage rate while minimizing the number of infected individuals and the control cost at the same time. Moreover, numerical simulations are performed to support the analytical findings. The present study provides useful information on the understanding of production-limiting disease prevention on a farm.

2022, 27(2): 639-657. doi: 10.3934/dcdsb.2021059
Abstract: The paper is concerned with a class of nonlinear time-varying retarded integro-differential equations (RIDEs). By the Lyapunov–Krasovskiĭ functional method, two new results with weaker conditions related to uniform stability (US), uniform asymptotic stability (UAS), integrability, boundedness, and boundedness at infinity of solutions of the RIDEs are given. For illustrative purposes, two examples are provided. The study of the results of this paper shows that the given theorems are not only applicable to time-varying linear RIDEs, but also applicable to time-varying nonlinear RIDEs.

2022, 27(2): 659-689. doi: 10.3934/dcdsb.2021060
Abstract: In this paper, the global exponential stability and periodicity are investigated for impulsive neural network models with Lipschitz continuous activation functions and generalized piecewise constant delay. The sufficient conditions for the existence and uniqueness of periodic solutions of the model are established by applying the fixed point theorem and the method of successive approximations. By constructing suitable differential inequalities with generalized piecewise constant delay, some sufficient conditions for the global exponential stability of the model are obtained. The method, which does not make use of Lyapunov functionals, is simple and valid for the periodicity and stability analysis of impulsive neural network models with variable and/or deviating arguments. The results extend some previous results. Typical numerical examples with simulations are utilized to illustrate the validity and reduced conservatism of the theoretical results. This paper ends with a brief conclusion.

2022, 27(2): 691-715. doi: 10.3934/dcdsb.2021061
Abstract: We formulate a phytoplankton–zooplankton interaction system by incorporating (i) a Monod-Haldane type functional response function; (ii) two delays accounting, respectively, for the gestation delay $\tau$ of the zooplankton and the time $\tau_1$ required for the maturity of TPP. Firstly, we give the existence of equilibria and properties of solutions. The global convergence to the boundary equilibrium is also derived under a certain criterion. Secondly, in the case without the maturity delay $\tau_1$, the gestation delay $\tau$ may lead to stability switches of the positive equilibrium. Then, with $\tau$ fixed in the stable interval, the effect of $\tau_1$ is investigated and we find that $\tau_1$ can also cause oscillation of the system. Specially, when $\tau = \tau_1$, under certain conditions, the periodic solution will exist over a wide range as the delay moves away from the critical value. To deal with the local stability of the positive equilibrium in the general case with all delays being positive, we use the crossing curve method; it yields the stability changes of the positive equilibrium in the $(\tau, \tau_1)$ plane. When choosing $\tau$ in the unstable interval, the system can still undergo a Hopf bifurcation, which extends the crossing curve method to systems with exponentially decaying delay-dependent coefficients. Some numerical simulations are given to verify the correctness of the theoretical analyses.

2022, 27(2): 717-748. doi: 10.3934/dcdsb.2021062
Abstract: In this paper, we formulate a multi-group SIR epidemic model with the consideration of proportionate mixing patterns between groups and group-specific fractional-dose vaccination to evaluate the effects of fractionated dosing strategies on disease control and prevention in a heterogeneously mixing population. The basic reproduction number $\mathscr{R}_0$, the final size of the epidemic, and the infection attack rate are used as three measures of the population-level implications of fractionated dosing programs. Theoretically, we identify the basic reproduction number $\mathscr{R}_0$, establish the existence and uniqueness of the final size and the final size relation with $\mathscr{R}_0$, and obtain explicit calculation expressions of the infection attack rate for each group and for the whole population. Furthermore, the simulation results suggest that dose fractionation policies have positive effects in lowering $\mathscr{R}_0$, decreasing the final size and reducing the infection attack rate only when the fractional-dose influenza vaccine efficacy is high enough, rather than just similar to that of the standard dose. We find evidence that fractional-dose vaccination in response to influenza vaccine shortages can have negative community-level effects. Our results indicate that the role of fractional-dose vaccines should not be overestimated even though fractional dosing strategies could extend the vaccine coverage.

2022, 27(2): 749-768. doi: 10.3934/dcdsb.2021063
Abstract: In this paper we discuss the weak pullback mean random attractors for stochastic Ginzburg-Landau equations defined in Bochner spaces. We prove the existence and uniqueness of weak pullback mean random attractors for the stochastic Ginzburg-Landau equations with nonlinear diffusion terms. We also establish the existence and uniqueness of such attractors for the deterministic Ginzburg-Landau equations with random initial data. In this case, the periodicity of the weak pullback mean random attractors is also proved whenever the external forcing terms are periodic in time.

2022, 27(2): 769-797. doi: 10.3934/dcdsb.2021064
Abstract: In 1996, Edward Lorenz introduced a system of ordinary differential equations that describes a scalar quantity evolving on a circular array of sites, undergoing forcing, dissipation, and rotation invariant advection. Lorenz constructed the system as a test problem for numerical weather prediction. Since then, the system has also found use as a test case in data assimilation. Mathematically, this is a dynamical system with a single bifurcation parameter (rescaled forcing) that undergoes multiple bifurcations and exhibits chaotic behavior for large forcing. In this paper, the main characteristics of the advection term in the model are identified and used to describe and classify possible generalizations of the system. A graphical method to study the bifurcation behavior of constant solutions is introduced, and it is shown how to use the rotation invariance to compute normal forms of the system analytically. Problems with site-dependent forcing, dissipation, or advection are considered and basic existence and stability results are proved for these extensions. We address some related topics in the appendices, wherein the Lorenz '96 system in Fourier space is considered, explicit solutions for some advection-only systems are found, and it is demonstrated how to use advection-only systems to assess numerical schemes.

2022, 27(2): 799-819. doi: 10.3934/dcdsb.2021065
Abstract: Reaction networks can be regarded as finite oriented graphs embedded in Euclidean space. Single-target networks are reaction networks with an arbitrary set of source vertices, but only one sink vertex. We completely characterize the dynamics of all mass-action systems generated by single-target networks, as follows: either (i) the system is globally stable for all choices of rate constants (in fact, is dynamically equivalent to a detailed-balanced system with a single linkage class), or (ii) the system has no positive steady states for any choice of rate constants and all trajectories must converge to the boundary of the positive orthant or to infinity. Moreover, we show that global stability occurs if and only if the target vertex of the network is in the relative interior of the convex hull of the source vertices.

2022, 27(2): 821-836. doi: 10.3934/dcdsb.2021066
Abstract: In this paper, the input-to-state stability (ISS), stochastic ISS (SISS) and integral ISS (iISS) for mild solutions of infinite-dimensional stochastic nonlinear systems (IDSNS) are investigated, respectively.
By constructing a class of Yosida strong solution approximating systems for IDSNS and using the infinite-dimensional version of Itô's formula, Lyapunov-based sufficient criteria are derived for ensuring the ISS-type properties of IDSNS, which extend the existing corresponding results for infinite-dimensional deterministic systems. Moreover, two examples are presented to demonstrate the main results.

2022, 27(2): 837-861. doi: 10.3934/dcdsb.2021067
Abstract: In this paper, we investigate a reaction-diffusion-advection two-species competition system with a free boundary in a heterogeneous environment. The primary aim is to study the impact of small advection terms and a heterogeneous environment on the dynamics of the two species via a free boundary. The function $m(x)$ represents the heterogeneous environment, and it can satisfy a positive-everywhere condition or a changeable-sign condition. Firstly, on one hand, we provide the long-time behavior of the solution in the vanishing case when $m(x)$ satisfies either of the conditions above; on the other hand, the long-time behavior of the solution in the spreading case is obtained when $m(x)$ satisfies the positive-everywhere condition. Secondly, a spreading-vanishing dichotomy and several sufficient conditions in terms of the initial data and the moving parameters are obtained to determine whether spreading or vanishing of the two species happens when $m(x)$ satisfies either of the conditions above. Furthermore, we derive estimates of the spreading speed of the free boundary when $m(x)$ satisfies the positive-everywhere condition and spreading of the two species occurs.

2022, 27(2): 863-882. doi: 10.3934/dcdsb.2021068
Abstract: The stability problem for perturbations near the hydrostatic balance is one of the important issues for the Boussinesq equations. This paper focuses on the asymptotic stability and large-time behavior problem of perturbations of the 2D fractional Boussinesq equations with only fractional velocity dissipation or fractional thermal diffusivity. Since the linear portion of the Boussinesq equations plays a crucial role in the stability properties, we firstly study the linearized fractional Boussinesq equations with only fractional velocity dissipation or fractional thermal diffusivity and complete the following work: 1) assessing the stability and obtaining the precise large-time asymptotic behavior for solutions to the linearized system satisfied by the perturbation; 2) understanding the spectral property of the linearization; 3) showing the $H^2$-stability for the linearized system, and proving that the $L^2$-norms of $\nabla{u}$ and $\Delta{u}$ (or $\nabla\theta$ and $\Delta\theta$), and the $L^\varrho$-norms $(2<\varrho<\infty)$ of $u$ and $\nabla{u}$ (or $\theta$ and $\nabla\theta$) all approach zero as $t\rightarrow\infty$ when $\alpha = 1$ and $\eta = 0$ (or $\nu = 0$ and $\beta = 1$). Secondly, we obtain the $H^1$-stability for the full nonlinear system and prove that the $L^\varrho$-norm $(2<\varrho<\infty)$ of $\theta$ and the $L^2$-norm of $\nabla\theta$ approach zero as $t\rightarrow\infty$.

2022, 27(2): 883-901. doi: 10.3934/dcdsb.2021072
Abstract: In this paper we establish a comparison approach to study stabilization of stochastic differential equations driven by $G$-Brownian motion with delayed feedback control ($G$-SDDEs for short). This theory also extends to a general range of moment orders and brings more choices of $p$. Finally, a simple example is proposed to demonstrate the applications of our theory.

2022, 27(2): 903-920. doi: 10.3934/dcdsb.2021073
Abstract: In this paper, we consider the time-fractional diffusion equation with the Caputo fractional derivative. Due to the singularity of the solution at the initial moment, it is difficult to achieve an ideal convergence order on uniform meshes. Therefore, in order to improve the convergence order, we discretize the Caputo time-fractional derivative by a new $L1$-$2$ scheme on graded meshes, while the spatial derivative term is approximated by the classical central difference scheme on uniform meshes. We analyze the approximation of the time-fractional derivative and obtain the time truncation error, but the stability analysis remains an open problem. On the other hand, considering that the computational cost is extremely large, we present a reduced-order finite difference extrapolation algorithm for the time-fractional diffusion equation by means of the proper orthogonal decomposition (POD) technique, which effectively reduces the computational cost. Finally, several numerical examples are given to verify the convergence of the scheme and the effectiveness of the reduced-order extrapolation algorithm.

2022, 27(2): 921-944. doi: 10.3934/dcdsb.2021075
Abstract: This paper concerns the mathematical analysis of quasi-periodic travelling wave solutions for beam equations with damping on 3-dimensional rectangular tori. Provided that the generators of the rectangular torus satisfy certain relationships, by excluding some values of two model parameters, we establish the existence of small-amplitude quasi-periodic travelling wave solutions with three frequencies. Moreover, it can be shown that such solutions are either continuations of rotating wave solutions, or continuations of quasi-periodic travelling wave solutions with two frequencies, and that the set of two model parameters is dense in the positive quadrant.

2022, 27(2): 945-976. doi: 10.3934/dcdsb.2021076
Abstract: Randomly drawn $2\times 2$ matrices induce a random dynamics on the Riemann sphere via the Möbius transformation. Considering a situation where this dynamics is restricted to the unit disc and given by a random rotation perturbed by further random terms depending on two competing small parameters, the invariant (Furstenberg) measure of the random dynamical system is determined. The results have applications to the perturbation theory of Lyapunov exponents, which are of relevance for one-dimensional discrete random Schrödinger operators.

2022, 27(2): 977-1000. doi: 10.3934/dcdsb.2021077
Abstract: This paper considers consumer-resource systems with Holling II functional response. In the system, the consumer can move between a source and a sink patch. By applying dynamical systems theory, we give a rigorous analysis of persistence of the system. Then we show local/global stability of equilibria and prove Hopf bifurcation by the Kuznetsov Theorem. It is shown that dispersal in the system could lead to results reversing those without dispersal. Varying a dispersal rate can change species' interaction outcomes from coexistence in periodic oscillation, to persistence at a steady state, to extinction of the predator, and even to extinction of both species. By explicit expressions of stable equilibria, we prove that dispersal can make the consumer reach an overall abundance larger than if non-dispersing, and there exists an optimal dispersal rate that maximizes the abundance. Asymmetry in dispersal can also lead to those results. It is proven that the overall abundance is a ridge-like function (surface) of dispersal rates, which extends both previous theory and experimental observation. These results are biologically important in protecting endangered species.

2022, 27(2): 1001-1027. doi: 10.3934/dcdsb.2021078
Abstract: In this paper, we study the energy equality for weak solutions to the 3D homogeneous incompressible magnetohydrodynamic equations with viscosity and magnetic diffusion in a bounded domain. Two types of regularity conditions are imposed on weak solutions to ensure the energy equality.
For the first type, some global integrability condition for the velocity $\mathbf u$ is required, while for the magnetic field $\mathbf b$ and the magnetic pressure $\pi$, some suitable integrability conditions near the boundary are sufficient. In contrast with the first type, the second type claims that if some additional interior integrability is imposed on $\mathbf b$, then the regularity on $\mathbf u$ can be relaxed.

2022, 27(2): 1029-1054. doi: 10.3934/dcdsb.2021079
Abstract: This paper proves the normal deviation of the synchronization of a stochastic coupled system. Using the relationship between the stationary solution and the general solution, the martingale method is used to prove the normal deviation of the fixed initial value of the multi-scale system, thereby obtaining the normal deviation of the stationary solution. At the same time, using the relationship between the synchronized system and the multi-scale system, the normal deviation of the synchronization is obtained.

2022, 27(2): 1055-1073. doi: 10.3934/dcdsb.2021080
Abstract: In this paper we consider an $n$-dimensional piecewise smooth dynamical system. This system has a co-dimension 2 switching manifold $\Sigma$ which is an intersection of two hyperplanes $\Sigma_1$ and $\Sigma_2$. We investigate the relation between periodic orbits of the PWS system and periodic orbits of its double regularized system. If this PWS system has an asymptotically stable sliding periodic orbit $\gamma$ (including type Ⅰ and type Ⅱ), we establish conditions to ensure that a double regularization of the given system also has a unique, asymptotically stable, periodic orbit in a neighbourhood of $\gamma$, converging to $\gamma$ as both of the two regularization parameters go to $0$, by applying the implicit function theorem and geometric singular perturbation theory.

2022, 27(2): 1075-1090. doi: 10.3934/dcdsb.2021081
Abstract: In this paper we are concerned with the approximate controllability of a multidimensional semilinear reaction-diffusion equation governed by a multiplicative control, which is locally distributed in the reaction term. For a given initial state we provide sufficient conditions on the desirable state to be approximately reached within an arbitrarily small time interval. Our approaches are based on linear semigroup theory and some results on uniform approximation with smooth functions.

2022, 27(2): 1091-1119. doi: 10.3934/dcdsb.2021082
Abstract: Feller kernels are a concise means to formalize individual structural transitions in a structured discrete-time population model. An iteroparous population (in which generations overlap) is considered where different kernels model the structural transitions for neonates and for older individuals. Other Feller kernels are used to model competition between individuals. The spectral radius of a suitable Feller kernel is established as the basic turnover number that acts as a threshold between population extinction and population persistence. If the basic turnover number exceeds one, the population shows various degrees of persistence that depend on the irreducibility and other properties of the transition kernels.

2022, 27(2): 1121-1147. doi: 10.3934/dcdsb.2021083
Abstract: In this paper, by using eigenvalue theory, the sub-supersolution method and fixed point theory, we prove the existence, multiplicity, uniqueness, asymptotic behavior and approximation of positive solutions for singular multiparameter p-Laplacian elliptic systems with nonlinearities with or without separated variables. Various nonexistence results for positive solutions are also studied.

2022, 27(2): 1149-1162. doi: 10.3934/dcdsb.2021084
Abstract: The original Hegselmann-Krause (HK) model consists of a set of $n$ agents that are characterized by their opinion, a number in $[0, 1]$. Each agent, say agent $i$, updates its opinion $x_i$ by taking the average opinion of all its neighbors, the agents whose opinion differs from $x_i$ by at most $\epsilon$. There are two types of HK models: the synchronous HK model and the asynchronous HK model. For the synchronous model, all the agents update their opinion simultaneously at each time step, whereas for the asynchronous HK model, only one agent chosen uniformly at random updates its opinion at each time step. This paper is concerned with a variant of the HK opinion dynamics, called the mixed HK model, where each agent can choose its degree of stubbornness and mix its opinion with the average opinion of its neighbors at each update. The degree of stubbornness of agents can be different and/or vary over time. An agent is not stubborn or absolutely open-minded if its new opinion at each update is the average opinion of its neighbors, and absolutely stubborn if its opinion does not change at the time of the update. The particular case where, at each time step, all the agents are absolutely open-minded is the synchronous HK model. In contrast, the asynchronous model corresponds to the particular case where, at each time step, all the agents are absolutely stubborn except for one agent chosen uniformly at random who is absolutely open-minded. We first show that some of the common properties of the synchronous HK model, such as finite-time convergence, do not hold for the mixed model. We then investigate conditions under which the asymptotic stability holds, or a consensus can be achieved for the mixed model.

2022, 27(2): 1163-1178. doi: 10.3934/dcdsb.2021085
Abstract: In this article, Turing instability and the formation of spatial patterns for a general two-component reaction-diffusion system defined on a 2D bounded domain are investigated. By analyzing the characteristic equation at positive constant steady states and further selecting the diffusion rate $d$ and the diffusion ratio $\varepsilon$ as bifurcation parameters, necessary and sufficient conditions for the occurrence of Turing instability are established; this defines what is called the first Turing bifurcation curve. Furthermore, parameter regions in which single-mode Turing patterns arise and in which multiple-mode (or superposition) Turing patterns coexist when bifurcation parameters are chosen are described. In particular, the boundary of the parameter region for the emergence of single-mode Turing patterns consists of the first and the second Turing bifurcation curves, which are given in explicit formulas. Finally, by taking the diffusive Schnakenberg system as an example, parameter regions for the emergence of various kinds of spatially inhomogeneous patterns with different spatial frequencies and superposition Turing patterns are estimated theoretically and shown numerically.

2022, 27(2): 1179-1207. doi: 10.3934/dcdsb.2021086
Abstract: In this work, two fully novel finite difference schemes for the two-dimensional time-fractional mixed diffusion and diffusion-wave equation (TFMDDWE) are presented. Firstly, Hermite and Newton quadratic interpolation polynomials are used for the time discretization and a central difference quotient is used in the spatial direction; this yields the H2N2 finite difference scheme. Secondly, in order to increase computational efficiency, a sum-of-exponentials approximation is used for the kernel function in the fractional-order operator, giving the fast H2N2 finite difference scheme. Thirdly, the stability and convergence of the two schemes are studied by the energy method. When the tolerance error $\epsilon$ of the fast algorithm is sufficiently small, it is proved that both difference schemes have convergence of order $3-\beta$ $(1<\beta<2)$ in time and of second order in space. Finally, numerical results demonstrate the theoretical convergence and the effectiveness of the fast algorithm.
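The Lorenz '96 abstract above (doi: 10.3934/dcdsb.2021064) describes the model only in words. For readers who want to experiment with it as a test problem, here is a minimal, illustrative sketch of the classical single-scale Lorenz '96 system, $\dot x_i = (x_{i+1} - x_{i-2})\,x_{i-1} - x_i + F$ with cyclic indices, which is the standard form usually associated with that description (the generalizations studied in the paper are not reproduced here); it is integrated with a classical fourth-order Runge-Kutta step, and the parameter choices $n = 40$, $F = 8$ are the conventional chaotic test setting, not values taken from the paper.

```cpp
#include <cstdio>
#include <vector>

// Right-hand side of the classical Lorenz '96 system:
//   dx_i/dt = (x_{i+1} - x_{i-2}) * x_{i-1} - x_i + F,  indices cyclic mod n.
std::vector<double> L96Rhs(const std::vector<double>& x, double F) {
  const int n = static_cast<int>(x.size());
  std::vector<double> dx(n);
  for (int i = 0; i < n; ++i) {
    const double xm2 = x[(i - 2 + n) % n];
    const double xm1 = x[(i - 1 + n) % n];
    const double xp1 = x[(i + 1) % n];
    dx[i] = (xp1 - xm2) * xm1 - x[i] + F;  // advection - dissipation + forcing
  }
  return dx;
}

int main() {
  const int n = 40;        // number of sites on the circle
  const double F = 8.0;    // forcing; F = 8 is a common chaotic regime
  const double dt = 0.01;  // time step
  std::vector<double> x(n, F);
  x[0] += 0.01;            // small perturbation of the steady state x_i = F

  // Classical fourth-order Runge-Kutta integration.
  std::vector<double> tmp(n);
  for (int step = 0; step < 5000; ++step) {
    const auto k1 = L96Rhs(x, F);
    for (int i = 0; i < n; ++i) tmp[i] = x[i] + 0.5 * dt * k1[i];
    const auto k2 = L96Rhs(tmp, F);
    for (int i = 0; i < n; ++i) tmp[i] = x[i] + 0.5 * dt * k2[i];
    const auto k3 = L96Rhs(tmp, F);
    for (int i = 0; i < n; ++i) tmp[i] = x[i] + dt * k3[i];
    const auto k4 = L96Rhs(tmp, F);
    for (int i = 0; i < n; ++i)
      x[i] += dt / 6.0 * (k1[i] + 2.0 * k2[i] + 2.0 * k3[i] + k4[i]);
  }
  std::printf("x[0] after integration: %f\n", x[0]);
  return 0;
}
```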
# Irrationality proof technique: no factorial in the denominator Jonathan Sondow elegantly proves the irrationality of e in his aptly titled A Geometric Proof that e Is Irrational and a New Measure of Its Irrationality (The American Mathematical Monthly, Vol. 113, No. 7 (Aug. - Sep., 2006), pp. 637, http://www.jstor.org/stable/27642006). In his argument, he constructs a sequence of nested intervals $I_n$ for every $n \geq 1$, each of the form $[k/n!, (k+1)/n!]$, such that $\bigcap I_n = \{e\}$, with $e$ lying strictly between the endpoints of each $I_n.$ From this, we conclude that $e$ cannot be written as a fraction with denominator $n!$ for any $n \geq 1.$ Fact: Every rational number $p/q$ can be written as a fraction with a factorial in its denominator: $p/q = p(q-1)!/q!$. Thus, we conclude that $e$ is irrational. The reason this proof technique works so well with $e$ is, of course, related to the Maclaurin series for the exponential function, $e^x.$ That any rational number can be written in lowest terms is employed in other irrationality proofs (e.g., the classic proof for that of $\sqrt{2}$) but I had not seen the above fact drawn upon before reading this particular paper. My question is: are there other examples of real numbers (which are not related to $e$ in some trivial way) whose irrationality can be proved using the Fact above? - Trivia: the Fact is related to an old sequence due to Lucas and Kempner: oeis.org/A002034 –  Charles Jul 26 '12 at 6:38 I changed "$\bigcap I_n =${$e$}" to "$\bigcap I_n = \{e\}$, with the braces INSIDE MathJax. It's coded like this: \bigcap I_n = \{e\} This is standard usage and without it you get fonts that fail to match in size and in other respects, and spacing may be at best non-standard. –  Michael Hardy Oct 20 '13 at 19:13 The same proof technique, for modified versions of the Fact, proves that some values of some hypergeometric functions are irrational. For example, the Bessel functions of the first kind have the following power series: $$J_n(x) = \sum_{i=0}^\infty \frac{(-1)^i}{i! (i+n)!} \bigg(\frac x 2\bigg)^{2i+n}$$ For any choice of integers $m\ne 0$ and $n$, every rational number can be written as a quotient of integers so that the denominator is of the form $i!(i+n)!m^{2i+n}$. Since $J_n(2/m)$ can't, it is irrational. - For any interval $I_n$ of the form you define, there are $n+1$ intervals that could be $I_{n+1}$ nested in it. Choose one of those intervals, then repeat the process. As long as you choose an interval other than the first infinitely many times and an interval other than the last infinitely many times, you get an irrational number by this proof. Or in other words, we have: $$\sum_{m=n+1}^\infty \frac{m-1}{m!} = \sum_{m=n+1}^\infty \left(\frac{1}{(m-1)!} - \frac{1}{m!}\right) = \frac{1}{n!}$$ so in user22202's answer, for any sequence of integers $a_m$ such that $0 \leq a_m \leq m-1$, and $a_m>0$ infinitely often and $a_m < m-1$ infinitely often, we have $$\sum_{m=0}^\infty \frac{a_m} {m!}$$ is irrational. - Nice proof! Although now I wonder just how different it really is from this one. In Proofs from the Book that technique is to show that $e^2$ and with some adjustments, $e^4$ are irrational. 
One can certainly take other sums where the proof you mention applies, although one has to be more careful than I initially thought: Flawed conjecture (see the comments for a counterexample) You could take any convergent infinite sum $\sum \frac{n_i}{d_i}$ where $\gcd(n_i,d_i)=1$ , and for every integer $q \gt 1$ there is a $j$ so that $q$ divides $d_i$ for all $i \gt j$. Strengthen the conditions to • $d_1 | d_2 | \cdots$ • for every integer $q \gt 1$ there is an $i$ so that $q$ divides $d_i$ • $0 \lt |\frac{n_i}{d_i}| \lt \frac{1}{d_{i-1}}$ and the proof does seem to go through. A question is which such sums give interesting real numbers. $\cos(1)=1-\frac{1}{2}+\frac{1}{24}-\frac{1}{720}\cdots$ works, although $\cos(1)=\frac{e^i+e^{-i}}{2}$ so it depends what counts as a trivial relation. Also $\cos(\frac{1}{k}).$ - It doesn't work that generally. Consider $\sum_{j=1}^\infty \dfrac{n_j}{j!}$ where $n_j$ is defined recursively as the greatest integer $x$ with $\gcd(x,j!) = 1$ and $\dfrac{x}{j!} < 1 - \sum_{i=1}^{j-1} \dfrac{n_i}{i!}$. The sum of this series is a rational number, namely $1$. –  Robert Israel Jul 26 '12 at 1:59 (those sums should have started at $2$, not $1$) –  Robert Israel Jul 26 '12 at 2:33 Good point, I'll revise it. –  Aaron Meyerowitz Jul 26 '12 at 3:35 If one writes the real number $x$ as $$x=\sum_{m=0}^{\infty}\frac{a_m}{m!}$$ where $a_m\in \mathbb{Z}$, so that for each $n\in \mathbb{Z}$, $n>0$, one can write $$x=\frac{c_n}{n!} + \sum_{m=n+1}^{\infty}\frac{a_m}{m!}$$ for some $c_n\in \mathbb{Z}$, perhaps one can try to control the $a_m$ so that $$\sum_{m=n+1}^{\infty}\frac{a_m}{m!} < \frac{1}{n!}$$ for all $n$? -
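As a quick check (my own verification, not part of the original thread) that the strengthened conditions above really do apply to $\cos(1)$: write $$\cos(1) = 1 + \sum_{i=1}^{\infty}\frac{n_i}{d_i}, \qquad n_i = (-1)^i,\quad d_i = (2i)!,$$ so the leading integer term is harmless and, for the remaining terms, $$d_i \mid d_{i+1}\ \text{since}\ (2i+2)! = (2i+2)(2i+1)\,(2i)!, \qquad q \mid d_i\ \text{whenever}\ 2i \ge q, \qquad 0 < \left|\frac{n_i}{d_i}\right| = \frac{1}{(2i)!} < \frac{1}{(2i-2)!} = \frac{1}{d_{i-1}} \quad (i \ge 2).$$ Hence all three conditions hold, and the argument sketched above shows that $\cos(1)$ is irrational, as claimed.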
Designed by teachers for third to fifth grade, these activities provide plenty of support in learning to convert mixed numbers and improper fractions. Subjects: Math, Fractions. Common Core Standards: Grade 4 Number & Operations: Fractions, CCSS.Math.Content.4.NF.B.3. At the end of the lesson, students will be able to convert mixed numbers to improper fractions and convert improper fractions to mixed numbers, and to identify a whole number as a fraction using one as the denominator.

An improper fraction is a fraction whose numerator is greater than its denominator. For example, 5/3 is an improper fraction because the top number is bigger than the bottom number; 5/4, 37/9 and 20/15 are improper fractions for the same reason. A mixed number is composed of a whole number and a proper fraction: 6 2/3, 18 5/4, and 2 2/5 are all mixed numbers. The top number of a fraction is called the numerator and the bottom number the denominator; the line that separates them (for instance, the line between 20 and 15 in 20/15) is the division line, also known as the fraction bar or vinculum.

To convert a mixed number to an improper fraction: Step 1: Multiply the denominator by the whole number. Step 2: Add the numerator to that product and write the result over the original denominator. For example, 1 1/2 = 2/2 + 1/2 = 3/2 and 2 1/5 = 10/5 + 1/5 = 11/5.

To convert an improper fraction to a mixed number, follow these steps: divide the numerator by the denominator; the quotient (without the remainder) becomes the whole-number part, and the remainder becomes the numerator of the fractional part over the original denominator. For example, the improper fraction 10/3 is equivalent to 3 1/3 as a mixed number in its simplest form.
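Both conversions described above are just integer division with a remainder, so they are easy to check mechanically. Here is a small stand-alone sketch (not part of the quoted lesson materials; the function names are made up for illustration) that reproduces the examples from the text:

```cpp
#include <iostream>

// Convert an improper fraction num/den to mixed-number form: whole + rem/den.
// The quotient becomes the whole-number part, the remainder becomes the new numerator.
void ImproperToMixed(int num, int den) {
  const int whole = num / den;
  const int rem = num % den;
  std::cout << num << "/" << den << " = " << whole << " " << rem << "/" << den << "\n";
}

// Convert a mixed number whole + num/den back to an improper fraction:
// multiply the denominator by the whole number, then add the numerator.
void MixedToImproper(int whole, int num, int den) {
  const int improper = whole * den + num;
  std::cout << whole << " " << num << "/" << den << " = " << improper << "/" << den << "\n";
}

int main() {
  ImproperToMixed(10, 3);    // 10/3 = 3 1/3
  ImproperToMixed(37, 9);    // 37/9 = 4 1/9
  MixedToImproper(1, 1, 2);  // 1 1/2 = 3/2
  MixedToImproper(2, 1, 5);  // 2 1/5 = 11/5
  return 0;
}
```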
Designed by teachers for third to fifth grade, these activities provide plenty of support in learning to convert between mixed numbers and improper fractions; they can be used on their own or as support for the lesson "Sums for Mixed Numbers", which uses scaffolds such as fraction models to support the visualization of mixed numbers (one of its word problems starts with tins of paint that hold half a litre each). Included in this product: guided notes and guided practice, practice problems, vocabulary models, 2 warm-ups and 2 exit slips.

A mixed number is composed of a whole number and a proper fraction, so 6 2/3, 18 5/4 and 2 2/5 are all mixed numbers. An improper fraction is just a pure fraction whose numerator is greater than its denominator; 37/9, for example, is an improper fraction. The top number of a fraction is called the numerator and the bottom number the denominator, and the line that separates them is called the fraction bar or vinculum (the line that separates the 20 and the 15 in 20/15 is also called the division line).

Step one: use long division. To convert an improper fraction to a mixed number, divide the numerator by the denominator; the quotient (without the remainder) becomes the whole-number part, and the remainder is written as a fraction over the original denominator. For example, 13/4 = 3 remainder 1, so 13/4 = 3 1/4; likewise 10/3 is equivalent to 3 1/3.

To convert a mixed number to an improper fraction, rewrite the whole number as a fraction with the same denominator and add: 1 1/2 = 2/2 + 1/2 = 3/2. Once mixed numbers have been converted this way they can be multiplied like any other fractions (multiply the top numbers, multiply the bottom numbers): 3/2 × 11/5 = (3 × 11)/(2 × 5) = 33/10, which is 3 3/10 as a mixed number.
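The same conversions, written out compactly (this only restates the method above, using the 13/4 example):

$$13 \div 4 = 3 \text{ remainder } 1 \;\Rightarrow\; \frac{13}{4} = 3\tfrac{1}{4}, \qquad 3\tfrac{1}{4} = \frac{3 \times 4 + 1}{4} = \frac{13}{4}.$$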
# Calculating two particle position and velocity with each other? I've calculated the displacements and average velocities of both particles. $A$: average velocity $= 1.1180\ m/s$, displacement $= 11.1803\ m$. $B$: average velocity $= 0\ m/s$, displacement $= 0\ m$. When, approximately, are the particles at the same position? I can't figure out how to find when they are at the same position. I can look at the graph and tell it's around the $1.75s$ mark. When, approximately, do the particles have the same velocity? It's when both gradients are the same, around $6s$.

- I agree with Michael that the question is probably expecting you to estimate the answers from the graph. However, this is what I would do if these were experimental results I was analysing. Let's assume that the blue line is a straight line $f(x)$ and the curve is a quadratic $g(x)$. A straight line has the form $y = ax + b$ where $a$ is the gradient and $b$ is the $y$ intercept. From the graph the $y$ intercept is 1 and the gradient is 4/10, i.e. 0.4. Then: $$f(x) = 0.4x + 1$$ The quadratic is a little harder, but if you know the two zeroes of the quadratic, $x_1$ and $x_2$, then the function has the form $g(x) = A(x - x_1)(x - x_2)$ where $A$ is some constant. In our case the two zeros are both $x = 5$, so the function is $g(x) = A(x - 5)^2$. To find the constant $A$ note that when $x = 0$, $y = 4$, so the constant $A$ must be 4/25 or 0.16. $$g(x) = 0.16(x - 5)^2 = 0.16x^2 - 1.6x + 4$$ So to find the two values of $x$ where the curves cross we just set $f(x) = g(x)$: $$0.4x + 1 = 0.16x^2 - 1.6x + 4$$ and a quick rearrangement gives: $$0.16x^2 - 2x + 3 = 0$$ To get the two solutions to this use the quadratic formula, and you find the curves cross at $x \approx 1.743$ and $x \approx 10.757$. As you say, the two particles have the same velocity when the gradients are the same, i.e. $f'(x) = g'(x)$. Differentiating our expressions for $f(x)$ and $g(x)$ gives: $$f'(x) = 0.4$$ $$g'(x) = 0.32x - 1.6$$ Set these equal to find the point where the slopes are equal: $$0.4 = 0.32x - 1.6$$ so: $$x \approx 6.25$$ Incidentally, I'm a bit concerned by your calculation of the average velocity. Unless there's a bit of the question you haven't posted, the average velocity of A is distance moved (4 metres) divided by time taken (10 secs), so the average velocity is 0.4 m/s, not 1.118.
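(For completeness, the quadratic-formula step that the answer above leaves to the reader; this is just the arithmetic on the equation already derived there:)

$$x = \frac{2 \pm \sqrt{(-2)^2 - 4(0.16)(3)}}{2(0.16)} = \frac{2 \pm \sqrt{2.08}}{0.32} \approx 1.743 \;\text{or}\; 10.757$$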
# Johansen Cointegration Test I just performed a Johansen cointegration test on two stocks. The results I get are:

ans =
    r0    r1
t1  true  false

I am using Matlab. Can someone interpret these for me? If I have understood the test properly, they are a well-correlated pair, with mean reversion. Are they mean reverting? Also, I have read about stationary pairs but the technical definition is a bit confusing. If possible, can someone help point me in the direction of a simpler explanation? Or maybe a book to start off? Did the test again with the following result:

[h,pValue,stat,cValue,mles] = jcitest(Y)

Results Summary (Test 1)
Data: Y
Effective sample size: 229
Model: H1
Lags: 0
Statistic: trace
Significance level: 0.05

r  h  stat    cValue   pValue  eigVal
0  0  9.6981  15.4948  0.3467  0.0411
1  0  0.0979   3.8415  0.7872  0.0004

h =
    r0     r1
t1  false  false

pValue =
    r0       r1
t1  0.34672  0.78721

stat =
    r0      r1
t1  9.6981  0.097852

cValue =
    r0      r1
t1  15.495  3.8415

mles =
    r0            r1
t1  [1x1 struct]  [1x1 struct]

I am trying to understand whether h=0 implies no cointegration. What exactly does the pValue tell us? In short, I am still trying to understand how exactly to interpret the results. I will get onto generating the eigenvalues, and try to understand them after this part gets clear. Thanks for the help guys; I don't have the advantage of attending school at the moment and understanding this is very difficult.
@resize(t:tab, int) resizes its first argument and returns the result. The argument is modified in place. If the second argument is smaller than the size of the first argument, it effectively shrinks the first argument; if it is greater, undefined values are used to extend the tab.
# 7 specific volatges to to 1,n,2,3,4,5,6 numbers in Alpha-numeric Display B #### [email protected] Jan 1, 1970 0 What I am attempting is to take 7 specific voltages, and display certain numbers for these specific voltages on an alpha-numeric display. What I currently have is a resistor ladder that breaks up the voltage. The resistors form a ladder that breaks up the 5v from a voltage regulator into reference signals to determine at what voltages an LM339N will use to ground each corresponding LED. It use 1% resistors to make sure I get exact reference voltages because in the higher gears the signal is very close and there is not much gap between voltages. The output voltages from the sensor are as follows 1st gear = 1.782v 2nd gear = 2.242v 3rd gear = 2.960v 4th gear = 3.630v 5th gear = 4.310v 6th gear = 4.660v Neutral = 5.000v The comparators in this circuit will turn on each LED as follows: 1st LED = Anything over 1.022v 2nd LED = Anything over 2.043v 3rd LED = Anything over 2.660v 4th LED = Anything over 3.356v 5th LED = Anything over 4.052v 6th LED = Anything over 4.526v I now want to take thes voltages and convert them to show the proper gear on an alpha-numeric display. How can I turn what I have working in to alpha numeric? Or, Do I have to start a whole new circuit. Something that takes the voltage and turns it into a digital signal and then sends to an A/N LED display. J #### John Popelish Jan 1, 1970 0 What I am attempting is to take 7 specific voltages, and display certain numbers for these specific voltages on an alpha-numeric display. What I currently have is a resistor ladder that breaks up the voltage. The resistors form a ladder that breaks up the 5v from a voltage regulator into reference signals to determine at what voltages an LM339N will use to ground each corresponding LED. It use 1% resistors to make sure I get exact reference voltages because in the higher gears the signal is very close and there is not much gap between voltages. The output voltages from the sensor are as follows 1st gear = 1.782v 2nd gear = 2.242v 3rd gear = 2.960v 4th gear = 3.630v 5th gear = 4.310v 6th gear = 4.660v Neutral = 5.000v The comparators in this circuit will turn on each LED as follows: 1st LED = Anything over 1.022v 2nd LED = Anything over 2.043v 3rd LED = Anything over 2.660v 4th LED = Anything over 3.356v 5th LED = Anything over 4.052v 6th LED = Anything over 4.526v I now want to take thes voltages and convert them to show the proper gear on an alpha-numeric display. How can I turn what I have working in to alpha numeric? Or, Do I have to start a whole new circuit. Something that takes the voltage and turns it into a digital signal and then sends to an A/N LED display. A small microprocessor like a PIC 16F684: would make this so much simpler, including programmable levels and easy digital filtering of the noise on the signal, and possibly a shift in the levels depending on acceleration. The patterns for the various digits would be stored in a look up table, and the digital outputs could drive a 7 segment display, directly. The main advantage would be that it allows you to experiment with new variations you think up, after a little use, without going back and rebuilding the circuit. B #### [email protected] Jan 1, 1970 0 Is there any way I can use the outputs I already have. In order to use the PIC16F84, I would have to "program" it? I would actually like to bulild it in multisim first that way I don't waste money on parts. 
Kind of new to this stuff, but am having lots of fun, and pulling out my hair at the same time. Maybe if someone could post a schematic? C #### Chris Jan 1, 1970 0 Is there any way I can use the outputs I already have. In order to use the PIC16F84, I would have to "program" it? I would actually like to bulild it in multisim first that way I don't waste money on parts. Kind of new to this stuff, but am having lots of fun, and pulling out my hair at the same time. Maybe if someone could post a schematic? This is exactly why it's such a good idea to learn microcontrollers, as Mr. Popelish suggested. You could do this all with one 18-pin PIC, even driving your LEDs with only seven current-limiting resistors. OK. If you want to do this the '70s way, make sure your comparator outputs are active-high (logic 1 when active). Then feed these into an 8-to-3 priority encoder (I'm assuming you're using an automotive 12V supply, so look at the CD4532). The priority part ensures that, if you've got multiple comparator inputs, the highest one is the one that is converted into 3-bit binary. That's pretty much the same as your table in the first post -- except that neutral will show as 6th gear -- you should make that a 7th LED. You go from there to a BCD-to-Decimal Decoder (CD4028), and then use those 7 outputs to drive a ULN2004, which drives the LEDs with current limiting resistors. Is this what you're looking for? Good luck Chris B #### [email protected] Jan 1, 1970 0 Sounds like the 70's way is the way for me. "make sure your comparator outputs are active-high (logic 1 when active)" What do you mean by that? At present my comparators are simply completing the circuit throught the resistor ladder. Don't know where the "logic state" is at in this. What I want is to take out the six LED's(can add another for neutral also) I currently have and replace with Segment LED. If you have a second let me know the components I need in conjuction with what I have to make this work. C #### Chris Jan 1, 1970 0 Sounds like the 70's way is the way for me. "make sure your comparator outputs are active-high (logic 1 when active)" What do you mean by that? At present my comparators are simply completing the circuit throught the resistor ladder. Don't know where the "logic state" is at in this. What I want is to take out the six LED's(can add another for neutral also) I currently have and replace with Segment LED. If you have a second let me know the components I need in conjuction with what I have to make this work. Aaah, wonderful Google. This will probably post before the followup I posted. I misread your original post. In order to use a seven segment display, you should have the outputs of the CD4532 priority encoder feeding the inputs of a CD4511 7-segment latch and driver. This will be able to drive the LEDs directly (remember to use current-limiting resistors -- try 1K to start). Also, comparators like your LM339 have open collector outputs, which will sink current when turned on. They're usually gioven output pullup resistors to Vcc (again, if you're using an automotive 12V, try 22K pullups). You didn't specify whether the comparator outputs are set up to go high or low when the proper voltage is achieved. In order for the 4532 to operate properly, a logic "1" has to be asserted at the appropriate input. This is the reverse of TTL-type encoders. By the way, if you reverse the sense of your "Neutral" comparator output, you can use it to blank the display, which might be useful. Darn. 
All this from trying to multitask the Superbowl, s.e.b. and the War Department at the same time. A fatal combination. ;-) If you need help, feel free to post again. Chris C #### Chris Jan 1, 1970 0 Chris said: This is exactly why it's such a good idea to learn microcontrollers, as Mr. Popelish suggested. You could do this all with one 18-pin PIC, even driving your LEDs with only seven current-limiting resistors. OK. If you want to do this the '70s way, make sure your comparator outputs are active-high (logic 1 when active). Then feed these into an 8-to-3 priority encoder (I'm assuming you're using an automotive 12V supply, so look at the CD4532). The priority part ensures that, if you've got multiple comparator inputs, the highest one is the one that is converted into 3-bit binary. That's pretty much the same as your table in the first post -- except that neutral will show as 6th gear -- you should make that a 7th LED. You go from there to a BCD-to-Decimal Decoder (CD4028), and then use those 7 outputs to drive a ULN2004, which drives the LEDs with current limiting resistors. Is this what you're looking for? Good luck Chris Super Bowl, and the War Department). The output of the CD4532 priority encoder should go to an CD4511 BCD-to-7-segment display decoder. That will give you your 7-segment display, without the CD4028 or the ULN2004. If you make the output from your Neutral comparator active-low, you can use it to blank out the display. Also, you should disable the latch function. Again, feel free to post back if this isn't clear, or you need more help. Chris B #### [email protected] Jan 1, 1970 0 I am mocking this up in multisim, and found when I activate everything, my lines from 4532 are al reading "Low" which I am assuming is backwards. I am very new to this, but like to think that I am a quick learner. Not to sure what to do to the comparators to make them read high. Thanks for the help so far. B #### Byron A Jeff Jan 1, 1970 0 Is there any way I can use the outputs I already have. Yes. But is essentially becomes programming in hardware. In order to use the PIC16F84, I would have to "program" it? Yes. I would actually like to bulild it in multisim first that way I don't waste money on parts. Microchip will send you samples. So you won't have a spend money. BAJ B #### [email protected] Jan 1, 1970 0 Only problem is that they don't supply samples of PIC16f84, they do some others, but not that one or at least that I can find. C #### Chris Jan 1, 1970 0 I am mocking this up in multisim, and found when I activate everything, my lines from 4532 are al reading "Low" which I am assuming is backwards. I am very new to this, but like to think that I am a quick learner. Not to sure what to do to the comparators to make them read high. Thanks for the help so far. Hi. Let's take things one at a time. A comparator is a logioc device that compares the two voltages at its +) is greater than the voltage at the inverting input (labelled -), the output of the comparator is a logic "1". If the voltage at - is greater than that at +, the output will be a logic "0". What makes this a little more analog is that the output of your comparator (usually depicted like an op amp) is an open collector transistor. It can either be on (logic 0) or off (logic 1). Some comparators have separate pins for the output transistor emitter (allowing you to choose the voltage of a logic "0". Others have standard logic outputs, and can source or sink current. 
But your LM339 output transistor emitters are tied to the GND pin, so the logic "0" I suggested above that you wanted the comparator output to be a logic "1" when the voltage went above the reference. That means putting the reference voltage at the inverting (-) input and your signal voltage at the non-inverting (+) input. I also mentioned that you would want your Neutral signal to be active low. That would mean you want your reference voltage to be at the non-inverting input (+) and your signal to be at the inverting (-) input. You might also want to have a signal for when the gearshift is in neutral -- as originally described, your system will say "6" when it should either be saying "N", "0", or blank. I think blank is your best bet, but you'll need a signal, so add another resistor to your divider, and another comparator. It will look something like this (and I hope you're using something better than a 7805 for your reference voltage!): ___ ~4.83V |\ ..--|___|->+12V ____ .-------------------------. o-|-\ | 22K +12V | | ___ | ___ 4.526v | | | >-o----------o N o--o-|7805|-o-|___|-o-|___|-o----------------.'-)-|+/ +| |____|+| | | | |/ --- | --- .-. | | --- | --- | | | | ___ | === | | | | | |\ ..--|___|->+12V === GND === '-' '--)-|-\ | 22K GND GND |4.052v | | >-o--------o 6th o--------------. o-|+/ | | | |/ .-. | | | | | | ___ | | | | |\| ..--|___|->+12V '-' '----)-|-\ | 22K |3.356v | | >-o--------o 5th o------------. o-|+/ | | | |/ .-. | | | | | | ___ | | | | |\ ..--|___|->+12V '-' '------)-|-\ |' 22K |2.660v | | >-o--------o 4th o----------. o-|+/ | | | |/ .-. | | | | | | ___ | | | | |\ ..--|___|->+12V '-' '--------)-|-\ | 22K |2.043v | | >-o--------o 3rd o--------. o-|+/ | | | |/ .-. | | | | | | ___ | | | | |\ ..--|___|->+12V '-' '----------)-|-\ | 22K |1.022v | | >-o--------o 2nd o------. o-|+/ | | | |/ .-. | | | | | | ___ | | | | |\ ..--|___|->+12V '-' '------------)-|-\ | 22K | | | >-o--------o 1st === '-|+/ GND |/ (created by AACircuit v1.28.6 beta 04/19/05 www.tech-chat.de) Now let's look at the digital logic you'll need to turn those seven logic signals into a seven segment display, 70s-style. As mentioned above, we're going to use two 4000-series CMOS ICs, mostly because they can be powered by the car battery. Be sure you include some kind of reverse voltage and surge protection -- momentary battery voltages can exceed 40V under some circumstances. Here's the CMOS logic part: | | N o ------. | | common | VCC VCC | VCCVCC cathode | + + | + + 7-seg. | 5| |16 4| 3| |16 display | .--o---o----. .---o---o--o----. .--------. | 4 | EI Vcc | | Blank LT | 1K X 7 | | | GND<---o7 | 6| |13 ___ | --- | | 3 | |GND<-o8 ao---|___|--oa | | | | 6th o---o6 | | |12 ___ | | | | | 2 | | 6 2| bo---|___|--ob | | | | 5th o---o5 Q4o-----o4 |11 ___ | | | | | 1 | | | co---|___|--oc --- | | 4th o---o4 4532 | 7 1| 4511 |10 ___ | | | | | 13| Q2o-----o2 do---|___|--od | | | | 3rd o---o3 | | |9 ___ | | | | | 12| | 9 7| eo---|___|--oe | | | | 2nd o---o2 Q1o-----o1 |15 ___ | --- | | 11| | | fo---|___|--of | | 1st o---o1 | | |14 ___ | | | 10| | | go---|___|--og | | o0 | | | | | | | GND | | Store | | | | '-----o-----' '----o------o---' '----o---' | 8| 5| |8 | | === === === === | GND GND GND GND | | (created by AACircuit v1.28.6 beta 04/19/05 www.tech-chat.de) Hope this works well for you -- no guarantees. Note that, in simulation or for real, you have to tie the chip inputs you're not using to appropriate logic levels, or the chip doesn't do what you want. The data sheets are the best way to learn about the chip. 
Read them every time. And if, as is likely the case, I've forgotten something or gotten something bass-ackwards and it doesn't work, muscle it out. If you need more background on CMOS digital logic, look up Don Lancaster's CMOS Cookbook -- it's the best intro to the subject. You can get it at libraries, or purchase it from amazon.com or Mr. Lancaster's website: http://www.tinaja.com/ And again, remember that this whole thing could have been done with one 18-pin PIC and a bit of programming. Once you've gotten familiar with it, and you have the right tools, you can knock off something like this in an hour or so. Good luck Chris C Jan 1, 1970 0 Let's try that comparator setup again (view in fixed font or M$Notepad): ___ ~4.83V |\ .-|___|->+12V ____ .---------------------. o-|-\ | 22K +12V | | ___ | ___ 4.526v | | | >-o---------o N o--o-|7805|-o-|___|-o-|___|-o------------.'-)-|+/ +| |____|+| | | | |/ --- | --- .-. | | --- | --- | | | | ___ | === | | | | | |\ .-|___|->+12V === GND === '-' '--)-|-\ | 22K GND GND |4.052v | | >-o-------o 6th o----------. o-|+/ | | | |/ .-. | | | | | | ___ | | | | |\| .-|___|->+12V '-' '----)-|-\ | 22K |3.356v | | >-o-------o 5th o--------. o-|+/ | | | |/ .-. | | | | | | ___ | | | | |\ .-|___|->+12V '-' '------)-|-\ | 22K |2.660v | | >-o-------o 4th o------. o-|+/ | | | |/ .-. | | | | | | ___ | | | | |\ .-|___|->+12V '-' '--------)-|-\ | 22K |2.043v | | >-o-------o 3rd o----. o-|+/ | | | |/ .-. | | | | | | ___ (created by AACircuit v1.28.6 beta 04/19/05 www.tech-chat.de) Hope this doesn't get munged. Chris B #### [email protected] Jan 1, 1970 0 That is absolutety most helpful. The 7805 is only what was available at my local Radio Shack. Is that AACircuit v1.28.6 going to help me veiw you schematic a little better, cause it looks like you got it. After this I am going to a PIC, I have a sample of PIC16f873, and the 4000 series stuff coming as samples. I think that I have to just reverse my inputs to my lm339 and should work with what you put down. Thanks a million. C #### Chris Jan 1, 1970 0 And again! ___ ~4.83V |\ .-|___|->+12V ____ .---------------------. o-|-\ | 22K +12V | | ___ | ___ 4.526v | | | >-o---------o N o--o-|7805|-o-|___|-o-|___|-o------------.'-)-|+/ +| |____|+| | | | |/ --- | --- .-. | | --- | --- | | | | ___ | === | | | | | |\ .-|___|->+12V === GND === '-' '--)-|-\ | 22K GND GND |4.052v | | >-o-------o 6th o----------. o-|+/ | | | |/ .-. | | | | | | ___ | | | | |\| .-|___|->+12V '-' '----)-|-\ | 22K |3.356v | | >-o-------o 5th o--------. o-|+/ | | | |/ .-. | | | | | | ___ | | | | |\ .-|___|->+12V '-' '------)-|-\ | 22K |2.660v | | >-o-------o 4th o------. o-|+/ | | | |/ .-. | | | | | | ___ | | | | |\ .-|___|->+12V '-' '--------)-|-\ | 22K |2.043v | | >-o-------o 3rd o----. o-|+/ | | | |/ .-. | | | | | | ___ | | | | |\ .-|___|->+12V '-' '----------)-|-\ | 22K |1.022v | | >-o-------o 2nd o--. o-|+/ | | | |/ .-. | | | | | | ___ | | | | |\ .-|___|->+12V '-' '------------)-|-\ | 22K | | | >-o-------o 1st === '-|+/ GND |/ (created by AACircuit v1.28.6 beta 04/19/05 www.tech-chat.de) C #### Chris Jan 1, 1970 0 That is absolutety most helpful. The 7805 is only what was available at my local Radio Shack. Is that AACircuit v1.28.6 going to help me veiw you schematic a little better, cause it looks like you got it. After this I am going to a PIC, I have a sample of PIC16f873, and the 4000 series stuff coming as samples. I think that I have to just reverse my inputs to my lm339 and should work with what you put down. Thanks a million. 
Google munged the ASCII diagram a bit -- just cut & paste to Notepad if your newsreader doesn't have non-proportional/fixed font capability. I sent it again, and it should be OK. AA Circuit is helpful if you want to send ASCII circuit diagrams in this newsgroup. It's beerware (you owe the author a beer if you ever meet him). You're welcome. Just pass it on. Good luck Chris B #### [email protected] Jan 1, 1970 0 Would a PIC16LF84A-04I/P or a PIC16F873A-I/SP accomplish this, as I can get samples of these only so far? B #### Byron A Jeff Jan 1, 1970 0 Only problem is that they don't supply samples of the PIC16f84; they do some others, but not that one, or at least not that I can find. Take a look at my "16F84 is really obsolete" page here: http://www.finitesite.com/d3jsys/16F88.html First off, the 16F84 wouldn't help you because it doesn't have an ADC, which you'd need. The page also outlines most of the upgrades the 16F88 has over the 16F84. Finally, you can get samples of the 16F88 from Microchip. You mentioned the 16F873 in another post. This would also be a fine choice for this project. BAJ B #### [email protected] Jan 1, 1970 0 Anyone know what the code would look like for the 873A or the 88? I should be able to wire the output voltage from the sensor to the PIC, and the outputs with resistors to the display, but what does the code look like internally? J #### John Fields Jan 1, 1970 0 Well, neglecting initialization and housekeeping, you're going to wind up with two sections, basically. The first will have to do with converting your sensor's output voltage into a number between zero and 255 and determining where that number sits with reference to some 'magic numbers' you'll have to program into the chip. Your trip points are:

6 = Anything over 4.526v
5 = Anything over 4.052v
4 = Anything over 3.356v
3 = Anything over 2.660v
2 = Anything over 2.043v
1 = Anything over 1.022v

You have to equate those voltages to numbers, and if you have an 8 bit ADC with an input upper limit of 5V, then zero volts into it will result in 0000 0000 (hex 0X00) out of it and five volts into it will result in 1111 1111 (0Xff) out of it. So, since you have 256 output states available from the ADC, the sensitivity of the LSB will be equal to: LSB = 5 V / 256 states = 0.01953 V, which means that the granularity of your ADC will be such that the smallest change you'll be able to detect in the output of your sensor will be about 20 millivolts. With that in mind, we need to 'normalize' your trip points to the 8 bit field we're playing in, and we can do that by dividing your various trip points by the sensitivity of the ADC, like this: TP6 = 4.526 V / 0.01953 V = 231.7 ~ 232 = 0xE8 = 1110 1000. So, when your ADC outputs 1110 1000, you'll know that the output of your sensor is pretty close to 4.526V (within about 20mV one way or the other, anyway.)
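The same arithmetic applied to the lowest trip point, for comparison with the table that follows:

$$\mathrm{TP}_1 = \frac{1.022\ \mathrm{V}}{0.01953\ \mathrm{V}} \approx 52.3 \;\rightarrow\; 52 = \mathrm{0x34} = 0011\,0100$$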
If you performed that division and conversion for all of your trip points, you could wind up with a table that looks like this:

TP | VOUT  | ADCHEX | ADCBIN   |
---|-------|--------|----------|
 6 | 4.526 |  0XE8  | 11101000 |
 5 | 4.052 |  0XCF  | 11001111 |
 4 | 3.356 |  0XAC  | 10101100 |
 3 | 2.660 |  0X88  | 10001000 |
 2 | 2.043 |  0X69  | 01101001 |
 1 | 1.022 |  0X34  | 00110100 |

Now, if at the end of every conversion you stored the value in the ADC's output to a register (let's call it ADCHEX), then to find out what gear you're in all you'd have to do would be something like this, in Motorola 6800 assembler:

ngear: lda adchex ;get ADC output
       cmp #$e8   ;compare it to 232
       bhs six    ;branch if it's >= 232
       cmp #$cf   ;compare it to 207
       bhs five   ;branch if it's >= 207
       cmp #$ac   ;compare it to 172
       bhs four   ;branch if it's >= 172
       cmp #$88   ;compare it to 136
       bhs three  ;branch if it's >= 136
       cmp #$69   ;compare it to 105
       bhs two    ;branch if it's >= 105
       cmp #$34   ;compare it to 52
       bhs one    ;branch if it's >= 52
       bra ngear  ;else loop

Now, what you'd like to have, at the branches' targets, is a pattern stored which will get the seven-segment display to display what gear you're in. To do that, you need to know how a seven-segment display is set up. It's like this, where 'a' through 'g' are the names of the segments:

    a
  -----
 |     |
f|  g  |b
  -----
 |     |
e|     |c
  -----
    d

So, if you turn on a, b, g, e, and d, the display will read '2'. OK, so assuming (to make my life a little easier) that you've got a common-cathode display and that the µC can source enough current to make the segments easily visible, what we need to do is assign a register to drive the IO port. If we call the register "segdata" and the port "ledout", and agree that the LSB (bit 0) of the register and the port will correspond to segment 'a', bit 1 to segment 'b' and so forth, with the MSB always being equal to 0, then the code for the branches should look like this:

six:   lda #$7d   ;segment data for "6"
       bra ledon  ;to turn on the segments
five:  lda #$6d   ;segment data for "5"
       bra ledon  ;to turn on the segments
four:  lda #$66   ;segment data for "4"
       bra ledon  ;to turn on the segments
three: lda #$4f   ;segment data for "3"
       bra ledon  ;to turn on the segments
two:   lda #$5b   ;segment data for "2"
       bra ledon  ;to turn on the segments
one:   lda #$06   ;segment data for "1"
ledon: sta ledout ;turn on the segments
       bra ngear  ;loop back to the beginning
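Where a segment byte comes from, under the bit assignment stated above (bit 0 = a through bit 6 = g, MSB = 0): for "2" the lit segments are a, b, d, e and g, so

$$2^0 + 2^1 + 2^3 + 2^4 + 2^6 = 1 + 2 + 8 + 16 + 64 = 91 = \mathrm{0x5B} = 0101\,1011.$$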
## Abstract Interacting with programs with command-line interfaces always involves a bit of line editing, and each CLI program tends to implement independently its own minimalistic editing features. We show a way of centralizing these editing tasks by making these programs receive commands that are prepared, and sent from, Emacs. The resulting system is a kind of Emacs- and Emacs Lisp-based “universal scripting language” in which commands can be sent to both external programs and to Emacs itself either in blocks or step-by-step under very fine control from the user. Note: this is a working draft that has many pieces missing and needs urgent revision on the pieces it has. Current version: (see top). Newer versions are being uploaded to http://angg.twu.net/eev-current/article/, and two animations (in Flash) showing eev at work can be found at: http://angg.twu.net/eev-current/anim/channels.anim.html and http://angg.twu.net/eev-current/anim/gdb.anim.html. Quick index: ## 1. Three kinds of interfaces Interactive programs in a Un*x system(1) can have basically three kinds of interfaces: they can be mouse-oriented, like most programs with graphical interfaces nowadays, in which commands are given by clicking with the mouse; they can be character-oriented, like most editors and mail readers, in which most commands are single keys or short sequences of keys; and they can be line-oriented, as, for example, shells are: in a shell commands are given by editing a full line and then typing “enter” to process that line. It is commonplace to classify computer users in a spectrum where the “users” are in one extreme and the “programmers” are in the other; the “users” tend to use only mouse-oriented and character-oriented programs, and the “programmers” only character-oriented and line-oriented programs. In this paper we will show a way to “automate” interactions with line-oriented programs, and, but not so well, to character-oriented programs; more precisely, it is a way to edit commands for these programs in a single central place --- Emacs --- and then send them to the programs; re-sending the same commands afterwards, with or without modifications, then becomes very easy. This way (“e-scripts”) can not be used to send commands to mouse-oriented programs --- at least not without introducing several new tricks. But “programmers” using Un*x systems usually see most mouse-oriented programs --- except for a few that are intrinsically mouse-oriented, like drawing programs --- as being just wrappers around line-oriented programs than perform the same tasks with different interfaces; and so, most mouse-oriented programs “do not matter”, and our method of automating interactions using e-scripts can be used to automate “almost everything”; hence the title of the paper. (1): Actually we are more interested in GNU systems than in “real” Unix systems; the reasons will become clear in the section nnn. By the way: the term “Unix” is Copyright (C) Bell Labs). ## 2. “Make each program do one thing well” One of the tenets of the Unix philosophy is that each program should do one thing, and do it well; this is a good design rule for Unix programs because the system makes it easy to invoke external programs to perform tasks, and to connect programs. Some of parts of a Unix system are more like “meta-programs” or “sub-programs” than like self-contained programs that do some clearly useful task by themselves. 
Shells, for example, are meta-programs: their main function is to allow users to invoke “real programs” and to connect these programs using pipes, redirections, control structures (if, for, etc) and Unix “signals”. On the other hand, libraries are sub-programs: for example, on GNU systems there's a library called GNU readline that line-oriented programs can use to get input; if a program, say, bc (a calculator) gets its input by calling readline(...) instead of using the more basic function fgets(...) then its line-oriented interface will have a little more functionality: it will allow the user to do some minimal editing in the current line, and also to recall, edit and issue again some of the latest commands given. ## 3. Making programs receive commands [See also: code, miniature, INSTALL, eev-rctool ] Many line-oriented programs allow “scripting”, which means executing commands from a file. For example, in most shells we can say “source ~/ee.sh”, and the shell will then execute the commands in the file ~/ee.sh. There are other ways of executing commands from a file --- like “sh ~/ee.sh” --- but the one with “source” is the one that we'll be more interested in, because it is closer to running the commands in ~/ee.sh one by one by hand: for example, with “source ~/ee.sh” the commands that change parameters of the shell --- like the current directory and the environment variables --- will work in the obvious way, while with “sh ~/ee.sh” they would only change the parameters of a temporary sub-shell; the current directory and the environment variables of the present shell would be “protected”. So, it is possible to prepare commands for a shell (or for scriptable line-oriented programs; for arbitrary line-oriented programs see the section nnn) in several ways: by typing them at the shell's interface --- and if the shell uses readline its interface can be reasonably friendly --- or, alternatively, by using a text editor to edit a file, say, ~/ee.sh, and by then “executing” that file with “source ~/ee.sh”. “source ~/ee.sh” is a lot of keystrokes, but that can be shortened if we can define a shell function: by putting function ee () { source ~/ee.sh; } in the shell's initialization file (~/.bashrc, ~/.zshrc, ...) we can reduce “source ~/ee.sh” to just “ee”: e, e, enter --- three keystrokes. We just saw how a shell --- or, by the way, any line-oriented program in which we can define an ee' function like we did for the shell --- can receive commands prepared in an external editor and stored in a certain file; let's refer to that file, ~/ee.sh, as a “temporary script file”. Now it remains to see how an external text editor can “send commands to the shell”, i.e., how to make the editor save some commands in a temporary script file in a convenient way, that is, without using too many keystrokes... ## 4. Sending commands Update (2013feb11): see this instead: (find-prepared-intro) GNU Emacs, “the extensible, self-documenting text-editor” ([S79]), does at least two things very well: one is to edit text, and so it can be used to edit temporary scripts, and thus to send commands to shells and to line-oriented programs with ee' functions; and the other one is to run Lisp. Lisp is a powerful programming language, and (at least in principle!) any action or series of actions can be expressed as a program in Lisp; the first thing that we want to do is a way to mark a region of a text and “send it as commands to a shell”, by saving it in a temporary script file. 
We implement that in two ways: 1: (defun ee (s e) 2: "Save the region in a temporary script" 3: (interactive "r") 4: (write-region s e "~/ee.sh")) 5: 6: (defun eev (s e) 7: "Like ee', but the script executes in verbose mode" 8: (interactive "r") 9: (write-region 10: (concat "set -v\n" (buffer-substring s e) 11: "\nset+v") 12: nil "~/ee.sh")) ee' (the name stands for something like emacs-execute') just saves the currently-marked region of text to ~/ee.sh; eev' (for something like emacs-execute-verbose') does the same but adding to the beginning of the temporary script a command to put the shell in “verbose mode”, where each command is displayed before being executed, and also adding at the end an command to leave verbose mode. We can now use ee' and eev' to send a block of commands to a shell: just select a region and then run ee' or eev'. More precisely: mark a region, that is, put the cursor at one of the extremities of the region, then type C-SPC to set Emacs's “mark” to that position, then go to other extremity of the region and type M-x eev (C-SPC and M-x are Emacs's notations for Control-Space and Alt-x, a.k.a. “Meta-x”). After doing that, go to a shell and make it “receive these commands”, by typing ee'. Update (2013feb11): see this instead: (find-links-intro) When we are using a system like *NIX, in a part of the time we are using programs with which we are perfectly familiar, and in the rest of the time we are using things that we don't understand completely and that make us have to access the documentation from time to time. In a GNU system the documentation is all on-line, and the steps needed to access any piece of documentation can be automated. We can use Emacs Lisp “one-liners” to create “hyperlinks” to files: A: (info "(emacs)Lisp Eval") B: (find-file "~/usrc/busybox-1.00/shell/ash.c") C: (find-file "/usr/share/emacs/21.4/lisp/info.el") These expressions, when executed --- which is done by placing the cursor after them and then typing C-x C-e, or, equivalently, M-x eval-last-sexp --- will (A) open a page of Emacs manual (the manual is a set of files in “Info” format), (B) open the source file shell/ash.c' of a program called busybox, and (C) open the file info.el' from the Emacs sources, respectively. As some of these files and pages can be very big, these hyperlinks are not yet very satisfactory: we want ways to not only open these files and pages but also to “point to specific positions”, i.e., to make the cursor go to these positions automatically. We can do that by defining some new hyperlink functions, that are invoked like this: A': (find-node "(emacs)Lisp Eval" "C-x C-e") B': (find-fline "~/usrc/busybox-1.00/shell/ash.c" "void\nevalpipe") C': (find-fline "/usr/share/emacs/21.4/lisp/info.el" "defun info") The convention is that these “extended hyperlink functions” have names like find-xxxnode', find-xxxfile', or find-xxxyyy'; as the name find-file' was already taken by a standard Emacs function we had to use find-fline' for ours. Here are the definitions of find-node' and find-fline': 14: (defun ee-goto-position (&optional pos-spec) 15: "If POS-SPEC is a string search for its first 16: occurrence in the file; if it is a number go to the 17: POS-SPECth line; if it is nil, don't move." 
18: (cond ((null pos-spec)) 19: ((numberp pos-spec) 20: (goto-char (point-min)) 21: (forward-line (1- pos-spec))) 22: ((stringp pos-spec) 23: (goto-char (point-min)) 24: (search-forward pos-spec)) 25: (t (error "Invalid pos-spec: %S" pos-spec)))) 26: 27: (defun find-fline (fname &optional pos-spec) 28: "Like (find-file FNAME), but accepts a POS-SPEC" 29: (find-file fname) 20: (ee-goto-position pos-spec)) 31: 32: (defun find-node (node &optional pos-spec) 33: "Like (info NODE), but accepts a POS-SPEC" 34: (info node) 35: (ee-goto-position pos-spec))) Now consider what happens when we send to a shell a sequence of commands like this one: # (find-node "(gawk)Fields") seq 4 9 | gawk '{print $1,$1*$1}' the shell ignores the first line because of the #', that makes the shell treat that line as a comment; but when we are editing that in Emacs we can execute the (find-node ...)' with C-x C-e. Hyperlinks can be mixed with shell code --- they just need to be marked as comments. Note: the actual definitions of eev', ee-goto-position', find-fline' and find-node' in eev's source code are a bit more complex than the code in the listings above (lines 6--12 in the previous section and 14--35 in the current section). In all the (few) occasions in this paper where we will present the source code of eev's functions what will be shown are versions that implement only the “essence” of those functions, stripped down of all extra functionality. The point that we wanted to stress with those listings is how natural it is to use Emacs in a certain way, as an editor for commands for external programs, and with these plain-text hyperlinks that can be put almost anywhere: the essence of that idea can be implemented in 30 lines of Lisp and one or two lines of shell code. (See also: the section about e-scripts). ## 6. Shorter Hyperlinks Update (2013feb11): see this instead: (find-code-c-d-intro) [See also: code, miniature] The hyperlinks in lines A'', B'' and C'', below, A'': (find-enode "Lisp Eval" "C-x C-e") B'': (find-busyboxfile "shell/ash.c" "void\nevalpipe") C'': (find-efile "info.el" "defun info") are equivalent to the ones labeled A', B', C' in Section 5, but are a bit shorter, and they hide details like Emacs's path and the version of BusyBox; if we switch to newer versions of Emacs and BusyBox we only need to change the definitions of find-busyboxfile' and find-efile' to update the hyperlinks. Usually not many things change from one version of a package to another, so most hyperlinks continue to work after the update. Eev defines a function called code-c-d' that makes defining functions like find-enode', find-busyboxfile' and find-efile' much easier: (code-c-d "busybox" "~/usrc/busybox-1.00/") (code-c-d "e" "/usr/share/emacs/21.4/lisp/" "emacs") The arguments for code-c-d' are (1) a “code” (the “xxx” in a “find-xxxfile”), (2) a directory, and optionally (3) the name of a manual in Info format. The definition of code-c-d is not very interesting, so we won't show it here. ## 7. 
Keys for following hyperlinks and for going back Update (2013feb11): see this instead: (find-eval-intro) (Rewrite this; mention M-k, M-K, to' and the (disabled) stubs to implement a back' command) It is so common to have Lisp hyperlinks that extend from some position in a line --- usually after a comment sign --- to the end of the line that eev implements a special key for executing these hyperlinks: the effect of typing M-e (when eev is installed and “eev mode” is on) is roughly the same of first going to the end of the line and then typing C-x C-e; that is, M-e does the same as the key sequence C-e C-x C-e(1). (There are many other kinds of hyperlinks. Examples?) (1) The main difference between M-e and C-e C-x C-e is how they behave when called with numeric “prefix arguments”: for example, M-0 M-e highlights temporarily the Lisp expression instead of executing it and M-4 M-e executes it with some debugging flags turned on, while C-x C-e when called with any prefix argument inserts the result of the expression at the cursor instead of just showing it at the echo area. ## 8. Dangerous hyperlinks [See also: code, miniature] Note that these “hyperlinks” can do very dangerous things. If we start to execute blindly every Lisp expression we see just because it can do something interesting or take us to an interesting place then we can end up running something like: (shell-c ommand "rm -Rf ~") which destroy all files in our home directory; not a good idea. Hyperlinks ought to be safer than that... The modern approach to safety in hyperlinks --- the one found in web browsers, for example --- is that following a hyperlink can execute only a few kinds of actions, all known to be safe; the “target” of a hyperlink is something of the form http://..., ftp://..., file://..., info://..., mailto:... or at worst like javascript:...; none of these kinds of actions can even erase our files. That approach limits a lot what hyperlinks can do, but makes it harmless to hide the hyperlink action and display only some descriptive text. Eev's approach is the opposite of that. I wrote the first functions of eev in my first weeks after installing GNU/Linux in my home machine and starting using GNU Emacs, in 1994; before that I was using mostly Forth (on MS-DOS), and I hadn't had a lot of exposure to *nix systems by then --- in particular, I had tried to understand *nix's notions of user IDs and file ownerships and permissions, and I felt that they were a thick layer of complexity that I wasn't being able to get through. Forth's attitude is more like the user knows what he's doing''; the system is kept very simple, so that understanding all the consequences of an action is not very hard. If the user wants to change a byte in a critical memory position and crash the machine he can do that, and partly because of that simplicity bringing the machine up again didn't use to take more than one minute (in the good old days, of course). Forth people developed good backup strategies to cope with the insecurities, and --- as strange as that might sound nowadays, where all machines are connected and multi-user and crackers abound --- using the system in the Forth way was productive and fun. *NIX systems are not like Forth, but when I started using them I was accustomed to this idea of achieving simplicity through the lack of safeguards, and eev reflects that. 
The only thing that keeps eev's hyperlinks reasonably safe is transparency: the code that a hyperlink executes is so visible that it is hard to mistake a dangerous Lisp expression for a “real” hyperlink. Also, all the safe hyperlink functions implemented by eev start with find-', and all the find-' functions in eev are safe, except for those with names like find-xxxsh' and find-xxxsh0: for example, (find-sh "wget --help" "recursive download") executes “wget --help”, puts the output of that in an Emacs buffer and then jumps to the first occurrence of the string “recursive download” there; other find-xxxsh' functions are variations on that that execute some extra shell commands before executing the first argument --- typically either switching to another directory or loading an initialization file, like ~/.bashrc or ~/.zshrc. The find-xxxsh0' functions are similar to their find-xxxsh' counterparts, but instead of creating a buffer with their output they just show it at Emacs's echo area and they use only the first argument and ignore the others (the pos-spec). ## 9. Generating Hyperlinks [See also: code] Do we need to remember the names of all hyperlinks functions, like find-fline and find-node? Do we need to type the code for each hyperlink in full by hand? The answers are “no” and “no”. Eev implements several functions that create temporary buffers containing hyperlinks, that can then be cut and pasted to other buffers. For example, M-h M-f' creates links about an Emacs Lisp function: typing M-h M-f' displays a prompt in a minibuffer asking for the name of an Elisp function; if we type, say, find-file' there (note: name completion with the TAB key works in that prompt) we get a buffer like the one in figure 1. _________________________________________________________ |# (find-efunction-links 'find-file) | | | |# (where-is 'find-file) | |# (describe-function 'find-file) | |# (find-efunctiondescr 'find-file) | |# (find-efunction 'find-file) | |# (find-efunctionpp 'find-file) | |# (find-efunctiond 'find-file) | |# (find-eCfunction 'find-file) | |# (find-estring (documentation 'find-file)) | |# (find-estring (documentation 'find-file t)) | | | |# (Info-goto-emacs-command-node 'find-file) | |# (find-enode "Command Index" "* find-file:") | |# (find-elnode "Index" "* find-file:") | | | | | | | |--:** *Elisp hyperlinks* All L18 (Fundamental)-----| |_________________________________________________________| Figure 1: the result of typing M-h M-f find-file The first line of that buffer is a hyperlink to that dynamically-generated page of hyperlinks. Its function --- find-efunction-links' --- has a long name that is hard to remember, but there's a shorter link that will do the same job: (eek "M-h M-f find-file") The argument to eek' is a string describing a sequence of keys in a certain verbose format, and the effect of running, say, (eek "M-h M-f find-file") is the same as of typing M-h M-f find-file'. ((M-h is a prefix; (eek "M-h C-h") shows all the sequences with the same prefix.)) ((Exceptions: M-h M-c, M-h M-2, M-h M-y. Show examples of how to edit hyperlinks with M-h M-2 and M-h M-y.)) ((Mention hyperlinks about a key sequence? (eek "M-h M-k C-x C-f"))) ((Mention hyperlinks about a Debian package? (eek "M-h M-d bash"))) ## 10. 
Returning from Hyperlinks ((Mention M-k to kill the current buffer, and how Emacs asks for confirmation when it's a file and it's modified)) ((Mention M-K for burying the current buffer)) ((Mention what to do in the cases where a hyperlink points to the current buffer (section 16); there used to be an “ee-back” function bound to M-B, but to reactivate it I would have to add back some ugly code to to'... (by the way, that included Rubikitch's contributions))) ((Web browsers have a way to “return” from hyperlinks: the “back” button... In eev we have many kinds of hyperlinks, including some that are unsafe and irreversible, but we have a few kinds of “back”s that work... 1) if the hyperlink opened a new file or buffer, then to kill the file or buffer, use M-k (an eev binding for kill-this-buffer); note that it asks for a confirmation when the buffer is associated to a file and it has been modified --- or we can use bury-buffer; M-K is an eev binding for bury-buffer. ((explain how emacs keeps a list of buffers?)) Note: if the buffer contains, say, a manpage, or an html page rendered by w3m, which take a significant time to generate, then M-K is better is than M-k. 2) if the hyperlink was a to' then it jumped to another position in the same file... it is possible to keep a list of previous positions in a buffer and to create an ee-back' function (suggestion: bind it to M-B) but I haver never been satisfied with the implementations that I did so we're only keeping a hook in to' for a function that saves the current position before the jump)) ((dto recommended winner-undo)) ## 11. Local copies of files from the internet Update (2013feb11): see this instead: (find-psne-intro) [See also: code, code, eev-rctool, m.list, gmane] Emacs knows how to fetch files from the internet, but for most purposes it is better to use local copies. Suppose that the environment variable $S is set to ~/snarf/; then running this on a shell mkdir -p $S/http/www.gnu.org/software/emacs/ cd$S/http/www.gnu.org/software/emacs/ wget http://www.gnu.org/software/emacs/emacs-paper.html # (find-fline "$S/http/www.gnu.org/software/emacs/emacs-paper.html") # (find-w3m "$S/http/www.gnu.org/software/emacs/emacs-paper.html") creates a local copy of emacs-paper.html inside ~/snarf/http/. The two last lines are hyperlinks to the local copy; find-w3m' opens it “as HTML”, using a web browser called w3m that can be run either in standalone mode or inside Emacs; find-w3m' uses w3m's Emacs interface, and it accepts extra arguments, which are treated as a pos-spec-list. Instead of running the mkdir', cd' and wget' lines above we can run a single command that does everything: psne http://www.gnu.org/software/emacs/emacs-paper.html which also adds a line with that URL to a log file (usually ~/.psne.log). It is more convenient to have a psne' that changes the current directory of the shell than one that doesn't, and for that it must be defined as a shell function. Eev comes with an installer script, called eev-rctool, that can help in adding the definitions for eev (like the “function ee () { source ~/ee.sh; }” of section 3) to initialization files like ~/.bashrc (such initialization files are termed “rcfiles”). Eev-rctool does not add by default the definitions for psne' and for $S to rcfiles; however, it adds commented-out lines with instructions, which might be something like: # To define$S and psne uncomment this: # . $EEVTMPDIR/psne.sh # (find-eevtmpfile "psne.sh") ## 12. 
Glyphs [See also: code, code, miniature flipbook] Emacs allows redefining how characters are displayed, and one of the modules of eev --- eev-glyphs --- uses that to make some characters stand out. Character 15, for example, is displayed on the screen by default as '^O' (two characters, suggesting “control-O”), sometimes in a different color from normal text(3). Eev changes the appearance of char 15 to make it be displayed as a red star. Here is how: Emacs has some structures called “faces” that store font and color information, and eeglyphs-face-red' is a face that says “use the default font and the default background color, but a red foreground”; eev's initialization code runs this, (eev-set-glyph 15 ?* 'eev-glyph-face-red) which sets the representation of char 15 to the “glyph” made of a star in the face eeglyphs-face-red. For this article, as red doesn't print well in black and white, we used this instead: (eev-set-glyph 15 342434) this made occurrences of char 15 appear as the character 342434, *' (note that this is outside of the ascii range), using the default face, i.e., the default font and color. Eev also sets a few other glyphs with non-standard faces. The most important of those are «' and '»', which are set to appear in green against the default background, with: (eev-set-glyph 171 171 'eev-glyph-face-green) (eev-set-glyph 187 187 'eev-glyph-face-green) There's a technical point to be raised here. Emacs can use several “encodings” for files and buffers, and «' and »' only have character codes 171 and 187 in a few cases, mainly in the raw-text' encoding and in “unibyte” buffers; in most other encodings they have other char codes, usually above 255, and when they have these other codes Emacs considers that they are other characters for which no special glyphs were set and shows them in the default face. This visual distinction between the below-255 «' and ' and the other «' and »'s is deliberate --- it helps preventing some subtle bugs involving the anchor functions of section \ref{anchors}. (3). Determined by the “face” escape-glyph-face, introduced in GNU Emacs in late 2004. ## 13. Compose Pairs [See also: code] To insert a *' in a text we type C-q C-o' --- C-q “quotes” the next key that Emacs receives, and C-q C-o' inserts a “literal C-o”, which is a char 15. Typing «' and »'s --- and other non-standard glyphs, if we decide to define our own --- involves using another module of eev: eev-compose. Eev-compose defines a few variables that hold tables of “compose pairs”, which map pairs of characters that are easy to type into other, weirder characters; for example, eev-composes-otheriso' says that the pair "<<" is mapped to "«" and that ">>" is mapped to "»", among others. When we are in “eev mode” the prefix M-,' can be used to perform the translation: typing M-, < <' enters «', and typing M-, > >' enters »'. The variable eev-composes-accents' holds mappings for accented chars, like "'a" to "á" and "cc" to "ç"; eev-composes-otheriso' takes care of the other mappings that still concern characters found in the ISO8859-1 character set, like «' and '»' as above, "_a" to "ª", "xx" to "×", and a few others; eev-composes-globalmath' and eev-composes-localmath' are initially empty and are meant to be used for used-defined glyphs. 
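A minimal sketch of the kind of lookup that M-, performs --- this is not eev's actual code, and the table my-compose-pairs' and the command my-compose-insert' are hypothetical stand-ins for eev's real variables and reader:

(defvar my-compose-pairs
  '(("<<" . "«") (">>" . "»") ("'a" . "á") ("cc" . "ç"))
  "Hypothetical table mapping easy-to-type pairs to special characters.")

(defun my-compose-insert ()
  "Read two characters and insert the character that the pair composes to."
  (interactive)
  (let* ((pair (string (read-char) (read-char)))
         (entry (assoc pair my-compose-pairs)))
    (if entry
        (insert (cdr entry))
      (error "No compose pair for %S" pair))))

;; (global-set-key (kbd "M-,") 'my-compose-insert)  ; an analogous binding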
The suffix math' in their names is a relic: Emacs implements its own ways to enter special characters, which support several languages and character encodings, but their code is quite complex and they are difficult to extend; the code that implements eev's M-,', on the other hand, takes about just 10 lines of Lisp (excluding the tables of compose pairs) and it is trivial to understand and to change its tables of pairs. M-,' was created originally to enter special glyphs for editing mathematical texts in TeX, but it turned out to be a convenient hack, and it stuck. ## 14. Delimited regions Update (2013feb11): see this instead: (find-bounded-intro) [See also: code, miniature, shot, anim] Sometimes it happens that we need to run a certain (long) series of commands over and over again, maybe with some changes from one run to the next; then having to mark the block all the time becomes a hassle. One alternative to that is using a variaton on M-x eev': M-x eev-bounded'. It saves the region around the cursor up to certain delimiters instead of saving what's between Emacs's “point” and “mark”. The original definition of eev-bounded was something like this: (defun eev-bounded () (interactive) (eev (ee-search-backwards "\n#*\n") (ee-search-forward "\n#*\n"))) the call to ee-search-backwards' searches for the first occurrence of the string "\n#*\n" (newline, hash sign, control-O, newline) before the cursor and returns the position after the "\n#*\n", without moving the cursor; the call to ee-search-forward does something similar with a forward search. As the arguments to eev' indicate the extremities of the region to be saved into the temporary script, this saves the region between the first "\n#*\n" backwards from the cursor to the first "\n#*\n" after the cursor. The actual definition of eev-bounded' includes some extra code to highlight temporarily the region that was used; see Figure \ref{fig:F3}. Normally the highlighting lasts for less than one second, but here we have set its duration to several seconds to produce a more interesting screenshot. ____________________ emacs@localhost _______________________ | _________ xterm __________ |#* |/home/edrx(edrx)# ee | |# Global variables |# Global variables | |lua50 -e ' |lua50 -e ' | | print(print) | print(print) | | print(_G["print"]) | print(_G["print"]) | | print(_G.print) | print(_G.print) | | print(_G) | print(_G) | | print(_G._G) | print(_G._G) | |' |' | |#* |function: 0x804dfc0 | |# Capture of local variables |function: 0x804dfc0 | |lua50 -e ' |function: 0x804dfc0 | | foo = function () |table: 0x804d420 | | local storage |table: 0x804d420 | | return |/home/edrx(edrx)# | | (function () return storage end), |__________________________| | (function (x) storage = x; return x end) | | end | | get1, set1 = foo() | | get2, set2 = foo() -- Output: | | print(set1(22), get1()) -- 22 22 | | print(set2(33), get1(), get2()) -- 33 22 33 | |' | |#* | | | |-:-- lua5.e 91% L325 (Fundamental)--------------------| |____________________________________________________________| Figure 2: sending a delimited block with F3 (find-fline "ss-lua.png") (find-eevex "screenshots.e" "fisl-screenshots") Eev binds the key F3 to the function eeb-default', which runs the current “default bounded function” (which is set initially to eev', not eev-bounded') on the region between the current default delimiters, using the current default “highlight-spec”; so, instead of typing M-x eev-bounded' inside the region to save it we can just type F3. 
All these defaults values come from a single list, which is stored in the variable eeb-defaults'. The real definition of eev-bounded' is something like: (setq eev-bounded '(eev ee-delimiter-hash nil t t)) (defun eev-bounded () (interactive) (setq eeb-defaults eev-bounded) (eeb-default)) Note that in Emacs Lisp (and in most other Lisps) each symbol has a value as a variable that is independent from its “value as a function”: actually a symbol is a structure containg a name, a “value cell”, a “function cell” and a few other fields. Our definition of eev-bounded', above, includes both a definition of the function eev-bounded' and a value for the variable eev-bounded'. Eev has an auxiliary function for defining these “bounded functions”; running (eeb-define 'eev-bounded 'eev 'ee-delimiter-hash nil t t) has the same effect as doing the setq' and the defun' above. As for the meaning of the entries of the list eeb-defaults', the first one (eev') says which function to run; the second one (ee-delimiter-hash') says which initial delimiter to use --- in this case it is a symbol instead of a string, and so eeb-default' takes the value of the variable ee-delimiter-hash'; the third one (nil) is like the second one, but for the final delimiter, and when it is nil eeb-default' considers that the final delimiter is equal to the initial delimiter; the fourth entry (t) means to use the standard highlight-spec, and the fifth one (t, again) tells eeb-default' to make an adjustment to the highlighted region for purely aestethical reasons: the saved region does not include the initial "\n" in the final delimiter, "\n#*\n", but the highlighting looks nicer if it is included; without it the last highlighted line in Figure 2 would have only its first character --- an apostrophe --- highlighted. Eev also implements other of these “bounded” functions. For example, running M-x eelatex' on a region saves it in a temporary LaTeX file, and also saves into the temporary script file the commands to process it with LaTeX; eelatex-bounded' is defined by (eeb-define 'eelatex-bounded 'eelatex 'ee-delimiter-percent nil t t) where the variable ee-delimiter-percent' holds the string "\n%*\n"; comments in LaTeX start with percent signs, not hash signs, and it is convenient to use delimiters that are treated as comments. ((The block below ... tricky ... blah. How to typeset *' in LaTeX. Running eelatex-bounded changed the defaults stored in eeb-defaults, but ee-once blah doesn't.)) %* % (eelatex-bounded) % (ee-once (eelatex-bounded)) \def\myttbox#1{% \setbox0=\hbox{\texttt{a}}% \hbox to \wd0{\hss#1\hss}% } \catcode*=13 \def*{\myttbox{$\bullet$}} \begin{verbatim} abcdefg d*fg \end{verbatim} %* ...for example eelatex, that saves the region (plus certain standard header and footer lines) to a “temporary LaTeX file” and saves into the temporary script file the commands to make ee' run LaTeX on that and display the result. The block below is an example of (...) 
...The block below shows a typical application of eev-bounded: # (find-es "lua5" "install-5.0.2") # (find-es "lua5" "install-5.0.2" "Edrx's changes") # (code-c-d "lua5" "/tmp/usrc/lua-5.0.2/") # (find-lua5file "INSTALL") # (find-lua5file "config" "support for dynamic loading") # (find-lua5file "config") # (find-lua5file "") #* rm -Rv ~/usrc/lua-5.0.2/ mkdir -p ~/usrc/lua-5.0.2/ tar -C ~/usrc/ \ -xvzf$S/http/www.lua.org/ftp/lua-5.0.2.tar.gz cd ~/usrc/lua-5.0.2/ cat >> config <<'---' LOADLIB= -DUSE_DLOPEN=1 DLLIB= -ldl MYLDFLAGS= -Wl,-E EXTRA_LIBS= -lm -ldl --- make test 2>&1 | tee omt ./bin/lua -e 'print(loadlib)' #* it unpacks a program (the interpreter for Lua), changes its default configuration slightly, then compiles and tests it. ((Comment about the size: the above code is “too small for being a script”, and the hyperlinks are important)) gdb (here-documents, gcc, ee-once) (alternative: here-documents, gcc, gdb, screenshot(s) for gdb) ## 15. Communication channels Update (2013feb11): see this instead: (find-channels-intro) The way that we saw to send commands to a shell is in two steps: first we use M-x eev in Emacs to “send” a block of commands, and then we run ee' at the shell to make it “receive” these commands. But there is also a way to create shells that “listen” not only to the keyboard for their input, but also to certain “communication channels”; by making Emacs send commands through these communication channels we can skip the step of going to the shell and typing ee' --- the commands are received immediately. _________emacs@localhost____________ ___________channel A______________ | | |/tmp(edrx)# # Send things to port | |* (eechannel-xterm "A") ;; create | | 1234 | |* (eechannel-xterm "B") ;; create | |/tmp(edrx)# { | |# Listen on port 1234 | |> echo hi | |netcat -l -p 1234 | |> sleep 1 | |* | |> echo bye | |* (eechannel "A") ;; change target | |> sleep 1 | |# Send things to port 1234 | |> } | netcat -q 0 localhost 1234 | |{ | |/tmp(edrx)# | | echo hi | |/tmp(edrx)# | | sleep 1 | |__________________________________| | echo bye | ___________channel B______________ | sleep 1 | |/tmp(edrx)# # Listen on port 1234 | |} | netcat -q 0 localhost 1234 | |/tmp(edrx)# netcat -l -p 1234 | | | |hi | |-:-- screenshots.e 95% L409 (Fu| |bye | |_Wrote /home/edrx/.eev/eeg.A.str____| |/tmp(edrx)# | | | |__________________________________| Figure 3: sending commands to two xterms using F9 (find-eevex "screenshots.e" "fisl-screenshots") (find-eevfile "article/ss-f9.png") The screenshot at Figure 3 shows this at work. The user has started with the cursor at the second line from the top of the screen in the Emacs window and then has typed F9 several times. Eev binds F9 to a command that operates on the current line and then moves down to the next line; if the current line starts with *' then what comes after the *' is considered as Lisp code and executed immediately, and the current line doesn't start with *' then its contents are sent through the default communication channel, or though a dummy communication channel if no default was set. The first F9 executed (eechannel-xterm "A")', which created an xterm with title “channel A”, running a shell listening on the communication channel “A”, and set the default channel to A; the second F9 created another xterm, now listening to channel B, and set the default channel to B. The next two F9's sent each one one line to channel B. 
The first line was a shell comment (“# Listen...”); the second one started the program netcat, with options to make netcat “listen to the internet port 1234” and dump to standard output what it receives. The next line had just *'; executing the rest of it as Lisp did nothing. The following line changed the default channel to A. In the following lines there is a small shell program that outputs “hi”, then waits one second, then outputs “bye”, then waits for another second, then finishes; due to the “| netcat...” its output is redirected to the internet port 1234, and so we see it appearing as the output of the netcat running on channel B, with all the expected delays: one second between “hi” and “bye”, and one second after “bye”; after that last one-second delay the netcat at channel A finishes receiving input (because the program between {' and }' ends) and it finishes its execution, closing the port 1234; the netcat at B notices that the port was closed and finishes its execution too, and both shells return to the shell prompt. There are also ways to send whole blocks of lines at once through communication channels; see Section \ref{bigmodular}. ## 15.1. The Implementation of Communication Channels [2007: There's a much better explanation, with nice ascii diagrams, at channels.anim; it should be merged here.] Communication channels are implemented using an auxiliary script called eegchannel', which is written in Expect ([L90] and [L95]). If we start an xterm in the default way it starts a shell (say, /bin/bash) and interacts with it: the xterm sends to the shell as characters the keystrokes that it receives from the window manager and treats the characters that the shell sends back as being instructions to draw characters, numbers and symbols on the screen. But when we run (eechannel-xterm "A")' Emacs creates an xterm that interacts with another program --- eegchannel --- instead of with a shell, and eegchannel in its turn runs a shell and interacts with it. Eegchannel passes characters back and forth between the xterm and the shell without changing them in any way; it mostly tries to pretend that it is not there and that the xterm is communicating directly with the shell. However, when eegchannel receives a certain signal it sends to the shell a certain sequence of characters that were not sent by the xterm; it “fakes a sequence of keystrokes”. Let's see a concrete example. Suppose than Emacs was running with process id (“pid”) 1000, and running (eechannel-xterm "A") in it made it create an xterm, which got pid 1001; that xterm ran eegchannel (pid 1002), which ran /bin/bash (pid 1003). Actually Emacs invoked xterm using this command line: xterm -n "channel A" -e eegchannel A /bin/bash and xterm invoked eegchannel with “eegchannel A /bin/bash”; eegchannel saw the A', saved its pid (1002) to the file ~/.eev/eeg.A.pid, and watched for SIGUSR1 signals; every time that it (the eegchannel) receives a SIGUSR1 it reads the contents of ~/.eev/eeg.A.str and sends that as fake input to the shell that it is controlling. So, running echo 'echo $[1+2]' > ~/.eev/eeg.A.str kill -USR1$(cat ~/.eev/eeg.A.pid) in a shell sends the string “echo $[1+2]” (plus a newline) “through the channel A”; what Emacs does when we type F9 on a line that does not start with *' corresponds exactly to that. ## 16. Anchors Update (2013feb11): see this instead: (find-anchors-intro) [See also: code] The function to' can be used to create hyperlinks to certain positions --- called “anchors” --- in the current file. 
For example, # Index: # «.first_block» (to "first_block") # «.second_block» (to "second_block") #* # «first_block» (to ".first_block") echo blah #* # «second_block» (to ".second_block") echo blah blah #* What to' does is simply to wrap its argument inside «' and »' characters and then jump to the first occurrence of the resulting string in the current file. In the (toy) example above, the line that starts with “# «.first_block»” has a link that jumps to the line that starts with “# «first_block»”, which has a link that jumps back --- the anchors and “(to ...)”s act like an index for that file. The function find-anchor' works like a to' that first opens another file. For example, (find-anchor "~/.zshrc" "update-homepage") does roughly the same as: (find-fline "~/.zshrc" "«update-homepage»") Actually find-anchor' consults a variable, ee-anchor-format', to see in which strings to wrap the argument. Some functions modify ee-anchor-format' temporarily to obtain special effects; for example, a lot of information about the packages installed in a Debian GNU system is kept in a text file called /var/lib/dpkg/info/status; (find-status "emacs21") opens this file and searches for the string "\nPackage: emacs21\n" there --- that string is the header for the block with information about the package emacs21, and it tells the size of the package, description, version, whether it is installed or not, etc, in a format that is both machine-readable and human-readable. ## 17. E-scripts The best short definition for eev that I've found involves some cheating, as it is a circular definition: “eev is a library that adds support for e-scripts to Emacs” --- and e-scripts are files that contain chunks meant to be processed by eev's functions. Almost any file can contain parts “meant for eev”: for example, a HOWTO or README file about some program will usually contain some example shell commands, and we can mark these commands and execute them with M-x eev; and if we have the habit of using eev and we are writing code in, say, C or Lua we will often put elisp hyperlinks inside comment blocks in our code. These two specific languages (and a few others) have a feature that is quite convenient for eev: they have syntactical constructs that allow comment blocks spanning several lines --- for example, in Lua, where these comment blocks are delimited by “--((” and “--))”s, we can have a block like --[[ #* # This file: (find-fline "~/LUA/lstoindexhtml.lua") # A test: cd /tmp/ ls -laF | col -x \ | lua50 ~/LUA/lstoindexhtml.lua tmp/ \ | lua50 -e 'writefile("index.html", io.read("*a"))' #* --]] in a Lua script, and the script will be at the same time a Lua script and an e-script. When I started using GNU and Emacs the notion of an e-script was something quite precise to me: I was keeping notes on what I was learning and on all that I was trying to do, and I was keeping those notes in a format that was partly English (or Portuguese), partly executable things --- not all of them finished, or working --- after all, it was much more practical to write rm -Rv ~/usrc/busybox-1.00/ tar -C ~/usrc/ -xvzf \$S/http/www.busybox.net/downloads/busybox-1.00.tar.gz cd ~/usrc/busybox-1.00/ cp -iv ~/BUSYBOX/myconfig .config make menuconfig make 2>&1 | tee om than to write Unpack BusyBox's source, then run "make menuconfig" and "make" on its main directory because if I had the second form in my notes I would have to translate that from English into machine commands every time... 
So, those files where I was keeping my notes contained “executable notes”, or were “scripts for Emacs”, and I was quite sure that everyone else around were also keeping notes in executable formats, possibly using other editors and environments (vi, maybe?) and that if I showed these people my notes and they were about some task that they were also struggling with then they would also show me their notes... I ended up making a system that uploaded regularly all my e-scripts (no matter how messy they were) to my home page, and writing a text --- “The Eev Manifesto” ([O99]) --- about sharing these executable notes. Actually trying to define an e-script as being “a file containing executable parts, that are picked up and executed interactively” makes the concept of an e-script very loose. Note that we can execute the Lua parts in the code above by running the Lua interpreter on it, we can execute the elisp one-liner with M-e in Emacs, and we can execute the shell commands using F3 or M-x eev; but the code will do nothing by itself --- it is passive. A piece of code containing instructions in English on how to use it is also an e-script, in a sense; but to execute these instructions we need to invoke an external entity --- a human, usually ourselves --- to interpret them. This is much more flexible, but also much more error-prone and slow, than just pressing a simple sequence of keys like M-e, or F9, or F3, alt-tab, e, e, enter. ## 18. Splitting eev.el When I first submittted eev for inclusion in GNU Emacs, in 1999, the people at the FSF requested some changes. One of them was to split eev.el --- the code at that point was all in a single Emacs Lisp file, called eev.el --- into several separate source files according to functionality; at least the code for saving temporary scripts and the code for hyperlinks should be kept separate. It turned out that that was the wrong way of splitting eev. The frontier between what is a hyperlink and what is a block of commands is blurry: man foo man -P 'less +/bar' foo # (eev "man foo") # (eev "man -P 'less +/bar' foo") # (find-man "foo" "bar") The two man' commands above can be considered as hyperlinks to a manpage, but we need to send those commands to a shell to actually open the manpage; the option "-P 'less +/bar'" instructs man' to use the program less' to display the manpage, and it tells less' to jump to the first occurrence of the string “bar” in the text, and so it is a hyperlink to a specific position in a manpage. Each of the two eev' lines, when executed, saves one of these man' commands to the temporary script file; because they contain Lisp expressions they look much more like hyperlinks than the man' lines. The last line, find-man', behaves much more like a “real” hyperlink: it opens the manpage inside Emacs and searches for the first occurrence of bar' there; but Emacs's code for displaying manpages was tricky, and it took me a few years to figure out how to add support for pos-spec-lists to it... So, what happens is that often a new kind of hyperlink will begin its life as a series of shell commands (another example: using gv --page 14 file.ps' to open a PostScript file and then jump to a certain page) and then it takes some time to make a nice hyperlink function that does the same thing; and often these functions are implemented by executing commands in external programs. There's a much better way to split conceptually what eev does, though. 
Most functions in eev take a region of text (for example Emacs's own “selected region”, or the extent of Lisp expression coming before the cursor) and “execute” that in some way; the kinds of regions are Emacs's (selected) region | M-x eev, M-x eelatex (sec. 4) ----------------------------+------------------------------ last-sexp (Lisp expression | C-x C-e, M-E (sec. 5) at the left of the cursor) | ----------------------------+------------------------------ sexp-eol (go to end of | C-e C-x C-e, M-e (sec. 7) line, then last-sexp) | ----------------------------+------------------------------ bounded region | F3, M-x eev-bounded, | M-x eelatex-bounded (sec. 14) ----------------------------+------------------------------ bounded region around | (ee-at [ anchor] ...) anchor | (sec. 20) ----------------------------+------------------------------ current line | F9 (sec. 15) ----------------------------+------------------------------ no text (instead use the | F12 (sec. 19) next item in a list) | Actions (can be composed): * Saving a region or a string into a file * Sending a signal to a process * Executing as Lisp * Executing immediately in a shell * Start a debugger ((Emacs terminology: commands)) ## 19. Steps ((Simple examples)) ((writing demos)) ((hyperlinks for which no short form is known)) ((producing animations and screenshots)) ## 20. Sending lines to processes running in Emacs buffers Update (oct/2011): PLEASE IGNORE THIS! This part of eev has been completely rewritten - see: http://angg.twu.net/eev-current/eepitch.readme.html http://angg.twu.net/eev-current/eepitch.el.html (These sections - 20 to 24 - are very new (handwritten in 2007jul12, typed a few days later). They are early drafts, full of errors, describing some code that does not yet exist (ee-tbr), etc. Also, I don't know Rubikitch's real name, so I used a random Japanese name...) Emacs can run external programs interactively inside buffers; in the screenshot in Figure 5 there's a shell running in the buffer "*shell*" in the lower window. Technically, what is going on is much more complex than what we described in the previous section. The shell runs in a pseudo-terminal (pty), but ptys are usually associated to rectangular grids of characters with a definite width and height, while in an Emacs buffer the width of each line, and the total number of lines,are only limited by memory constraints. Many interactive programs expect their input to come through their more-or-less minimalistic line editors, that may try to send to the terminal commands like "clear the screen" or "go to column x at line y"; how should these things be handled in a shell buffer? Also, the user can move freely in a shell buffer, and edit its contents as text, but the "Return" key becomes special: when it is hit in a shell buffer Emacs takes the current line - except maybe some initial characters that are seen as a prompt - and sends that to the shell process, as if the user had typed exactly that; so, Emacs takes over the line editor of the shell process completely. The translation between character sequences going through the pty and buffer-editing functions is very tricky, full of non-obvious design choices, and even though it has been around for more than 20 years it still has some (inevitable) quirks. I almost never used shell buffers, so I found the following idea, by OGAMI Itto, very surprising when he sent it to the eev mailing list in 2005. (Figure 5 will be a screenshot that I haven't taken yet.) (It will be simpler than the screenshot from Fig. 
6, that is this: http://angg.twu.net/IMAGES/eepitch-gdb.png )

The current window, above in Figure 5, is editing an e-script, and the other window shows a shell buffer - that we will refer to as the "target buffer". When the user types a certain key - by default F8 - the current line is sent to the target buffer, and the point is moved down to the next line; pressing F8 n times in sequence sends n lines, one by one. One detail: "sending a line" means inserting its contents - except the newline - at the current position in the target buffer, and then running there the action associated with the "Return" key. "Return" is almost always a special key, bound to different actions in different major modes, so just inserting a newline would not work - that would not simulate what happens when a user types "Return". Note that, in a sense, the action of F8 is much more complex than that of F9, described in the last section; but a user might perceive F8 as being much simpler, as there are no external programs involved (Expect, eegchannel, xterm), and no setup hassles - all the machinery to make Emacs buffers invoke external processes in buffers pretending to be terminals ("comint mode") has come built in with Emacs since the early 1980s.

Ogami's idea also included three "bonus features": window setup, reconstruction of the target buffer, and star-escapes. In the default Emacs setting some commands - M-x shell among them - might split the current Emacs frame in two windows; none of eev's hyperlink functions do that, and I have always felt that it is more natural to use eev with a setting (pop-up-windows set to nil) that disables window splitting except when explicitly requested by the user. Anyway: M-x shell ensures that a "*shell*" buffer is visible in a window, and that a shell process is running in it; this setup code for F8,

(eepitch '(shell))

splits the window (if the frame has just one window), and runs (shell)' in the other window - with the right defaults - to force that window to display a shell buffer with a live shell process running in it; it also sets a variable, eepitch-target-buffer', to that buffer, so that the next F8's will have a definite buffer to send lines to - as target buffers need not necessarily be shell buffers. As for the star-escapes, it's the same idea as with F9: when a line starts with a red star glyph, running F8 on it executes everything on it - after the red star - as Lisp, and if there are no errors the point is moved down. So lines starting with a red star can be used to set up an eepitch target, to switch to another target, or to do special actions - like killing a certain target so that it will be reconstructed anew by the next F8.

Note that once we recognize that a region of an e-script is to be used by eepitch there is only one key to be used to "run" each of its lines, both the ones with red stars and the ones without: F8. However, as with F9, the user must know what to expect after each step. A badly-written e-script for eepitch may try, for example, to "cd" into a directory that does not exist, and if the next line is, say, "tar -xvzf $S/http/foo/bar.tgz" then it will try to unpack a tarball into the wrong place, creating a big mess.

## 21. Using eepitch to control unprepared shells

# (find-eevfile "eev.el" "EEVDIR")
# (find-eevfile "eev.el")

As we have seen in section 4, M-x eev sends the region to a "prepared shell"; if the shell has the right settings for the environment variables $EEVTMPDIR and $EE, and if it has the shell function ee', then running ee' in the shell "sources" the temporary script - corresponding to the region - in verbose mode. Well, if Emacs loads eev.el and the environment variables $EEVDIR, $EEVTMPDIR and $EE are not set, then they are set, respectively, to the directory where eev.el was read from, to its subdirectory $EEVDIR/tmp, and to the file $EEVTMPDIR/ee.sh. Processes started from Emacs inherit these environment variables, so a shell buffer created by running F8 on these two lines,

* (eepitch-shell)
function ee () { set -v; . \$EE; set +v; }

will be running a prepared shell. Such buffers can be used to let users understand better how prepared shells work, and decide if they want to patch their initialization files for the shell (see eev-rctool) so that their shells will be "prepared" by default. (Note: I haven't yet played much with this idea - discuss running eev-rctool on such shells (and a function that creates a buffer with an e-script for that), and loading psne.sh from an unprepared shell).

## 22. Controlling debuggers with eepitch

# (find-node "(emacs)Debuggers")
# (find-node "(gdb)Top")
# (find-angg ".emacs" "eepitch-gdb")

On *NIX it is common to keep debuggers separated into two parts: a back-end, with a simple textual interface, and a front-end, that controls the back-end via its textual interface but presents a better interface, showing source files and breakpoints in a nice way, etc. The GNU Debugger, GDB, is a back-end, and it can be used to debug and single-step several compiled languages; the "Grand Unified Debugger" mode of Emacs, a.k.a. GUD, is a front-end for GDB and other back-ends. Usually, GUD splits an Emacs frame into two windows, one for interaction with GDB (or another back-end, but let's say just "GDB" for simplicity), and another one for displaying the source file where the execution is. Some of the output of GDB - lines meaning, e.g., "we're at the source file foo.c, at line 25" - is filtered by GUD and not shown in the GUD buffer; and the user can press special key sequences on source files that generate commands to GDB - like "set a breakpoint on this line". In order to control GDB with eepitch we need a window setup with three windows, like in the screenshot in Figure 6.

http://article.gmane.org/gmane.emacs.eev.devel/47
http://lists.gnu.org/archive/html/eev/2007-07/msg00000.html
http://lists.gnu.org/archive/html/eev/2007-07/pngXBfRlWr29Z.png
^ Figure 6 will be like the screenshots (real and asciified) in this message to the mailing list

That setup does not integrate very well with the "standard" eepitch at this moment, but that should come with time.

## 23. E-scripting GDB with eepitch

# (find-node "(gdb)Set Breaks" "tbreak ARGS'")
# (find-node "(elisp)The Buffer List")
# (find-es "lua5" "lua-api-from-gdb")
# (find-TH "luaforth" "lua-api-from-gdb")

We can use elisp hyperlinks to point to specific lines in source files - and we can combine these hyperlinks with the code to set up breakpoints, in two ways.

*;(find-lua51file "src/lvm.c" "case OP_CLOSE:" 1)
* (find-lua51file "src/lvm.c" "case OP_CLOSE:" 1 '(ee-tbr))

The first line above contains an elisp hyperlink to a line in the source of Lua.
Actually, it points to the code for an opcode in Lua's virtual machine that most people find rather mysterious. As the line starts with *;', an F8 on it executes a Lisp comment - i.e., does nothing - and moves down; only a M-e' (or a C-e C-x C-e') on that line would follow the hyperlink. The second line, when executed with F8, would go to that line in the source, then run (ee-tbr)' there; ee-tbr invokes gud-tbr to set a temporary breakpoint on that source line (i.e., one that is disabled when the execution stops there for the first time), and then buries the buffer - the one with "lmv.c" - like a M-K' would do; the effect is that the buffer in that window - the top-left window in a situation like in Figure 6 - does not change, it will still show the e-script. A variation on this is to wrap the hyperlink in an ee-tbr: * ; (find-lua51file "src/lvm.c" "case OP_CLOSE:" 1) * (ee-tbr '(find-lua51file "src/lvm.c" "case OP_CLOSE:" 1)) When ee-tbr is called with an argument it evaluated the argument inside a save-excursion, and sets a breakpoint there; the effect is almost the same as the previous case, but this does not change the order of the buffers in the buffer list. ## 24. Two little languages for debugging E-scripts for eepitch and GDB can be used to bring programs to a certain point (and to inspect their data structures there; we will have more to say about this in the next section). In a sense, as in {Bentley}, these e-scripts are written in a language that describes states of running programs - and they can be executed step by step. These e-scripts, being executable, can be used in e-mails to communicate particular states of programs - say, where a certain bug occurs. Unfortunately, they are too fragile and may cease working after minimal changes in the program, and they are almost impossible to read... However, the screenshot in Figure 5 suggests another language for communicating controlling programs with GDB: the contents of the "*gud*" buffer. After removing some excess verbosity by hand we get something that is readable enough if included in e-mails - and to extract the original commands from that we just have to discard the lines that don't start with "(gdb)", then remove the "(gdb)" prompts. As for the hyperlinks with (ee-tbr)', they may need to be copied to the GUD buffer, and not filtered out; we still need to experiment with different ways to do that to be able to choose one. ## 25. Inspecting data in running programs Almost anyone who has learned a bit of Lisp should be familiar with this kind of box diagrams. After running (setq x '(5 "ab")) (setq y (list x x '(5 "ab"))) the value of y can be represented by: ___ ___ ___ ___ ___ ___ |___|___| --> |___|___| --------> |___|___| --> nil | ___________/ | |/ | _v_ ___ ___ ___ _v_ ___ ___ ___ |___|___| --> |___|___| --> nil |___|___| --> |___|___| --> nil | | | | v v v v 5 "ab" 5 "ab" This representation is verynice - it omits lots of details that are usually irrelevant, like the address in the memory of each cons, and the exact names of each struct in C and their fields. But sometimes we need to understand the implementation in C, and a more complete diagram would be convenient. At least, we would like to know how to get, in the C source of Emacs, from the address of the leftmost cons in the top line to the rightmost "ab" in the bottom line - but how do we express following the "cdr" arrows, the "car" arrows, and extracting the contents of a string object in elisp, One solution is to use GDB, and e-scripts for it: ... 
A "complete diagram" corresponding to the one above, whatever the format that we choose to draw it, should include some information explaining that "cdr" arrows correspond to "->cdr", "car" arrows correspond to ..., and each string object corresponds to another kind of box different from the cons boxes; to get to the C string stored in an elisp string object we should examine its "foo" field, i.e., do a "->foo". Obviously, this same idea applies also to other programs with complex data structures - and for some programs we may even have fancier ways to explore their data structures; for example, in a graphic toolkit it might be possible to change the background of a button to orange from GDB. ## 26. Big Modular E-scripts A shell can be run in two modes: either interactively, by expecting lines from the user and executing them as soon as they are received\footnote{except for multi-line commands.}, or by scripts: in the later case the shell already has access to the commands, and executes them in sequence as fast as possible, with no pause between one command and the next. When we are sending lines to a shell with F9 we are telling it not only what to execute but also when to execute it; this is somewhat similar to running a program step-by-step inside a debugger --- but note that most shells provide no single-stepping facilities. We will start with a toy example --- actually the example from Section \ref{anchors} with five new lines added at the end --- and then in the next section we will see a real-world example that uses these ideas. Figure 4: sending a block at once with eevnow-at (find-fline "ss-modular.png") Figure 5: single-stepping through a C program (find-fline "ss-gdbwide.png") ((Somewhere between a script and direct user interaction)) ((No loops, no conditionals)) ((Several xterms)) ## 27. Internet Skills for Disconnected People Suppose that we have a person P who has learned how to use a computer and now wants to learn how the internet works. That person P knows a bit of programming and can use Emacs, and sure she can use e-mail clients and web browsers by clicking around with the mouse, but she has grown tired of just using those things as black boxes; now she wants to experiment with setting up HTTP and mail servers, to understand how data packets are driven around, how firewalls can block some connections, such things. The problem is that P has never had access to any machine besides her own, which is connected to the internet only through a modem; and also, she doesn't have any friends who are computer technicians or sysadmins, because from the little contact that she's had with these people she's got the impression that they live lifes that are almost as grey as the ones of factory workers, and she's afraid of them. To add up to all that, P has some hippie job that makes her happy but poor, so she's not going to buy a second computer, and the books she can borrow, for example, Richard Stevens' series on TCP/IP programming, just don't cut. One of eev's intents isto make life easier for autodidacts. Can it be used to rescue people in positions like P's(4)? It was thinking on that that I created a side-project to eev called “Internet Skills for Disconnected People”: it consists of e-scripts about running a second machine, called the “guest”, emulated inside the “host”, and making the two talk to each other via standard internet protocols, via emulated ethernet cards. 
Those e-scripts make heavy use of the concepts in the last section ((...))

Figure 6: a call map (find-fline "iskidip.png") (find-eimage0 "./iskidip.png")

% (find-eevex "busybox.e" "bb_chroot_main")
% (find-eevex "busybox.e" "bbinitrd-qemu-main")
% (find-eevex "busybox.e" "iso-qemu-main")
% (find-eevex "busybox.e" "iso-qemu-main-2")

(4). By the way, I created P inspired by myself; my hippie job is being a mathematician.

## 28. Availability and Resources

Eev can be downloaded from the author's homepage, http://angg.twu.net/. That page also contains lots of examples, some animations showing some of eev's features at work, a mailing list, etc. Eev is in the middle of the process of becoming a standard part of GNU Emacs; I expect it to be integrated just after the release of GNU Emacs 22.1 in mid-2007. Eev's copyright has already been transferred to the FSF; it is distributed under the GPL.

## Acknowledgments

I would like to thank David O'Toole, Diogo Leal and Leslie Watter for our countless hours of discussions about eev; many of the recent features of eev --- almost half of this article --- were conceived during those talks. ((Thank also the people at #emacs, for help with the code and for small revision tips))

## References

[L90] - Libes, D. - Expect: Curing Those Uncontrollable Fits of Interaction. 1990. Available online from http://expect.nist.gov/.
[L95] - Libes, D. - Exploring Expect. O'Reilly, 1995.
[O99] - Ochs, E. - The Eev Manifesto. http://angg.twu.net/eev-manifesto.html
[S79] - Stallman, R. - EMACS: The Extensible, Customizable Display Editor. http://www.gnu.org/software/emacs/emacs-paper.html
# zbMATH — the first resource for mathematics

A mapping method for numerical evaluation of two-dimensional integrals with $1/r$ singularity. (English) Zbl 0776.73073

Summary: Singular integrals occur commonly in applications of the boundary element method (BEM). A simple mapping method is presented here for the numerical evaluation of two-dimensional integrals in which the integrands, at worst, are $O(1/r)$ ($r$ being the distance from a source to a field point). This mapping transforms such integrals over general curved triangles into regular 2-D integrals. Over flat and curved quadratic triangles, regular line integrals are obtained, and these can be easily evaluated by standard Gaussian quadrature. Numerical tests on some typical singular integrals, encountered in BEM applications, demonstrate the accuracy and efficacy of the method.

##### MSC:
74S15 Boundary element methods in solid mechanics
65D30 Numerical integration

Software: MACSYMA
# ECHO...ECHo...ECho...Echo...echo...

The Grand Canyon is anywhere from $$6$$ to $$30~\mbox{km}$$ across. If you stood on one side of the Grand Canyon at its narrowest point and shouted, how long in seconds would it take for you to hear your echo?

Details and assumptions:

• The speed of sound is roughly $$350~\mbox{m/s}$$.
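For reference, one quick way to estimate the answer (my own working, assuming the narrowest width of $$6~\mbox{km}$$ and the given speed of sound): the shout must cross the canyon and the echo must travel back, so the delay is

$$t = \frac{2d}{v} = \frac{2 \times 6000~\mbox{m}}{350~\mbox{m/s}} \approx 34~\mbox{s}.$$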
# selene_sdk.predict¶ This module contains classes and methods for making and analyzing predictions with models that have already been trained. ## AnalyzeSequences¶ class selene_sdk.predict.AnalyzeSequences(model, trained_model_path, sequence_length, features, batch_size=64, use_cuda=False, data_parallel=False, reference_sequence=<class 'selene_sdk.sequences.genome.Genome'>, write_mem_limit=1500)[source] Bases: object Score sequences and their variants using the predictions made by a trained model. Parameters • model (torch.nn.Module) – A sequence-based model architecture. • trained_model_path (str or list(str)) – The path(s) to the weights file for a trained sequence-based model. For a single path, the model architecture must match model. For a list of paths, assumes that the model passed in is of type selene_sdk.utils.MultiModelWrapper, which takes in a list of models. The paths must be ordered the same way the models are ordered in that list. list(str) input is an API-only function– Selene’s config file CLI does not support the MultiModelWrapper functionality at this time. • sequence_length (int) – The length of sequences that the model is expecting. • features (list(str)) – The names of the features that the model is predicting. • batch_size (int, optional) – Default is 64. The size of the mini-batches to use. • use_cuda (bool, optional) – Default is False. Specifies whether CUDA-enabled GPUs are available for torch to use. • data_parallel (bool, optional) – Default is False. Specify whether multiple GPUs are available for torch to use during training. • reference_sequence (class, optional) – Default is selene_sdk.sequences.Genome. The type of sequence on which this analysis will be performed. Please note that if you need to use variant effect prediction, you cannot only pass in the class–you must pass in the constructed selene_sdk.sequences.Sequence object with a particular sequence version (e.g. Genome(“hg19.fa”)). This version does NOT have to be the same sequence version that the model was trained on. That is, if the sequences in your variants file are hg19 but your model was trained on hg38 sequences, you should pass in hg19. • write_mem_limit (int, optional) – Default is 5000. Specify, in MB, the amount of memory you want to allocate to storing model predictions/scores. When running one of _in silico_ mutagenesis, variant effect prediction, or prediction, prediction/score handlers will accumulate data in memory and only write this data to files periodically. By default, Selene will write to files when the total amount of data (across all handlers) takes up 5000MB of space. Please keep in mind that Selene will not monitor the memory needed to actually carry out the operations (e.g. variant effect prediction) or load the model, so write_mem_limit should always be less than the total amount of CPU memory you have available on your machine. For example, for variant effect prediction, we load all the variants in 1 file into memory before getting the predictions, so your machine must have enough memory to accommodate that. Another possible consideration is your model size and whether you are using it on the CPU or a CUDA-enabled GPU (i.e. setting use_cuda to True). Variables • ~AnalyzeSequences.model (torch.nn.Module) – A sequence-based model that has already been trained. • ~AnalyzeSequences.sequence_length (int) – The length of sequences that the model is expecting. • ~AnalyzeSequences.batch_size (int) – The size of the mini-batches to use. 
• ~AnalyzeSequences.features (list(str)) – The names of the features that the model is predicting. • ~AnalyzeSequences.use_cuda (bool) – Specifies whether to use a CUDA-enabled GPU or not. • ~AnalyzeSequences.data_parallel (bool) – Whether to use multiple GPUs or not. • ~AnalyzeSequences.reference_sequence (class) – The type of sequence on which this analysis will be performed. get_predictions(input, output_dir=None, output_format='tsv', strand_index=None)[source] Get model predictions for sequences specified as a raw sequence, FASTA, or BED file. Parameters • input (str) – A single sequence, or a path to the FASTA or BED file input. • output_dir (str, optional) – Default is None. Output directory to write the model predictions. If this is left blank a raw sequence input will be assumed, though an output directory is required for FASTA and BED inputs. • output_format ({‘tsv’, ‘hdf5’}, optional) – Default is ‘tsv’. Choose whether to save TSV or HDF5 output files. TSV is easier to access (i.e. open with text editor/Excel) and quickly peruse, whereas HDF5 files must be accessed through specific packages/viewers that support this format (e.g. h5py Python package). Choose • ‘tsv’ if your list of sequences is relatively small ($$10^4$$ or less in order of magnitude) and/or your model has a small number of features (<1000). • ‘hdf5’ for anything larger and/or if you would like to access the predictions/scores as a matrix that you can easily filter, apply computations, or use in a subsequent classifier/model. In this case, you may access the matrix using mat[“data”] after opening the HDF5 file using mat = h5py.File(“<output.h5>”, ‘r’). The matrix columns are the features and will match the same ordering as your features .txt file (same as the order your model outputs its predictions) and the matrix rows are the sequences. Note that the row labels (FASTA description/IDs) will be output as a separate .txt file (should match the ordering of the sequences in the input FASTA). • strand_index (int or None, optional) – Default is None. If the trained model makes strand-specific predictions, your input BED file may include a column with strand information (strand must be one of {‘+’, ‘-‘, ‘.’}). Specify the index (0-based) to use it. Otherwise, by default ‘+’ is used. (This parameter is ignored if FASTA file is used as input.) Returns Writes the output to file(s) in output_dir. Filename will match that specified in the filepath. In addition, if any base in the given or retrieved sequence is unknown, the row labels .txt file or .tsv file will mark this sequence or region as contains_unk = True. Return type None get_predictions_for_bed_file(input_path, output_dir, output_format='tsv', strand_index=None)[source] Get model predictions for sequences specified as genome coordinates in a BED file. Coordinates do not need to be the same length as the model expected sequence input–predictions will be centered at the midpoint of the specified start and end coordinates. Parameters • input_path (str) – Input path to the BED file. • output_dir (str) – Output directory to write the model predictions. • output_format ({‘tsv’, ‘hdf5’}, optional) – Default is ‘tsv’. Choose whether to save TSV or HDF5 output files. TSV is easier to access (i.e. open with text editor/Excel) and quickly peruse, whereas HDF5 files must be accessed through specific packages/viewers that support this format (e.g. h5py Python package). 
Choose • ‘tsv’ if your list of sequences is relatively small ($$10^4$$ or less in order of magnitude) and/or your model has a small number of features (<1000). • ‘hdf5’ for anything larger and/or if you would like to access the predictions/scores as a matrix that you can easily filter, apply computations, or use in a subsequent classifier/model. In this case, you may access the matrix using mat[“data”] after opening the HDF5 file using mat = h5py.File(“<output.h5>”, ‘r’). The matrix columns are the features and will match the same ordering as your features .txt file (same as the order your model outputs its predictions) and the matrix rows are the sequences. Note that the row labels (FASTA description/IDs) will be output as a separate .txt file (should match the ordering of the sequences in the input FASTA). • strand_index (int or None, optional) – Default is None. If the trained model makes strand-specific predictions, your input file may include a column with strand information (strand must be one of {‘+’, ‘-‘, ‘.’}). Specify the index (0-based) to use it. Otherwise, by default ‘+’ is used. Returns Writes the output to file(s) in output_dir. Filename will match that specified in the filepath. Return type None get_predictions_for_fasta_file(input_path, output_dir, output_format='tsv')[source] Get model predictions for sequences in a FASTA file. Parameters • input_path (str) – Input path to the FASTA file. • output_dir (str) – Output directory to write the model predictions. • output_format ({‘tsv’, ‘hdf5’}, optional) – Default is ‘tsv’. Choose whether to save TSV or HDF5 output files. TSV is easier to access (i.e. open with text editor/Excel) and quickly peruse, whereas HDF5 files must be accessed through specific packages/viewers that support this format (e.g. h5py Python package). Choose • ‘tsv’ if your list of sequences is relatively small ($$10^4$$ or less in order of magnitude) and/or your model has a small number of features (<1000). • ‘hdf5’ for anything larger and/or if you would like to access the predictions/scores as a matrix that you can easily filter, apply computations, or use in a subsequent classifier/model. In this case, you may access the matrix using mat[“data”] after opening the HDF5 file using mat = h5py.File(“<output.h5>”, ‘r’). The matrix columns are the features and will match the same ordering as your features .txt file (same as the order your model outputs its predictions) and the matrix rows are the sequences. Note that the row labels (FASTA description/IDs) will be output as a separate .txt file (should match the ordering of the sequences in the input FASTA). Returns Writes the output to file(s) in output_dir. Return type None in_silico_mutagenesis(sequence, save_data, output_path_prefix='ism', mutate_n_bases=1, output_format='tsv', start_position=0, end_position=None)[source] Applies in silico mutagenesis to a sequence. Parameters • sequence (str) – The sequence to mutate. • save_data (list(str)) – A list of the data files to output. Must input 1 or more of the following options: [“abs_diffs”, “diffs”, “logits”, “predictions”]. • output_path_prefix (str, optional) – The path to which the data files are written. If directories in the path do not yet exist they will be automatically created. • mutate_n_bases (int, optional) – The number of bases to mutate at one time. We recommend leaving this parameter set to 1 at this time, as we have not yet optimized operations for double and triple mutations. • output_format ({‘tsv’, ‘hdf5’}, optional) – Default is ‘tsv’. 
The desired output format. • start_position (int, optional) – Default is 0. The starting position of the subsequence to be mutated. • end_position (int or None, optional) – Default is None. The ending position of the subsequence to be mutated. If left as None, then self.sequence_length will be used. Returns Outputs data files from in silico mutagenesis to output_dir. For HDF5 output and ‘predictions’ in save_data, an additional file named *_ref_predictions.h5 will be outputted with the model prediction for the original input sequence. Return type None Raises • ValueError – If the value of start_position or end_position is negative. • ValueError – If there are fewer than mutate_n_bases between start_position and end_position. • ValueError – If start_position is greater or equal to end_position. • ValueError – If start_position is not less than self.sequence_length. • ValueError – If end_position is greater than self.sequence_length. in_silico_mutagenesis_from_file(input_path, save_data, output_dir, mutate_n_bases=1, use_sequence_name=True, output_format='tsv', start_position=0, end_position=None)[source] Apply in silico mutagenesis to all sequences in a FASTA file. Please note that we have not parallelized this function yet, so runtime increases exponentially when you increase mutate_n_bases. Parameters • input_path (str) – The path to the FASTA file of sequences. • save_data (list(str)) – A list of the data files to output. Must input 1 or more of the following options: [“abs_diffs”, “diffs”, “logits”, “predictions”]. • output_dir (str) – The path to the output directory. Directories in the path will be created if they do not currently exist. • mutate_n_bases (int, optional) – Default is 1. The number of bases to mutate at one time in in silico mutagenesis. • use_sequence_name (bool, optional.) – Default is True. If use_sequence_name, output files are prefixed by the sequence name/description corresponding to each sequence in the FASTA file. Spaces in the sequence name are replaced with underscores ‘_’. If not use_sequence_name, output files are prefixed with an index $$i$$ (starting with 0) corresponding to the :math:ith sequence in the FASTA file. • output_format ({‘tsv’, ‘hdf5’}, optional) – Default is ‘tsv’. The desired output format. Each sequence in the FASTA file will have its own set of output files, where the number of output files depends on the number of save_data predictions/scores specified. • start_position (int, optional) – Default is 0. The starting position of the subsequence to be mutated. • end_position (int or None, optional) – Default is None. The ending position of the subsequence to be mutated. If left as None, then self.sequence_length will be used. Returns Outputs data files from in silico mutagenesis to output_dir. For HDF5 output and ‘predictions’ in save_data, an additional file named *_ref_predictions.h5 will be outputted with the model prediction for the original input sequence. Return type None Raises • ValueError – If the value of start_position or end_position is negative. • ValueError – If there are fewer than mutate_n_bases between start_position and end_position. • ValueError – If start_position is greater or equal to end_position. • ValueError – If start_position is not less than self.sequence_length. • ValueError – If end_position is greater than self.sequence_length. 
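To make the parameters above concrete, here is a rough usage sketch (not from the Selene documentation; the toy model class, weight-file path, FASTA path, and feature names are placeholders you would replace with your own):

import torch.nn as nn
from selene_sdk.predict import AnalyzeSequences

class TinyModel(nn.Module):
    # Toy stand-in for a real trained architecture (placeholder only);
    # expects one-hot input and outputs one score per feature.
    def __init__(self, sequence_length=1000, n_features=3):
        super().__init__()
        self.fc = nn.Linear(4 * sequence_length, n_features)
    def forward(self, x):
        return self.fc(x.reshape(x.size(0), -1)).sigmoid()

analyzer = AnalyzeSequences(
    TinyModel(),                                   # must match the saved weights
    trained_model_path="example_model.pth.tar",    # placeholder path
    sequence_length=1000,
    features=["feature_A", "feature_B", "feature_C"],
    batch_size=64,
    use_cuda=False)

# Mutate each sequence in the FASTA file one base at a time and write the
# requested score files ("diffs", "predictions") under ism_out/.
analyzer.in_silico_mutagenesis_from_file(
    "sequences.fa",
    save_data=["diffs", "predictions"],
    output_dir="ism_out",
    mutate_n_bases=1,
    output_format="tsv")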
in_silico_mutagenesis_predict(sequence, base_preds, mutations_list, reporters=[])[source] Get the predictions for all specified mutations applied to a given sequence and, if applicable, compute the scores (“abs_diffs”, “diffs”, “logits”) for these mutations. Parameters • sequence (str) – The sequence to mutate. • base_preds (numpy.ndarray) – The model’s prediction for sequence. • mutations_list (list(list(tuple))) – The mutations to apply to the sequence. Each element in mutations_list is a list of tuples, where each tuple specifies the int position in the sequence to mutate and what str base to which the position is mutated (e.g. (1, ‘A’)). • reporters (list(PredictionsHandler)) – The list of reporters, where each reporter handles the predictions made for each mutated sequence. Will collect, compute scores (e.g. AbsDiffScoreHandler computes the absolute difference between base_preds and the predictions for the mutated sequence), and output these as a file at the end. Returns Writes results to files corresponding to each reporter in reporters. Return type None variant_effect_prediction(vcf_file, save_data, output_dir=None, output_format='tsv', strand_index=None, require_strand=False)[source] Get model predictions and scores for a list of variants. Parameters • vcf_file (str) – Path to a VCF file. Must contain the columns [#CHROM, POS, ID, REF, ALT], in order. Column header does not need to be present. • save_data (list(str)) – A list of the data files to output. Must input 1 or more of the following options: [“abs_diffs”, “diffs”, “logits”, “predictions”]. • output_dir (str or None, optional) – Default is None. Path to the output directory. If no path is specified, will save files corresponding to the options in save_data to the current working directory. • output_format ({‘tsv’, ‘hdf5’}, optional) – Default is ‘tsv’. Choose whether to save TSV or HDF5 output files. TSV is easier to access (i.e. open with text editor/Excel) and quickly peruse, whereas HDF5 files must be accessed through specific packages/viewers that support this format (e.g. h5py Python package). Choose • ‘tsv’ if your list of variants is relatively small ($$10^4$$ or less in order of magnitude) and/or your model has a small number of features (<1000). • ‘hdf5’ for anything larger and/or if you would like to access the predictions/scores as a matrix that you can easily filter, apply computations, or use in a subsequent classifier/model. In this case, you may access the matrix using mat[“data”] after opening the HDF5 file using mat = h5py.File(“<output.h5>”, ‘r’). The matrix columns are the features and will match the same ordering as your features .txt file (same as the order your model outputs its predictions) and the matrix rows are the sequences. Note that the row labels (chrom, pos, id, ref, alt) will be output as a separate .txt file. • strand_index (int or None, optional.) – Default is None. If applicable, specify the column index (0-based) in the VCF file that contains strand information for each variant. • require_strand (bool, optional.) – Default is False. Whether strand can be specified as ‘.’. If False, Selene accepts strand value to be ‘+’, ‘-‘, or ‘.’ and automatically treats ‘.’ as ‘+’. If True, Selene skips any variant with strand ‘.’. This parameter assumes that strand_index has been set. Returns Saves all files to output_dir. 
If any bases in the 'ref' column of the VCF do not match those at the specified position in the reference genome, the row labels .txt file will mark this variant as ref_match = False. If most of your variants do not match the reference genome, please check that the reference genome you specified matches the version with which the variants were called. The predictions can be used directly if you have verified that the 'ref' bases specified for these variants are correct (Selene will have substituted these bases for those in the reference genome). In addition, if any base in the retrieved reference sequence is unknown, the row labels .txt file will mark this variant as contains_unk = True. Finally, some variants may show up in an 'NA' file. This happens when the surrounding sequence context ends up out of bounds or overlaps blacklisted regions, or when the chromosome containing the variant does not appear in the reference genome FASTA file.
Return type: None
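The workflow the documentation describes can be sketched as follows. This is a hedged illustration, not part of the reference: it assumes an object (analyzer) exposing the variant_effect_prediction method above, and the exact HDF5 filename produced depends on your save_data choices, so it is passed in explicitly here.

```python
import h5py

def run_vep(analyzer, vcf_path, out_dir):
    """Run variant effect prediction with HDF5 output.

    `analyzer` is assumed to expose the variant_effect_prediction method
    documented above (e.g. a Selene AnalyzeSequences-style instance);
    vcf_path and out_dir are placeholder paths.
    """
    analyzer.variant_effect_prediction(
        vcf_path,
        save_data=["diffs", "predictions"],
        output_dir=out_dir,
        output_format="hdf5",   # HDF5 is recommended for large variant lists
    )

def load_scores(h5_path):
    """Open one of the resulting HDF5 files and return the score matrix.

    Per the documentation, the matrix is stored under the "data" key;
    rows correspond to variants and columns to model features.
    """
    with h5py.File(h5_path, "r") as mat:
        return mat["data"][:]
```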
# Semi-Thue system

In theoretical computer science and mathematical logic a string rewriting system (SRS), historically called a semi-Thue system, is a rewriting system over strings from a (usually finite) alphabet. Given a binary relation ${\displaystyle R}$ between fixed strings over the alphabet, called rewrite rules, denoted by ${\displaystyle s\rightarrow t}$, an SRS extends the rewriting relation to all strings in which the left- and right-hand side of the rules appear as substrings, that is ${\displaystyle usv\rightarrow utv}$, where ${\displaystyle s}$, ${\displaystyle t}$, ${\displaystyle u}$, and ${\displaystyle v}$ are strings. The notion of a semi-Thue system essentially coincides with the presentation of a monoid. Thus they constitute a natural framework for solving the word problem for monoids and groups. An SRS can be defined directly as an abstract rewriting system. It can also be seen as a restricted kind of term rewriting system. As a formalism, string rewriting systems are Turing complete.[1] The semi-Thue name comes from the Norwegian mathematician Axel Thue, who introduced the systematic treatment of string rewriting systems in a 1914 paper.[2] Thue introduced this notion hoping to solve the word problem for finitely presented semigroups. Only in 1947 was the problem shown to be undecidable; this result was obtained independently by Emil Post and A. A. Markov Jr.[3][4]

## Definition

A string rewriting system or semi-Thue system is a tuple ${\displaystyle (\Sigma ,R)}$ where
• Σ is an alphabet, usually assumed finite.[5] The elements of the set ${\displaystyle \Sigma ^{*}}$ (* is the Kleene star here) are finite (possibly empty) strings on Σ, sometimes called words in formal languages; we will simply call them strings here.
• R is a binary relation on strings from ${\displaystyle \Sigma ^{*}}$, i.e., ${\displaystyle R\subseteq \Sigma ^{*}\times \Sigma ^{*}.}$ Each element ${\displaystyle (u,v)\in R}$ is called a (rewriting) rule and is usually written ${\displaystyle u\rightarrow v}$.
If the relation R is symmetric, then the system is called a Thue system.

The rewriting rules in R can be naturally extended to other strings in ${\displaystyle \Sigma ^{*}}$ by allowing substrings to be rewritten according to R. More formally, the one-step rewriting relation ${\displaystyle {\xrightarrow[{R}]{}}}$ induced by R on ${\displaystyle \Sigma ^{*}}$ is defined as follows: for any strings ${\displaystyle s,t\in \Sigma ^{*}}$, ${\displaystyle s{\xrightarrow[{R}]{}}t}$ if and only if there exist ${\displaystyle x,y,u,v\in \Sigma ^{*}}$ such that ${\displaystyle s=xuy}$, ${\displaystyle t=xvy}$, and ${\displaystyle u\rightarrow v}$. Since ${\displaystyle {\xrightarrow[{R}]{}}}$ is a relation on ${\displaystyle \Sigma ^{*}}$, the pair ${\displaystyle (\Sigma ^{*},{\xrightarrow[{R}]{}})}$ fits the definition of an abstract rewriting system. Obviously R is a subset of ${\displaystyle {\xrightarrow[{R}]{}}}$. Some authors write the one-step arrow with an explicit subscript R in order to distinguish it from R itself (${\displaystyle \rightarrow }$), because they later want to be able to drop the subscript and still avoid confusion between R and the one-step rewrite induced by R.
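The one-step relation is easy to make concrete in code. The following is a minimal illustrative sketch (function names and the bounded search are my own choices, not part of the article): it enumerates every string t with s → t and a bounded slice of the reflexive transitive closure.

```python
def one_step_rewrites(s, rules):
    """Yield every t with s -> t under the one-step rewriting relation
    induced by `rules`, a list of (u, v) pairs: s = x+u+y rewrites to x+v+y."""
    for u, v in rules:
        start = 0
        while True:
            i = s.find(u, start)
            if i == -1:
                break
            yield s[:i] + v + s[i + len(u):]
            start = i + 1

def reachable(s, rules, max_steps=5):
    """All strings reachable from s in at most max_steps rewriting steps,
    i.e. a finite slice of the reflexive transitive closure ->*."""
    frontier, seen = {s}, {s}
    for _ in range(max_steps):
        frontier = {t for w in frontier for t in one_step_rewrites(w, rules)} - seen
        seen |= frontier
    return seen
```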
Clearly in a semi-Thue system we can form a (finite or infinite) sequence of strings produced by starting with an initial string ${\displaystyle s_{0}\in \Sigma ^{*}}$ and repeatedly rewriting it by making one substring-replacement at a time: ${\displaystyle s_{0}\ {\xrightarrow[{R}]{}}\ s_{1}\ {\xrightarrow[{R}]{}}\ s_{2}\ {\xrightarrow[{R}]{}}\ \ldots }$ A zero-or-more-steps rewriting like this is captured by the reflexive transitive closure of ${\displaystyle {\xrightarrow[{R}]{}}}$, denoted by ${\displaystyle {\xrightarrow[{R}]{*}}}$ (see the basic notions of abstract rewriting systems). This is called the rewriting relation or reduction relation on ${\displaystyle \Sigma ^{*}}$ induced by R.

## Thue congruence

In general, the set ${\displaystyle \Sigma ^{*}}$ of strings on an alphabet forms a free monoid together with the binary operation of string concatenation (denoted as ${\displaystyle \cdot }$ and written multiplicatively by dropping the symbol). In an SRS, the reduction relation ${\displaystyle {\xrightarrow[{R}]{*}}}$ is compatible with the monoid operation, meaning that ${\displaystyle x{\xrightarrow[{R}]{*}}y}$ implies ${\displaystyle uxv{\xrightarrow[{R}]{*}}uyv}$ for all strings ${\displaystyle x,y,u,v\in \Sigma ^{*}}$. Since ${\displaystyle {\xrightarrow[{R}]{*}}}$ is by definition a preorder, ${\displaystyle \left(\Sigma ^{*},\cdot ,{\xrightarrow[{R}]{*}}\right)}$ forms a monoidal preorder. Similarly, the reflexive transitive symmetric closure of ${\displaystyle {\xrightarrow[{R}]{}}}$, denoted ${\displaystyle {\overset {*}{\underset {R}{\leftrightarrow }}}}$ (see the basic notions of abstract rewriting systems), is a congruence, meaning it is an equivalence relation (by definition) and it is also compatible with string concatenation. The relation ${\displaystyle {\overset {*}{\underset {R}{\leftrightarrow }}}}$ is called the Thue congruence generated by R. In a Thue system, i.e. if R is symmetric, the rewrite relation ${\displaystyle {\xrightarrow[{R}]{*}}}$ coincides with the Thue congruence ${\displaystyle {\overset {*}{\underset {R}{\leftrightarrow }}}}$.

## Factor monoid and monoid presentations

Since ${\displaystyle {\overset {*}{\underset {R}{\leftrightarrow }}}}$ is a congruence, we can define the factor monoid ${\displaystyle {\mathcal {M}}_{R}=\Sigma ^{*}/{\overset {*}{\underset {R}{\leftrightarrow }}}}$ of the free monoid ${\displaystyle \Sigma ^{*}}$ by the Thue congruence in the usual manner. If a monoid ${\displaystyle {\mathcal {M}}}$ is isomorphic with ${\displaystyle {\mathcal {M}}_{R}}$, then the semi-Thue system ${\displaystyle (\Sigma ,R)}$ is called a monoid presentation of ${\displaystyle {\mathcal {M}}}$. We immediately get some very useful connections with other areas of algebra. For example, the alphabet {a, b} with the rules { ab → ε, ba → ε }, where ε is the empty string, is a presentation of the free group on one generator. If instead the rules are just { ab → ε }, then we obtain a presentation of the bicyclic monoid. The importance of semi-Thue systems as presentations of monoids is made stronger by the following: Theorem: Every monoid has a presentation of the form ${\displaystyle (\Sigma ,R)}$, thus it may always be presented by a semi-Thue system, possibly over an infinite alphabet.[6] In this context, the set ${\displaystyle \Sigma }$ is called the set of generators of ${\displaystyle {\mathcal {M}}}$, and ${\displaystyle R}$ is called the set of defining relations of ${\displaystyle {\mathcal {M}}}$.
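To connect this with the rewriting sketch given earlier, the free-group presentation { ab → ε, ba → ε } quoted above can be explored directly. This reuses the illustrative reachable helper defined in the previous sketch, with the empty Python string standing in for ε.

```python
# Presentation of the free group on one generator, as quoted in the text.
free_group_rules = [("ab", ""), ("ba", "")]

# "aabb" encodes a·a·a⁻¹·a⁻¹, so it reduces to the empty string (the identity).
print("" in reachable("aabb", free_group_rules, max_steps=2))   # True
# "aab" only reduces as far as "a".
print(reachable("aab", free_group_rules, max_steps=3))          # {'aab', 'a'}
```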
We can immediately classify monoids based on their presentation. ${\displaystyle {\mathcal {M}}}$ is called
• finitely generated if ${\displaystyle \Sigma }$ is finite.
• finitely presented if both ${\displaystyle \Sigma }$ and ${\displaystyle R}$ are finite.

## Undecidability of the word problem

Post proved the word problem (for semigroups) to be undecidable in general, essentially by reducing the halting problem[7] for Turing machines to an instance of the word problem. Concretely, Post devised an encoding of the state of a Turing machine plus tape as a finite string, such that the actions of this machine can be carried out by a string rewrite system acting on this string encoding. The alphabet of the encoding has one set of letters ${\displaystyle S_{0},S_{1},\dotsc ,S_{m}}$ for symbols on the tape (where ${\displaystyle S_{0}}$ means blank), another set of letters ${\displaystyle q_{1},\dotsc ,q_{r}}$ for states of the Turing machine, and finally three letters ${\displaystyle q_{r+1},q_{r+2},h}$ that have special roles in the encoding. ${\displaystyle q_{r+1}}$ and ${\displaystyle q_{r+2}}$ are intuitively extra internal states of the Turing machine which it transitions to when halting, whereas ${\displaystyle h}$ marks the end of the non-blank part of the tape; a machine reaching an ${\displaystyle h}$ should behave the same as if there were a blank there, and the ${\displaystyle h}$ were in the next cell.

The strings that are valid encodings of Turing machine states start with an ${\displaystyle h}$, followed by zero or more symbol letters, followed by exactly one internal state letter ${\displaystyle q_{i}}$ (which encodes the state of the machine), followed by one or more symbol letters, followed by an ending ${\displaystyle h}$. The symbol letters are straight off the contents of the tape, and the internal state letter marks the position of the head; the symbol after the internal state letter is the one in the cell currently under the head of the Turing machine. A transition where the machine, upon being in state ${\displaystyle q_{i}}$ and seeing the symbol ${\displaystyle S_{k}}$, writes back symbol ${\displaystyle S_{l}}$, moves right, and transitions to state ${\displaystyle q_{j}}$ is implemented by the rewrite ${\displaystyle q_{i}S_{k}\to S_{l}q_{j}}$, whereas that transition instead moving to the left is implemented by the rewrite ${\displaystyle S_{p}q_{i}S_{k}\to q_{j}S_{p}S_{l}}$, with one instance of the rule for each possible symbol ${\displaystyle S_{p}}$ in the cell to the left. In the case that we reach the end of the visited portion of the tape, we use instead ${\displaystyle hq_{i}S_{k}\to hq_{j}S_{0}S_{l}}$, lengthening the string by one letter. Because every rewrite involves an internal state letter ${\displaystyle q_{i}}$, every valid encoding contains exactly one such letter, and each rewrite produces exactly one such letter, the rewrite process exactly follows the run of the encoded Turing machine. This proves that string rewrite systems are Turing complete.

The reason for having two halting letters ${\displaystyle q_{r+1}}$ and ${\displaystyle q_{r+2}}$ is that we want all halting Turing machines to terminate at the same total state, not just a particular internal state. This requires clearing the tape after halting, so ${\displaystyle q_{r+1}}$ eats the symbol on its left until reaching the ${\displaystyle h}$, where it transitions into ${\displaystyle q_{r+2}}$, which instead eats the symbol on its right.
(In this phase the string rewrite system no longer simulates a Turing machine, since that cannot remove cells from the tape.) After all symbols are gone, we have reached the terminal string ${\displaystyle hq_{r+2}h}$. A decision procedure for the word problem would then also yield a procedure for deciding whether the given Turing machine terminates when started in a particular total state ${\displaystyle t}$, by testing whether ${\displaystyle t}$ and ${\displaystyle hq_{r+2}h}$ belong to the same congruence class with respect to this string rewrite system. Technically, we have the following: Lemma. Let ${\displaystyle M}$ be a deterministic Turing machine and ${\displaystyle R}$ be the string rewrite system implementing ${\displaystyle M}$, as described above. Then ${\displaystyle M}$ will halt when started from the total state encoded as ${\displaystyle t}$ if and only if ${\displaystyle t\mathrel {\overset {*}{\underset {R}{\leftrightarrow }}} hq_{r+2}h}$ (that is to say, if and only if ${\displaystyle t}$ and ${\displaystyle hq_{r+2}h}$ are Thue congruent for ${\displaystyle R}$). That ${\displaystyle t\mathrel {\overset {*}{\underset {R}{\rightarrow }}} hq_{r+2}h}$ if ${\displaystyle M}$ halts when started from ${\displaystyle t}$ is immediate from the construction of ${\displaystyle R}$ (simply running ${\displaystyle M}$ until it halts constructs a proof of ${\displaystyle t\mathrel {\overset {*}{\underset {R}{\rightarrow }}} hq_{r+2}h}$), but ${\displaystyle {\overset {*}{\underset {R}{\leftrightarrow }}}}$ also allows the Turing machine ${\displaystyle M}$ to take steps backwards. Here it becomes relevant that ${\displaystyle M}$ is deterministic, because then the forward steps are all unique; in a ${\displaystyle {\overset {*}{\underset {R}{\leftrightarrow }}}}$ walk from ${\displaystyle t}$ to ${\displaystyle hq_{r+2}h}$ the last backward step must be followed by its counterpart as a forward step, so these two cancel, and by induction all backward steps can be eliminated from such a walk. Hence if ${\displaystyle M}$ does not halt when started from ${\displaystyle t}$, i.e., if we do not have ${\displaystyle t\mathrel {\overset {*}{\underset {R}{\rightarrow }}} hq_{r+2}h}$, then we also do not have ${\displaystyle t\mathrel {\overset {*}{\underset {R}{\leftrightarrow }}} hq_{r+2}h}$. Therefore deciding ${\displaystyle {\overset {*}{\underset {R}{\leftrightarrow }}}}$ tells us the answer to the halting problem for ${\displaystyle M}$. An apparent limitation of this argument is that in order to produce a semigroup ${\displaystyle \Sigma ^{*}{\big /}{\overset {*}{\underset {R}{\leftrightarrow }}}}$ with undecidable word problem, one must first have a concrete example of a Turing machine ${\displaystyle M}$ for which the halting problem is undecidable, but the various Turing machines figuring in the proof of the undecidability of the general halting problem all have as component a hypothetical Turing machine solving the halting problem, so none of those machines can actually exist; all that proves is that there is some Turing machine for which the decision problem is undecidable. However, that there is some Turing machine with undecidable halting problem means that the halting problem for a universal Turing machine is undecidable (since that can simulate any Turing machine), and concrete examples of universal Turing machines have been constructed. 
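To make the encoding tangible, here is a small illustrative sketch. The machine below is an invented toy example, not one from the text: it writes 1s while moving right and halts when it reads a 1, and its run is reproduced purely by string rewriting with rules of the shapes described above (the end-marker rule follows the stated convention that reaching h behaves like reading a blank).

```python
# 'h' marks the tape ends, '0'/'1' are tape symbols, 'A' is the single
# working state and 'Z' stands for the halted state.
rules = [
    ("A0", "1A"),   # q_i S_k -> S_l q_j : write 1, move right, stay in A
    ("Ah", "A0h"),  # reading the right end marker behaves like reading a blank
    ("A1", "1Z"),   # on reading a 1: write it back, move right, halt (enter Z)
]

def rewrite_once(s, rules):
    """Apply the first applicable rule at its leftmost occurrence, or return None."""
    for u, v in rules:
        i = s.find(u)
        if i != -1:
            return s[:i] + v + s[i + len(u):]
    return None

state = "hA001h"    # tape 0 0 1 with the head over the first cell, in state A
while state is not None:
    print(state)    # hA001h -> h1A01h -> h11A1h -> h111Zh
    state = rewrite_once(state, rules)
```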
## Connections with other notions

A semi-Thue system is also a term-rewriting system, one that has monadic words (functions) ending in the same variable as the left- and right-hand side terms,[8] e.g. a term rule ${\displaystyle f_{2}(f_{1}(x))\rightarrow g(x)}$ is equivalent to the string rule ${\displaystyle f_{1}f_{2}\rightarrow g}$. A semi-Thue system is also a special type of Post canonical system, but every Post canonical system can also be reduced to an SRS. Both formalisms are Turing complete, and thus equivalent to Noam Chomsky's unrestricted grammars, which are sometimes called semi-Thue grammars.[9] A formal grammar only differs from a semi-Thue system by the separation of the alphabet into terminals and non-terminals, and the fixation of a starting symbol amongst non-terminals. A minority of authors actually define a semi-Thue system as a triple ${\displaystyle (\Sigma ,A,R)}$, where ${\displaystyle A\subseteq \Sigma ^{*}}$ is called the set of axioms. Under this "generative" definition of semi-Thue system, an unrestricted grammar is just a semi-Thue system with a single axiom in which one partitions the alphabet into terminals and non-terminals, and makes the axiom a nonterminal.[10] The simple artifice of partitioning the alphabet into terminals and non-terminals is a powerful one; it allows the definition of the Chomsky hierarchy based on what combination of terminals and non-terminals the rules contain. This was a crucial development in the theory of formal languages.

In quantum computing, the notion of a quantum Thue system can be developed.[11] Since quantum computation is intrinsically reversible, the rewriting rules over the alphabet ${\displaystyle \Sigma }$ are required to be bidirectional (i.e. the underlying system is a Thue system, not a semi-Thue system). On a subset of alphabet characters ${\displaystyle Q\subseteq \Sigma }$ one can attach a Hilbert space ${\displaystyle \mathbb {C} ^{d}}$, and a rewriting rule taking a substring to another one can carry out a unitary operation on the tensor product of the Hilbert space attached to the strings; this implies that the rules preserve the number of characters from the set ${\displaystyle Q}$. Similarly to the classical case, one can show that a quantum Thue system is a universal computational model for quantum computation, in the sense that the executed quantum operations correspond to uniform circuit classes (such as those in BQP when, e.g., termination of the string rewriting rules within polynomially many steps in the input size is guaranteed), or equivalently to a quantum Turing machine.

## History and importance

Semi-Thue systems were developed as part of a program to add additional constructs to logic, so as to create systems such as propositional logic, that would allow general mathematical theorems to be expressed in a formal language, and then proven and verified in an automatic, mechanical fashion. The hope was that the act of theorem proving could then be reduced to a set of defined manipulations on a set of strings. It was subsequently realized that semi-Thue systems are isomorphic to unrestricted grammars, which in turn are known to be isomorphic to Turing machines. This method of research succeeded, and now computers can be used to verify the proofs of mathematical and logical theorems.
At the suggestion of Alonzo Church, Emil Post in a paper published in 1947, first proved "a certain Problem of Thue" to be unsolvable, what Martin Davis states as "...the first unsolvability proof for a problem from classical mathematics -- in this case the word problem for semigroups."[12] Davis also asserts that the proof was offered independently by A. A. Markov.[13] ## Notes 1. ^ See section "Undecidability of the word problem" in this article. 2. ^ Book and Otto, p. 36 3. ^ Abramsky et al. p. 416 4. ^ Salomaa et al., p.444 5. ^ In Book and Otto a semi-Thue system is defined over a finite alphabet through most of the book, except chapter 7 when monoid presentation are introduced, when this assumption is quietly dropped. 6. ^ Book and Otto, Theorem 7.1.7, p. 149 7. ^ Post, following Turing, technically makes use of the undecidability of the printing problem (whether a Turing machine ever prints a particular symbol), but the two problems reduce to each other. Indeed, Post includes an extra step in his construction that effectively converts printing the watched symbol into halting. 8. ^ Nachum Dershowitz and Jean-Pierre Jouannaud. Rewrite Systems (1990) p. 6 9. ^ D.I.A. Cohen, Introduction to Computer Theory, 2nd ed., Wiley-India, 2007, ISBN 81-265-1334-9, p.572 10. ^ Dan A. Simovici, Richard L. Tenney, Theory of formal languages with applications, World Scientific, 1999 ISBN 981-02-3729-4, chapter 4 11. ^ J. Bausch, T. Cubitt, M. Ozols, The Complexity of Translationally-Invariant Spin Chains with Low Local Dimension, Ann. Henri Poincare 18(11), 2017 doi:10.1007/s00023-017-0609-7 pp. 3449-3513 12. ^ Martin Davis (editor) (1965), The Undecidable: Basic Papers on Undecidable Propositions, Unsolvable Problems and Computable Functions, after page 292, Raven Press, New York 13. ^ A. A. Markov (1947) Doklady Akademii Nauk SSSR (N.S.) 55: 583–586 ## References ### Textbooks • Martin Davis, Ron Sigal, Elaine J. Weyuker, Computability, complexity, and languages: fundamentals of theoretical computer science, 2nd ed., Academic Press, 1994, ISBN 0-12-206382-1, chapter 7 • Elaine Rich, Automata, computability and complexity: theory and applications, Prentice Hall, 2007, ISBN 0-13-228806-0, chapter 23.5. ### Surveys • Samson Abramsky, Dov M. Gabbay, Thomas S. E. Maibaum (ed.), Handbook of Logic in Computer Science: Semantic modelling, Oxford University Press, 1995, ISBN 0-19-853780-8. • Grzegorz Rozenberg, Arto Salomaa (ed.), Handbook of Formal Languages: Word, language, grammar, Springer, 1997, ISBN 3-540-60420-0.
# Moment of Inertia of Solid Sphere - Proof

So I have been having a bit of trouble trying to derive the moment of inertia of a solid sphere through its center of mass. My working is shown in the attached file. The problem is, I end up getting a solution of I = (3/5)MR^2, whereas any textbook says that the inertia should be equal to I = (2/5)MR^2. Is anyone able to tell me where I went wrong in my working? This is not a homework problem by the way. Thanks.

#### Attachments
• (image attachment, 32.2 KB)

Orodruin (Staff Emeritus, Homework Helper, Gold Member): You are using $r$ with two different meanings and mixing them up. 1) The distance from the axis of rotation (the $r$ in the definition of the moment of inertia). 2) The distance from the centre of the sphere. These are not the same. As for "This is not a homework problem by the way": regardless of whether it is actual homework or not, it should be posted in the homework forums if it is homework-like.

Delta2 (Homework Helper, Gold Member): Just to add something to Orodruin's enlightening post: in many moment of inertia calculations these two meanings happen to be the same thing (for example, in the calculation of the moment of inertia of an infinitesimally thin circular disc, the distance from the axis of rotation that passes through the center and is perpendicular to the plane of the disc equals the distance from the center of the disc), BUT in the general case they are not the same thing, and in this case they are not. What is the equation that relates r' (the distance from the axis of rotation) and r (the distance from the center of the sphere) in this case? Also, the dV you calculate is not the same as the dV that appears in the definition of the moment of inertia. You calculate the infinitesimal volume between a sphere of radius r and one of radius r+dr. But the dV in the integral in the definition of the moment of inertia is $dV=r^2\sin\theta \, dr \, d\theta \, d\phi$ (r is the distance from the center of the sphere). You just can't use your definition of dV, because if you find the equation for r' correctly you'll see that it depends on $\theta$ and $r$, not only on $r$.
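For reference, here is a sketch of the standard spherical-coordinate calculation the replies are pointing toward, with r the distance from the centre of the sphere, r' = r sin θ the distance from the rotation axis, and ρ = 3M/(4πR³) the uniform density:

$$I = \int r'^2 \, dm = \rho \int_0^{2\pi}\!\int_0^{\pi}\!\int_0^{R} (r\sin\theta)^2 \, r^2\sin\theta \, dr\, d\theta\, d\phi = \rho\,\frac{R^5}{5}\cdot\frac{4}{3}\cdot 2\pi = \frac{8\pi}{15}\,\rho R^5 = \frac{2}{5} M R^2 .$$

Using r (the distance from the centre) in place of r sin θ, i.e. treating spherical shells as if all their mass sat at distance r from the axis, is exactly what produces the incorrect 3/5 factor.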
## Monday, May 31, 2010

### Israel was right to shoot a few leftist criminals on the boat

In 2007, Hamas took over Gaza. Because it's a terrorist organization that has harmed lots of people and whose existence is incompatible with basic human rights, Israel has understandably imposed a blockade. The security of the civilian population of Israel isn't compatible with bringing goods to the Hamas-ruled territory - and it's not just weapons that are dangerous. Whatever the cargo is, it remains unknown unless or until it is checked by the official Israeli forces.

Two weeks ago, Israel warned European nations that their leftist organizations trying to bring aid to Gaza are acting illegally and won't be allowed to complete their mission. As you know, today a boat of this kind - the Turkish passenger boat Mavi Marmara - was trying to enter Israeli waters from international waters. The mostly Turkish people on the boat were informed once again that what they were trying to do was illegal. The leading Czech media also offer horizontal footage of the combative "activists" and their attack on the IDF troops. That includes sound.

They continued to pretend that there was no state of Israel and no law that applied to the territory, and "Operation Sea Breeze" of the IDF had to begin. This kind of denial, if I can use a popular word, must bring them some extraordinary feelings. It must be as thrilling to deny that Israel is a country as it was for the Nazis to deny that the Jews were people. So the "activists" continued on their journey.

Now, just ask a simple question: What should the Israeli army have done? They could have told them: I see, you are leftist and therefore politically correct activists and you don't want to change your mind. In that case, it's simply OK. Israel will act as if it were invisible. You may do anything you want. We wish you a nice trip and a pleasant stay. Send our best regards to Hamas.

Of course, this is absurd. If Israel or any other country began making a mockery of its own laws, especially those that are important for its national security, it couldn't last long. On the other hand, if Israel were a straightforward military power that didn't want to waste any time, it would simply sink the hostile boat. Instead, Israel is something in between: it is a decent democracy. So it sent a helicopter with soldiers whose task was to re-navigate the boat so that the laws would be obeyed.

What would a sensible TRF reader on the boat do if the Israeli army were climbing down from the helicopter? Well, I guess that you would relax. The adventurous game is over; IDF won. Don't move and pray. ;-)

Instead, what happened is seen on the official IDF video above. The troops who climbed down the rope were beaten by dozens of passengers with metal sticks, knives, grenades, firebombs, and the troops' own guns. The "activists" are not just clones of Mahatma Gandhi. They're latent anti-Israel soldiers. In fact, they were even trying to bring down the helicopter by attaching the rope to an antenna.

Previously, the "activists" screamed "Remember Khaybar, Khaybar, oh Jews. The army of Mohammed will return." Quite a militant chant for a "humanitarian team". ;-) Khaybar is an oasis near Medina in Saudi Arabia that was inhabited by Jews before Muhammad conquered it in 629 A.D. Also, I am intrigued by their word "singing" for this uproar, which is musically inferior to the most drunken Pilsner ice-hockey fans' chorales.
It's completely obvious that with such a reaction, shooting simply had to begin at some moment. At the beginning, the soldiers were only using paintball rifles but it probably became too obvious to the "activists" that it was just a toy so "real weapons" had to follow. I find it remarkable that only such a small number of the "activists" were shot. Others will be deported and those who don't co-operate will be imprisoned and they should be very grateful if that's everything that happens to them. Many pundits have explained the subtle and strange coalition between the Islamic fundamentalists and the radical Western leftists. The passengers of this boat were actually hybrids of both groups. They wanted to bring a material aid that would be beneficial for Hamas and they may have ties to Al Qaeda; but they may also be aware of the P.R. issues surrounding similar events, much like the postmodern Western leftists. If they carry "humanitarian aid", it looks nice, they're the good guys, and they're allowed to violate any laws, aren't they? It turns out that they can but some of them could only use their freedom to perform this dangerous experiment once in their life. ;-) Independently of their denial, Israel does exist and does control the territory. They were still violating an important law and whether they also had some humanitarian aid is simply not too important. Given their incredibly aggressive - and stupid - reactions to the orders by the Israeli army, what happened to some of them was inevitable which is why they could deserve the Darwin Award. In this sense, one may include the killed passengers on the boat among the mujahideen - people who are ready to sacrifice their lives if it hurts Israel. As you can see, there's no "qualitative" gap that distinguishes the Islamic terrorists from the Western left-wing activists: there's a continuum. Benjamin Netanyahu fully endorses the raid and he says that he expects Barack Obama to do the same. Given the radical leftist supporters behind Obama, I have certain doubts that the latter expectation will ever materialize. ;-) #### 5 comments: 1. Lubos, I commend you for denouncing anti-Israeli terrorism and taking a stand against international condemnation of Israel's right to self-defense. What is your view on the Obama-Netanyahu wide difference of opinions about peace in the Middle East? Regards, Ervin 2. Dear Ervin, thanks for your warm feedback. I am confident that Obama doesn't have any inherent emotional anti-Israeli emotions - and the cooling of the relations during his reign should be linked to the attitudes of his supporters that he has to represent to a certain extent. Still, he seems to be no real friend with Netanyahu. I guess that when a genuine lethal threat would stand in front of Israel, Obama's America would support Israel. Otherwise, I expect Obama himself and others to pay infinite lip service to the same largely unworkable comments about a peaceful two-state solution. If Israel settles a more lasting and sensible arrangement, using its actual knowledge and reflecting its actual interests a little bit more than the U.S. would recommend, I guess that the U.S. may react in an excited way but they won't openly oppose Israel. 3. There is only one issue. Was the ship in international waters? If so, Israel was stupid and wrong. If not their actions are justified. 4. There is only one issue: Was the ship in international waters? If so, then what Israel did was stupid and wrong. If not, their actions were justified. 5. 
Dear norm, your condition is legally incorrect. See the text by Prof Dershowitz arguing that the actions were lawful. It's a standard thing to enforce a blockade before the ships enter domestic waters if there's no doubt about the intent of the ship. America and many others have done the same thing many times, too.
# GCD and LCM properties proof

The following website https://www.cut-the-knot.org/arithmetic/GcdLcmProperties.shtml presents three properties of GCD and LCM. I was trying to understand the proof, but the proof seems very random to me and does not say where any of the three properties is proven. Can anyone else make sense of that proof?

• which parts seem the most random ? – Roddy MacPhee Aug 16 at 15:31
• Well, the whole proof section, I do not see the connection to the first three properties. – Michael Munta Aug 16 at 15:51

The proof is not very clear at all - it looks like it only seeks to prove the third property, but does a very sketchy job of that and then meanders on to a completely different property. Nothing in there addresses the first two properties presented. The article does hint at a viable proof technique*: if you know that every integer $$N$$ has a unique expression as $$N=p_1^{n_1}p_2^{n_2}\ldots p_k^{n_k}$$ for distinct primes $$p_i$$ and positive integers $$n_i$$, questions about divisibility tend to become a lot easier. Note that, if you have two numbers $$N$$ and $$M$$ (or more generally, a finite collection of numbers) you can expand them as products of powers of the same set of primes $$N=p_1^{n_1}p_2^{n_2}\ldots p_k^{n_k}$$ $$M=p_1^{m_1}p_2^{m_2}\ldots p_k^{m_k}$$ where some of the exponents might be zero - for instance we might write $$6=2^1\cdot 3^1 \cdot 5^0$$ and $$5=2^0\cdot 3^0\cdot 5^1$$. This makes a lot of operations really easy. For instance, we can write $$NM=p_1^{n_1+m_1}p_2^{n_2+m_2}\ldots p_k^{n_k+m_k}.$$

The statement that $$N|M$$ also becomes easy; $$N|M$$ is defined to mean that there exists some $$c$$ such that $$cN=M$$. If we factor $$c$$ and then multiply by our previous rule, that just means that we can get the factorization of $$M$$ by adding some non-negative quantities to the exponents in $$N$$'s factorization. We can do so exactly when $$n_i \leq m_i$$ for every $$i$$, at which point we can take $$c=p_1^{m_1-n_1}p_2^{m_2-n_2}\ldots p_k^{m_k-n_k}$$ and note $$cN=M$$. Using this, we also find expressions for the $$\gcd$$ and $$\operatorname{lcm}$$: $$\gcd(N,M)=p_1^{\min(n_1,m_1)}p_2^{\min(n_2,m_2)}\ldots p_k^{\min(n_k,m_k)}$$ $$\operatorname{lcm}(N,M)=p_1^{\max(n_1,m_1)}p_2^{\max(n_2,m_2)}\ldots p_k^{\max(n_k,m_k)}.$$

We can both prove that these are correct and prove the first two properties by noting that if $$K=p_1^{l_1}p_2^{l_2}\ldots p_k^{l_k}$$ is a common divisor of $$N$$ and $$M$$, then $$l_i\leq n_i$$ and $$l_i\leq m_i$$ for each $$i$$, so $$l_i \leq \min(n_i,m_i)$$, meaning that $$K$$ divides the proposed $$\gcd$$ - which also implies the proposed $$\gcd$$ is at least as large as any other common divisor. A similar argument (with $$\max$$ in place of $$\min$$) works for the $$\operatorname{lcm}$$. One can then show that $$\operatorname{lcm}(N,M)\gcd(N,M)=NM$$ by noting that $$\min(n_i,m_i)+\max(n_i,m_i)=n_i+m_i$$, and one can show that $$\gcd(K\cdot N,K\cdot M)=K\cdot \gcd(N,M)$$ by factoring $$K$$ as before and noting that $$\min(n_i+l_i,m_i+l_i)=\min(n_i,m_i)+l_i$$, with a similar argument for $$\operatorname{lcm}$$. Pretty much everything on that web-page follows from this observation - but they don't explain it very clearly.

*A slight note on this technique is that unique prime factorization is a deeper theorem than the properties it is being used to prove.
Generally, these theorems also follow from the fact that for any pair of integers $$a,b$$, there exist integers $$x$$ and $$y$$ so that $$ax+by=\gcd(a,b)$$ (which happens to have a short - though rather hard to find - proof)- but this is a substantially different viewpoint than the webpage adopts. • In the section with $cN=M$ what do you mean by 'existence of a solution'? – Michael Munta Aug 16 at 20:38 • @MichaelMunta I clarified that a bit with an edit - I'm referring to the fact that $N|M$ generally is defined to mean that there is some integer $c$ so that $cN=M$ - i.e. that there exists a solution $c$ to this equation. – Milo Brandt Aug 16 at 22:37
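To make the min/max-of-exponents formulas concrete, here is a small illustrative Python sketch (the function names are mine, and the trial-division factorization is only meant for small inputs):

```python
from collections import Counter
from functools import reduce

def prime_factorization(n):
    """Return the exponent vector of n as a Counter {prime: exponent}."""
    factors, d = Counter(), 2
    while d * d <= n:
        while n % d == 0:
            factors[d] += 1
            n //= d
        d += 1
    if n > 1:
        factors[n] += 1
    return factors

def gcd_lcm(n, m):
    """gcd and lcm via the min/max-of-exponents formulas from the answer above."""
    fn, fm = prime_factorization(n), prime_factorization(m)
    primes = set(fn) | set(fm)          # Counter returns 0 for missing primes
    gcd = reduce(lambda acc, p: acc * p ** min(fn[p], fm[p]), primes, 1)
    lcm = reduce(lambda acc, p: acc * p ** max(fn[p], fm[p]), primes, 1)
    return gcd, lcm

g, l = gcd_lcm(84, 90)
print(g, l, g * l == 84 * 90)   # 6 1260 True, illustrating gcd(N,M)·lcm(N,M) = N·M
```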
zbMATH — the first resource for mathematics Exact connections between current fluctuations and the second class particle in a class of deposition models. (English) Zbl 1147.82348 Summary: We consider a large class of nearest neighbor attractive stochastic interacting systems that includes the asymmetric simple exclusion, zero range, bricklayers’ and the symmetric K-exclusion processes. We provide exact formulas that connect particle flux (or surface growth) fluctuations to the two-point function of the process and to the motion of the second class particle. Such connections have only been available for simple exclusion where they were of great use in particle current fluctuation investigations. MSC: 82C22 Interacting particle systems in time-dependent statistical mechanics Full Text: References: [1] E. D. Andjel, Invariant measures for the zero range process. Ann. Probab. 10(3):325–547 (1982). · Zbl 0492.60096 [2] M. Balázs, Growth fluctuations in a class of deposition models. Ann. Inst. H. Poincaré Probab. Statist. 39:639–685 (2003). · Zbl 1029.60075 [3] M. Balázs, E. Cator and T. Seppäläinen, Cube root fluctuations for the corner growth model associated to the exclusion process. Electron. J. Probab. 11:1094–1132 (2006). · Zbl 1139.60046 [4] M. Balázs, F. Rassoul-Agha, T. Seppäläinen, and S. Sethuraman, Existence of the zero range process and a deposition model with superlinear growth rates. To appear in Ann. Probab., http:// arxiv.org/abs/math.PR/0511287 (2006). [5] M. Balázs and T. Seppäläinen, Order of current variance and diffusivity in the asymmetric simple exclusion process. Submitted, http://arxiv.org/abs/math.PR/0608400 (2006). [6] L. Booth, Random Spatial Structures and Sums. PhD thesis, Utrecht University (2002). · Zbl 1040.35122 [7] C. Cocozza-Thivent, Processus des misanthropes. Z. Wahrsch. Verw. Gebiete 70:509–523 (1985). · Zbl 0554.60097 [8] P. A. Ferrari and L. R. G. Fontes, Current fluctuations for the asymmetric simple exclusion process. Ann. Probab. 22:820–832 (1994). · Zbl 0806.60099 [9] P. L. Ferrari and H. Spohn, Scaling limit for the space-time covariance of the stationary totally asymmetric simple exclusion process. Comm. Math. Phys. 265(1):1–44 (2006). · Zbl 1118.82032 [10] T. M. Liggett, An infinite particle system with zero range interactions. Ann. Probab. 1(2):240–253 (1973). · Zbl 0264.60083 [11] T. M. Liggett, Interacting Particle Systems (Springer-Verlag, 1985). · Zbl 0559.60078 [12] M. Prähofer and H. Spohn, Current fluctuations for the totally asymmetric simple exclusion process. In In and out of equilibrium (Mambucaba, 2000), Vol. 51 of Progr. Probab. (Birkhäuser Boston, Boston, MA, 2002, pp. 185–204). · Zbl 1015.60093 [13] C. Quant, On the construction and stationary distributions of some spatial queueing and particle systems. PhD thesis, Utrecht University, 2002. [14] J. Quastel and B. Valkó, t1/3 Superdiffusivity of finite-range asymmetric exclusion processes on $$\mathbb{Z}$$. To appear in Comm. Math. Phys. http://arxiv.org/abs/math.PR/0605266 (2006). [15] F. Rezakhanlou, Hydrodynamic limit for attractive particle systems on $$\mathbb{Z}$$d. Comm. Math. Phys. 140(3):417–448 (1991). · Zbl 0738.60098 [16] F. Spitzer, Interaction of Markov processes. Adv. Math. 5:246–290 (1970). · Zbl 0312.60060 [17] H. Spohn, Large Scale Dynamics of Interacting Particles. Texts and Monographs in Physics (Springer Verlag, Heidelberg, 1991). · Zbl 0742.76002 This reference list is based on information provided by the publisher or from digital mathematics libraries. 
# Parts of a polynomial

The elements of a polynomial. A polynomial can contain variables, constants, coefficients, exponents, and operators. "Poly" comes from Greek and means "multiple"; "nomial", also Greek, refers to terms, so polynomial means "multiple terms". A polynomial is an algebraic expression made up of two or more terms in which the only arithmetic is addition, subtraction, multiplication, and whole-number exponentiation; in other words, it must be possible to write the expression without division by a variable. To create a polynomial, one takes some terms and adds (and subtracts) them together. You can divide up a polynomial into "terms", separated by each part that is being added. A variable is a letter that is used to represent an unknown number, and when a term contains an exponent, the exponent tells you the degree of that term. This unit is a brief introduction to the world of polynomials: we will add, subtract, multiply, and even start factoring polynomials. Click on the lesson below that interests you, or follow the lessons in order for a complete study of the unit.

A polynomial may have more than one variable. For example, x + y and x^2 + 5y + 6 are still polynomials although they have two different variables x and y. By the same token, a monomial can have more than one variable; 2 × x × y × z is a monomial. Examples: 4x^2y is a monomial, 2xy^3 + 4y is a binomial, and 4xy + 2x^2 + 3 is a trinomial. There are quadrinomials (four terms) and so on, but these are usually just called polynomials regardless of the number of terms they contain; a polynomial can also consist of a single term. Polynomials can likewise be named for their degree: a polynomial of degree 2 is called a quadratic polynomial, a polynomial of degree 3 is called a cubic, and polynomials with degrees higher than three aren't usually named (or the names are seldom used).

Polynomials are usually written in decreasing order of terms (standard form). The largest term, the one with the highest exponent, is written first and is called the leading term. To find the degree of a term with more than one variable, add the exponents of its variables; the degree of the polynomial is the highest such sum. For example, in 7x^2y^2 + 5y^2x + 4x^2 the first term has degree 2 + 2 = 4, the second term has degree 2 + 1 = 3, and the third term has degree 2, so the degree of this polynomial is four. The degree of a polynomial in a single variable is simply the highest power among all of its monomials.

There are a few rules as to what polynomials cannot contain. Polynomials cannot contain division by a variable: 2y^2 + 7x/4 is a polynomial, because 4 is not a variable. Polynomials cannot contain negative exponents: x^(-3) is the same thing as 1/x^3. Polynomials cannot contain fractional exponents: terms such as 3x + 2y^(1/2) − 1 are not considered polynomials. Polynomials cannot contain radicals: for example, 2y^2 + √(3x) + 4 is not a polynomial. The short answer is that polynomials cannot contain division by a variable, negative exponents, fractional exponents, or radicals.

A univariate polynomial has one variable, usually x or t; for example, P(x) = 4x^2 + 2x − 9. In common usage these are sometimes just called "polynomials". For real-valued polynomials, the general form is p(x) = p_n x^n + p_{n-1} x^{n-1} + … + p_1 x + p_0. A polynomial function is a function comprised of more than one power function, where the coefficients are assumed to not equal zero; the term with the highest degree of the variable is the leading term, and all subsequent terms have exponents that decrease in value by one. If you graph a polynomial of a single variable, you get a nice, smooth, curvy line with continuity (no holes). A polynomial function of degree n has at most n roots, or x-intercepts, and its graph has at most n − 1 turning points; polynomials of degree greater than 2 can have more than one maximum or minimum value, and the largest possible number of such points is one less than the degree of the polynomial.

There are a number of operations that can be done on polynomials. If you add or subtract polynomials, you get another polynomial; if you multiply them, you get another polynomial. Polynomials often represent a function. Polynomial expressions are simplified by combining like terms (monomials having the same variables) using arithmetic operations, and the FOIL method can be used for multiplying two binomials. To divide polynomials, use long division: put the dividend under the long division bar and the divisor to the left. The Remainder Theorem states that if a polynomial f(x) is divided by x − k, then the remainder is the value f(k); so, given a polynomial function f, you can evaluate f(x) at x = k by using synthetic division to divide the polynomial by x − k.

Factoring is the breaking apart of a polynomial into a product of other smaller polynomials. Of all the topics covered in this chapter, factoring polynomials is probably the most important: there are many sections in later chapters where the first step will be to factor a polynomial, so if you can't factor the polynomial then you won't be able to even start the problem, let alone finish it. We will discuss factoring out the greatest common factor, factoring by grouping, and factoring quadratics. If you factor a polynomial into a set of factors, you can multiply these factors together and you should get the original polynomial back; this is a great way to check your factoring. The Fundamental Theorem of Algebra states that there is at least one complex solution, call it c_1.

The characteristic polynomial of a matrix is a polynomial associated to the matrix that gives information about the matrix; its roots are the eigenvalues, and it can be used to find these eigenvalues, prove matrix similarity, or characterize a linear transformation from a vector space to itself. For polynomials over a ring R, the primitive part of a greatest common divisor of polynomials is the greatest common divisor (in R) of their primitive parts: pp(gcd(P_1, P_2)) = gcd(pp(P_1), pp(P_2)).

Fractional parts of polynomials over the primes (Roger Baker, dedicated to the memory of Klaus Roth). Abstract: Let f be a polynomial of degree k > 1 with irrational leading coefficient.

In numerical software, the coefficients of a polynomial are often stored as a vector. In Octave/MATLAB, r = roots(p) returns the roots of the polynomial represented by p as a column vector; the input p is a vector containing n+1 polynomial coefficients, starting with the coefficient of x^n, and a coefficient of 0 indicates an intermediate power that is not present in the equation. For example, p = [3 2 -2] represents the polynomial 3x^2 + 2x − 2. Similarly, C = convn(A, B) returns the n-D convolution of A and B; with shape = "valid" it returns only the parts which do not include zero-padded edges, so the size of the result is max(size(a) − size(b) + 1, 0). See also: deconv, conv2, convn, fftconv.
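The Octave/MATLAB snippet just above has a direct NumPy counterpart. This is only an illustration added here, not part of the original lesson:

```python
import numpy as np

# [3, 2, -2] represents 3x^2 + 2x - 2, with the highest power first.
p = [3, 2, -2]
r = np.roots(p)
print(r)                   # the two roots of the quadratic
print(np.polyval(p, r))    # evaluating the polynomial at its roots gives ~0
```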
A polynomial is an algebraic expression in which the only arithmetic applied to the variables is addition, subtraction, multiplication, and whole-number exponentiation. Polynomial terms do not contain square roots of variables, fractional powers, or division by a variable; if harder operations such as these are used, the expression is not a polynomial. Polynomials are often the sum of several terms containing different powers (exponents) of variables. For example, $4x^2y$ is a monomial, $3x^2+2x+4$ is a polynomial with three terms, and $x+2xyz-yz+1$ is a polynomial in three variables.

A polynomial function is generally represented as P(x). A one-variable (univariate) polynomial of degree n has the form $a_nx^n+a_{n-1}x^{n-1}+\dots+a_2x^2+a_1x+a_0$, where the leading coefficient $a_n$ is assumed to be nonzero. The parts of a polynomial are its terms; each term has a coefficient, a variable, and an exponent, and a term with no variable is a constant.

The degree of a polynomial with a single variable is the highest power among all the monomials; for a polynomial in several variables, the degree comes from the term whose exponents add up to the highest value. The term with the highest power is often called the leading term. The degree is important because it tells us about the behaviour of the function P(x) when x becomes very large, and it also names the polynomial: a polynomial of degree two is a quadratic, a polynomial of degree three can be called a cubic, and polynomials with degrees higher than three aren't usually named. A polynomial function of degree n has at most n − 1 turning points, so the number of minimum or maximum points is one less than the degree. Where the graph crosses the x-axis and appears almost linear at the intercept, that intercept is a single zero.

"Polynomial" comes from the Greek "poly", meaning "multiple", so a polynomial is made up of multiple terms. Polynomial expressions are simplified by combining like terms (monomials having the same variables) using arithmetic operations. Factoring is the breaking apart of a polynomial into a product of smaller polynomials, and synthetic division can be used to divide a polynomial by $x-k$. The same ideas appear elsewhere in mathematics: the characteristic polynomial of a matrix is a polynomial associated to the matrix that gives information about it, and its roots are the eigenvalues of the matrix.
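To make the notions of degree and leading term concrete, here is a minimal Python sketch (not part of the original article); it assumes a univariate polynomial represented as a dictionary that maps exponents to coefficients, a representation chosen purely for illustration.

def degree_and_leading_term(poly):
    """Return (degree, leading coefficient) for a univariate polynomial.

    `poly` maps exponents to coefficients; 3x^2 + 2x + 4 is {2: 3, 1: 2, 0: 4}.
    Terms whose coefficient is zero are ignored.
    """
    nonzero = {exp: coef for exp, coef in poly.items() if coef != 0}
    if not nonzero:
        return 0, 0  # the zero polynomial; conventions for its degree vary
    deg = max(nonzero)  # the highest power among all the monomials
    return deg, nonzero[deg]

# 3x^2 + 2x + 4 has degree 2 and leading coefficient 3
print(degree_and_leading_term({2: 3, 1: 2, 0: 4}))
>>> (2, 3)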
# Python Dictionary Tutorial

Python offers a variety of data structures to hold our information — the dictionary being one of the most useful. Python dictionaries are quick, easy to use, and flexible. As a beginning programmer, you can use this Python tutorial to become familiar with dictionaries and their common uses so that you can start incorporating them immediately into your own code.

When performing data analysis, you'll often have data that is in an unusable or hard-to-use form. Dictionaries can help here, by making it easier to read and change your data.

For this tutorial, we will use the Craft Beers data sets from Kaggle. There is one data set describing beer characteristics, and another that stores geographical information on brewery companies. For the purposes of this article, our data will be stored in the beers and breweries variables, each as a list of lists. The tables below give a quick look at what the data look like.

This table contains the first row from the beers data set.

|   | abv  | ibu | id   | name     | style               | brewery_id | ounces |
|---|------|-----|------|----------|---------------------|------------|--------|
| 0 | 0.05 |     | 1436 | Pub Beer | American Pale Lager | 408        | 12.0   |

This table contains the first row from the breweries data set.

|   | name              | city        | state |
|---|-------------------|-------------|-------|
| 0 | Northgate Brewing | Minneapolis | MN    |

### Prerequisite knowledge

This article assumes basic knowledge of Python. To fully understand the article, you should be comfortable working with lists and for loops.

We'll cover:

• Key terms and concepts for dictionaries
• Dictionary rules
• Basic dictionary operations
  • creation and deletion
  • access and insertion
  • membership checking
• Looping techniques
• Dictionary comprehensions

### Getting into our role

We will assume the role of a reviewer for a beer enthusiast magazine. We want to know ahead of time what each brewery will have before we arrive to review, so that we can gather useful background information. Our data sets hold information on beers and breweries, but the data themselves are not immediately accessible.

The data are currently in the form of a list of lists. To access individual data rows, you must use a numbered index. To get the first data row of breweries, you look at the 2nd item (the column names are first).

breweries[1]

breweries[1] is a list, so you can also index from it as well. Getting the third item in this list would look like:

breweries[1][2]

If you didn't know that breweries was data on breweries, you'd have a hard time understanding what the indexing is trying to do. Imagine writing this code and looking at it again 6 months in the future. You're more than likely to forget what it means, so it's worth reformatting the data in a more readable way.

# Key terms and concepts

Dictionaries are made up of key-value pairs. Looking up a key in a Python dictionary is akin to looking up a particular word in a physical dictionary. The value is the corresponding data that is associated with the key, comparable to the definition associated with the word in the physical dictionary. The key is what we look up, and it's the value that we're actually interested in. We say that values are mapped to keys.

For example, if we look up the word "programmer" in the English dictionary, we'll see: "a person who writes computer programs." The word "programmer" is the key mapped to the definition of the word.

## Dictionary rules for keys and values

Dictionaries are immensely flexible because they allow anything to be stored as a value, from primitive types like strings and floats to more complicated types like objects and even other dictionaries (more on this later).
By contrast, there are limitations to what can be used as a key. A key is required to be an immutable object in Python, meaning that it cannot be altered. This rule allows strings, integers, and tuples as keys, but excludes lists and dictionaries since they are mutable, or able to be altered. The rationale is simple: if any changes happen to a key without you knowing, you won't be able to access the value anymore, rendering the dictionary useless. Thus, only immutable objects are allowed to be keys. A key must also be unique within a dictionary.

The key-value structuring of a dictionary is what makes it so powerful, and throughout this post we'll delve into its basic operations, use cases, and their advantages and disadvantages.

# Basic dictionary operations

## Creation and deletion

Let's start with how to create a dictionary. First, we will learn how to make an empty dictionary since you'll often find that you want to start with an empty one and populate it with data as needed.

To create an empty dictionary, we can either use the dict() function with no inputs, or assign a pair of curly brackets with nothing in between to a variable. We can confirm that both methods produce the same result.

empty = {}
also_empty = dict()
empty == also_empty
>>> True

Now, an empty dictionary isn't of much use to anybody, so we must add our own key-value pairs. We will cover this later in the article, but know that we are able to start with empty dictionaries and populate them after the fact. This will allow us to add in more information when we need it.

empty["First key"] = "First value"
empty["First key"]
>>> "First value"

Alternatively, you can also create a dictionary and pre-populate it with key-value pairs. There are two ways to do this. The first is to use curly brackets containing the key-value pairs. Each key and value are separated by a :, while individual pairs are separated by a comma. While you can fit everything on one line, it's better to split up your key-value pairs among different lines to improve readability.

data = {
    "beer_data": beers,
    "brewery_data": breweries
}

The above code creates a single dictionary, data, where our keys are descriptive strings and the values are our data sets. This single dictionary allows us to access both data sets by name.

The second way is through the dict() function. You can supply the keys and values either as keyword arguments or as a list of tuples. We will recreate the data dictionary from above using the dict() function and providing the key-value pairs appropriately.

# Using keyword arguments
data2 = dict(beer_data=beers, brewery_data=breweries)

# Using a list of tuples
tuple_list = [("brewery_data", breweries), ("beer_data", beers)]
data3 = dict(tuple_list)

We can confirm that each of the data dictionaries is equivalent in Python's eyes.

data == data2 == data3
>>> True

With each option, the key and value pairs must be formatted in a particular way, so it's easy to get mixed up. The diagram below helps to sort out where keys and values are in each.

We now have three dictionaries storing the exact same information, so it's best if we just keep one. Dictionaries themselves don't have a method for deletion, but Python provides the del statement for this purpose.

del data2
del data3

After creating your dictionaries, you'll almost certainly need to add and remove items from them. Python provides a simple, readable syntax for these operations.
## Data access and insertion

The current state of our beers and breweries data is still dire — each of the data sets originally was a list of lists, which makes it difficult to access specific data rows. We can achieve better structure by reorganizing each of the data sets into its own dictionary and creating some helpful key-value pairs to describe the data within the dictionary.

The raw data itself is mixed. The first row in each list of lists is a list of strings containing the column names, but the rest contains the actual data. It'll be better to separate the columns from the data so we can be more explicit. Thus, for each data set, we'll create a dictionary with three keys mapped to the following values:

1. The raw data itself
2. The list containing the column names
3. The list of lists containing the rest of the data

beer_details = {
    "raw_data": beers,
    "columns": beers[0],
    "data": beers[1:]
}

brewery_details = {
    "raw_data": breweries,
    "columns": breweries[0],
    "data": breweries[1:]
}

Now, we can reference the columns key explicitly to list the column names of the data instead of indexing the first item in raw_data. Similarly, we are now able to explicitly ask for just the data from the data key.

So far, we've learned how to create empty and prepopulated dictionaries, but we do not know how to read information from them once they've been made. To access items within a dictionary, we use bracket notation. We reference the dictionary itself followed by a pair of brackets with the key that we want to look up. In the example below, we read the column names from brewery_details.

brewery_details["columns"]
>>> ['', 'name', 'city', 'state']

This action should feel similar to looking up a word in an English dictionary. We "looked up" a key and got the information we wanted back in the mapped value.

In addition to looking up key-value pairs, sometimes we'll actually want to change the value associated with a key in our dictionaries. This operation also uses bracket notation. To change a dictionary value, we first access the key and then reassign its value using an = expression. We saw in the code above that one of the brewery columns is an empty string, but this first column actually contains a unique ID for each brewery! We will reassign the first column name to a more informative name.

# reassigning the first column of the breweries data set
brewery_details["columns"][0] = 'brewery_id'

# confirming that our reassignment worked
brewery_details["columns"][0]
>>> "brewery_id"

If the series of brackets looks confusing, don't fret. We have taken advantage of nesting. We know that the columns key in brewery_details is mapped to a list, so we can treat brewery_details["columns"] as a list (i.e. we can use list indexing). Nesting can get confusing if we lose track of what each level represents, but we visualize this nesting below to clarify.

It's also common practice to nest dictionaries within dictionaries because it creates self-documenting code. That is to say, it is evident what the code is doing just by reading it, without any comments to help. Self-documenting code is immensely useful because it is easier and faster to understand at a moment's read through. We want to preserve this self-documenting quality, so we will nest the beer_details and brewery_details dictionaries into a centralized dictionary. The end result is a set of nested dictionaries that are easier to read from than the original raw data itself.
# datasets is now a dictionary whose values are other dictionaries
datasets = {
    "beer": beer_details,
    "breweries": brewery_details
}

# This structure allows us to make self-documenting inquiries to both data sets
datasets["beer"]["columns"]
>>> ['', 'abv', 'ibu', 'id', 'name', 'style', 'brewery_id', 'ounces']

# Compare the above to how our older data dictionary would have been written
data["beer_data"][0]
>>> ['', 'abv', 'ibu', 'id', 'name', 'style', 'brewery_id', 'ounces']

The information embedded in the code is clear if we nest dictionaries within dictionaries. We've created a structure that easily describes the intent of the programmer. The following illustration breaks down the dictionary nesting.

From here on out, we'll use datasets to manage our data sets and perform more data reorganization. The beer and brewery dictionaries we made are a good start, but we can do more. We'd like to create a new key-value pair to contain a string description of what each data set contains in case we forget. We can create dictionaries and read and change the values of present key-value pairs, but we don't know how to insert a new key-value pair. Thankfully, inserting pairs is similar to reassigning dictionary values. If we assign a value to a key that doesn't exist in a dictionary, Python will take the new key and value and create the pair within the dictionary.

# The description key currently does not exist in either inner dictionary
datasets["beer"]["description"] = "Contains data on beers and their qualities"
datasets["breweries"]["description"] = "Contains data on breweries and their locations"

While Python makes it easy to insert new pairs into the dictionary, it stops users if they try to access keys that don't exist. If you try to access a key that doesn't exist, Python will throw an error and stop your code from running.

# The key best_beer doesn't currently exist, so we cannot access it
datasets["beer"]["best_beer"]
>>> KeyError: 'best_beer'

As your dictionaries get more complex, it's easier to lose track of which keys are present. If you leave your code for a week and forget what's in your dictionary, you'll constantly run into KeyErrors. Thankfully, Python provides us with an easy way to check the keys present in a dictionary. This process is called membership checking.

## Membership checking

If we want to check whether a key exists within a dictionary, we can use the in operator. You can also check whether a key doesn't exist by using not in. The resulting code reads almost like natural English, which also means it is easier to understand at first glance.

"beer" in datasets
>>> True

"wine" in datasets
>>> False

"wine" not in datasets
>>> True

Using in for membership checking has great utility in conjunction with if-else statements. This combination allows you to set up conditional logic that will prevent you from getting KeyErrors and enable you to make more sophisticated code. We won't delve too deeply into this concept, but there are resources at the end for the curious, and a short sketch follows at the end of this section.

## Section summary

At this point, we know how to create, read, update, and delete data from our dictionaries. We transformed our two raw data sets into dictionaries with greater readability and ease of use. With these basic dictionary operations, we can start performing more complex operations. For example, it is extremely common to want to loop over the key-value pairs and perform some operation on each pair.
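As promised above, here is a minimal sketch of combining in with an if-else statement (this snippet is not from the original tutorial; the "wine" key is hypothetical and only used to show the fallback branch):

key = "wine"

# Only look the key up if it actually exists; otherwise fall back gracefully
if key in datasets:
    print(datasets[key]["description"])
else:
    print(f"No data set named {key!r}, skipping it.")
>>> No data set named 'wine', skipping it.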
# Looping techniques

When we created the description key for each of the data sets, we made two individual statements to create each key-value pair. Since we performed the same operation twice, it would be more efficient to use loops. Python provides three main methods to use dictionaries in loops: keys(), values(), and items(). Using keys() and values() allows us to loop over those parts of the dictionary.

for key in datasets.keys():
    print(key)
>>> beer
>>> breweries

for val in datasets.values():
    print(type(val))
>>> <class 'dict'>
>>> <class 'dict'>

The items() method combines both into one. When used in a loop, items() returns the key-value pairs as tuples. The first element of this tuple is the key, while the second is the value. We can use destructuring to get these elements into properly informative variable names. The first variable key will take the key in the tuple, while val will get the mapped value.

for key, val in datasets.items():
    print(f'The {key} data set has {len(val["data"])} rows.')
>>> The beer data set has 2410 rows.
>>> The breweries data set has 558 rows.

The above loop tells us that the beer data set is much bigger than the brewery data set. We would expect breweries to sell multiple types of beers, so there should be more beers than breweries overall. Our loop confirms this thought.

Currently, each of the data rows is a list, so referencing these elements by number is undesirable. Instead, we'll turn each of the data rows into its own dictionary, with the column name mapped to its actual value. This would make analyzing the data easier in the long run. We should do this operation on both data sets, so we'll leverage our looping techniques.

# Perform this operation for both beers and breweries data sets
for k, v in datasets.items():
    # Initialize a key-value pair to hold our reformatted data
    v["data_as_dicts"] = []
    # For every data row, create a new dictionary based on column names
    for row in v["data"]:
        data_row_dict = dict(zip(v["columns"], row))
        v["data_as_dicts"].append(data_row_dict)

There's a lot going on above, so we'll slowly break it down.

1. We loop through datasets to ensure we transform both the beer and breweries data.
2. Then, we create a new key called data_as_dicts mapped to an empty list which will hold our new dictionaries.
3. Then we start iterating over all the data, contained in the data key. zip() is a function that takes two or more lists and makes tuples based off these lists (a toy illustration of this step follows at the end of this section).
4. We take advantage of the zip() output and use dict() to create new data in our preferred form: column names mapped to their actual values.
5. Finally, we append each new dictionary to the data_as_dicts list.

The end result is better formatted data that is easier to read and come back to repeatedly. We can look at the end result below.

# The first data row in the beers data set
datasets["beer"]["data_as_dicts"][0]
>>> {'': '0', 'abv': '0.05', 'brewery_id': '408', 'ibu': '', 'id': '1436', 'name': 'Pub Beer', 'ounces': '12.0', 'style': 'American Pale Lager'}

# The first data row in its original form
datasets["beer"]["data"][0]
>>> ['0', '0.05', '', '1436', 'Pub Beer', 'American Pale Lager', '408', '12.0']

## Section summary

In this section, we learned how to use dictionaries with the for loop. Using loops, we reformatted each data row into dictionaries for enhanced readability. Our future selves will thank us later when we look back at the code we've written. We're now set up to perform our final operation: matching all the beers to their respective breweries.
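To make the zip()/dict() step in the loop above concrete, here is a toy illustration run on its own, using the first breweries row shown at the start of this tutorial (the renamed brewery_id column is assumed):

columns = ['brewery_id', 'name', 'city', 'state']
row = ['0', 'Northgate Brewing', 'Minneapolis', 'MN']

# zip() pairs each column name with the row value in the same position,
# and dict() turns those pairs into key-value pairs
dict(zip(columns, row))
>>> {'brewery_id': '0', 'name': 'Northgate Brewing', 'city': 'Minneapolis', 'state': 'MN'}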
Each of the beers has a brewery that it originates from, given by the brewery_id key in both data sets. We will create a whole new data set that matches all the beers to their brewery. We could use loops to accomplish this, but we have access to an advanced dictionary operation that could turn this data transformation from a multi-line loop into a single line of code.

# Dictionary comprehensions

Each beer in the beers data set is associated with a brewery_id, which is linked to a single brewery in breweries. Using this ID, we can pair up all of the beers with their brewery. It's generally a better idea to transform the raw data and place it in a new variable, rather than alter the raw data itself. Thus, we'll create another dictionary within datasets to hold our pairing. In this new dictionary, the brewery name itself is the key, the mapped value will be a list containing the names of all of the beers the brewery offers, and we will match them based on the brewery_id data element.

We can perform this matching just fine with the looping techniques we learned previously, but there still remains one last dictionary aspect to teach. Instead of a loop, we can perform the matching succinctly using a dictionary comprehension. A "comprehension" in computer science terms means to perform some task or function on all items of a collection (like a list). A dictionary comprehension is similar to a list comprehension in terms of syntax, but instead creates dictionaries from a base list. If you need a refresher on list comprehensions, you can check out this tutorial here.

To give a quick example, we'll use a dictionary comprehension to create a dictionary from a list of numbers.

nums = [1, 2, 3, 4, 5]
dict_comprehension = {
    str(n) : "The corresponding key is " + str(n) for n in nums
}

for val in dict_comprehension.values():
    print(val)
>>> The corresponding key is 1
The corresponding key is 2
The corresponding key is 3
The corresponding key is 4
The corresponding key is 5

We will dissect the dictionary comprehension code below. To create a dictionary comprehension, we wrap 3 elements in an opening and closing curly bracket:

1. A base list
2. What the key should be for each item from the base list
3. What the value should be for each item from the base list

nums forms the base list that the key-value pairs of dict_comprehension are based off of. The keys are stringified versions of each number (to differentiate them from list indexing), while the values are a string describing what the key is. This pet example is useless by itself, but serves to illustrate the somewhat complicated syntax of a dictionary comprehension. Now that we know how a dictionary comprehension is composed, we will see its real utility when we apply it to our beer and breweries data set.

We only need two pieces of information from the breweries data set to perform the matching:

1. The brewery name
2. The brewery ID

To start off, we'll create a list of tuples containing the name and ID for each brewery. Thanks to the reformatted data in the data_as_dicts key, this code is easy to write as a list comprehension.

# This list comprehension captures all of the brewery IDs and names from the breweries data
brewery_id_name_pairs = [
    (row["brewery_id"], row["name"]) for row in datasets["breweries"]["data_as_dicts"]
]

brewery_id_name_pairs is now a list of tuples and will form the base list of the dictionary comprehension. With this base list, we will use the brewery name as our key and a list comprehension as the value.
brewery_to_beers = {
    pair[1] : [b["name"] for b in datasets["beer"]["data_as_dicts"] if b["brewery_id"] == pair[0]]
    for pair in brewery_id_name_pairs
}

Before we discuss how this monster works, it's worth taking some time to see what the actual result is.

# Confirming that a dictionary comprehension creates a dictionary
type(brewery_to_beers)
>>> <class 'dict'>

# Let's see what the Angry Orchard Cider Company (a personal favorite) makes
brewery_to_beers["Angry Orchard Cider Company"]
>>> ["Angry Orchard Apple Ginger", "Angry Orchard Crisp Apple", "Angry Orchard Crisp Apple"]

As we did with the simple example, we will highlight the crucial parts of this unwieldy (albeit interesting) dictionary comprehension. If we break apart the code and highlight the specific parts, the structure behind the code becomes clearer. The key is taken from the appropriate part of each tuple in brewery_id_name_pairs. It is the mapped value that takes up most of the logic here. The value is a list comprehension with conditional logic. In plain English, the list comprehension will store any beers from the beer data when the beer's associated brewery_id matches the current brewery in the iteration. Another illustration below lays out the code for the list comprehension by its purpose.

Since we based the dictionary comprehension off of a list of all the breweries, the end result is what we wanted: a new dictionary that maps brewery names to all the beers each one sells! Now, we can just consult brewery_to_beers when we arrive at a brewery and find out instantly what they have! This section had some complicated code, but it's wholly within your grasp. If you're still having trouble, keep reviewing the syntax and try to make your own dictionary comprehensions. Before long, you'll have them in your coding arsenal.

We've covered a lot of ground on how to use dictionaries in this tutorial, but it's important to take a step back and look at why we might want to use (or not use) them. We've mentioned many times throughout that dictionaries increase the readability of our code. Being able to write out our own keys gives us flexibility and adds a layer of self-documentation. The less time it takes to understand what your code is doing, the easier it is to debug and the faster you can implement your analyses.

Aside from the human-centered advantages, there are also speed advantages. Looking up a key in a dictionary is fast. Computer scientists can measure how long a computer task (i.e. looking up a key or running an algorithm) will take by counting how many operations it takes to finish. They describe these times with Big-O notation. Some tasks are fast and run in constant time, while heftier tasks may require a number of operations that grows with the size of the input, in polynomial or even exponential time. In this case, looking up a key is done in constant time. Compare this to searching for the same item in a large list. The computer must look through each item in the list, so the time taken will scale with the length of the list. We call this linear time. If your list is exceptionally large, then looking for one item will take much longer than just assigning it to a key-value pair in a dictionary and looking for the key.

On a deeper level, a dictionary is an implementation of a hash table, an explanation of which is outside the scope of this article. What's important to know is that the benefits we get from a dictionary are essentially the benefits of the hash table itself: speedy key lookups and membership checks.
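To see the constant-time versus linear-time difference in practice, here is a small benchmark sketch (not from the original tutorial; the collection size and repeat count are arbitrary choices for illustration):

import timeit

n = 1_000_000
big_list = list(range(n))
big_dict = dict.fromkeys(range(n))

# Membership in a list scans element by element (linear time)...
list_time = timeit.timeit(lambda: (n - 1) in big_list, number=100)
# ...while membership in a dict is a hash lookup (roughly constant time)
dict_time = timeit.timeit(lambda: (n - 1) in big_dict, number=100)

print(f"list: {list_time:.4f}s  dict: {dict_time:.6f}s")

On a typical machine the dictionary check is several orders of magnitude faster, and the gap widens as n grows.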
We mentioned earlier that dictionaries are unordered, making them unsuitable data structures for data where order matters. Relative to other Python data structures, dictionaries also take up a lot more space, especially when you have a large number of keys. Given how cheap memory is, this disadvantage doesn't usually make itself apparent, but it's good to know about the overhead produced by dictionaries. We've only discussed vanilla dictionaries, but there are other implementations in Python that add additional functionality. I've included a link for further reading at the end. I hope that after reading this article, you will be more comfortable using dictionaries and finding uses for them in your own programming. Perhaps you have even found a beer you might want to try in the future!
# 2.6 Other types of equations  (Page 5/10)

Solve the following rational equation: $\frac{-4x}{x-1}+\frac{4}{x+1}=\frac{-8}{x^{2}-1}.$

We want all denominators in factored form to find the LCD. Two of the denominators cannot be factored further. However, $x^{2}-1=(x+1)(x-1).$ Then, the LCD is $(x+1)(x-1).$ Next, we multiply the whole equation by the LCD.

$\begin{array}{rcl}(x+1)(x-1)\left[\frac{-4x}{x-1}+\frac{4}{x+1}\right]&=&\left[\frac{-8}{(x+1)(x-1)}\right](x+1)(x-1)\\ -4x(x+1)+4(x-1)&=&-8\\ -4x^{2}-4x+4x-4&=&-8\\ -4x^{2}+4&=&0\\ -4(x^{2}-1)&=&0\\ -4(x+1)(x-1)&=&0\\ x&=&-1\\ x&=&1\end{array}$

In this case, either solution produces a zero in a denominator of the original equation. Thus, there is no solution.

Solve $\frac{3x+2}{x-2}+\frac{1}{x}=\frac{-2}{x^{2}-2x}.$

$x=-1;$ $x=0$ is not a solution.

Access these online resources for additional instruction and practice with different types of equations.

## Key concepts

• Rational exponents can be rewritten several ways depending on what is most convenient for the problem. To solve, both sides of the equation are raised to a power that will render the exponent on the variable equal to 1 (a short worked instance appears at the end of this section). See [link], [link], and [link].
• Factoring extends to higher-order polynomials when it involves factoring out the GCF or factoring by grouping. See [link] and [link].
• We can solve radical equations by isolating the radical and raising both sides of the equation to a power that matches the index. See [link] and [link].
• To solve absolute value equations, we need to write two equations, one for the positive value and one for the negative value. See [link].
• Equations in quadratic form are easy to spot, as the exponent on the first term is double the exponent on the second term and the third term is a constant. We may also see a binomial in place of the single variable. We use substitution to solve. See [link] and [link].

## Verbal

In a radical equation, what does it mean if a number is an extraneous solution?

An extraneous solution is not a solution to the original radical equation; it is a value produced by squaring (or otherwise raising to a power) both sides, a step that can introduce roots which do not satisfy the original equation.

Explain why possible solutions must be checked in radical equations.

Your friend tries to calculate the value $-9^{\frac{3}{2}}$ and keeps getting an ERROR message. What mistake is he or she probably making?

He or she is probably entering the expression as $(-9)^{\frac{3}{2}},$ which requires taking the square root of $-9,$ and that is not a real number. The negative sign sits in front of the exponentiation, so your friend should be taking the square root of 9, cubing it, and then putting the negative sign in front, resulting in $-27.$

Explain why $|2x+5|=-7$ has no solutions.

Explain how to change a rational exponent into the correct radical expression.

A rational exponent is a fraction: the denominator of the fraction is the root or index number and the numerator is the power to which it is raised.
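As a quick worked instance of the rational-exponent procedure noted in the key concepts above (the equation is chosen for illustration and is not one of the exercises below): to solve $x^{\frac{2}{3}}=9,$ raise both sides to the reciprocal power $\frac{3}{2},$ remembering that the even root introduces a $\pm$ sign:

$x^{\frac{2}{3}}=9\;\Rightarrow\;\left(x^{\frac{2}{3}}\right)^{\frac{3}{2}}=\pm\,9^{\frac{3}{2}}\;\Rightarrow\;x=\pm 27.$

Checking both candidates, $(\pm 27)^{\frac{2}{3}}=\left((\pm 27)^{\frac{1}{3}}\right)^{2}=(\pm 3)^{2}=9,$ so neither solution is extraneous.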
## Algebraic

For the following exercises, solve the rational exponent equation. Use factoring where necessary.

$x^{\frac{2}{3}}=16$

$x^{\frac{3}{4}}=27$
$x=81$

$2x^{\frac{1}{2}}-x^{\frac{1}{4}}=0$

$(x-1)^{\frac{3}{4}}=8$
$x=17$

$(x+1)^{\frac{2}{3}}=4$

$x^{\frac{2}{3}}-5x^{\frac{1}{3}}+6=0$

$x^{\frac{7}{3}}-3x^{\frac{4}{3}}-4x^{\frac{1}{3}}=0$

For the following exercises, solve the following polynomial equations by grouping and factoring.

$x^{3}+2x^{2}-x-2=0$
$x=-2,1,-1$

$3x^{3}-6x^{2}-27x+54=0$

$4y^{3}-9y=0$

$x^{3}+3x^{2}-25x-75=0$

$m^{3}+m^{2}-m-1=0$
$m=1,-1$

$2x^{5}-14x^{3}=0$

$5x^{3}+45x=2x^{2}+18$
$x=\frac{2}{5}$

For the following exercises, solve the radical equation. Be sure to check all solutions to eliminate extraneous solutions.

$\sqrt{3x-1}-2=0$

$\sqrt{x-7}=5$
$x=32$

$\sqrt{x-1}=x-7$

$\sqrt{3t+5}=7$
$t=\frac{44}{3}$

$\sqrt{t+1}+9=7$

$\sqrt{12-x}=x$
$x=3$

$\sqrt{2x+3}-\sqrt{x+2}=2$

$\sqrt{3x+7}+\sqrt{x+2}=1$
$x=-2$

$\sqrt{2x+3}-\sqrt{x+1}=1$

For the following exercises, solve the equation involving absolute value.

$|3x-4|=8$
$x=4,\frac{-4}{3}$

$|2x-3|=-2$

$|1-4x|-1=5$
$x=\frac{-5}{4},\frac{7}{4}$

$|4x+1|-3=6$

$|2x-1|-7=-2$
$x=3,-2$

$|2x+1|-2=-3$

$|x+5|=0$
$x=-5$

$-|2x+1|=-3$

For the following exercises, solve the equation by identifying the quadratic form. Use a substitute variable and find all real solutions by factoring.

$x^{4}-10x^{2}+9=0$
$x=1,-1,3,-3$

$4(t-1)^{2}-9(t-1)=-2$

$(x^{2}-1)^{2}+(x^{2}-1)-12=0$
$x=2,-2$

$(x+1)^{2}-8(x+1)-9=0$

$(x-3)^{2}-4=0$
$x=1,5$

## Extensions

For the following exercises, solve for the unknown variable.

$x^{-2}-x^{-1}-12=0$

$\sqrt{|x|^{2}}=x$
All real numbers

$t^{10}-t^{5}+1=0$

$|x^{2}+2x-36|=12$
$x=4,6,-6,-8$

## Real-world applications

For the following exercises, use the model for the period of a pendulum, $T,$ such that $T=2\pi\sqrt{\frac{L}{g}},$ where the length of the pendulum is L and the acceleration due to gravity is $g.$

If the acceleration due to gravity is 9.8 m/s² and the period equals 1 s, find the length to the nearest cm (100 cm = 1 m).

If the gravity is 32 ft/s² and the period equals 1 s, find the length to the nearest in. (12 in. = 1 ft). Round your answer to the nearest in.
10 in.

For the following exercises, use a model for body surface area, BSA, such that $BSA=\sqrt{\frac{wh}{3600}},$ where w = weight in kg and h = height in cm.

Find the height of a 72-kg female to the nearest cm whose $BSA=1.8.$

Find the weight of a 177-cm male to the nearest kg whose $BSA=2.1.$
90 kg
### Forces and Laws of Motion - Revision Notes

CBSE Class 9 Science Revision Notes
CHAPTER – 9 Force & Laws Of Motion

1. Balanced and Unbalanced Forces

Balanced Forces: When two or more forces are applied to the same object at the same time, the combined applied forces are called the net force. If the force I apply in one direction and the force you apply in the opposite direction are equal in size, they add to a net force of zero; because the forces are equal and balanced, the object's motion does not change.

Unbalanced Forces: A force is applied in one direction and either a smaller or a larger force is applied in the opposite direction, or no force is applied at all in the opposite direction. If I have a chair and I push on one side of it with a force of 50 N and you push on the other side with a force of 25 N, will the chair move? Which way will it move? In the direction in which the greater force is applied. What is the net force? 50 N - 25 N = 25 N.

2. Laws of Motion

Newton's First Law
1st Law – An object at rest will stay at rest, and an object in motion will stay in motion at constant velocity, unless acted upon by an unbalanced force.

Newton's Second Law
"If the net force on an object is not zero, the object will accelerate. The direction of the acceleration is the same as the direction of the net force. The magnitude of the acceleration is directly proportional to the net force applied, and inversely proportional to the mass of the object." Mathematical symbols provide a convenient shorthand for all of this:

$a=\frac{F_{net}}{m}\quad\text{or}\quad F_{net}=ma$

The Effect of Mass
A force applied to an automobile will not have the same effect as the same force applied to a pencil. An automobile resists accelerating much more than a pencil does, because it has more inertia, or mass. The acceleration of an object depends not only on how hard you push on it, but also on how much the object resists being pushed.

What is the effect of mass on acceleration? This, too, turns out to be quite simple (I wonder why...). For the same force, an object with twice the mass will have half the acceleration. If it had three times the mass, the same force will produce one-third the acceleration. Four times the mass gives one-fourth of the acceleration, and so on. This type of relationship between quantities (double one, get half the other) is called an inverse proportion or inverse variation.

In other words, Newton's Second Law of Motion says the acceleration of an object depends upon both force and mass. Thus, if two colliding objects have unequal mass, they will have unequal accelerations as a result of the contact force which results during the collision.

Newton's Third Law
Newton's Third Law is stated as: For every action there is an equal and opposite reaction. "Action...Reaction" means that forces always occur in pairs. (Forces are interactions between objects, like conversations are interactions between people.) Single, isolated forces never happen. The two forces involved are called the "action force" and the "reaction force." These names are unfortunate for a couple of reasons: either force in an interaction can be the "action" force or the "reaction" force, and the action and reaction forces exist at the same time.

"Equal" means both forces are exactly the same size; they are equal in magnitude. Both forces exist at exactly the same time: they both start at exactly the same instant, and they both stop at exactly the same instant. They are equal in time.

"Opposite" means that the two forces always act in opposite directions, exactly $180^{\circ}$ apart.

Newton's third law of motion: In every interaction, there is a pair of forces acting on the two interacting objects. The size of the force on the first object equals the size of the force on the second object. The direction of the force on the first object is opposite to the direction of the force on the second object. Forces always come in pairs: equal and opposite action-reaction force pairs.

Newton's third law of motion applied to collisions between two objects: In a collision between two objects, both objects experience forces which are equal in magnitude and opposite in direction. Such forces cause one object to speed up (gain momentum) and the other object to slow down (lose momentum). According to Newton's third law, the forces on the two objects are equal in magnitude.

3. Inertia and Mass

Inertia is the tendency of an object to resist any change in its motion. An object will continue to move at the same speed in the same direction unless acted upon by an unbalanced force. A bowling ball rolled down the road would eventually come to a stop: friction is an unbalanced force that causes the ball to stop or slow down, and without friction the ball would keep going. Mass is the amount of matter in an object. A bowling ball has more mass than a tennis ball. The greater the mass of an object, the greater its inertia. Mass is the measurement of inertia.

4. Conservation of Momentum

Law of Conservation of Momentum: In a closed system, the vector sum of the momenta before and after an impact must be equal.

$m_{1}v_{1}+m_{2}v_{2}=m_{1}v_{1}^{\prime}+m_{2}v_{2}^{\prime}$

Internal and External Forces
#### Term 1 Coordinate Geometry 5 Marks Questions

9th Standard Maths
Time: 01:00:00 Hrs
Total Marks: 50
10 x 5 = 50

1. Show that the points A(7, 10), B(-2, 5), C(3, -4) are the vertices of a right angled triangle.
2. Prove that the points A(3, 5), B(6, 2), C(3, -1), and D(0, 2) taken in order are the vertices of a square.
3. Let A(2, 2), B(8, –4) be two given points in a plane. If a point P lies on the X-axis (on the positive side) and divides AB in the ratio 1 : 2, then find the coordinates of P.
4. Show that (4, 3) is the centre of the circle passing through the points (9, 3), (7, –1), (–1, 3). Find the radius.
5. Find the value of 'a' such that PQ = QR where P, Q, and R are the points whose coordinates are (6, –1), (1, 3) and (a, 8) respectively.
6. Show that the given points (1, 1), (5, 4), (-2, 5) are the vertices of an isosceles right angled triangle.
7. Show that the points (3, -2), (3, 2), (-1, 2) and (-1, -2) taken in order are the vertices of a square.
8. Show that the points A(3, 7), B(6, 5) and C(15, -1) are collinear.
9. Find the type of triangle formed by (-1, -1), (1, 1) and ($-\sqrt{13},\sqrt{13}$).
10. Find x such that PQ = QR where P(6, -1), Q(1, 3) and R(x, 8).
## What you’ll learn to do: Evaluate logarithms and convert between logarithmic and exponential form

Devastation of the March 11, 2011 earthquake in Honshu, Japan. (credit: Daniel Pierce)

In 2010, a major earthquake struck Haiti, destroying or damaging over 285,000 homes.[1] One year later, another, stronger earthquake devastated Honshu, Japan, destroying or damaging over 332,000 buildings[2] like those shown in the picture below. Even though both caused substantial damage, the earthquake in 2011 was 100 times stronger than the earthquake in Haiti. How do we know? The magnitudes of earthquakes are measured on a scale known as the Richter Scale. The Haitian earthquake registered a 7.0 on the Richter Scale[3] whereas the Japanese earthquake registered a 9.0.[4]

The Richter Scale is a base-ten logarithmic scale. In other words, an earthquake of magnitude 8 is not twice as great as an earthquake of magnitude 4. It is $10^{8-4}=10^{4}=10,000$ times as great! In this lesson, we will investigate the nature of the Richter Scale and the base-ten function upon which it depends.
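Applying the same base-ten reasoning to the two earthquakes above reproduces the comparison quoted earlier: $10^{9.0-7.0}=10^{2}=100,$ so the Honshu earthquake was 100 times as strong as the Haitian earthquake on the Richter Scale.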
# Quantum field theory

In theoretical physics, quantum field theory (QFT) is the theoretical framework for constructing quantum mechanical models of subatomic particles in particle physics and quasiparticles in condensed matter physics. QFT treats particles as excited states of the underlying physical field, so these are called field quanta. In quantum field theory, quantum mechanical interactions among particles are described by interaction terms among the corresponding underlying quantum fields. These interactions are conveniently visualized by Feynman diagrams, which are a formal tool of relativistically covariant perturbation theory, serving to evaluate particle processes.

## History

Even though QFT is an unavoidable consequence of the reconciliation of quantum mechanics with special relativity (Weinberg (2005)), historically, it emerged in the 1920s with the quantization of the electromagnetic field (the quantization being based on an analogy of the eigenmode expansion of a vibrating string with fixed endpoints).

### Early development

Max Born (1882–1970), one of the founders of quantum field theory. He is also known for the Born rule that introduced the probabilistic interpretation in quantum mechanics. He received the 1954 Nobel Prize in Physics together with Walther Bothe.

The first achievement of quantum field theory, namely quantum electrodynamics (QED), is “still the paradigmatic example of a successful quantum field theory” (Weinberg (2005)). Ordinarily, quantum mechanics (QM) cannot give an account of photons, which constitute the prime case of relativistic ‘particles’. Since photons have rest mass zero, and correspondingly travel in the vacuum at the speed c, a non-relativistic theory such as ordinary QM cannot give even an approximate description. Photons are implicit in the emission and absorption processes which have to be postulated; for instance, when one of an atom's electrons makes a transition between energy levels. The formalism of QFT is needed for an explicit description of photons. In fact most topics in the early development of quantum theory (the so-called old quantum theory, 1900–25) were related to the interaction of radiation and matter and thus should be treated by quantum field theoretical methods. However, quantum mechanics as formulated by Dirac, Heisenberg, and Schrödinger in 1926–27 started from atomic spectra and did not focus much on problems of radiation.

As soon as the conceptual framework of quantum mechanics was developed, a small group of theoreticians tried to extend quantum methods to electromagnetic fields. A good example is the famous paper by Born, Jordan & Heisenberg (1926). (P. Jordan was especially acquainted with the literature on light quanta and made seminal contributions to QFT.) The basic idea was that in QFT the electromagnetic field should be represented by matrices in the same way that position and momentum were represented in QM by matrices (matrix mechanics oscillator operators). The ideas of QM were thus extended to systems having an infinite number of degrees of freedom, i.e., an infinite array of quantum oscillators.

The inception of QFT is usually considered to be Dirac's famous 1927 paper on “The quantum theory of the emission and absorption of radiation”.[1] Here Dirac coined the name “quantum electrodynamics” (QED) for the part of QFT that was developed first.
Dirac supplied a systematic procedure for transferring the characteristic quantum phenomenon of discreteness of physical quantities from the quantum-mechanical treatment of particles to a corresponding treatment of fields. Employing the theory of the quantum harmonic oscillator, Dirac gave a theoretical description of how photons appear in the quantization of the electromagnetic radiation field. Later, Dirac's procedure became a model for the quantization of other fields as well. These first approaches to QFT were further developed during the following three years. P. Jordan introduced creation and annihilation operators for fields obeying Fermi–Dirac statistics. These differ from the corresponding operators for Bose–Einstein statistics in that the former satisfy anti-commutation relations while the latter satisfy commutation relations.

The methods of QFT could be applied to derive equations resulting from the quantum-mechanical (field-like) treatment of particles, e.g. the Dirac equation, the Klein–Gordon equation and the Maxwell equations. Schweber points out[2] that the idea and procedure of second quantization go back to Jordan, in a number of papers from 1927,[3] while the expression itself was coined by Dirac. Some difficult problems concerning commutation relations, statistics, and Lorentz invariance were eventually solved. The first comprehensive account of a general theory of quantum fields, in particular, the method of canonical quantization, was presented by Heisenberg & Pauli in 1929. Whereas Jordan's second quantization procedure applied to the coefficients of the normal modes of the field, Heisenberg & Pauli started with the fields themselves and subjected them to the canonical procedure. Heisenberg and Pauli thus established the basic structure of QFT as presented in modern introductions to QFT. Fermi and Dirac, as well as Fock and Podolsky, presented different formulations which played a heuristic role in the following years.

Quantum electrodynamics rests on two pillars; see, e.g., the short and lucid “Historical Introduction” of Scharf (2014). The first pillar is the quantization of the electromagnetic field, i.e., it is about photons as the quantized excitations or 'quanta' of the electromagnetic field. This procedure will be described in some more detail in the section on the particle interpretation. As Weinberg points out, the “photon is the only particle that was known as a field before it was detected as a particle”, so it is natural that QED began with the analysis of the radiation field.[4] The second pillar of QED consists of the relativistic theory of the electron, centered on the Dirac equation.

### The problem of infinities

#### The emergence of infinities

Pascual Jordan (1902–1980), doctoral student of Max Born, was a pioneer in quantum field theory, coauthoring a number of seminal papers with Born and Heisenberg. Jordan algebras were introduced by him to formalize the notion of an algebra of observables in quantum mechanics. He was awarded the Max Planck medal in 1954.

Quantum field theory started with a theoretical framework that was built in analogy to quantum mechanics. Although there was no unique and fully developed theory, quantum field theoretical tools could be applied to concrete processes. Examples are the scattering of radiation by free electrons, Compton scattering, the collision between relativistic electrons or the production of electron-positron pairs by photons.
Calculations to the first order of approximation were quite successful, but most people working in the field thought that QFT still had to undergo a major change. On the one side, some calculations of effects for cosmic rays clearly differed from measurements. On the other side and, from a theoretical point of view, more threatening, calculations of higher orders of the perturbation series led to infinite results. The self-energy of the electron as well as vacuum fluctuations of the electromagnetic field seemed to be infinite. The perturbation expansions did not converge to a finite sum and even most individual terms were divergent.

The various forms of infinities suggested that the divergences were more than failures of specific calculations. Many physicists tried to avoid the divergences by formal tricks (truncating the integrals at some value of momentum, or even ignoring infinite terms) but such rules were not reliable, violated the requirements of relativity and were not considered satisfactory. Others came up with the first ideas for coping with infinities by a redefinition of the parameters of the theory and using a measured finite value, for example of the charge of the electron, instead of the infinite 'bare' value. This process is called renormalization.

From the point of view of the philosophy of science, it is remarkable that these divergences did not give enough reason to discard the theory. The years from 1930 to the beginning of World War II were characterized by a variety of attitudes towards QFT. Some physicists tried to circumvent the infinities by more-or-less arbitrary prescriptions, others worked on transformations and improvements of the theoretical framework. Most of the theoreticians believed that QED would break down at high energies. There was also a considerable number of proposals in favor of alternative approaches. These proposals included changes in the basic concepts, e.g. negative probabilities and interactions at a distance instead of a field theoretical approach, and a methodological change to phenomenological methods that focus on relations between observable quantities without an analysis of the microphysical details of the interaction, the so-called S-matrix theory where the basic elements are amplitudes for various scattering processes.

Despite the feeling that QFT was imperfect and lacking rigor, its methods were extended to new areas of applications. In 1933 Fermi's theory of beta decay started with conceptions describing the emission and absorption of photons, transferred them to beta radiation and analyzed the creation and annihilation of electrons and neutrinos described by the weak interaction. Further applications of QFT outside of quantum electrodynamics succeeded in nuclear physics with the strong interaction. In 1934 Pauli & Weisskopf showed that a new type of field (the scalar field), described by the Klein–Gordon equation, could be quantized. This is another example of second quantization. This new theory for matter fields could be applied a decade later when new particles, pions, were detected.

#### The taming of infinities

Werner Heisenberg (1901–1976), doctoral student of Arnold Sommerfeld, was one of the founding fathers of quantum mechanics and QFT. In particular, he introduced the version of quantum mechanics known as matrix mechanics, but is now better known for the Heisenberg uncertainty relations. He was awarded the Nobel Prize in Physics for 1932; Erwin Schrödinger and Paul Dirac shared the prize for 1933.
After the end of World War II more reliable and effective methods for dealing with infinities in QFT were developed, namely coherent and systematic rules for performing relativistic field theoretical calculations, and a general renormalization theory. At three famous conferences, the Shelter Island Conference 1947, the Pocono Conference 1948, and the 1949 Oldstone Conference, developments in theoretical physics were confronted with relevant new experimental results.

In the late forties, there were two different ways to address the problem of divergences. One of these was discovered by Richard Feynman, the other one (based on an operator formalism) by Julian Schwinger and, independently, by Sin-Itiro Tomonaga. In 1949, Freeman Dyson showed that the two approaches are in fact equivalent and fit into an elegant field-theoretic framework. Thus, Freeman Dyson, Feynman, Schwinger, and Tomonaga became the inventors of renormalization theory. The most spectacular successes of renormalization theory were the calculations of the anomalous magnetic moment of the electron and the Lamb shift in the spectrum of hydrogen. These successes were so outstanding because the theoretical results were in better agreement with high-precision experiments than anything in physics encountered before. Nevertheless, mathematical problems lingered on and prompted a search for rigorous formulations (discussed below).

The rationale behind renormalization is to avoid divergences that appear in physical predictions by shifting them into a part of the theory where they do not influence empirical statements. Dyson could show that a rescaling of charge and mass ('renormalization') is sufficient to remove all divergences in QED consistently, to all orders of perturbation theory. A QFT is called renormalizable if all infinities can be absorbed into a redefinition of a finite number of coupling constants and masses. A consequence is that the physical charge and mass of the electron must be measured and cannot be computed from first principles. Perturbation theory yields well-defined predictions only in renormalizable quantum field theories; luckily, QED, the first fully developed QFT, belonged to this class of renormalizable theories. There are various technical procedures to renormalize a theory. One way is to cut off the integrals in the calculations at a certain value Λ of the momentum which is large but finite. This cut-off procedure is successful if, after taking the limit Λ → ∞, the resulting quantities are independent of Λ.[5]

Richard Feynman (1918–1988). His 1942 PhD thesis developed the path integral formulation of ordinary quantum mechanics. This was later generalized to field theory.

Feynman's formulation of QED is of special interest from a philosophical point of view. His so-called space-time approach is visualized by the celebrated Feynman diagrams that appear to depict paths of particles. Feynman's method of calculating scattering amplitudes is based on the functional integral formulation of field theory.[6] A set of graphical rules can be derived so that the probability of a specific scattering process can be calculated by drawing a diagram of that process and then using that diagram to write down the precise mathematical expressions for calculating its amplitude in relativistically covariant perturbation theory.
The diagrams provide an effective way to organize and visualize the various terms in the perturbation series, and they naturally account for the flow of electrons and photons during the scattering process. External lines in the diagrams represent incoming and outgoing particles, internal lines are connected with virtual particles and vertices with interactions. Each of these graphical elements is associated with mathematical expressions that contribute to the amplitude of the respective process. The diagrams are part of Feynman's very efficient and elegant algorithm for computing the probability of scattering processes. The idea of particles traveling from one point to another was heuristically useful in constructing the theory. This heuristic, based on Huygens' principle, is useful for concrete calculations and actually gives the correct particle propagators as derived more rigorously.[7] Nevertheless, an analysis of the theoretical justification of the space-time approach shows that its success does not imply that particle paths need be taken seriously. General arguments against a particle interpretation of QFT clearly exclude that the diagrams represent actual paths of particles in the interaction area. Feynman himself was not particularly interested in ontological questions.
### The golden age: Gauge theory and the standard model
Chen-Ning Yang (b. 1922), co-inventor of nonabelian gauge field theories. Murray Gell-Mann (b. 1929), articulator and pioneer of group symmetry in QFT. In 1933, Enrico Fermi had already established that the creation, annihilation and transmutation of particles in the weak interaction beta decay could best be described in QFT,[8] specifically his quartic fermion interaction. As a result, field theory had become a prospective tool for other particle interactions. By the beginning of the 1950s, QED had become a reliable theory which no longer counted as preliminary. However, it took two decades from writing down the first equations until QFT could be applied successfully to important physical problems in a systematic way. The theories explored relied on—indeed, were virtually fully specified by—a rich variety of symmetries pioneered and articulated by Murray Gell-Mann.[9] The new developments made it possible to apply QFT to new particles and new interactions and fully explain their structure. In the following decades, QFT was extended to describe not only the electromagnetic force but also the weak and strong interactions, so that new Lagrangians were found which contain new classes of particles or quantum fields. The search still continues for a more comprehensive theory of matter and energy, a unified theory of all interactions. Yoichiro Nambu (1921–2015), co-discoverer of field-theoretic spontaneous symmetry breaking. The new focus on symmetry led to the triumph of non-Abelian gauge theories (the development of such theories was pioneered in 1954 with the work of Yang and Mills) and spontaneous symmetry breaking (by Yoichiro Nambu). Today, there are reliable theories of the strong, weak, and electromagnetic interactions of elementary particles which have an analogous structure to QED: they are the dominant framework of particle physics.
A combined renormalizable theory associated with the gauge group SU(3) × SU(2) × U(1) is dubbed the standard model of elementary particle physics (even though it is a full theory, and not just a model). It was assembled by Sheldon Glashow, Steven Weinberg and Abdus Salam in 1968, and Frank Wilczek, David Gross and David Politzer in 1973, on the basis of conceptual breakthroughs by Peter Higgs, François Englert, Robert Brout, Martin Veltman, and Gerard 't Hooft. Gerard 't Hooft (b. 1946) proved gauge field theories are renormalizable. According to the standard model, there are, on the one hand, six types of leptons (e.g. the electron and its neutrino) and six types of quarks, where the members of both groups are all fermions with spin 1/2. On the other hand, there are spin 1 particles (thus bosons) that mediate the interaction between elementary particles and the fundamental forces, namely the photon for the electromagnetic interaction, two W bosons and one Z boson for the weak interaction, and the gluons for the strong interaction.[10] The linchpin of the symmetry-breaking mechanism of the theory is the spin 0 Higgs boson, discovered nearly half a century after its prediction.
### Renormalization group
Ken Wilson (1936–2013), Nobel laureate. He constructed the over-arching picture of the renormalization group which underlies the workings of all QFTs across all scales. Parallel breakthroughs in the understanding of phase transitions in condensed matter physics led to novel insights based on the renormalization group. They involved the work of Leo Kadanoff (1966) and Michael Fisher (1973), which led to the seminal reformulation of quantum field theory by Ken Wilson in 1975. This reformulation provided insights into the evolution of effective field theories with scale, which classified all field theories, renormalizable or not (cf. subsequent section). The remarkable conclusion is that, in general, most observables are "irrelevant", i.e., the macroscopic physics is dominated by only a few observables in most systems. During the same period, Kadanoff (1969) introduced an operator algebra formalism for the two-dimensional Ising model, a widely studied mathematical model of ferromagnetism in statistical physics. This development suggested that quantum field theory describes its scaling limit. Later, there developed the idea that a finite number of generating operators could represent all the correlation functions of the Ising model.
#### Conformal field theory
The existence of a much stronger symmetry for the scaling limit of two-dimensional critical systems was suggested by Alexander Belavin, Alexander Polyakov and Alexander Zamolodchikov in 1984, which eventually led to the development of conformal field theory,[11] a special case of quantum field theory, which is presently utilized in different areas of particle physics and condensed matter physics.
### Historiography
The first chapter in Weinberg (2005) is a very good short description of the earlier history of QFT. Detailed accounts of the historical development of QFT can be found, e.g., in Darrigol (1986), Schweber (1994) and Cao (1997a). Various historical and conceptual studies of the standard model are gathered in Hoddeson et al. (1997) and of renormalization theory in Brown (1993).
## Varieties of approaches
There is currently no complete quantum theory of the remaining fundamental force, gravity. Many of the proposed theories to describe gravity as a QFT postulate the existence of a graviton particle that mediates the gravitational force.
Presumably, the as yet unknown correct quantum field-theoretic treatment of the gravitational field will behave like Einstein's general theory of relativity in the low-energy limit. Quantum field theory of the fundamental forces itself has been postulated to be the low-energy effective field theory limit of a more fundamental theory such as superstring theory. Most theories in standard particle physics are formulated as relativistic quantum field theories, such as QED, QCD, and the Standard Model. QED, the quantum field-theoretic description of the electromagnetic field, approximately reproduces Maxwell's theory of electrodynamics in the low-energy limit, with small non-linear corrections to the Maxwell equations required due to virtual electron–positron pairs. In the perturbative approach to quantum field theory, the full field interaction terms are approximated as a perturbative expansion in the number of particles involved. Each term in the expansion can be thought of as forces between particles being mediated by other particles. In QED, the electromagnetic force between two electrons is caused by an exchange of photons. Similarly, intermediate vector bosons mediate the weak force and gluons mediate the strong force in QCD. The notion of a force-mediating particle comes from perturbation theory, and does not make sense in the context of non-perturbative approaches to QFT, such as with bound states. ## Definition Quantum electrodynamics (QED) has one electron field and one photon field; quantum chromodynamics (QCD) has one field for each type of quark; and, in condensed matter, there is an atomic displacement field that gives rise to phonon particles. Edward Witten describes QFT as "by far" the most difficult theory in modern physics[12] – "so difficult that nobody fully believed it for 25 years."[13] ### Dynamics Ordinary quantum mechanical systems have a fixed number of particles, with each particle having a finite number of degrees of freedom. In contrast, the excited states of a quantum field can represent any number of particles. This makes quantum field theories especially useful for describing systems where the particle count/number may change over time, a crucial feature of relativistic dynamics. A QFT is thus an organized infinite array of oscillators. ### States QFT interaction terms are similar in spirit to those between charges with electric and magnetic fields in Maxwell's equations. However, unlike the classical fields of Maxwell's theory, fields in QFT generally exist in quantum superpositions of states and are subject to the laws of quantum mechanics. Because the fields are continuous quantities over space, there exist excited states with arbitrarily large numbers of particles in them, providing QFT systems with effectively an infinite number of degrees of freedom. Infinite degrees of freedom can easily lead to divergences of calculated quantities (e.g., the quantities become infinite). Techniques such as renormalization of QFT parameters or discretization of spacetime, as in lattice QCD, are often used to avoid such infinities so as to yield physically plausible results. The gravitational field and the electromagnetic field are the only two fundamental fields in nature that have infinite range and a corresponding classical low-energy limit, which greatly diminishes and hides their "particle-like" excitations. Albert Einstein in 1905, attributed "particle-like" and discrete exchanges of momenta and energy, characteristic of "field quanta", to the electromagnetic field. 
Originally, his principal motivation was to explain the thermodynamics of radiation. Although the photoelectric effect and Compton scattering strongly suggest the existence of the photon, they might alternatively be explained by a mere quantization of emission; more definitive evidence of the quantum nature of radiation is now taken up into modern quantum optics, as in the antibunching effect.[14]
## Principles
### Classical and quantum fields
A classical field is a function defined over some region of space and time.[15] Two physical phenomena which are described by classical fields are Newtonian gravitation, described by the Newtonian gravitational field g(x, t), and classical electromagnetism, described by the electric and magnetic fields E(x, t) and B(x, t). Because such fields can in principle take on distinct values at each point in space, they are said to have infinite degrees of freedom.[15] Classical field theory does not, however, account for the quantum-mechanical aspects of such physical phenomena. For instance, it is known from quantum mechanics that certain aspects of electromagnetism involve discrete particles (photons) rather than continuous fields. The business of quantum field theory is to write down a field that is, like a classical field, a function defined over space and time, but which also accommodates the observations of quantum mechanics. This is a quantum field. To write down such a quantum field, one promotes the infinity of classical oscillators representing the modes of the classical fields to quantum harmonic oscillators. They thus become operator-valued functions (actually, distributions).[16] (In its most general formulation, quantum mechanics is a theory of abstract operators (observables) acting on an abstract state space (Hilbert space), where the observables represent physically observable quantities and the state space represents the possible states of the system under study.[17] For instance, the fundamental observables associated with the motion of a single quantum mechanical particle are the position and momentum operators $\hat{x}$ and $\hat{p}$. Field theory, by sharp contrast, treats x as a label, an index of the field, rather than as an operator.[18]) There are two common ways of handling a quantum field: canonical quantization and the path integral formalism.[19] The latter of these is pursued in this article.
#### Lagrangian formalism
Quantum field theory relies on the Lagrangian formalism from classical field theory. This formalism is analogous to the Lagrangian formalism used in classical mechanics to solve for the motion of a particle under the influence of a field. In classical field theory, one writes down a Lagrangian density, $\mathcal{L}$, involving a field, φ(x, t), and possibly its first derivatives (∂φ/∂t and ∇φ), and then applies a field-theoretic form of the Euler–Lagrange equation. Writing coordinates (t, x) = (x0, x1, x2, x3) = xμ, this form of the Euler–Lagrange equation is[15] $\partial_\mu \left( \frac{\partial \mathcal{L}}{\partial (\partial_\mu \varphi)} \right) - \frac{\partial \mathcal{L}}{\partial \varphi} = 0$, where a sum over μ is performed according to the rules of Einstein notation. By solving this equation, one arrives at the "equations of motion" of the field.[15] For example, if one begins with the Lagrangian density $\mathcal{L} = -\rho(t, x)\,\varphi(t, x) - \tfrac{1}{8\pi G}(\nabla \varphi)^2$ and then applies the Euler–Lagrange equation, one obtains the equation of motion $\nabla^2 \varphi = 4\pi G \rho$. This equation is Newton's law of universal gravitation, expressed in differential form in terms of the gravitational potential φ(t, x) and the mass density ρ(t, x). Despite the nomenclature, the "field" under study is the gravitational potential, φ, rather than the gravitational field, g.
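To make the variational step in this example explicit, here is a short worked sketch, assuming the standard textbook choice $\mathcal{L} = -\rho\varphi - \tfrac{1}{8\pi G}(\nabla\varphi)^2$ quoted above (since this $\mathcal{L}$ contains no time derivatives of φ, only the spatial part of the Euler–Lagrange equation contributes): $\frac{\partial \mathcal{L}}{\partial \varphi} = -\rho$ and $\frac{\partial \mathcal{L}}{\partial (\nabla \varphi)} = -\frac{1}{4\pi G} \nabla \varphi$, so the Euler–Lagrange equation becomes $\nabla \cdot \left( -\frac{1}{4\pi G} \nabla \varphi \right) - (-\rho) = 0$, i.e. $\nabla^2 \varphi = 4\pi G \rho$, which is the Poisson (differential) form of Newton's law referred to in the text.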
Similarly, when classical field theory is used to study electromagnetism, the "field" of interest is the electromagnetic four-potential (V/c, A), rather than the electric and magnetic fields E and B. Quantum field theory uses this same Lagrangian procedure to determine the equations of motion for quantum fields. These equations of motion are then supplemented by commutation relations derived from the canonical quantization procedure described below, thereby incorporating quantum mechanical effects into the behavior of the field. ### Single- and many-particle quantum mechanics In non-relativistic quantum mechanics, a particle (such as an electron or proton) is described by a complex wavefunction, ψ(x, t), whose time-evolution is governed by the Schrödinger equation: Here m is the particle's mass and V(x) is the applied potential. Physical information about the behavior of the particle is extracted from the wavefunction by constructing expected values for various quantities; for example, the expected value of the particle's position is given by integrating ψ*(x) x ψ(x) over all space, and the expected value of the particle's momentum is found by integrating ψ*(x)dψ/dx. The quantity ψ*(x)ψ(x) is itself in the Copenhagen interpretation of quantum mechanics interpreted as a probability density function. This treatment of quantum mechanics, where a particle's wavefunction evolves against a classical background potential V(x), is sometimes called first quantization. This description of quantum mechanics can be extended to describe the behavior of multiple particles, so long as the number and the type of particles remain fixed. The particles are described by a wavefunction ψ(x1, x2, , xN, t), which is governed by an extended version of the Schrödinger equation. Often one is interested in the case where N particles are all of the same type (for example, the 18 electrons orbiting a neutral argon nucleus). As described in the article on identical particles, this implies that the state of the entire system must be either symmetric (bosons) or antisymmetric (fermions) when the coordinates of its constituent particles are exchanged. This is achieved by using a Slater determinant as the wavefunction of a fermionic system (and a Slater permanent for a bosonic system), which is equivalent to an element of the symmetric or antisymmetric subspace of a tensor product. For example, the general quantum state of a system of N bosons is written as where are the single-particle states, Nj is the number of particles occupying state j, and the sum is taken over all possible permutations p acting on N elements. In general, this is a sum of N! (N factorial) distinct terms. is a normalizing factor. There are several shortcomings to the above description of quantum mechanics, which are addressed by quantum field theory. First, it is unclear how to extend quantum mechanics to include the effects of special relativity.[20] Attempted replacements for the Schrödinger equation, such as the Klein–Gordon equation or the Dirac equation, have many unsatisfactory qualities; for instance, they possess energy eigenvalues that extend to ∞, so that there seems to be no easy definition of a ground state. It turns out that such inconsistencies arise from relativistic wavefunctions not having a well-defined probabilistic interpretation in position space, as probability conservation is not a relativistically covariant concept. 
The second shortcoming, related to the first, is that in quantum mechanics there is no mechanism to describe particle creation and annihilation;[21] this is crucial for describing phenomena such as pair production, which result from the conversion between mass and energy according to the relativistic relation E = mc2. ### Second quantization Main article: Second quantization In this section, we will describe a method for constructing a quantum field theory called second quantization. This basically involves choosing a way to index the quantum mechanical degrees of freedom in the space of multiple identical-particle states. It is based on the Hamiltonian formulation of quantum mechanics. Several other approaches exist, such as the Feynman path integral,[22] which uses a Lagrangian formulation. For an overview of some of these approaches, see the article on quantization. #### Bosons For simplicity, we will first discuss second quantization for bosons, which form perfectly symmetric quantum states. Let us denote the mutually orthogonal single-particle states which are possible in the system by and so on. For example, the 3-particle state with one particle in state and two in state is The first step in second quantization is to express such quantum states in terms of occupation numbers, by listing the number of particles occupying each of the single-particle states etc. This is simply another way of labelling the states. For instance, the above 3-particle state is denoted as An N-particle state belongs to a space of states describing systems of N particles. The next step is to combine the individual N-particle state spaces into an extended state space, known as Fock space, which can describe systems of any number of particles. This is composed of the state space of a system with no particles (the so-called vacuum state, written as ), plus the state space of a 1-particle system, plus the state space of a 2-particle system, and so forth. States describing a definite number of particles are known as Fock states: a general element of Fock space will be a linear combination of Fock states. There is a one-to-one correspondence between the occupation number representation and valid boson states in the Fock space. At this point, the quantum mechanical system has become a quantum field in the sense we described above. The field's elementary degrees of freedom are the occupation numbers, and each occupation number is indexed by a number indicating which of the single-particle states it refers to: The properties of this quantum field can be explored by defining creation and annihilation operators, which add and subtract particles. They are analogous to ladder operators in the quantum harmonic oscillator problem, which added and subtracted energy quanta. However, these operators literally create and annihilate particles of a given quantum state. The bosonic annihilation operator and creation operator are easily defined in the occupation number representation as having the following effects: It can be shown that these are operators in the usual quantum mechanical sense, i.e. linear operators acting on the Fock space. Furthermore, they are indeed Hermitian conjugates, which justifies the way we have written them. They can be shown to obey the commutation relation where stands for the Kronecker delta. These are precisely the relations obeyed by the ladder operators for an infinite set of independent quantum harmonic oscillators, one for each single-particle state. 
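For readers who like to see these relations concretely, here is a minimal numerical sketch (my own illustration, not part of the original exposition): it builds matrix representations of the annihilation and creation operators for a single bosonic mode on a Fock space truncated at a maximum occupation number, and checks the number operator and the commutation relation away from the truncation boundary.

```python
import numpy as np

N_MAX = 6  # truncate the single-mode Fock space at N_MAX quanta

# Annihilation operator a: a|n> = sqrt(n) |n-1>, so the matrix element <n-1|a|n> = sqrt(n)
a = np.zeros((N_MAX + 1, N_MAX + 1))
for n in range(1, N_MAX + 1):
    a[n - 1, n] = np.sqrt(n)

a_dag = a.T  # creation operator: a†|n> = sqrt(n+1) |n+1>

number_op = a_dag @ a               # diagonal, with eigenvalues 0, 1, ..., N_MAX
commutator = a @ a_dag - a_dag @ a  # should act as the identity below the cutoff

print(np.diag(number_op))                                      # [0. 1. 2. ... N_MAX]
print(np.allclose(commutator[:N_MAX, :N_MAX], np.eye(N_MAX)))  # True (truncation only spoils the last state)
```

With several modes, one such operator pair is introduced per single-particle state and the same relations hold mode by mode, which is exactly the statement about an infinite set of independent quantum harmonic oscillators.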
Adding or removing bosons from each state is, therefore, analogous to exciting or de-exciting a quantum of energy in a harmonic oscillator. Applying an annihilation operator followed by its corresponding creation operator returns the number of particles in the kth single-particle eigenstate: The combination of operators is known as the number operator for the kth eigenstate. The Hamiltonian operator of the quantum field (which, through the Schrödinger equation, determines its dynamics) can be written in terms of creation and annihilation operators. For instance, for a field of free (non-interacting) bosons, the total energy of the field is found by summing the energies of the bosons in each energy eigenstate. If the kth single-particle energy eigenstate has energy and there are bosons in this state, then the total energy of these bosons is . The energy in the entire field is then a sum over : This can be turned into the Hamiltonian operator of the field by replacing with the corresponding number operator, . This yields #### Fermions It turns out that a different definition of creation and annihilation must be used for describing fermions. According to the Pauli exclusion principle, fermions cannot share quantum states, so their occupation numbers Ni can only take on the value 0 or 1. The fermionic annihilation operators c and creation operators are defined by their actions on a Fock state thus These obey an anticommutation relation: One may notice from this that applying a fermionic creation operator twice gives zero, so it is impossible for the particles to share single-particle states, in accordance with the exclusion principle. #### Field operators We have previously mentioned that there can be more than one way of indexing the degrees of freedom in a quantum field. Second quantization indexes the field by enumerating the single-particle quantum states. However, as we have discussed, it is more natural to think about a "field", such as the electromagnetic field, as a set of degrees of freedom indexed by position. To this end, we can define field operators that create or destroy a particle at a particular point in space. In particle physics, these operators turn out to be more convenient to work with, because they make it easier to formulate theories that satisfy the demands of relativity. Single-particle states are usually enumerated in terms of their momenta (as in the particle in a box problem.) We can construct field operators by applying the Fourier transform to the creation and annihilation operators for these states. For example, the bosonic field annihilation operator is The bosonic field operators obey the commutation relation where stands for the Dirac delta function. As before, the fermionic relations are the same, with the commutators replaced by anticommutators. The field operator is not the same thing as a single-particle wavefunction. The former is an operator acting on the Fock space, and the latter is a quantum-mechanical amplitude for finding a particle in some position. However, they are closely related and are indeed commonly denoted with the same symbol. If we have a Hamiltonian with a space representation, say where the indices i and j run over all particles, then the field theory Hamiltonian (in the non-relativistic limit and for negligible self-interactions) is This looks remarkably like an expression for the expectation value of the energy, with playing the role of the wavefunction. 
This relationship between the field operators and wave functions makes it very easy to formulate field theories starting from space projected Hamiltonians. ### Dynamics Once the Hamiltonian operator is obtained as part of the canonical quantization process, the time dependence of the state is described with the Schrödinger equation, just as with other quantum theories. Alternatively, the Heisenberg picture can be used where the time dependence is in the operators rather than in the states. Probability amplitudes of observables in such systems are quite hard to evaluate, an enterprise which has absorbed considerable ingenuity in the last three quarters of a century. In practice, most often, expectation values of operators are computed systematically through covariant perturbation theory, formulated through Feynman diagrams, but path integral computer simulations have also produced important results. Contemporary particle physics relies on extraordinarily accurate predictions of such techniques. ### Implications #### Unification of fields and particles The "second quantization" procedure outlined in the previous section takes a set of single-particle quantum states as a starting point. Sometimes, it is impossible to define such single-particle states, and one must proceed directly to quantum field theory. For example, a quantum theory of the electromagnetic field must be a quantum field theory, because it is impossible (for various reasons) to define a wavefunction for a single photon.[23] In such situations, the quantum field theory can be constructed by examining the mechanical properties of the classical field and guessing the corresponding quantum theory. For free (non-interacting) quantum fields, the quantum field theories obtained in this way have the same properties as those obtained using second quantization, such as well-defined creation and annihilation operators obeying commutation or anticommutation relations. Quantum field theory thus provides a unified framework for describing "field-like" objects (such as the electromagnetic field, whose excitations are photons) and "particle-like" objects (such as electrons, which are treated as excitations of an underlying electron field), so long as one can treat interactions as "perturbations" of free fields. #### Physical meaning of particle indistinguishability The second quantization procedure relies crucially on the particles being identical. We would not have been able to construct a quantum field theory from a distinguishable many-particle system, because there would have been no way of separating and indexing the degrees of freedom. Many physicists prefer to take the converse interpretation, which is that quantum field theory explains what identical particles are. In ordinary quantum mechanics, there is not much theoretical motivation for using symmetric (bosonic) or antisymmetric (fermionic) states, and the need for such states is simply regarded as an empirical fact. From the point of view of quantum field theory, particles are identical if and only if they are excitations of the same underlying quantum field. Thus, the question "why are all electrons identical?" arises from mistakenly regarding individual electrons as fundamental objects, when in fact it is only the electron field that is fundamental. 
#### Particle conservation and non-conservation During second quantization, we started with a Hamiltonian and state space describing a fixed number of particles (N), and ended with a Hamiltonian and state space for an arbitrary number of particles. Of course, in many common situations N is an important and perfectly well-defined quantity, e.g. if we are describing a gas of atoms sealed in a box. From the point of view of quantum field theory, such situations are described by quantum states that are eigenstates of the number operator , which measures the total number of particles present. As with any quantum mechanical observable, is conserved if it commutes with the Hamiltonian. In that case, the quantum state is trapped in the N-particle subspace of the total Fock space, and the situation could equally well be described by ordinary N-particle quantum mechanics. (Strictly speaking, this is only true in the noninteracting case or in the low energy density limit of renormalized quantum field theories) For example, we can see that the free boson Hamiltonian described above conserves particle number. Whenever the Hamiltonian operates on a state, each particle destroyed by an annihilation operator is immediately put back by the creation operator . On the other hand, it is possible, and indeed common, to encounter quantum states that are not eigenstates of , which do not have well-defined particle numbers. Such states are difficult or impossible to handle using ordinary quantum mechanics, but they can be easily described in quantum field theory as quantum superpositions of states having different values of N. For example, suppose we have a bosonic field whose particles can be created or destroyed by interactions with a fermionic field. The Hamiltonian of the combined system would be given by the Hamiltonians of the free boson and free fermion fields, plus a "potential energy" term such as where and denotes the bosonic creation and annihilation operators, and denotes the fermionic creation and annihilation operators, and is a parameter that describes the strength of the interaction. This "interaction term" describes processes in which a fermion in state k either absorbs or emits a boson, thereby being kicked into a different eigenstate . (In fact, this type of Hamiltonian is used to describe the interaction between conduction electrons and phonons in metals. The interaction between electrons and photons is treated in a similar way, but is a little more complicated because the role of spin must be taken into account.) One thing to notice here is that even if we start out with a fixed number of bosons, we will typically end up with a superposition of states with different numbers of bosons at later times. The number of fermions, however, is conserved in this case. In condensed matter physics, states with ill-defined particle numbers are particularly important for describing the various superfluids. Many of the defining characteristics of a superfluid arise from the notion that its quantum state is a superposition of states with different particle numbers. In addition, the concept of a coherent state (used to model the laser and the BCS ground state) refers to a state with an ill-defined particle number but a well-defined phase. ## Associated phenomena Beyond the most general features of quantum field theories, special aspects such as renormalizability, gauge symmetry, and supersymmetry are outlined below. 
### Renormalization Main article: Renormalization Early in the history of quantum field theory, as detailed above, it was found that many seemingly innocuous calculations, such as the perturbative shift in the energy of an electron due to the presence of the electromagnetic field, yield infinite results. The reason is that the perturbation theory for the shift in an energy involves a sum over all other energy levels, and there are infinitely many levels at short distances, so that each gives a finite contribution which results in a divergent series. Many of these problems are related to failures in classical electrodynamics that were identified but unsolved in the 19th century, and they basically stem from the fact that many of the supposedly "intrinsic" properties of an electron are tied to the electromagnetic field that it carries around with it. The energy carried by a single electronits self-energyis not simply the bare value, but also includes the energy contained in its electromagnetic field, its attendant cloud of photons. The energy in a field of a spherical source diverges in both classical and quantum mechanics, but as discovered by Weisskopf with help from Furry, in quantum mechanics the divergence is much milder, going only as the logarithm of the radius of the sphere. The solution to the problem, presciently suggested by Stueckelberg, independently by Bethe after the crucial experiment by Lamb, implemented at one loop by Schwinger, and systematically extended to all loops by Feynman and Dyson, with converging work by Tomonaga in isolated postwar Japan, comes from recognizing that all the infinities in the interactions of photons and electrons can be isolated into redefining a finite number of quantities in the equations by replacing them with the observed values: specifically the electron's mass and charge: this is called renormalization. The technique of renormalization recognizes that the problem is tractable and essentially purely mathematical; and that, physically, extremely short distances are at fault. In order to define a theory on a continuum, one may first place a cutoff on the fields, by postulating that quanta cannot have energies above some extremely high value. This has the effect of replacing continuous space by a structure where very short wavelengths do not exist, as on a lattice. Lattices break rotational symmetry, and one of the crucial contributions made by Feynman, Pauli and Villars, and modernized by 't Hooft and Veltman, is a symmetry-preserving cutoff for perturbation theory (this process is called regularization). There is no known symmetrical cutoff outside of perturbation theory, so for rigorous or numerical work people often use an actual lattice. On a lattice, every quantity is finite but depends on the spacing. When taking the limit to zero spacing, one makes sure that the physically observable quantities like the observed electron mass stay fixed, which means that the constants in the Lagrangian defining the theory depend on the spacing. By allowing the constants to vary with the lattice spacing, all the results at long distances become insensitive to the lattice, defining a continuum limit. The renormalization procedure only works for a certain limited class of quantum field theories, called renormalizable quantum field theories. A theory is perturbatively renormalizable when the constants in the Lagrangian only diverge at worst as logarithms of the lattice spacing for very short spacings. 
The continuum limit is then well defined in perturbation theory, and even if it is not fully well defined non-perturbatively, the problems only show up at distance scales that are exponentially small in the inverse coupling for weak couplings. The Standard Model of particle physics is perturbatively renormalizable, and so are its component theories (quantum electrodynamics/electroweak theory and quantum chromodynamics). Of the three components, quantum electrodynamics is believed to not have a continuum limit by itself, while the asymptotically free SU(2) and SU(3) weak and strong color interactions are nonperturbatively well defined. The renormalization group as developed along Wilson's breakthrough insights relates effective field theories at a given scale to such at contiguous scales. It thus describes how renormalizable theories emerge as the long distance low-energy effective field theory for any given high-energy theory. As a consequence, renormalizable theories are insensitive to the precise nature of the underlying high-energy short-distance phenomena (the macroscopic physics is dominated by only a few "relevant" observables). This is a blessing in practical terms, because it allows physicists to formulate low energy theories without detailed knowledge of high-energy phenomena. It is also a curse, because once a renormalizable theory such as the standard model is found to work, it provides very few clues to higher-energy processes. The only way high-energy processes can be seen in the standard model is when they allow otherwise forbidden events, or else if they reveal predicted compelling quantitative relations among the coupling constants of the theories or models. On account of renormalization, the couplings of QFT vary with scale, thereby confining quarks into hadrons, allowing the study of weakly-coupled quarks inside hadrons, and enabling speculation on ultra-high energy behavior. ### Gauge freedom A gauge theory is a theory that admits a symmetry with a local parameter. For example, in every quantum theory, the global phase of the wave function is arbitrary and does not represent something physical. Consequently, the theory is invariant under a global change of phases (adding a constant to the phase of all wave functions, everywhere); this is a global symmetry. In quantum electrodynamics, the theory is also invariant under a local change of phase, that is – one may shift the phase of all wave functions so that the shift may be different at every point in space-time. This is a local symmetry. However, in order for a well-defined derivative operator to exist, one must introduce a new field, the gauge field, which also transforms in order for the local change of variables (the phase in our example) not to affect the derivative. In quantum electrodynamics, this gauge field is the electromagnetic field. The change of local gauge of variables is termed gauge transformation. By Noether's theorem, for every such symmetry there exists an associated conserved current. The aforementioned symmetry of the wavefunction under global phase changes implies the conservation of electric charge. Since the excitations of fields represent particles, the particle associated with excitations of the gauge field is the gauge boson, e.g., the photon in the case of quantum electrodynamics. The degrees of freedom in quantum field theory are local fluctuations of the fields. 
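A standard illustration of the preceding paragraph (the textbook U(1) case in one common sign convention, not a construction specific to this article): under a local phase change $\psi(x) \to e^{i\alpha(x)} \psi(x)$ the ordinary derivative picks up an unwanted term, $\partial_\mu \psi \to e^{i\alpha}\left(\partial_\mu \psi + i(\partial_\mu \alpha)\psi\right)$. Introducing the gauge field $A_\mu$ with the transformation $A_\mu \to A_\mu - \tfrac{1}{e}\partial_\mu \alpha$ and replacing $\partial_\mu$ by the covariant derivative $D_\mu = \partial_\mu + ieA_\mu$ makes $D_\mu \psi$ transform exactly like $\psi$ itself, so terms built from $\psi$ and $D_\mu \psi$ remain invariant under the local change of phase; in quantum electrodynamics this $A_\mu$ is the electromagnetic field and its quantum is the photon.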
The existence of a gauge symmetry reduces the number of degrees of freedom, simply because some fluctuations of the fields can be transformed to zero by gauge transformations, so they are equivalent to having no fluctuations at all, and they, therefore, have no physical meaning. Such fluctuations are usually called "non-physical degrees of freedom" or gauge artifacts; usually, some of them have a negative norm, making them inadequate for a consistent theory. Therefore, if a classical field theory has a gauge symmetry, then its quantized version (the corresponding quantum field theory) will have this symmetry as well. In other words, a gauge symmetry cannot have a quantum anomaly.[24] In general, the gauge transformations of a theory consist of several different transformations, which may not be commutative. These transformations are combined into the framework of a gauge group; infinitesimal gauge transformations are the gauge group generators. Thus, the number of gauge bosons is the group dimension (i.e., the number of generators forming the basis of the corresponding Lie algebra). All the known fundamental interactions in nature are described by gauge theories (possibly barring the Higgs multiplet couplings, if considered in isolation). These are: quantum electrodynamics, whose gauge boson is the photon; the theory of the weak interaction, mediated by the W and Z bosons; quantum chromodynamics, whose gauge bosons are the gluons; and gravity, described classically by general relativity, whose hypothetical quantum is the graviton.[25]
### Supersymmetry
Main article: Supersymmetry Supersymmetry assumes that every fundamental fermion has a superpartner that is a boson and vice versa. Its gauge theory, supergravity, is an extension of general relativity. Supersymmetry is a key ingredient for the consistency of string theory. It was utilized in order to solve the so-called hierarchy problem of the standard model, that is, to explain why particles not protected by any symmetry (like the Higgs boson) do not receive radiative corrections to their mass driving it to the larger scales, such as that of GUTs or the Planck mass of gravity. The way supersymmetry protects scale hierarchies is the following: since for every particle there is a superpartner with the same mass but different statistics, any loop in a radiative correction is cancelled by the loop corresponding to its superpartner, rendering the theory less ultraviolet divergent. Since, however, no superpartners have been observed, if supersymmetry exists it must be severely broken (through a so-called soft term, which breaks supersymmetry without ruining its helpful features). The simplest models of this breaking require that the energy of the superpartners not be too high; in these cases, supersymmetry could be observed by experiments at the Large Hadron Collider. However, to date, after the observation of the Higgs boson there, no such superparticles have been discovered.
## Axiomatic approaches
The preceding description of quantum field theory follows the spirit in which most physicists approach the subject. However, it is not mathematically rigorous. Over the past several decades, there have been many attempts to put quantum field theory on a firm mathematical footing by formulating a set of axioms for it. These attempts fall into two broad classes. The first class of axioms, first proposed during the 1950s, includes the Wightman, Osterwalder–Schrader, and Haag–Kastler systems. They attempted to formalize the physicists' notion of an "operator-valued field" within the context of functional analysis and enjoyed limited success. It was possible to prove that any quantum field theory satisfying these axioms satisfied certain general theorems, such as the spin-statistics theorem and the CPT theorem.
Unfortunately, it proved extraordinarily difficult to show that any realistic field theory, including the Standard Model, satisfied these axioms. Most of the theories that could be treated with these analytic axioms were physically trivial, being restricted to low dimensions and lacking interesting dynamics. The construction of theories satisfying one of these sets of axioms falls in the field of constructive quantum field theory. Important work was done in this area in the 1970s by Segal, Glimm, Jaffe and others. During the 1980s, a second set of axioms, based on geometric ideas, was proposed. This line of investigation, which restricts its attention to a particular class of quantum field theories known as topological quantum field theories, is associated most closely with Michael Atiyah and Graeme Segal, and was notably expanded upon by Edward Witten, Richard Borcherds, and Maxim Kontsevich. However, most of the physically relevant quantum field theories, such as the Standard Model, are not topological quantum field theories; the quantum field theory of the fractional quantum Hall effect is a notable exception. The main impact of axiomatic topological quantum field theory has been on mathematics, with important applications in representation theory, algebraic topology, and differential geometry. Finding the proper axioms for quantum field theory is still an open and difficult problem in mathematics. One of the Millennium Prize Problems, proving the existence of a mass gap in Yang–Mills theory, is linked to this issue.
### Haag's theorem
Main article: Haag's theorem From a mathematically rigorous perspective, there exists no interaction picture in a Lorentz-covariant quantum field theory. This implies that the perturbative approach of Feynman diagrams in QFT is not strictly justified, despite producing extremely precise predictions validated by experiment. This is called Haag's theorem, but most particle physicists relying on QFT largely shrug it off, as not really limiting the power of the theory.
## Notes
1. Dirac 1927
2. Schweber 1994, p. 28
3. See references in Schweber (1994, pp. 695f)
4. Weinberg 2005, p. 15
5. Part II of Peskin & Schroeder (1995) gives an extensive description of renormalization.
6. Peskin & Schroeder (1995, Chapter 4)
7. Greiner & Reinhardt 1996
8. Yang, C. N. (2012). "Fermi's β-decay Theory". Asia Pacific Physics Newsletter 1 (1), 27–30. online copy
9. Special unitary groups in the Eightfold way (physics) completely determined the form of the theories, and current algebras implemented these symmetries in QFT without particular cognizance of dynamics, still producing a plethora of predictive correlations.
10. Altogether, there is outstanding agreement with experimental data; for example, the masses of the W+ and W− bosons confirmed the theoretical prediction within one percent deviation.
11. Clément Hongler, "Conformal invariance of Ising model correlations", Ph.D. thesis, University of Geneva, 2010, p. 9.
12. "Beautiful Minds, Vol. 20: Ed Witten". la Repubblica. 2010. Retrieved 22 June 2012.
13. Cole, K. C. (18 October 1987). "A Theory of Everything". The New York Times Magazine. Retrieved 15 September 2016.
14. Thorn et al. 2004
15. Tong 2015, Chapter 1
16. Brown, Lowell S. (1994). Quantum Field Theory. Cambridge University Press. ISBN 978-0-521-46946-3.
17. Srednicki 2007, p. 19
18. Srednicki 2007, pp. 25–26
19. Zee 2010, p. 61
20. Tong 2015, Introduction
21. Zee 2010, p. 3
22. Pais 1994.
Pais recounts his astonishment at the rapidity with which Feynman could calculate using his method. Feynman's method is now part of the standard methods for physicists.
23. Newton & Wigner 1949, pp. 400–406
24. If a gauge symmetry is anomalous (i.e. not kept in the quantum theory) then the theory is inconsistent: for example, in quantum electrodynamics, had there been a gauge anomaly, this would require the appearance of photons with longitudinal polarization and polarization in the time direction, the latter having a negative norm, rendering the theory inconsistent; another possibility would be for these photons to appear only in intermediate processes but not in the final products of any interaction, making the theory non-unitary and again inconsistent (see optical theorem).
25. However, it is non-renormalizable.
## References
Historical references
# Partial matrix sum
A matrix of integers aij is given, where 1 ≤ i ≤ n, 1 ≤ j ≤ m. For all i, j find the partial sum Sij, the sum of all elements akl with k ≤ i and l ≤ j.
#### Input
The first line contains the matrix size n, m (1 ≤ n, m ≤ 1000). Each of the next n lines contains m integers aij (1 ≤ aij ≤ 1000).
#### Output
Print n rows, each containing m numbers Sij.
Time limit 1 second
Memory limit 128 MiB
Input example #1
3 5
1 2 3 4 5
5 4 3 2 1
2 3 1 5 4
Output example #1
1 3 6 10 15
6 12 18 24 30
8 17 24 35 45
Author В.Гольдштейн
Source Winter training camp in Kharkiv 2010, Day 2
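A minimal solution sketch (my own illustration in Python, not the official solution): the running sums satisfy Sij = aij + S(i−1)j + Si(j−1) − S(i−1)(j−1), so a single pass over the matrix suffices, in O(n·m) time.

```python
import sys

def main():
    data = sys.stdin.read().split()
    it = iter(data)
    n, m = int(next(it)), int(next(it))
    # S[i][j] = sum of a[k][l] over all k <= i, l <= j (1-based, with a zero border row/column)
    S = [[0] * (m + 1) for _ in range(n + 1)]
    out_lines = []
    for i in range(1, n + 1):
        row = []
        for j in range(1, m + 1):
            a = int(next(it))
            # inclusion-exclusion: the two overlapping partial sums count S[i-1][j-1] twice
            S[i][j] = a + S[i - 1][j] + S[i][j - 1] - S[i - 1][j - 1]
            row.append(str(S[i][j]))
        out_lines.append(" ".join(row))
    sys.stdout.write("\n".join(out_lines) + "\n")

if __name__ == "__main__":
    main()
```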
# Cochran's theorem
June 30, 2021
Here I discuss Cochran's theorem, which is used to prove the independence of quadratic forms of random variables, such as the sample variance and sample mean.
## Tearing through the unintelligible formulation
Cochran’s theorem is a field-specific result, sometimes used in the analysis of chi-squared and multivariate normal distributions. Probably the two main reasons to be concerned about it are the proof of independence between the sample variance and sample mean in the derivation of Student's t-distribution, and ANOVA, where the total variance can be split into multiple sources of variance, which are usually expressed as quadratic forms. As an example of the last case (maybe not the most relevant one), consider the famous bias-variance tradeoff, where the total expected error of the regression model is divided into a sum of two sources of variance, squared bias (the systematic error due to a crude model being unable to fit the more complex nature of the data) and variance (the error created by the fact that the amount of data available is limited, and is somewhat insufficient for the model to learn to fit the data perfectly). Anyway, the formulation of the theorem sounds really technical and is hard to digest. Basically, it says that if you had a sum of squares of $n$ independent identically distributed normal variables and managed to split it into several non-full-rank quadratic forms such that the sum of their ranks equals $n$, then each of those quadratic forms is an independent random variable, distributed as $\chi^2_{r_i}$, a chi-square with $r_i$ degrees of freedom, where $r_i$ is the rank of the corresponding quadratic form. For instance, if we’ve split our sum of squares of i.i.d. normal variables with 0 mean and unit variance (so that $X^T X \sim \chi_{n}^2$) into 2 quadratic forms $X^T X = X^T B_1 X + X^T B_2 X$ with the matrix $B_1$ having rank $r_1$ and $B_2$ having rank $r_2 = n - r_1$, then $X^T B_1 X \sim \chi_{r_1}^2$, $X^T B_2 X \sim \chi_{r_2}^2$, and $X^T B_1 X$ and $X^T B_2 X$ are independent. See the examples below to see why this result is valuable.
## Cochran’s theorem proof
Here I’ll outline the proof of Cochran’s theorem. The central piece of the proof is the following lemma, which is a result from pure linear algebra, not probability theory - it deals with matrices and real numbers, not random variables. When we are done with the lemma, the proof of the theorem itself gets pretty straightforward.
### Lemma
Suppose that we have an n-dimensional vector $\bm{X}$ and a quadratic form ${\bm{X}}^T \bm{I} \bm{X} = {\bm{X}}^T \bm{X}$. Suppose that we found a way to split this quadratic form into several: ${\bm{X}}^T \bm{X} = {\bm{X}}^T \bm{B_1} \bm{X} + {\bm{X}}^T \bm{B_2} \bm{X} + ... + {\bm{X}}^T \bm{B_k} \bm{X}$, where the matrices $\bm{B_1}, \bm{B_2}, ... \bm{B_k}$ have lower ranks $r_1, r_2, ... r_k$, so that the sum of those ranks equals n: $\sum \limits_{i=1}^{k} r_i = n$. Then all those matrices can be simultaneously diagonalized. There exists an orthogonal matrix $\bm{E}$ of their joint eigenvectors, and after diagonalising, we get: ${\bm{X}}^T \bm{X} = {\bm{X}}^T \bm{B_1} \bm{X} + {\bm{X}}^T \bm{B_2} \bm{X} + ... + {\bm{X}}^T \bm{B_k} \bm{X} = {\bm{X}}^T \bm{E} \bm{\Lambda_1} \bm{E}^T \bm{X} + {\bm{X}}^T \bm{E} \bm{\Lambda_2} \bm{E}^T \bm{X} + ... + {\bm{X}}^T \bm{E} \bm{\Lambda_k} \bm{E}^T \bm{X} =$ $= \bm{Y}^T \bm{\Lambda_1} \bm{Y} + \bm{Y}^T \bm{\Lambda_2} \bm{Y} + ...
+ \bm{Y}^T \bm{\Lambda_k} \bm{Y}$, where $\bm{Y}$ are such transforms of $\bm{X}$ vector, that in the resulting quadratic forms $\bm{Y^T} \bm{\Lambda_i} \bm{Y}$ matrices $\bm{\Lambda_i}$ are diagonal. As matrices $\Lambda_i$ contain only $r_i$ non-zero eigenvalues: $\begin{pmatrix} 0 & 0 & \cdots & 0 & 0 & 0 \\ \cdots & \cdots & \ddots & \ddots & \cdots & \cdots \\ 0 & \cdots & \lambda_{j} & 0 & \cdots & 0 \\ 0 & \cdots & 0 & \lambda_{j+1} & 0 & 0 \\ \cdots & \cdots & \ddots & \ddots & \cdots & \cdots \\ 0 & 0 & \cdots & 0 & 0 & 0 \\ \end{pmatrix}$, where $j$ starts with $r_0 + r_1 + ... + r_{i-1} + 1$ and ends with $r_0 + r_1 + ... + r_{i-1} + r_i$, in each expression $\bm{Y}^T \bm{\Lambda_i} \bm{Y}$ only $j$-th coordinates of $\bm{Y}$ actually matter. Moreover, importantly all the eigenvalues $\lambda_j$ equal to 1, so each quadratic form quadratic forms $\bm{Y^T} \bm{\Lambda_i} \bm{Y}$ actually end up being just a sum of squares of i.i.d normal variables $\bm{Y^T} \bm{Y}$, which means it is chi-square-distributed. #### Preparations for the lemma: eigenvalues and eigenvectors of lower-rank matrices As you see the statement of the lemma deals with lower-rank matrices and their eigen decomposition. We need to learn how to work with them in order to understand the proof of the lemma. For starters, let us first consider a rank 1 matrix, e.g. $A = \begin{pmatrix} 1 & 2 \\ 1 & 2 \\ \end{pmatrix}$ It has to have 2 eigenvalues. We can represent it as an outer product of two vectors, $u$ and $v$: $A = uv^T$. Then one of its eigenvalues is $\lambda_1 = u^T v$ because $Au = (uv^T)u = u(v^Tu) = u \lambda_1 = \lambda_1 u$, and u is the eigenvector. As for the rest of eigenvalues, they equal to zero, and the corresponding eigenvectors have to cover the remaining space of dimensionality (n-1). For instance, it is clear that (2, -1) is an eigenvector with 0 eigenvalue. Obviously, in case of matrix of rank 1, we have just one linearly independent equation, that makes all but one coordinates of eigenvectors, corresponding to eigenvalue 0, arbitrary, and the last one is determined by that row. Now, suppose that you have dimension of your matrix $n=4$ and rank of your matrix $r=2$, for instance: $A = \begin{pmatrix} 1 & 1 & 1 & 1 \\ 1 & 1 & 1 & 2 \\ 1 & 1 & 1 & 2 \\ 1 & 1 & 1 & 2 \\ \end{pmatrix}$. Then 2 eigenvectors are fixed and correspond to non-zero eigenvalues $\lambda_1$ and $\lambda_2$, while a 2-dimensional subspace is left, corresponding to the eigenvalues $\lambda_3 = \lambda_4 = 0$. Indeed, if you try solving $A x = \lambda x$ with $\lambda=0$ or $Ax = 0$, you can clearly see, that you’ll have just 2 constraints on solutions: $1x_1 + 1x_2 + 1x_3 + 1x_4 = 0$ and $1x_1 + 1x_2 + 1x_3 + 2x_4 = 0$, which allows for eigenspace of dimensionality 2. You can choose arbitrary values for $x_1$ and $x_2$, then $x_3 = - (x_1 + x_2)$ and $x_4 = 0$. #### Preparations for the lemma: simultaneous diagonalization Sometimes two lower-rank matrices can be simultaneously diagonalized. Example: say, you have 2 matrices, $B_1 = \begin{pmatrix} 1 & 0 & 0 & 0 \\ 0 & 2 & 0 & 0 \\ 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 \\ \end{pmatrix}$, $B_2 = \begin{pmatrix} 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 \\ 0 & 0 & 3 & 0 \\ 0 & 0 & 0 & 4 \\ \end{pmatrix}$. For matrix $B_1$ eigenvectors are $E_1 = (1, 0, 0, 0)^T$ with eigenvalue $\lambda_1 = 1$, $E_2 = (0, 1, 0, 0)^T$ with eigenvalue $\lambda_2 = 2$; $\lambda_3 = \lambda_4 = 0$, and corresponding eigenspace is $V = (0, 0, c_1, c_2)^T$, where $c_1, c_2$ - arbitrary values. 
For matrix $B_2$ eigenvectors are $E_1 = (0, 0, 1, 0)^T$ with eigenvalue $\lambda_1 = 3$, $E_2 = (0, 0, 0, 1)^T$ with eigenvalue $\lambda_2 = 4$; $\lambda_3 = \lambda_4 = 0$, and corresponding eigenspace is $V = (c_1, c_2, 0, 0)^T$, where $c_1, c_2$ - arbitrary values. Thus, we can simultaneously diagonalize these two matrices, because eigenvalues of matrix $B_1$ are compatible with the eigenspace of $B_2$ and vice versa. Now, we won’t be using the following results below, but I’ll still mention them. Simultaneous diagonalization of lower-rank matrices in general is NOT always possible. However, there is always a way for two symmetric, positive-definite matrices of the same size - Cholesky decomposition. This result is well-known, because it has important practical applications for Lagrangian and Quantum mechanics. In quantum mechanics two operators can be observed simultaneously, if they commute (i.e. their eigen functions are the same, or they are simultaneously diagonalizable). #### Proof of the lemma $X^T X = X^T B_1 X + X^T B_2 X$ Do an eigen decomposition of $B_1$: $B_1 = E_1 \Lambda_1 E_1^T$ or $E_1^T B_1 E_1 = \Lambda_1$. Note that as the rank of matrix $B_1$ is not full, the matrix E_1 of eigenvectors has en eigenspace, corresponding to 0 eigenvalue, where we can choose an arbitrary basis. Let us do this in such a way that the resulting matrix E_1 is full-rank orthogonal (this is possible because $B_1$ is symmetric). $X^T X = X^T E_1 \Lambda_1 E_1^T X + X^T B_2 X$ Now, denote $Y = E_1^T X$ and recall that $Y^T Y = X^T E_1 E_1^T X = X^T X$. $Y^TY = Y^T \Lambda_1 Y + X^T B_2 X$ or, equivalently, $Y^TY = Y^T \Lambda_1 Y + Y^T E_1^T B_2 E_1 Y$ Now, if $rank(\Lambda_1) = r_1$ (for instance, 2), we can re-arrange this as follows: $\begin{pmatrix} y_1 & y_2 & ... & y_n \\ \end{pmatrix} \cdot \begin{pmatrix} 1 & 0 & \cdots & 0 \\ 0 & 1 & \cdots & 0 \\ \cdots & \cdots & \ddots & \cdots \\ 0 & 0 & 0 & 1 \\ \end{pmatrix} \cdot \begin{pmatrix} y_1 \\ y_2 \\ ... \\ y_n \\ \end{pmatrix} =$ $\begin{pmatrix} y_1 & y_2 & ... & y_n \\ \end{pmatrix} \cdot \begin{pmatrix} \lambda_1 & 0 & \cdots & 0 \\ 0 & \lambda_2 & \cdots & 0 \\ \cdots & \cdots & 0 & \cdots \\ 0 & 0 & 0 & 0 \\ \end{pmatrix} \cdot \begin{pmatrix} y_1 \\ y_2 \\ ... \\ y_n \\ \end{pmatrix} +Y^T E_1^T B_2 E_1 Y$ Rearrange the terms to get: $\sum \limits_{i=1}^{r_1} (1-\lambda_i) y_i^2 + \sum \limits_{j=r_1+1}^{n} y_j^2 = Y^T E_1^T B_2 E_1 Y$ Now that we know that the $rank(B_2) = r_2 = n - r_1$, we can come to the conclusion that $\lambda_1, ..., \lambda_{r_1} = 1$. Indeed, recall that $rank(AB) \leq \min(rank(A), rank(B)$. So, $rank(E_1^T B_2 E_1) \leq rank(B_2) = n - r_1$ (see wiki). As a result we have: $\begin{pmatrix} y_1 & y_2 & ... & y_n \\ \end{pmatrix} \cdot \begin{pmatrix} 1-\lambda_1 & 0 & \cdots & 0 \\ 0 & 1-\lambda_2 & \cdots & 0 \\ \cdots & \cdots & \ddots & \cdots \\ 0 & 0 & 0 & 1 \\ \end{pmatrix} \cdot \begin{pmatrix} y_1 \\ y_2 \\ ... \\ y_n \\ \end{pmatrix} = \begin{pmatrix} y_1 & y_2 & ... & y_n \\ \end{pmatrix} \cdot E_1^T B_2 E_1 \cdot \begin{pmatrix} y_1 \\ y_2 \\ ... \\ y_n \\ \end{pmatrix}$ There is only one way for the matrix $E_1^T B_2 E_1$ to have rank $n-r_1$ - all the eigenvalues should equal to 1: $\lambda_1 = \lambda_2 = ... = \lambda_{r_i} = 1$. Now, what if n > 2, e.g. n=3? The key observation for this case is the fact that rank is subadditive: $rank(A+B) \leq rank(A) + rank(B)$. So we can be sure that $B_2 + B_3$ is a matrix of rank no greater than $n-r_1$. 
Hence, we can disregard the first $r_1$ coordinates $y_1, ..., y_{r_1}$ and apply the same argument again to the remaining $y_j$.

### Theorem proof

Now that we have proven the lemma, which constitutes the core of Cochran’s theorem, we can apply it to our random variables. By analogy with the lemma, we apply an orthogonal transform $C$ to our random vector, $\bm{Y} = C \bm{X}$, so that our sum of quadratic forms takes the following form:

$Q_1 = Y_1^2 + Y_2^2 + ... + Y_{r_1}^2$

$Q_2 = Y_{r_1+1}^2 + Y_{r_1+2}^2 + ... + Y_{r_1+r_2}^2$

$Q_k = Y_{(r_1 + ... + r_{k-1})+1}^2 + Y_{(r_1 + ... + r_{k-1})+2}^2 + ... + Y_{n}^2$

Let us now show that all the $Y_i$ random variables are independent. Recall that the covariance matrix of $\bm{X}$ is $\bm{\Sigma_X} = \begin{pmatrix} \sigma_1^2 & 0 & ... & 0 \\ 0 & \sigma_2^2 & ... & 0 \\ ... & ... & \ddots & ... \\ 0 & 0 & ... & \sigma_n^2 \\ \end{pmatrix} = \sigma^2 I$, since $\sigma_1 = \sigma_2 = ... = \sigma_n = \sigma$.

Now, if $\bm{Y} = C \bm{X}$, where $C$ is an orthogonal matrix, the covariance matrix of $\bm{Y}$ is $\bm{\Sigma_Y} = \mathrm{Cov}[Y, Y^T] = \mathrm{Cov}[C X, X^T C^T] = C \bm{\Sigma_X} C^T = C \sigma^2 I C^T = \sigma^2 I$. So all $Y_i$ are independent, identically distributed random variables. Since every $Y_i^2$ occurs in exactly one $Q_j$ and the $Y_i$’s are all independent $\mathcal{N}(0, \sigma^2)$ random variables (because $C$ is an orthogonal matrix), Cochran’s theorem follows.

## Example: Application to ANOVA

The total sum of squares (SSTO) can be split into two terms:

$SSTO = \sum \limits_{i=1}^{n} (Y_i - \bar{Y})^2 = \sum \limits_{i=1}^{n} (Y_i^2 - 2Y_i\bar{Y} + \bar{Y}^2) = \sum \limits_{i=1}^{n} {Y_i}^2 - 2\bar{Y} n \bar{Y} + n\bar{Y}^2 = \sum \limits_{i=1}^{n} {Y_i}^2 - n\bar{Y}^2 = \sum \limits_{i=1}^{n} {Y_i}^2 - \frac{(\sum Y_i)^2}{n}$.

Thus, $\sum \limits_{i=1}^{n} {Y_i}^2 = \sum \limits_{i=1}^{n} (Y_i - \bar{Y})^2 + \frac{(\sum Y_i)^2}{n}$.

Now, both terms of the sum can be represented in matrix notation as quadratic forms (illustrated here for $n=3$):

$\sum \limits_{i=1}^{n} {Y_i}^2 = \begin{pmatrix} Y_1 & Y_2 & Y_3 \\ \end{pmatrix} \cdot \begin{pmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \\ \end{pmatrix} \cdot \begin{pmatrix} Y_1 \\ Y_2 \\ Y_3 \\ \end{pmatrix}$,

$\sum \limits_{i=1}^{n} (Y_i - \bar{Y})^2 = \begin{pmatrix} Y_1 & Y_2 & Y_3 \\ \end{pmatrix} \cdot (\begin{pmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \\ \end{pmatrix} - \frac{1}{n} \begin{pmatrix} 1 & 1 & 1 \\ 1 & 1 & 1 \\ 1 & 1 & 1 \\ \end{pmatrix}) \cdot \begin{pmatrix} Y_1 \\ Y_2 \\ Y_3 \\ \end{pmatrix}$,

$\frac{(\sum Y_i)^2}{n} = \begin{pmatrix} Y_1 & Y_2 & Y_3 \\ \end{pmatrix} \cdot \frac{1}{n} \begin{pmatrix} 1 & 1 & 1 \\ 1 & 1 & 1 \\ 1 & 1 & 1 \\ \end{pmatrix} \cdot \begin{pmatrix} Y_1 \\ Y_2 \\ Y_3 \\ \end{pmatrix}$

## Application to sample mean and sample variance

The same decomposition shows that, for an i.i.d. normal sample, the sample mean $\bar{Y}$ and the sample variance $s^2$ are independent, with $(n-1)s^2/\sigma^2$ chi-square-distributed with $n-1$ degrees of freedom (see Wikipedia for the details).
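To make the decomposition above concrete, here is a minimal simulation sketch (not part of the original post; it assumes zero-mean, unit-variance observations) that checks the rank split and the distributional claims with numpy:

```python
# Quadratic forms built from B1 = I - J/n and B2 = J/n applied to i.i.d. N(0,1)
# data: ranks should add up to n, the forms should have chi-square means
# (n-1 and 1), and they should be (approximately) uncorrelated.
import numpy as np

rng = np.random.default_rng(0)
n, trials = 3, 100_000

I = np.eye(n)
J = np.ones((n, n))
B1 = I - J / n          # within-sample variation, rank n-1
B2 = J / n              # squared-mean term, rank 1

assert np.linalg.matrix_rank(B1) + np.linalg.matrix_rank(B2) == n

Y = rng.standard_normal((trials, n))
q1 = np.einsum("ti,ij,tj->t", Y, B1, Y)   # sum (Y_i - Ybar)^2
q2 = np.einsum("ti,ij,tj->t", Y, B2, Y)   # (sum Y_i)^2 / n

print(q1.mean(), q2.mean())               # close to n-1 = 2 and 1
print(np.corrcoef(q1, q2)[0, 1])          # close to 0
```

The empirical means land near $n-1=2$ and $1$, the means of the corresponding chi-square distributions, and the two forms are essentially uncorrelated, as Cochran’s theorem predicts.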
Alan James Cain  ·  Curriculum Vitæ Address Centro de Matemática e Aplicações Faculdade de Ciência e Tecnologia Universidade Nova de Lisboa 2829–516 Caparica Portugal Email address a DOT cain AT fct DOT unl DOT ptReplace ‘AT’ with ‘@’ and ‘DOT’ with ‘.’ Web page http://www-groups.st-and.ac.uk/~alanc Languages English (Native)Japanese (Intermediate)Portuguese (Basic)French (Basic) ORCiD 0000-0002-0706-1354 ResearcherID G-7986-2014 Scopus Author ID 13608823100 Positions held 2014/01–Present ‘Investigador FCT’ Senior Research Fellow Centro de Matemática e Aplicações, Universidade Nova de Lisboa2829–516 Caparica, Portugal FCT-funded five-year personal research fellowship. Awarded an associated grant of € 50000. Taught courses ‘Semigroups, Automata, and Languages’, ‘Algebraic Theory of Automata’, ‘Rewriting Systems’, ‘Combinatorial Group Theory’. Supervised two Ph.D. students (one unofficially, one officially) and one master’s student. Organizer of the seminar of the Algebra & Logic group (2015–Present). 2009/07–2014/01 ‘Ciência 2008’ Research Fellow Centro de Matemática da Universidade do PortoRua do Campo Alegre 687, 4169–007 Porto, Portugal FCT-funded five-year personal research fellowship. Developed and taught a master’s course ‘Semigroups’. 2008/09–2009/06 Postdoctoral researcher Centro de Álgebra da Universidade de LisboaAv. Prof. Gama Pinto 2, 1649–003 Lisboa, Portugal Research within the project ‘Semigroups and Languages’, funded by FCT and PIDDAC (PTDC/MAT/69514/2006). 2005/09–2008/08 Research Fellow School of Mathematics & Statistics, University of St AndrewsSt Andrews, Fife KY16 9AJ, United Kingdom Offered this post before the completion of my Ph.D. based on my achievements during my doctoral studies. Member of Staff Council (March 2007–August 2008). 2002/09–2005/05 Tutor & Computer laboratory demonstrator School of Mathematics & Statistics, University of St AndrewsSt Andrews, Fife KY16 9AJ, United Kingdom Tutored the first-year mathematics courses MT1002 and MT1003, required for all students proceeding to a degree in mathematics. Worked as a computer laboratory demonstrator, aiding students in using MAPLE to solve practical exercises related to their work in pre-honours lecture courses, and marking their work. Education 2002/09–2005/07 Ph.D. in Mathematics University of St AndrewsSt Andrews, Fife KY16 9AJ, United Kingdom Thesis: Presentations for Subsemigroups of Groups Supervisors: Prof. E.F. Robertson, Prof. Nik Ruškuc Examiners: Dr M. Quick (Internal), Prof. D.F. Holt (External, Univ. of Warwick) Funded by a prestigious Carnegie Doctoral Scholarship. Thesis accepted without corrections. 1998/09–2002/07 M.Sci. in Mathematics with First-class Honours University of GlasgowGlasgow G12 8QQ, United Kingdom Cunninghame Prize in Mathematics; Class prizes in Mathematics in first and second year; Class prize in Computer Science in second year. President of the Maclaurin Society, the University of Glasgow student society for mathematics and statistics. Membership of Professional Societies • London Mathematical Society (Elected 2006) • Edinburgh Mathematical Society (Elected 2003) Research funding 2002/10–2005/09 Carnegie Doctoral Scholarship Carnegie Trust for the Universities of Scotland Ph.D. scholarship covering tuition fees and a stipend. These scholarships are prestigious and highly competitive, with fewer than 10% of applications funded. 
2009/07–2014/01 ‘Ciência 2008’ fellowship Fundação para a Ciência e a Tecnologia (FCT) [National science agency] Five-year fellowship to pursue research at a university in Portugal. Funding for 90% of the salary, with the remaining 10% paid by the host institution. 2010/06–2014/05 Project ‘Automata, Languages, Decidability in Algebra’ EP/H011978/1Engineering and Physical Sciences Research Council (EPSRC) Prof. Nik Ruškuc, Dr Martyn Quick, and I wrote the application for this grant, which was funded to a value of £ 348646. The original plan was for me to be employed as a researcher on this project at the University of St Andrews, but I declined in favour of the ‘Ciência 2008’ fellowship. I remained an active participant in the project and made frequent research visits to St Andrews. 2014/01–Present ‘Investigador FCT’ fellowship Fundação para a Ciência e a Tecnologia (FCT) [National science agency] Five-year advanced fellowship to pursue research at a university in Portugal. This programme is a much more selective than the ‘Ciência’ scheme. The evaluation panel rated my application at the highest grade 9, defined as ‘exceptionally strong with essentially no weaknesses’. 2014/07–Present Exploratory Project (Principal investigator) IF/01622/2013/CP1161/CT0001Fundação para a Ciência e a Tecnologia (FCT) [National science agency] Special grant scheme, only open to ‘Investigador FCT’ fellows. Value € 50000, with freedom to allocate this to travel, visitors, equipment, etc., and with the usual deductions for administrative overheads being waived. 2015/04 ‘Research in Pairs’ grant London Mathematical Society Small grant (value £ 1000) to visit the University of St Andrews to work with Dr Markus Pfeiffer and to speak at a conference. 2016/06–Present Project ‘Hilbert’s 24th Problem’ (Subproject leader) PTDC/MHC-FIL/2583/2014Fundação para a Ciência e a Tecnologia (FCT) [National science agency] Value € 199902. I am leader of one of the three subprojects, and a member of another subproject. Research visits University of St Andrews, United Kingdom 2009/05; 2010/08; 2011/05; 2011/08; 2012/06; 2013/05; 2015/04; 2015/09. Centro de Álgebra da Universidade de Lisboa, Portugal 2010/09; 2012/05; 2012/10; 2013/11. Sultan Qaboos University, Muscat, Oman 2013/04. University of York, United Kingdom 2013/11. Uniwersytet Warszawski, Poland 2016/10; 2017/04. Université Paris-Diderot, France 2017/04. Publications Presentations for Subsemigroups of Groups Ph.D. thesis, University of St Andrews, St Andrews, 2005. Subsemigroups of virtually free groups: finite Malcev presentations and testing for freeness [with E. F. Robertson & N. Ruškuc] Mathematical Proceedings of the Cambridge Philosophical Society, 141, no. 1 (2006), pp. 57–66. DOI: 10.1017/s0305004106009236. MR: 2238642. ZBL: 1115.20043. Subsemigroups of groups: presentations, Malcev presentations, and automatic structures [with E. F. Robertson & N. Ruškuc] Journal of Group Theory, 9, no. 3 (2006), pp. 397–426. DOI: 10.1515/jgt.2006.027. MR: 2226621. ZBL: 1151.20044. A group-embeddable non-automatic semigroup whose universal group is automatic Glasgow Mathematical Journal, 48, no. 2 (2006), pp. 337–342. DOI: 10.1017/s0017089506003107. MR: 2256982. ZBL: 1108.20056. Cancellativity is undecidable for automatic semigroups Quarterly Journal of Mathematics, 57, no. 3 (2006), pp. 285–295. DOI: 10.1093/qmath/hai023. MR: 2253587. ZBL: 1126.20039. Malcev presentations for subsemigroups of groups — a survey In C. M. Campbell, M. Quick, E. F. Robertson, & G. C. 
Smith, eds, Groups St Andrews 2005 (Vol. 1), no. 339 in London Mathematical Society Lecture Note Series, pp. 256–268 (Cambridge: Cambridge University Press, 2007). DOI: 10.1017/CBO9780511721212.018. MR: 2328165. Cancellative and Malcev presentations for finite Rees index subsemigroups and extensions [with E. F. Robertson & N. Ruškuc] Journal of the Australian Mathematical Society, 84, no. 1 (2008), pp. 39–61. DOI: 10.1017/s1446788708000086. MR: 2469266. ZBL: 1156.20048. Automatic presentations for cancellative semigroups [with G. Oliver, N. Ruškuc & R. M. Thomas] In C. Martín-Vide, H. Fernau, & F. Otto, eds, Language and Automata Theory and Applications: Second International Conference, Tarragona, Spain, March 13–19, 2008, no. 5196 in Lecture Notes in Computer Science, pp. 149–159 (Springer, 2008). DOI: 10.1007/978-3-540-88282-4_15. MR: 2540320. ZBL: 1157.20332. Malcev presentations for subsemigroups of direct products of coherent groups Journal of Pure and Applied Algebra, 213, no. 6 (2009), pp. 977–990. DOI: 10.1016/j.jpaa.2008.10.006. MR: 2498789. ZBL: 1178.20050. Automaton semigroups Theoretical Computer Science, 410, no. 47–49 (2009), pp. 5022–5038. DOI: 10.1016/j.tcs.2009.07.054. MR: 2583696. ZBL: 1194.68133. Automatic presentations for semigroups [with G. Oliver, N. Ruškuc & R. M. Thomas] Information and Computation, 207, no. 11 (2009), pp. 1156–1168. DOI: 10.1016/j.ic.2009.02.005. MR: 2566948. ZBL: 1192.20040. Decision problems for finitely presented and one-relation semigroups and monoids [with V. Maltcev] International Journal of Algebra and Computation, 19, no. 6 (2009), pp. 747–770. DOI: 10.1142/s0218196709005366. MR: 2572873. ZBL: 1201.20055. Monoids presented by rewriting systems and automatic structures for their submonoids International Journal of Algebra and Computation, 19, no. 6 (2009), pp. 771–790. DOI: 10.1142/s0218196709005317. MR: 2572874. ZBL: 1201.20054. Automatic semigroups and Bruck–Reilly extensions Acta Mathematica Hungarica, 126, no. 1–2 (2010), pp. 1–15. DOI: 10.1007/s10474-009-8063-8. MR: 2593314. ZBL: 1259.20061. Automatic presentations and semigroup constructions [with G. Oliver, N. Ruškuc & R. M. Thomas] Theory of Computing Systems, 47, no. 2 (2010), pp. 568–592. DOI: 10.1007/s00224-009-9216-4. MR: 2652030. ZBL: 1204.68118. Deus ex machina and the aesthetics of proof Mathematical Intelligencer, 32, no. 3 (September 2010), pp. 7–11. DOI: 10.1007/s00283-010-9141-z. MR: 2721302. ZBL: 1247.00009. An annotated translation of Yves Marie André’s Essay on Beauty (1741) (Ebook, 2010). Unary FA-presentable semigroups [with N. Ruškuc & R. M. Thomas] International Journal of Algebra and Computation, 22, no. 4 (2012). DOI: 10.1142/S0218196712500385. MR: 2946303. ZBL: 1285.03048. Green index in semigroup theory: generators, presentations, and automatic structures [with R. Gray & N. Ruškuc] Semigroup Forum, 85, no. 3 (2012), pp. 448–476. DOI: 10.1007/s00233-012-9406-2. MR: 3001595. ZBL: 1270.20059. Context-free rewriting systems and word-hyperbolic structures with uniqueness [with V. Maltcev] International Journal of Algebra and Computation, 22, no. 7 (2012). DOI: 10.1142/S0218196712500610. MR: 2999367. ZBL: 1284.68320. Nine Chapters on the Semigroup Art (Lecture notes, 2012). Hyperbolicity of monoids presented by confluent monadic rewriting systems Beiträge zur Algebra und Geometrie, 54, no. 2 (October 2013), pp. 593–608. DOI: 10.1007/s13366-012-0116-4. MR: 3095744. ZBL: 1326.20056. 
Automatic structures for subsemigroups of Baumslag–Solitar semigroups Semigroup Forum, 87, no. 3 (2013), pp. 537–552. DOI: 10.1007/s00233-013-9490-y. MR: 3128706. ZBL: 1326.20057. Finitely presented monoids with linear Dehn function need not have regular cross-sections [with V. Maltcev] Semigroup Forum, 88, no. 2 (2014), pp. 300–315. DOI: 10.1007/s00233-013-9531-6. MR: 3189098. ZBL: 1300.20057. Markov semigroups, monoids, and groups [with V. Maltcev] International Journal of Algebra and Computation, 24, no. 5 (August 2014). DOI: 10.1142/S021819671450026X. MR: 3254716. ZBL: 1325.20055. Subalgebras of FA-presentable algebras [with N. Ruškuc] Algebra Universalis, 72, no. 2 (October 2014), pp. 101–123. DOI: 10.1007/s00012-014-0293-0. MR: 3257650. ZBL: 1321.08001. Hopfian and co-hopfian subsemigroups and extensions [with V. Maltcev] Demonstratio Mathematica, 47, no. 4 (2014), pp. 791–804. DOI: 10.2478/dema-2014-0064. MR: 3290386. ZBL: 1312.20051. Finite Gröbner–Shirshov bases for Plactic algebras and biautomatic structures for Plactic monoids [with R. D. Gray & A. Malheiro] Journal of Algebra, 423 (Febuary 2015), pp. 37–53. DOI: 10.1016/j.jalgebra.2014.09.037. MR: 3283708. ZBL: 1311.20055. A simple non-bisimple congruence-free finitely presented monoid [with V. Maltcev] Semigroup Forum, 90, no. 1 (Febuary 2015), pp. 184–188. DOI: 10.1007/s00233-014-9607-y. MR: 3297818. ZBL: 1317.20049. Rewriting systems and biautomatic structures for Chinese, hypoplactic, and sylvester monoids [with R. D. Gray & A. Malheiro] International Journal of Algebra and Computation, 25, no. 1-2 (2015). DOI: 10.1142/S0218196715400044. MR: 3325877. ZBL: 1326.20058. Automaton semigroup constructions [with T. Brough] Semigroup Forum, 90, no. 3 (June 2015), pp. 763–774. DOI: 10.1007/s00233-014-9632-x. MR: 3345953. ZBL: 1336.20062. Deciding conjugacy in sylvester monoids and other homogeneous monoids [with A. Malheiro] International Journal of Algebra and Computation, 25, no. 5 (August 2015). DOI: 10.1142/S0218196715500241. MR: 3384086. ZBL: 06481131. A countable family of congruence-free finitely presented monoids [with F. Al-Kharousi, V. Maltcev & A. Umar] Acta Universitatis Szegediensis: Acta Scientiarum Mathematicarum, 81, no. 3–4 (2015), pp. 437–445. DOI: 10.14232/actasm-013-028-z. MR: 3443762. ZBL: 1363.20044. Decision problems for word-hyperbolic semigroups [with M. Pfeiffer] Journal of Algebra, 465 (November 2016), pp. 287–321. DOI: 10.1016/j.jalgebra.2016.07.007. MR: 3537824. ZBL: 06621117. Crystallizing the hypoplactic monoid: from quasi-Kashiwara operators to the Robinson–Schensted–Knuth-type correspondence for quasi-ribbon tableaux [with A. Malheiro] Journal of Algebraic Combinatorics, 45, no. 2 (March 2017), pp. 475–524. DOI: 10.1007/s10801-016-0714-6. MR: 3604065. ZBL: 1359.05135. Growths of endomorphisms of finitely generated semigroups [with V. Maltcev] Journal of the Australian Mathematical Society, 102, no. 2 (April 2017), pp. 163–184. DOI: 10.1017/S1446788716000264. MR: 3621637. ZBL: 06714329. Automaton semigroups: new constructions results and examples of non-automaton semigroups [with T. Brough] Theoretical Computer Science, 674 (2017), pp. 1–15. DOI: 10.1016/j.tcs.2017.02.003. MR: 3634703. ZBL: 06715025. On finite complete rewriting systems, finite derivation type, and automaticity for homogeneous monoids [with R. D. Gray & A. Malheiro] Information and Computation, 255, no. 1 (August 2017), pp. 68–93. Combinatorics of cyclic shifts in plactic, hypoplactic, sylvester, and related monoids [with A. 
Malheiro] In S. Brlek, F. Dolce, C. Reutenauer, & É. Vandomme, eds, Combinatorics on Words: 11th International Conference, Montréal, Canada, September 11–15, 2017, vol. 10432 of Lecture Notes in Computer Science, pp. 190–202 (Springer, 2017). Crystals and trees: quasi-Kashiwara operators, monoids of binary trees, and Robinson–Schensted-type correspondences [with A. Malheiro] Journal of Algebra, 502 (May 2018), pp. 347–381. The monoids of the patience sorting algorithm [with A. Malheiro & F. Silva] International Journal of Algebra and Computation, Forthcoming. arXiv: 1706.06884. Identities in plactic, hypoplactic, sylvester, Baxter, and related monoids [with A. Malheiro] Electronic Journal of Combinatorics, Forthcoming. arXiv: 1611.04151. Crystal monoids & crystal bases: rewriting systems and biautomatic structures for plactic monoids of types ${A}_n$, ${B}_n$, ${C}_n$, ${D}_n$, and ${G}_2$ [with R. D. Gray & A. Malheiro] Submitted. arXiv: 1412.7040. A note on identities in plactic monoids and monoids of upper-triangular tropical matrices [with G. Klein, Ł. Kubat, A. Malheiro & J. Okniński] Submitted. arXiv: 1705.04596. Combinatorics of cyclic shifts in plactic, hypoplactic, sylvester, Baxter, and related monoids [with A. Malheiro] Submitted. arXiv: 1709.03974. Combinatorics of patience sorting monoids [with A. Malheiro & F. Silva] Submitted. arXiv: 1801.05591. Visual Thinking and Simplicity in Proof Submitted. arXiv: 1803.00038. Conjugacy in patience sorting monoids [with A. Malheiro & F. Silva] Submitted. arXiv: 1803.00361. Two applications of monoid actions to cross-sections [with T. Brough & V. Maltcev] Submitted. arXiv: 1803.10747. Invited seminars & conference talks ‘Malcev presentations for subsemigroups of groups’ Invited seminar: University of Glasgow, 13th October 2004. ‘Decidability and undecidability for automatic semigroups’ Invited seminar: University of Edinburgh & Heriot–Watt University (Joint), 15th November 2005. ‘Automaton semigroups’ Invited conference talk: North Britain Semigroups and Applications Network, University of St Andrews, 16th April 2009. ‘Automatic presentations and semigroups’ Invited seminar: Centro de Álgebra da Universidade de Lisboa, 10th September 2010. ‘Hyperbolic and word-hyperbolic semigroups’ Invited conference talk: North Britain Semigroups and Applications Network, University of St Andrews, 19th May 2010. ‘Hyperbolicity for semigroups’ Invited seminar: Centro de Álgebra da Universidade de Lisboa, 11th May 2012. ‘Automatic presentations for algebraic and combinatorial structures’ Invited seminar: Centro de Matemática da Universidade de Coimbra, 20th March 2013. ‘Plactic monoids and rewriting systems’ Invited seminar: Sultan Qaboos University, 23rd April 2013. ‘Unary FA-presentable algebraic and combinatorial structures’ Invited conference talk: Workshop on Algebraic Structures and Semigroups, Centro de Álgebra da Universidade de Lisboa, 5th July 2013. ‘Endomorphisms of semigroups: growth and interactions with subsemigroups’ Invited conference talk: North Britain Semigroups and Applications Network, University of York, 20th November 2013. ‘Automaticity, finite complete rewriting systems, and finite derivation type for homogeneous monoids’ Invited seminar: Centro de Matemática da Universidade do Porto, 17th January 2014. ‘Computing with automatic and word-hyperbolic semigroups’ Invited conference talk: Workshop on Computational Algebra, Centro de Álgebra da Universidade de Lisboa, 22nd July 2014. 
‘Endomorphisms of semigroups: growth and interactions with subsemigroups’ Invited seminar: Centro de Álgebra da Universidade de Lisboa, 11th November 2014. ‘Plactic, hypoplactic, and sylvester monoids, and other homogeneous monoids’ Invited seminar: Centro de Matemática da Universidade de Coimbra, 25th February 2015. ‘Computation and conjugacy in hypoplactic and sylvester monoids and other homogenous monoids’ Invited conference talk: North Britain Semigroups and Applications Network, University of St Andrews, 25th April 2015. ‘Crystal bases, finite convergent presentations, and automaticity for plactic monoids’ Invited conference talk: International meeting of the American Mathematical Society, European Mathematical Society, and Sociedade Portuguesa de Matemática: Special Session on Algebraic Theory of Semigroups and Applications, University of Porto, 11th June 2015. ‘Quasi-ribbons & quasi-crystals’ Invited seminar: Pure Mathematics Colloquium, University of St Andrews, 24th September 2015. ‘Combinatorial and computational properties of the sylvester monoid’ Invited conference talk: 6th Iberian Mathematical Meeting, University of Santiago de Compostela, 6th October 2016. ‘Crystals, tableaux, and the (hypo)plactic monoid’ Invited seminar: Uniwersytet Warszawski, Poland, 27th October 2016. ‘Simplicity and the aesthetics of mathematics’ Invited conference talk: Third Lisbon International Conference on Philosophy of Science, 14th December 2016. ‘Combinatorics of cyclic shifts in the plactic, hypoplactic, sylvester, and related monoids’ Invited seminar: Uniwersytet Warszawski, Poland, 20th April 2017. ‘Identities in plactic, hypoplactic, sylvester, Baxter, and related monoids’ Invited conference talk: 7th Combinatorics Day, Universidade de Évora, 26th May 2017.
### Appendix D (Loss in Precision of Estimate Due to Weighting in Household Surveys)

While overall survey weights help decrease three different sources of bias (coverage, nonresponse, and sampling), the variability of the weights can also increase the sampling variance in household surveys. The following formula is a simple model for measuring the loss in precision ($$L_{w}$$) due to weighting. It assumes that the weights and the variable of interest are not related:

$$L_{w}=\left [ \frac{\sum\limits_{i=1}^n w_{i}^{2}}{\left (\sum\limits_{i=1}^n w_{i} \right )^2} \right ] n - 1$$

• For example, if $$L_{w} = .156$$, then the sampling variance of the estimate increased by 15.6% due to differential weighting.
• $$L_{w}$$ can also be calculated for subgroups.
• Note: This formula does not apply to surveys of institutions or business establishments, where differential weighting can be efficient.
• This is only one method for measuring the variability of the weights.
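As an illustration only (the function name and example weights below are my own, not part of the appendix), the formula translates directly into code:

```python
# Sketch of the weighting-loss formula: L_w = n * sum(w_i^2) / (sum(w_i))^2 - 1
def weighting_loss(weights):
    n = len(weights)
    sum_w = sum(weights)
    sum_w2 = sum(w * w for w in weights)
    return n * sum_w2 / (sum_w ** 2) - 1

# Equal weights give no loss; more variable weights inflate the variance.
print(weighting_loss([1.0, 1.0, 1.0, 1.0]))   # 0.0
print(weighting_loss([1.0, 1.0, 2.0, 4.0]))   # 0.375, i.e. +37.5% variance
```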
# Structures on a bird feather act like a reflection grating having 8000 lines per centimeter. What... ## Question: Structures on a bird feather act like a reflection grating having 8000 lines per centimeter. What is the angle of the first-order maximum for 452 nm light? ## Gratings: Gratings are special devices that are used for diffraction experiments. Compared to just a one slit or two slit system, gratings have thousands of slits. Gratings have a quantity called the diffraction ruling, which can be interpreted as how many slits the grating has per unit length of measurement. Given: • {eq}\displaystyle \frac{1}{d} = 8000\ cm^{-1} {/eq} is the diffraction ruling • {eq}\displaystyle \lambda = 452\ nm = 452\ \times\ 10^{-9}\ m {/eq} is the light wavelength Let us first note that the diffraction equation is given as: {eq}\displaystyle d\sin\theta = n\lambda {/eq} So for a first-order maximum, we have: • {eq}\displaystyle n = 1 {/eq} This leaves us with: {eq}\displaystyle d\sin\theta = \lambda {/eq} We isolate the angle here: {eq}\displaystyle \sin\theta = \frac{\lambda}{d} {/eq} {eq}\displaystyle \theta = \sin^{-1} \left(\frac{\lambda}{d} \right) {/eq} Let us first make sure that we are substituting the same units so that the units cancel out. We convert our diffraction ruling to meters: {eq}\displaystyle \frac{1}{d} = 8000\ \frac{1}{cm} \left(\frac{100\ cm}{1\ m} \right) {/eq} We get: {eq}\displaystyle \frac{1}{d} = 8000\ \frac{1}{\require{cancel}\cancel{cm}} \left(\frac{100\ \require{cancel}\cancel{cm}}{1\ m} \right) {/eq} {eq}\displaystyle \frac{1}{d} = 800,000\ m^{-1} {/eq} We substitute: {eq}\displaystyle \theta = \sin^{-1} \Big[(800,000\ m^{-1})(452\ \times\ 10^{-9}\ m)\Big] {/eq} We thus get: {eq}\displaystyle \theta = \sin^{-1} (0.3616) {/eq} {eq}\displaystyle \boxed{\theta = 21.2^\circ} {/eq}
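As a quick cross-check of the boxed answer (my own sketch, not part of the original solution):

```python
# theta = arcsin(n * lambda / d), with d = 1 cm / 8000 lines
import math

lines_per_cm = 8000
d = 1e-2 / lines_per_cm          # slit spacing in metres (1.25e-6 m)
wavelength = 452e-9              # metres
order = 1

theta = math.degrees(math.asin(order * wavelength / d))
print(round(theta, 1))           # 21.2 degrees
```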
# April, 2014 ## Simplifying algebraic fractions (GCSE algebra) Towards the end of a GCSE paper, you're quite frequently asked to simplify an algebraic fraction like: $\frac{4x^2 + 12x - 7}{2x^2 + 5x - 3}$ Hold back the tears, dear students, hold back the tears. These are easier than they look. There's one thing you need to know: algebraic ## A student asks: Why does variance do THAT? A student asks: The mark scheme says $Var(2 - 3X) = 9 Var(X)$. Where on earth does that come from? Great question, which I'm going to answer in two ways. Firstly, there's the instinctive reasoning; secondly, there's the maths behind it, just to make sure. Instinctively Well, instinctively, you'd think ## Book review: Vedic Mathematics I wanted - I really, really wanted - to like this book. On the surface, it's exactly my cup of tea: a whole book of tricks to make mental arithmetic easy. Sadly, there's so much about it that's dreadful that the nuggets inside it are hardly worth the effort. The ## The Sausage Rule: Secrets of the Mathematical Ninja The student stared, blankly, at the sine rule problem in front of him. $\frac{15}{\sin(A)} = \frac{20}{\sin(50^º)}$ "I don't know where to st," he started whining as something flew past his head. He knew better than to turn and look at whatever implement of death and destruction he had dodged. "I ## Dealing with nasty powers There's nearly always a question on the non-calculator GCSE paper about Nasty Powers. I'm not talking about the Evil Empire or anything, I just mean powers that aren't nice - we can all deal with positive integer powers, it's the zeros, the negatives and the fractions that get us down. ## The square root of three: Secrets of the Mathematical Ninja "$45 \cos($ thir... I mean $\frac{\pi}{6})$," said the student, catching himself just before the axe reached his shoulder." "Thirty-nine," said the Mathematical Ninja, without a pause. "A tiny bit less." The student raised an eyebrow as a request to check on the calculator, and the Mathematical Ninja nodded in assent. ## The semicircle puzzle In a recent episode of Wrong, But Useful, I asked: A square is inscribed within a circle of radius $r$. A second square is inscribed within a semicircle of the same radius. What is the ratio of the areas of the squares? It's easy enough to find the side length ## How to remember which way the skewness goes OK, this is a quick and dirty trick of the sort that I love and the Mathematical Ninja hates. He doesn't have much time for stats at all, truth be told, least of all skewness. However, I've had several students struggle to remember 'which way is which' when it comes ## How The Mathematical Ninja Divides By 49 "... which works out to be $\frac{13}{49}$," said the student, carefully avoiding any calculator use. "Which is $0.265306122...$", said the Mathematical Ninja, with the briefest of pauses after the 5. "I presume you could go on?" "$...448979591...$" "All right, all right, all right. I suppose you're going to tell me
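As a quick check of the fraction posed in the first excerpt above (my own sketch, not from the blog), sympy factors and cancels it:

```python
# Simplifying (4x^2 + 12x - 7) / (2x^2 + 5x - 3)
from sympy import symbols, factor, cancel

x = symbols("x")
numerator = 4*x**2 + 12*x - 7
denominator = 2*x**2 + 5*x - 3

print(factor(numerator))                 # (2*x - 1)*(2*x + 7)
print(factor(denominator))               # (2*x - 1)*(x + 3)
print(cancel(numerator / denominator))   # (2*x + 7)/(x + 3)
```

The common factor $(2x-1)$ cancels, leaving $\frac{2x+7}{x+3}$.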
# Smoothness in Mersenne numbers? The $n$-th Mersenne number $M_n$ is defined as $$M_n=2^n-1$$ A great deal of research focuses on Mersenne primes. What is known in the opposite direction about Mersenne numbers with only small factors (i.e. smooth numbers)? In particular, if we let $P_n$ denote the largest prime factor of $M_n$, are any results known of the form $$\liminf_{n\rightarrow \infty}\frac{P_n}{f(n)}= 1$$ for some function $f$? I've only come across two (fairly distant) bounds so far. If we consider even-valued $n$, then $M_n=M_{n/2}(M_{n/2}+2)$, so: $$\liminf_{n\rightarrow \infty}\frac{P_n}{2^{n/2}}\leq 1$$ In the other direction, [1] shows that $P_n\geq 2n+1$ for $n>12$, so $$\liminf_{n\rightarrow \infty}\frac{P_n}{2n}\geq 1$$ [1] A. Schinzel, On primitive prime factors of $a^n-b^n$, Proc. Cambridge Philos. Soc. 58 (1962), 555-562. - I can give you a slightly better upper bound. Recall that $2^n - 1 = \prod_{d | n} \Phi_d(2)$ where $\Phi_d$ is a cyclotomic polynomial. Now, $$\Phi_d(2) = \prod_{(k, d) = 1} (2 - \zeta_d^k) \le 3^{\varphi(d)}$$ so that in particular the largest prime factor of $2^n - 1$ is at most $3^{\varphi(n)}$. By taking $n$ to be a product of the first $k$ primes and letting $k$ tend to infinity we have $\liminf_{n \to \infty} \frac{\varphi(n)}{n} = 0$, hence $$\liminf_{n \to \infty} \frac{P_n}{c^n} = 0$$ for any $c > 1$. In fact if $n$ is the product of the first $k$ primes then we should expect something like $3^{\varphi(n)} \approx 3^{ \frac{n}{\log k} }$ but this doesn't seem like a big improvement to me. - I guess lower bounds on the largest prime factor of Mersenne numbers are not only interesting in number theory but also in coding theory (see this article of K. Kedlaya and S. Yekhanin here). They say the current strongest lower bound is $$P_n>\epsilon(n)n\log^2(n)/\log\log(n)$$ for all $n$ except for a set of asymptotic density zero and all functions $\epsilon$ that tend to zero monotonically and arbitraily slowly, and is due to C. Stewart. See his articles "The greatest prime factor of $a^n-b^n$" and "On divisors of Fermat, Fibonacci, Lucas, and Lehmer numbers". -
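A small sanity check (my own sketch, assuming sympy is available) of the factorisation $2^n - 1 = \prod_{d \mid n} \Phi_d(2)$ and the bound $\Phi_d(2) \le 3^{\varphi(d)}$ used in the first answer:

```python
# Verify the cyclotomic factorisation of Mersenne numbers and the 3^phi(d) bound
from math import prod
from sympy import cyclotomic_poly, divisors, totient

for n in range(2, 30):
    parts = {d: int(cyclotomic_poly(d, 2)) for d in divisors(n)}
    assert prod(parts.values()) == 2**n - 1
    assert all(value <= 3**totient(d) for d, value in parts.items())
print("checks passed")
```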
# 7.E: Trigonometric Functions (Exercises) - Mathematics We are searching data for your request: Forums and discussions: Manuals and reference books: Data from registers: Wait the end of the search in all databases. Upon completion, a link will appear to access the found materials. ## 5.1: Angles In this section, we will examine properties of angles. ### Verbal 1) Draw an angle in standard position. Label the vertex, initial side, and terminal side. 2) Explain why there are an infinite number of angles that are coterminal to a certain angle. 3) State what a positive or negative angle signifies, and explain how to draw each. Whether the angle is positive or negative determines the direction. A positive angle is drawn in the counterclockwise direction, and a negative angle is drawn in the clockwise direction. 4) How does radian measure of an angle compare to the degree measure? Include an explanation of (1) radian in your paragraph. 5) Explain the differences between linear speed and angular speed when describing motion along a circular path. Linear speed is a measurement found by calculating distance of an arc compared to time. Angular speed is a measurement found by calculating the angle of an arc compared to time. ### Graphical For the exercises 6-21, draw an angle in standard position with the given measure. 6) (30^{circ}) 7) (300^{circ}) 8) (-80^{circ}) 9) (135^{circ}) 10) (-150^{circ}) 11) (dfrac{2π}{3}) 12) (dfrac{7π}{4}) 13) (dfrac{5π}{6}) 14) (dfrac{π}{2}) 15) (−dfrac{π}{10}) 16) (415^{circ}) 17) (-120^{circ}) (240^{circ}) 18) (-315^{circ}) 19)(dfrac{22π}{3}) (dfrac{4π}{3}) 20) (−dfrac{π}{6}) 21) (−dfrac{4π}{3}) (dfrac{2π}{3}) For the exercises 22-23, refer to Figure below. Round to two decimal places. 22) Find the arc length. 23) Find the area of the sector. (dfrac{27π}{2}≈11.00 ext{ in}^2) For the exercises 24-25, refer to Figure below. Round to two decimal places. 24) Find the arc length. 25) Find the area of the sector. (dfrac{81π}{20}≈12.72 ext{ cm}^2) ### Algebraic For the exercises 26-32, convert angles in radians to degrees. (20^{circ}) (60^{circ}) (-75^{circ}) For the exercises 33-39, convert angles in degrees to radians. 33) (90^{circ}) 34) (100^{circ}) 35) (-540^{circ}) 36) (-120^{circ}) 37) (180^{circ}) 38) (-315^{circ}) 39) (150^{circ}) For the exercises 40-45, use to given information to find the length of a circular arc. Round to two decimal places. 40) Find the length of the arc of a circle of radius (12) inches subtended by a central angle of (dfrac{π}{4}) radians. 41) Find the length of the arc of a circle of radius (5.02) miles subtended by the central angle of (dfrac{π}{3}). (dfrac{5.02π}{3}≈5.26) miles 42) Find the length of the arc of a circle of diameter (14) meters subtended by the central angle of (dfrac{5pi }{6}). 43) Find the length of the arc of a circle of radius (10) centimeters subtended by the central angle of (50^{circ}). (dfrac{25π}{9}≈8.73) centimeters 44) Find the length of the arc of a circle of radius (5) inches subtended by the central angle of (220^{circ}). 45) Find the length of the arc of a circle of diameter (12) meters subtended by the central angle is (63^{circ}). (dfrac{21π}{10}≈6.60) meters For the exercises 46-49, use the given information to find the area of the sector. Round to four decimal places. 46) A sector of a circle has a central angle of (45^{circ}) and a radius (6) cm. 47) A sector of a circle has a central angle of (30^{circ}) and a radius of (20) cm. 
(104.7198; cm^2) 48) A sector of a circle with diameter (10) feet and an angle of (dfrac{π}{2}) radians. 49) A sector of a circle with radius of (0.7) inches and an angle of (π) radians. (0.7697; in^2) For the exercises 50-53, find the angle between (0^{circ}) and (360^{circ}) that is coterminal to the given angle. 50) (-40^{circ}) 51) (-110^{circ}) (250^{circ}) 52) (700^{circ}) 53) (1400^{circ}) (320^{circ}) For the exercises 54-57, find the angle between (0) and (2pi ) in radians that is coterminal to the given angle. 54) (−dfrac{π}{9}) 55) (dfrac{10π}{3}) (dfrac{4π}{3}) 56) (dfrac{13π}{6}) 57) (dfrac{44π}{9}) (dfrac{8π}{9}) ### Real-World Applications 58) A truck with (32)-inch diameter wheels is traveling at (60) mi/h. Find the angular speed of the wheels in rad/min. How many revolutions per minute do the wheels make? 59) A bicycle with (24)-inch diameter wheels is traveling at (15) mi/h. How many revolutions per minute do the wheels make? 60) A wheel of radius (8) inches is rotating (15^{circ}/s). What is the linear speed (v), the angular speed in RPM, and the angular speed in rad/s? 61) A wheel of radius (14) inches is rotating (0.5 ext{rad/s}). What is the linear speed (v), the angular speed in RPM, and the angular speed in deg/s? (7) in./s, (4.77) RPM, (28.65) deg/s 62) A CD has diameter of (120) millimeters. When playing audio, the angular speed varies to keep the linear speed constant where the disc is being read. When reading along the outer edge of the disc, the angular speed is about (200) RPM (revolutions per minute). Find the linear speed. 63) When being burned in a writable CD-R drive, the angular speed of a CD is often much faster than when playing audio, but the angular speed still varies to keep the linear speed constant where the disc is being written. When writing along the outer edge of the disc, the angular speed of one drive is about (4800) RPM (revolutions per minute). Find the linear speed if the CD has diameter of (120) millimeters. (1,809,557.37 ext{ mm/min}=30.16 ext{ m/s}) 64) A person is standing on the equator of Earth (radius (3960) miles). What are his linear and angular speeds? 65) Find the distance along an arc on the surface of Earth that subtends a central angle of (5) minutes ((1 ext{ minute}=dfrac{1}{60} ext{ degree})). The radius of Earth is (3960) miles. (5.76) miles 66) Find the distance along an arc on the surface of Earth that subtends a central angle of (7) minutes ((1 ext{ minute}=dfrac{1}{60} ext{ degree})). The radius of Earth is (3960) miles. 67) Consider a clock with an hour hand and minute hand. What is the measure of the angle the minute hand traces in (20) minutes? (120°) ### Extensions 68) Two cities have the same longitude. The latitude of city A is (9.00) degrees north and the latitude of city B is (30.00) degree north. Assume the radius of the earth is (3960) miles. Find the distance between the two cities. 69) A city is located at (40) degrees north latitude. Assume the radius of the earth is (3960) miles and the earth rotates once every (24) hours. Find the linear speed of a person who resides in this city. (794) miles per hour 70) A city is located at (75) degrees north latitude. Find the linear speed of a person who resides in this city. 71) Find the linear speed of the moon if the average distance between the earth and moon is (239,000) miles, assuming the orbit of the moon is circular and requires about (28) days. Express answer in miles per hour. (2,234) miles per hour 72) A bicycle has wheels (28) inches in diameter. 
A tachometer determines that the wheels are rotating at (180) RPM (revolutions per minute). Find the speed the bicycle is traveling down the road. 73) A car travels (3) miles. Its tires make (2640) revolutions. What is the radius of a tire in inches? (11.5) inches 74) A wheel on a tractor has a (24)-inch diameter. How many revolutions does the wheel make if the tractor travels (4) miles? ## 5.2: Unit Circle - Sine and Cosine Functions ### Verbal 1) Describe the unit circle. The unit circle is a circle of radius (1) centered at the origin. 2) What do the (x)- and (y)-coordinates of the points on the unit circle represent? 3) Discuss the difference between a coterminal angle and a reference angle. Coterminal angles are angles that share the same terminal side. A reference angle is the size of the smallest acute angle, (t), formed by the terminal side of the angle (t) and the horizontal axis. 4) Explain how the cosine of an angle in the second quadrant differs from the cosine of its reference angle in the unit circle. 5) Explain how the sine of an angle in the second quadrant differs from the sine of its reference angle in the unit circle. The sine values are equal. ### Algebraic For the exercises 6-9, use the given sign of the sine and cosine functions to find the quadrant in which the terminal point determined by (t) lies. 6) ( sin (t)<0) and ( cos (t)<0) 7) ( sin (t)>0) and ( cos (t)>0) ( extrm{I}) 8) ( sin (t)>0 ) and ( cos (t)<0) 9) ( sin (t)<0 ) and ( cos (t)>0) ( extrm{IV}) For the exercises 10-22, find the exact value of each trigonometric function. 10) (sin dfrac{π}{2}) 11) (sin dfrac{π}{3}) (dfrac{sqrt{3}}{2}) 12) ( cos dfrac{π}{2}) 13) ( cos dfrac{π}{3}) (dfrac{1}{2}) 14) ( sin dfrac{π}{4}) 15) ( cos dfrac{π}{4}) (dfrac{sqrt{2}}{2}) 16) ( sin dfrac{π}{6}) 17) ( sin π) (0) 18) ( sin dfrac{3π}{2}) 19) ( cos π) (−1) 20) ( cos 0) 21) (cos dfrac{π}{6}) (dfrac{sqrt{3}}{2}) 22) ( sin 0) ### Numeric For the exercises 23-33, state the reference angle for the given angle. 23) (240°) (60°) 24) (−170°) 25) (100°) (80°) 26) (−315°) 27) (135°) (45°) 28) (dfrac{5π}{4}) 29) (dfrac{2π}{3}) (dfrac{π}{3}) 30) (dfrac{5π}{6}) 31) (−dfrac{11π}{3}) (dfrac{π}{3}) 32) (dfrac{−7π}{4}) 33) (dfrac{−π}{8}) (dfrac{π}{8}) For the exercises 34-49, find the reference angle, the quadrant of the terminal side, and the sine and cosine of each angle. If the angle is not one of the angles on the unit circle, use a calculator and round to three decimal places. 
34) (225°) 35) (300°) (60°), Quadrant IV, ( sin (300°)=−dfrac{sqrt{3}}{2}, cos (300°)=dfrac{1}{2}) 36) (320°) 37) (135°) (45°), Quadrant II, ( sin (135°)=dfrac{sqrt{2}}{2}, cos (135°)=−dfrac{sqrt{2}}{2}) 38) (210°) 39) (120°) (60°), Quadrant II, (sin (120°)=dfrac{sqrt{3}}{2}), (cos (120°)=−dfrac{1}{2}) 40) (250°) 41) (150°) (30°), Quadrant II, ( sin (150°)=frac{1}{2}), (cos(150°)=−dfrac{sqrt{3}}{2}) 42) (dfrac{5π}{4}) 43) (dfrac{7π}{6}) (dfrac{π}{6}), Quadrant III, (sin left( dfrac{7π}{6} ight )=−dfrac{1}{2}), (cos left (dfrac{7π}{6} ight)=−dfrac{sqrt{3}}{2}) 44) (dfrac{5π}{3}) 45) (dfrac{3π}{4}) (dfrac{π}{4}), Quadrant II, (sin left(dfrac{3π}{4} ight)=dfrac{sqrt{2}}{2}), (cosleft(dfrac{4π}{3} ight)=−dfrac{sqrt{2}}{2}) 46) (dfrac{4π}{3}) 47) (dfrac{2π}{3}) (dfrac{π}{3}), Quadrant II, ( sin left(dfrac{2π}{3} ight)=dfrac{sqrt{3}}{2}), ( cos left(dfrac{2π}{3} ight)=−dfrac{1}{2}) 48) (dfrac{5π}{6}) 49) (dfrac{7π}{4}) (dfrac{π}{4}), Quadrant IV, ( sin left(dfrac{7π}{4} ight)=−dfrac{sqrt{2}}{2}), ( cos left(dfrac{7π}{4} ight)=dfrac{sqrt{2}}{2}) For the exercises 50-59, find the requested value. 50) If (cos (t)=dfrac{1}{7}) and (t) is in the (4^{th}) quadrant, find ( sin (t)). 51) If ( cos (t)=dfrac{2}{9}) and (t) is in the (1^{st}) quadrant, find (sin (t)). (dfrac{sqrt{77}}{9}) 52) If (sin (t)=dfrac{3}{8}) and (t) is in the (2^{nd}) quadrant, find ( cos (t)). 53) If ( sin (t)=−dfrac{1}{4}) and (t) is in the (3^{rd}) quadrant, find (cos (t)). (−dfrac{sqrt{15}}{4}) 54) Find the coordinates of the point on a circle with radius (15) corresponding to an angle of (220°). 55) Find the coordinates of the point on a circle with radius (20) corresponding to an angle of (120°). ((−10,10sqrt{3})) 56) Find the coordinates of the point on a circle with radius (8) corresponding to an angle of (dfrac{7π}{4}). 57) Find the coordinates of the point on a circle with radius (16) corresponding to an angle of (dfrac{5π}{9}). ((–2.778,15.757)) 58) State the domain of the sine and cosine functions. 59) State the range of the sine and cosine functions. ([–1,1]) ### Graphical For the exercises 60-79, use the given point on the unit circle to find the value of the sine and cosine of (t). 60) 61) ( sin t=dfrac{1}{2}, cos t=−dfrac{sqrt{3}}{2}) 62) 63) ( sin t=− dfrac{sqrt{2}}{2}, cos t=−dfrac{sqrt{2}}{2}) 64) 65) ( sin t=dfrac{sqrt{3}}{2},cos t=−dfrac{1}{2}) 66) 67) ( sin t=− dfrac{sqrt{2}}{2}, cos t=dfrac{sqrt{2}}{2}) 68) 69) ( sin t=0, cos t=−1) 70) 71) ( sin t=−0.596, cos t=0.803) 72) 73) (sin t=dfrac{1}{2}, cos t= dfrac{sqrt{3}}{2}) 74) 75) ( sin t=−dfrac{1}{2}, cos t= dfrac{sqrt{3}}{2} ) 76) 77) ( sin t=0.761, cos t=−0.649 ) 78) 79) ( sin t=1, cos t=0) ### Technology For the exercises 80-89, use a graphing calculator to evaluate. 80) ( sin dfrac{5π}{9}) 81) (cos dfrac{5π}{9}) (−0.1736) 82) ( sin dfrac{π}{10}) 83) ( cos dfrac{π}{10}) (0.9511) 84) ( sin dfrac{3π}{4}) 85) (cos dfrac{3π}{4}) (−0.7071) 86) ( sin 98° ) 87) ( cos 98° ) (−0.1392) 88) ( cos 310° ) 89) ( sin 310° ) (−0.7660) ### Extensions For the exercises 90-99, evaluate. 
90) ( sin left(dfrac{11π}{3} ight) cos left(dfrac{−5π}{6} ight)) 91) ( sin left(dfrac{3π}{4} ight) cos left(dfrac{5π}{3} ight) ) (dfrac{sqrt{2}}{4}) 92) ( sin left(− dfrac{4π}{3} ight) cos left(dfrac{π}{2} ight)) 93) ( sin left(dfrac{−9π}{4} ight) cos left(dfrac{−π}{6} ight)) (−dfrac{sqrt{6}}{4}) 94) ( sin left(dfrac{π}{6} ight) cos left(dfrac{−π}{3} ight) ) 95) ( sin left(dfrac{7π}{4} ight) cos left(dfrac{−2π}{3} ight) ) (dfrac{sqrt{2}}{4}) 96) ( cos left(dfrac{5π}{6} ight) cos left(dfrac{2π}{3} ight)) 97) ( cos left(dfrac{−π}{3} ight) cos left(dfrac{π}{4} ight) ) (dfrac{sqrt{2}}{4}) 98) ( sin left(dfrac{−5π}{4} ight) sin left(dfrac{11π}{6} ight)) 99) ( sin (π) sin left(dfrac{π}{6} ight) ) (0) ### Real-World Applications For the exercises 100-104, use this scenario: A child enters a carousel that takes one minute to revolve once around. The child enters at the point ((0,1)), that is, on the due north position. Assume the carousel revolves counter clockwise. 100) What are the coordinates of the child after (45) seconds? 101) What are the coordinates of the child after (90) seconds? ((0,–1)) 102) What is the coordinates of the child after (125) seconds? 103) When will the child have coordinates ((0.707,–0.707)) if the ride lasts (6) minutes? (There are multiple answers.) (37.5) seconds, (97.5) seconds, (157.5) seconds, (217.5) seconds, (277.5) seconds, (337.5) seconds 104) When will the child have coordinates ((−0.866,−0.5)) if the ride last (6) minutes? ## 5.3: The Other Trigonometric Functions ### Verbal 1) On an interval of ([ 0,2π )), can the sine and cosine values of a radian measure ever be equal? If so, where? Yes, when the reference angle is (dfrac{π}{4}) and the terminal side of the angle is in quadrants I and III. Thus, at (x=dfrac{π}{4},dfrac{5π}{4}), the sine and cosine values are equal. 2) What would you estimate the cosine of (pi ) degrees to be? Explain your reasoning. 3) For any angle in quadrant II, if you knew the sine of the angle, how could you determine the cosine of the angle? Substitute the sine of the angle in for (y) in the Pythagorean Theorem (x^2+y^2=1). Solve for (x) and take the negative solution. 4) Describe the secant function. 5) Tangent and cotangent have a period of (π). What does this tell us about the output of these functions? The outputs of tangent and cotangent will repeat every (π) units. ### Algebraic For the exercises 6-17, find the exact value of each expression. 6) ( an dfrac{π}{6}) 7) (sec dfrac{π}{6}) (dfrac{2sqrt{3}}{3}) 8) ( csc dfrac{π}{6}) 9) ( cot dfrac{π}{6}) (sqrt{3}) 10) ( an dfrac{π}{4}) 11) ( sec dfrac{π}{4}) (sqrt{2}) 12) ( csc dfrac{π}{4}) 13) ( cot dfrac{π}{4}) (1) 14) ( an dfrac{π}{3}) 15) ( sec dfrac{π}{3}) (2) 16) ( csc dfrac{π}{3}) 17) ( cot dfrac{π}{3}) (dfrac{sqrt{3}}{3}) For the exercises 18-48, use reference angles to evaluate the expression. 18) ( an dfrac{5π}{6}) 19) ( sec dfrac{7π}{6}) (−dfrac{2sqrt{3}}{3}) 20) ( csc dfrac{11π}{6}) 21) ( cot dfrac{13π}{6}) (sqrt{3}) 22) ( an dfrac{7π}{4}) 23) ( sec dfrac{3π}{4}) (−sqrt{2}) 24) ( csc dfrac{5π}{4}) 25) ( cot dfrac{11π}{4}) (−1) 26) ( an dfrac{8π}{3}) 27) ( sec dfrac{4π}{3}) (−2) 28) ( csc dfrac{2π}{3}) 29) ( cot dfrac{5π}{3}) (−dfrac{sqrt{3}}{3}) 30) ( an 225°) 31) ( sec 300°) (2) 32) ( csc 150°) 33) ( cot 240°) (dfrac{sqrt{3}}{3}) 34) ( an 330°) 35) ( sec 120°) (−2) 36) ( csc 210°) 37) ( cot 315°) (−1) 38) If ( sin t= dfrac{3}{4}), and (t) is in quadrant II, find ( cos t, sec t, csc t, an t, cot t ). 
39) If ( cos t=−dfrac{1}{3},) and (t) is in quadrant III, find ( sin t, sec t, csc t, an t, cot t). If (sin t=−dfrac{2sqrt{2}}{3}, sec t=−3, csc t=−csc t=−dfrac{3sqrt{2}}{4}, an t=2sqrt{2}, cot t= dfrac{sqrt{2}}{4}) 40) If ( an t=dfrac{12}{5},) and (0≤t< dfrac{π}{2}), find ( sin t, cos t, sec t, csc t,) and (cot t). 41) If ( sin t= dfrac{sqrt{3}}{2}) and ( cos t=dfrac{1}{2},) find ( sec t, csc t, an t,) and ( cot t). ( sec t=2, csc t=csc t=dfrac{2sqrt{3}}{3}, an t= sqrt{3}, cot t= dfrac{sqrt{3}}{3}) 42) If ( sin 40°≈0.643 ; cos 40°≈0.766 ; sec 40°,csc 40°, an 40°, ext{ and } cot 40°). 43) If ( sin t= dfrac{sqrt{2}}{2},) what is the ( sin (−t))? (−dfrac{sqrt{2}}{2}) 44) If ( cos t= dfrac{1}{2},) what is the ( cos (−t))? 45) If ( sec t=3.1,) what is the ( sec (−t))? (3.1) 46) If ( csc t=0.34,) what is the ( csc (−t))? 47) If ( an t=−1.4,) what is the ( an (−t))? (1.4) 48) If ( cot t=9.23,) what is the ( cot (−t))? ### Graphical For the exercises 49-51, use the angle in the unit circle to find the value of the each of the six trigonometric functions. 49) ( sin t= dfrac{sqrt{2}}{2}, cos t= dfrac{sqrt{2}}{2}, an t=1,cot t=1,sec t= sqrt{2}, csc t= csc t= sqrt{2} ) 50) 51) ( sin t=−dfrac{sqrt{3}}{2}, cos t=−dfrac{1}{2}, an t=sqrt{3}, cot t= dfrac{sqrt{3}}{3}, sec t=−2, csc t=−csc t=−dfrac{2sqrt{3}}{3} ) ### Technology For the exercises 52-61, use a graphing calculator to evaluate. 52) ( csc dfrac{5π}{9}) 53) ( cot dfrac{4π}{7}) (–0.228) 54) ( sec dfrac{π}{10}) 55) ( an dfrac{5π}{8}) (–2.414) 56) ( sec dfrac{3π}{4}) 57) ( csc dfrac{π}{4}) (1.414) 58) ( an 98°) 59) ( cot 33°) (1.540) 60) ( cot 140°) 61) ( sec 310° ) (1.556) ### Extensions For the exercises 62-69, use identities to evaluate the expression. 62) If ( an (t)≈2.7,) and ( sin (t)≈0.94,) find ( cos (t)). 63) If ( an (t)≈1.3,) and ( cos (t)≈0.61), find ( sin (t)). ( sin (t)≈0.79 ) 64) If ( csc (t)≈3.2,) and ( csc (t)≈3.2,) and ( cos (t)≈0.95,) find ( an (t)). 65) If ( cot (t)≈0.58,) and ( cos (t)≈0.5,) find ( csc (t)). ( csc (t)≈1.16) 66) Determine whether the function (f(x)=2 sin x cos x) is even, odd, or neither. 67) Determine whether the function (f(x)=3 sin ^2 x cos x + sec x) is even, odd, or neither. even 68) Determine whether the function (f(x)= sin x −2 cos ^2 x ) is even, odd, or neither. 69) Determine whether the function (f(x)= csc ^2 x+ sec x) is even, odd, or neither. even For the exercises 70-71, use identities to simplify the expression. 70) ( csc t an t) 71) ( dfrac{sec t}{ csc t}) ( dfrac{ sin t}{ cos t}= an t) ### Real-World Applications 72) The amount of sunlight in a certain city can be modeled by the function (h=15 cos left(dfrac{1}{600}d ight),) where (h) represents the hours of sunlight, and (d) is the day of the year. Use the equation to find how many hours of sunlight there are on February 10, the (42^{nd}) day of the year. State the period of the function. 73) The amount of sunlight in a certain city can be modeled by the function (h=16 cos left(dfrac{1}{500}d ight)), where (h) represents the hours of sunlight, and (d) is the day of the year. Use the equation to find how many hours of sunlight there are on September 24, the (267^{th}) day of the year. State the period of the function. (13.77) hours, period: (1000π) 74) The equation (P=20 sin (2πt)+100) models the blood pressure, (P), where (t) represents time in seconds. 1. Find the blood pressure after (15) seconds. 2. What are the maximum and minimum blood pressures? 
75) The height of a piston, (h), in inches, can be modeled by the equation (y=2 cos x+6,) where (x) represents the crank angle. Find the height of the piston when the crank angle is (55°). (7.73) inches 76) The height of a piston, (h),in inches, can be modeled by the equation (y=2 cos x+5,) where (x) represents the crank angle. Find the height of the piston when the crank angle is (55°). ## 5.4: Right Triangle Trigonometry ### Verbal 1) For the given right triangle, label the adjacent side, opposite side, and hypotenuse for the indicated angle. 2) When a right triangle with a hypotenuse of (1) is placed in the unit circle, which sides of the triangle correspond to the (x)- and (y)-coordinates? 3) The tangent of an angle compares which sides of the right triangle? The tangent of an angle is the ratio of the opposite side to the adjacent side. 4) What is the relationship between the two acute angles in a right triangle? 5) Explain the cofunction identity. For example, the sine of an angle is equal to the cosine of its complement; the cosine of an angle is equal to the sine of its complement. ### Algebraic For the exercises 6-9, use cofunctions of complementary angles. 6) ( cos (34°)= sin (\_\_°)) 7) ( cos (dfrac{π}{3})= sin (\_\_\_) ) (dfrac{π}{6}) 8) ( csc (21°) = sec (\_\_\_°)) 9) ( an (dfrac{π}{4})= cot (\_\_)) (dfrac{π}{4}) For the exercises 10-16, find the lengths of the missing sides if side (a) is opposite angle (A), side (b) is opposite angle (B), and side (c) is the hypotenuse. 10) ( cos B= dfrac{4}{5},a=10) 11) ( sin B= dfrac{1}{2}, a=20) (b= dfrac{20sqrt{3}}{3},c= dfrac{40sqrt{3}}{3}) 12) ( an A= dfrac{5}{12},b=6) 13) ( an A=100,b=100) (a=10,000,c=10,000.5) 14) (sin B=dfrac{1}{sqrt{3}}, a=2 ) 15) (a=5, ∡ A=60^∘) (b=dfrac{5sqrt{3}}{3},c=dfrac{10sqrt{3}}{3}) 16) (c=12, ∡ A=45^∘) ### Graphical For the exercises 17-22, use Figure below to evaluate each trigonometric function of angle (A). 17) (sin A) (dfrac{5sqrt{29}}{29}) 18) ( cos A ) 19) ( an A ) (dfrac{5}{2}) 20) (csc A ) 21) ( sec A ) (dfrac{sqrt{29}}{2}) 22) ( cot A ) For the exercises 23-,28 use Figure below to evaluate each trigonometric function of angle (A). 23) ( sin A) (dfrac{5sqrt{41}}{41}) 24) ( cos A) 25) ( an A ) (dfrac{5}{4}) 26) ( csc A) 27) ( sec A) (dfrac{sqrt{41}}{4}) 28) (cot A) For the exercises 29-31, solve for the unknown sides of the given triangle. 29) (c=14, b=7sqrt{3}) 30) 31) (a=15, b=15 ) ### Technology For the exercises 32-41, use a calculator to find the length of each side to four decimal places. 32) 33) (b=9.9970, c=12.2041) 34) 35) (a=2.0838, b=11.8177) 36) 37) (b=15, ∡B=15^∘) (a=55.9808,c=57.9555) 38) (c=200, ∡B=5^∘) 39) (c=50, ∡B=21^∘) (a=46.6790,b=17.9184) 40) (a=30, ∡A=27^∘) 41) (b=3.5, ∡A=78^∘) (a=16.4662,c=16.8341) ### Extensions 42) Find (x). 43) Find (x). (188.3159) 44) Find (x). 45) Find (x). (200.6737) 46) A radio tower is located (400) feet from a building. From a window in the building, a person determines that the angle of elevation to the top of the tower is (36°), and that the angle of depression to the bottom of the tower is (23°). How tall is the tower? 47) A radio tower is located (325) feet from a building. From a window in the building, a person determines that the angle of elevation to the top of the tower is (43°), and that the angle of depression to the bottom of the tower is (31°). How tall is the tower? (498.3471) ft 48) A (200)-foot tall monument is located in the distance. 
From a window in a building, a person determines that the angle of elevation to the top of the monument is (15°), and that the angle of depression to the bottom of the tower is (2°). How far is the person from the monument? 49) A (400)-foot tall monument is located in the distance. From a window in a building, a person determines that the angle of elevation to the top of the monument is (18°), and that the angle of depression to the bottom of the monument is (3°). How far is the person from the monument? (1060.09) ft 50) There is an antenna on the top of a building. From a location (300) feet from the base of the building, the angle of elevation to the top of the building is measured to be (40°). From the same location, the angle of elevation to the top of the antenna is measured to be (43°). Find the height of the antenna. 51) There is lightning rod on the top of a building. From a location (500) feet from the base of the building, the angle of elevation to the top of the building is measured to be (36°). From the same location, the angle of elevation to the top of the lightning rod is measured to be (38°). Find the height of the lightning rod. (27.372) ft ### Real-World Applications 52) A (33)-ft ladder leans against a building so that the angle between the ground and the ladder is (80°). How high does the ladder reach up the side of the building? 53) A (23)-ft ladder leans against a building so that the angle between the ground and the ladder is (80°). How high does the ladder reach up the side of the building? (22.6506) ft 54) The angle of elevation to the top of a building in New York is found to be (9) degrees from the ground at a distance of (1) mile from the base of the building. Using this information, find the height of the building. 55) The angle of elevation to the top of a building in Seattle is found to be (2) degrees from the ground at a distance of (2) miles from the base of the building. Using this information, find the height of the building. (368.7633) ft 56) Assuming that a (370)-foot tall giant redwood grows vertically, if I walk a certain distance from the tree and measure the angle of elevation to the top of the tree to be (60°), how far from the base of the tree am I? ## A Graphical Approach to Algebra & Trigonometry, 7th edition A digital version of the text you can personalize and read online or offline. If your instructor has invited you to join a specific Pearson eText course for your class, you will need to purchase your eText through the course invite link they provide. Search by keyword or page number ### What's included A digital platform that offers help when and where you need it, lets you focus your study time, and provides practical learning experiences. ### What's included A digital platform that offers help when and where you need it, lets you focus your study time, and provides practical learning experiences. ### What's included A digital platform that offers help when and where you need it, lets you focus your study time, and provides practical learning experiences. ## 7.E: Trigonometric Functions (Exercises) - Mathematics $large< extbf <1.>>$ egin mbox<(g)>&y= cos^2 x,&y'=2cos x imes(-sin x) = -sin 2x.crcr mbox<(h)>&y= 5 an^2 x,&y'= 10 an x.sec^2 x.crcr mbox<(i)>&y= sin^2 2x,&y'=2sin 2x imescos 2x imes2 = 2sin 4x.crcr mbox<(j)>&y= sinpi x + cospi x,&y'=pi(cospi x - sinpi x).crcr mbox<(k)>&y= sin 3x + 2cos 4x,&y'=3 cos 3x - 8sin 4x.crcr mbox<(l)>&y= an (3x + 2),&y'=3 sec^2(3x +2). 
**2.**

\(y = x^2 \sin x\), \(y' = (x^2)'\sin x + x^2(\sin x)' = 2x\sin x + x^2\cos x\).
\(y = 3x\tan x\), \(y' = (3x)'\tan x + 3x(\tan x)' = 3\tan x + 3x\sec^2 x\).
\(y = \sin^2 x \cos^2 x\), \(y' = (2\sin x\cos x)\cos^2 x + \sin^2 x\,[2\cos x(-\sin x)] = (2\sin x\cos x)(\cos^2 x - \sin^2 x) = \sin 2x\cos 2x = \dfrac{\sin 4x}{2}\).

**3.**

The slope of the tangent to the curve \(y = 3\cos x^2\) at \(x = \dfrac{\pi}{2}\) is

The values of \(t\) where the tangent to the curve \(y = 5\sin 2t\) is horizontal are solutions of the equation
\(y' = 10\cos 2t = 0 \Rightarrow \cos 2t = 0 \Rightarrow 2t = \dfrac{\pi}{2} + k\pi,\ k = 0, \pm 1, \pm 2, \dots\)

The velocity of the given particle at time \(t\) is
\(-3\sin 2t + \sin 2t + 2t\cos 2t = -2\sin 2t + 2t\cos 2t\)
So the particle's velocity after \(2\) seconds is \(-2\sin 4 + 4\cos 4 \approx -1.10\).

\(f'(b) = -\sin b = 0 \Rightarrow b_1 = 0, b_2 = \pi, b_3 = 2\pi.\)

**8.**

(c) \(y = 3\sin(2\pi t + 3)\), \(y' = 6\pi\cos(2\pi t + 3)\).

The turning points of \(I\) can be found from the equation (note that the time \(t \ge 0\)). Hence, the smallest turning point is

\(I\) attains its first maximum value at the time \(t_0 = 1.18\) (seconds). The maximum value is \(I_{\max} = 13\sin\dfrac{\pi}{2} = 13\).

**10.**

\(V(t) = 250\sin(0.03\pi t + 1.7) \Rightarrow V'(t) = 250 \times 0.03\pi\cos(0.03\pi t + 1.7)\)

The slope of the tangent to the curve \(V(t)\) at \(t = 7.3\) (seconds) is
\(V'(7.3) = 250 \times 0.03\pi \times \cos(0.03 \times \pi \times 7.3 + 1.7) = 7.5\pi \times \cos(2.388) \approx -17.182\)

## Solve Trigonometric Equations - Problems

10 problems, with their answers, on solving trigonometric equations are presented here, with more in the applet below. This may be used as a self-test on solving trigonometric equations and, indirectly, on properties of trigonometric functions and identities. Make use of the unit circle, as it helps in locating the solutions once you have the reference angle.

Solve Trigonometric Equations, problems with answers

Problem 1: Solve the trigonometric equation and find ALL solutions. 2 cos x + 1 = 0
a: Pi / 3 + 2n*Pi , 5Pi / 3 + 2n*Pi
d: 2Pi / 3 + 2n*Pi , 4Pi / 3 + 2n*Pi

Problem 2: Solve the trigonometric equation and find ALL solutions. 3 sec² x - 4 = 0
b: Pi / 3 + 2n*Pi , 5Pi / 3 + 2n*Pi
c: Pi / 6 + 2n*Pi , 11 Pi / 6 + 2n*Pi
d: Pi / 3 + n*Pi , 5Pi / 3 + n*Pi
e: Pi / 6 + n*Pi , 11 Pi / 6 + n*Pi

Problem 3: Solve the trigonometric equation and find ALL solutions. (3 cos x + 7)(-2 sin x - 1) = 0
b: 7Pi / 6 + 2n*Pi , 11Pi / 6 + 2n*Pi
c: Pi / 3 + 2n*Pi , 2Pi / 3 + 2n*Pi
d: 7Pi / 6 + n*Pi , 11Pi / 6 + n*Pi

Problem 4: Solve the trigonometric equation and find ALL solutions. (6 tan² x - 2)(2 tan² x - 6) = 0
a: Pi / 6 + n*Pi , 5Pi / 6 + 2n*Pi , Pi / 3 + n*Pi , 2Pi / 3 + n*Pi
e: Pi / 3 + n*Pi , 2Pi / 3 + n*Pi

Problem 5: Solve the trigonometric equation and find ALL solutions in the interval [0 , 2Pi). -2 sec² x + 4 = -2 sec x

Problem 6: Solve the trigonometric equation and find ALL solutions in the interval [0 , 2Pi). 2 sin (x) cos (-x) = 2 sin (-x) sin (x)

Problem 7: Solve the trigonometric equation and find ALL solutions in the interval [0 , 2Pi). sin 2x = -sin (-x)

Problem 8: Which of these equations does not have a solution?

Problem 9: Which of these equations does not have a solution?

Problem 10: Solve the trigonometric equation and find ALL solutions in the interval [0 , 2Pi). sin² x + sin x = 6

More Trigonometric Equations Problems - Using Applet

1 - click on the button above "click here to start" to start the test and MAXIMIZE the window obtained.
2 - click on "start" on the main menu. 3 - answer the question by checking a,b,c,d or e in the lower part of the window. You can review your answers and change them by checking the desired letter. Once you have finished, press "finish" and you get a table with your answers and the right answers to compare with. To start the test with another set of questions, press "reset". More references on trigonometric equations Trigonometric Equations and The Unit Circle. ## Pythagorean Trigonometric Identity We know the Pythagoras theorem that relates the length of three sides of a right triangle. We also have gone through the famous trigonometric functions that relate the angle of a right triangle with the length of its sides. Now let us play around and relate the two to derive some exciting Pythagorean Trigonometric Identities. Let us write the trigonometric functions in the form of ‘a’ (Hypotenuse), ‘b’ (Perpendicular) and ‘c’ (Base). Remember that the perpendicular and base are defined with respect to the angle in consideration. Change the angle of consideration, the perpendicular and base will change accordingly. Hypotenuse remains the same, however. The trigonometric functions for the angle θ can be thus written as: Rearranging the above relations yields, After plugging in the above in the Pythagoras theorem, we get, To satisfy the above relation, the following identity must hold. Congratulations! we just derived the trigonometric identity which is widely used while solving complex problems that involve trigonometric functions. Take a moment and appreciate the elegance of the equation. It is fascinating to note that the sum of squares of sine and cosine of an angle is always a constant! Interestingly, we can further manipulate the above identity by dividing it by cos 2 θ on both sides. This will result in another trigonometric identity. Exercise: As an exercise derive the following trigonometric identity. You can use any of the aforementioned trigonometric identities. ## Math 141-142: Unit 6 Note : The information on this page is for the 7th edition of the textbook. This unit consists of two parts. The first part finishes the study of trigonometric identities begun in Unit 5. In this section you will use the various trigonometric identities to help solve equations involving trigonometric functions.The second part is a study of methods for solving general triangles, using the Law of Sines and the Law of Cosines. Included are many different applications, along with a short section on two new formulas for the area of a triangle. • Finding exact and/or approximate solutions to trigonometric equations using an algebraic approach. (6.7-8) • Finding approximate solutions of trigonometric equations using a graphical approach. (6.7-8) • Solving triangles using the Law of Sines (7.2) • Solving triangles using the Law of Cosines (7.3) • Applications of the Laws of Sines and Cosines (7.2 & 7.3) • Formulas for the area of a triangle (7.4) This is the final unit in the Math 141 course, which concludes with Trigonometry Final Exam. Math 142 students must also take the Trigonometry Final Exam, and then continue with Units 7, 8, & 9. ### Study Guidelines for the 7th edition of Sullivan's Precalculus These reading and problem assignments are designed to help you learn the course material. You should complete all of these problems, check your answers in the back of the textbook, and get help with the problems that you missed. 
Most of the problems are odd-numbered, so you can check the solutions in the Solutions Manual. The only way to learn mathematics is to do mathematics, so while these problems will not be collected or graded, you will probably not do well in the course if you do not complete these and check your work as described above. After completing these problems, go on to the Unit Exam Description below and follow directions.

• Section 6.7: Trigonometric Equations (I). Read and work through examples 1-6 and their matched problems.
• Practice Problems: 6.7 #1, 2, 7-51 odd, 61, 65

Read and work through examples 1-8 and their matched problems.
• Practice Problems: 6.8 #1, 2, 3, 5, 7, 11, 13, 17, 21, 25, 33, 37, 39, 41, 43, 47, 49, 51, 55, 61, 63, 67

Read and work through examples 1-7 and their matched problems.
• Try out the Law of Sines SSA applet. You can experiment with the construction to see how to get 0, 1, or 2 solutions in the SSA case.
• Practice Problems: 7.2 #1, 2, 3, 9, 11, 15, 17, 21-45 odd, 49, 51

Read and work through examples 1-3 and their matched problems.
• Practice Problems: 7.3 #1, 2, 9, 11, 13, 15, 17, 21, 25, 29, 33-47 odd

Read and work through examples 1-2 and their matched problems.
• Practice Problems: 7.4 #1, 5, 7, 9, 11, 13, 17, 19, 23

sin x = 1/2, cos x = √3/2, tan x = 1/√3, cot x = √3, sec x = 2/√3, csc x = 2

### Explanation of Solution

Given: sin x = 1/2, cos x = √3/2

Concept Used: Reciprocal identities and quotient identities of trigonometric functions.

Calculation: In order to find the values of the six trigonometric functions, use the reciprocal and quotient identities of trigonometric functions and simplify further as shown below:

tan x = sin x / cos x = (1/2) / (√3/2) = (1/2) · (2/√3) = 1/√3,
cot x = 1 / tan x = 1 / (1/√3) = √3,
sec x = 1 / cos x = 1 / (√3/2) = 2/√3,
csc x = 1 / sin x = 1 / (1/2) = 2

Thus, the values of the six trigonometric functions are given by:

sin x = 1/2, cos x = √3/2, tan x = 1/√3, cot x = √3, sec x = 2/√3, csc x = 2

## Lesson 13

The goal of this lesson is to introduce the midline and amplitude of trigonometric functions in context. The midline is given by the average of the maximum and minimum values taken by the function, while the amplitude is the distance between the maximum value and the midline or, equivalently, the distance between the midline and the minimum value. The function (f) given by (f(x) = cos(x)) has a midline of (y=0) (since the maximum value is 1 and the minimum value is -1) and an amplitude of 1. The function (g(x) = 5sin(x) - 1) has a midline of (y=-1) and an amplitude of 5. In general, the function (h) given by (h(x) = a cos(x) + b) has a midline of (y=b) and an amplitude of (|a|).

The midline is a new feature of trigonometric functions. The amplitude, on the other hand, relates to work students have done in a previous unit using vertical scale factors. For example, the amplitude of the function (g) given by (g(x) = 5 sin(x)) is 5. The graph of (g) is the graph of (h(x) = sin(x)) after it has been stretched vertically by a factor of 5. Said another way, the outputs of (g(x)) are 5 times farther from the (x)-axis than the outputs of (h(x)) for the same input values.

Students reason abstractly and quantitatively when they interpret a trigonometric function in the context of a rotating windmill blade (MP2). They represent the function in three different ways, including a table, a graph, and an equation.
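The relationship between the parameters and the midline and amplitude described above is easy to confirm numerically. Here is a short Python sketch (an illustration added here, not part of the lesson materials) that estimates both quantities from sampled values and matches them against the midline b and the amplitude |a|:

```python
import numpy as np

def midline_and_amplitude(f, xs):
    """Estimate the midline (average of max and min) and the amplitude
    (distance from the midline up to the max) from sampled outputs."""
    ys = f(xs)
    midline = (ys.max() + ys.min()) / 2
    amplitude = ys.max() - midline
    return midline, amplitude

xs = np.linspace(0, 2 * np.pi, 1001)
print(midline_and_amplitude(lambda x: 5 * np.sin(x) - 1, xs))  # ~ (-1.0, 5.0)
print(midline_and_amplitude(lambda x: 3 * np.cos(x) + 2, xs))  # a=3, b=2 -> ~ (2.0, 3.0)
```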
Students make use of repeated reasoning to determine the effect of different parameters on the amplitude and midline of trigonometric functions (MP8). Throughout this lesson, students should have access to their unit circle and graph of sine and cosine displays. ## 7.E: Trigonometric Functions (Exercises) - Mathematics Note : The information on this page is for the 7th edition of the textbook. Topics Unit 1 begins with a discussion of angles and various ways to measure angles: radians, decimal degrees, and degrees-minutes-seconds. Then the trigonometric functions are defined in terms of the unit circle, rather than in terms of right triangles (which you may have seen before). The connection with right triangles will appear in Unit 2. It is Math Department policy that students should be able to compute the exact values of all the trigonometric functions at the "standard" angles, i.e., all multiples of pi/6 and pi/4 radians and 30 and 45 degrees. Therefore, no calculators will be allowed on the Unit 1 Exam . Problems from 5.1-3 involving calculators will be tested in Unit 2. • Angles and their Measure (5.1) • (Decimal) degrees • Degrees-minutes-seconds • Conversions between radians and degrees for standard angles (all multiples of pi/6 and pi/4 radians and 30 and 45 degrees) • Exact values for particular real numbers (angles) • Evaluation of trigonometric functions using a circle • Basic properties and identities • Solving "inverse" problems: given the value of a trig function, what is the corresponding angle? ### Study Guidelines for the 7th edition of Sullivan's Precalculus These reading and problem assignments are designed to help you learn the course material. You should complete all of these problems, check your answers in the back of the textbook, and get help with the problems that you missed. Most of the problems are odd-numbered, so you can check the solutions in the Solutions Manual . The only way to learn mathematics is to do mathematics, so while these problems will not be collected or graded, you will probably not do well in the course if you do not complete these and check your work as described above. After completing these problems, go on to the Unit Exam Description below and follow directions. • Section 5.1: Angles and Their Measure • Reading : section 5.1, pages 324-331 (through example 5) Read and work through examples 1-5 and their corresponding matched problems. • Problems that require calculators will be tested in Unit 2. • Problems involving arc length, areas of sectors, and circular motion will also be tested in Unit 2. • Practice Problems : 5.1 #1, 2, 11-21 odd, 35-57 odd • Reading : section 5.2, pages 338-349, 350-351 (skip the section titled "Using a Calculator to Find Values of Trigonometric Functions") Read and work through examples 1-10 and example 12 and their corresponding matched problems. • Problems that require calculators will be tested in Unit 2. • You can download a copy of the unit circle with the values of sin and cos for the standard angles. • Practice Problems : 5.2 #1-6, 11-65 odd, 83-111 odd Read and work through examples 1-7 and their corresponding matched problems. • Practice Problems : 5.3 #1-4, 11-57 odd, 59, 63, 67, 71, 75, 79, 83, 87, 91, 93 • Student Solutions Manual • Algebra Review booklet • CD lecture series (step-by-step video examples on CD) • For tutoring help, visit the Prentice Hall Tutor Center. Tutors can be contacted by phone, fax, or e-mail. To register, you will need the access code that came with your textbook. 
• Graphing Calculator Help Unit 1 Pretest and Exam Description After completing the above work, do the following: 1. Before taking the Unit 1 Exam, you should have completed the Online Testing Practice . If not, then do so now. 2. Read the exam description : • This exam has 25 questions, and will count 20 points toward your grade. • This exam has a one hour time limit. • It is Math Department policy that students should be able to compute the values of all the trigonometric functions at the standard angles, i.e., all multiples of pi/6 and pi/4 radians and 30 and 45 degrees. Therefore, calculators will not be allowed on the Unit 1 Exam . Thus, for example, there will be no problems like #23-34 or #59-70 of section 5.1, or #67-82 of section 5.2 (these will be tested on the Unit 2 Exam). • All of your answers in the Unit 1 Exam must be exact. You can type pi for the number pi and sqrt(2) for the square root of 2, etc. • Be sure to look under the entry box for the expected format of the answer. • Some problems expect an ordered pair, such as (1/2,sqrt(3)/2) . • None of the problems in this course require answers in terms of units (for example, "5 cm" or "3 ft") . In particular, questions asking for radians or degrees do not expect units (in fact, as noted on page 329, radian measure is a unitless number). Thus, you should not write answers like "pi/4 radians" or "45 degrees". Just write "pi/4" or "45" instead (the problem will tell you if you are supposed to use radians or degress). 3. Complete the online Unit 1 Pretest assignment for Math 141 or Math 142 . You may use your book if you wish, and redo the pretest as many times as you like. Your pretest score will be scaled to 5 points maximum. • Directions : Click on the link above for your class, then choose the Unit 1 Pretest . • The pretest must be completed by the deadline date listed at the top of this page. However, you may redo the pretest as many times as you like before the due date. Your best score counts, and it will be rescaled to 5 points maximum. 4. If you are having trouble with any of the problems listed above or on the pretest or practice exams, make use of the help resources listed on the Help page. 5. Arrange with your proctor to take the online proctored Unit 1 Exam assignment for Math 141 or Math 142 . Remember to bring identification, and remember that you will not be able to take the unit exam after the deadline date given at the top of this page. You may NOT use your book or notes or calculator on this exam. • Directions : Click on the link above for your class, then choose Unit 1 Exam . • The proctored unit exam must be completed by the deadline date listed at the top of this page, and may be repeated under certain conditions. See the Detailed Schedule page for Math 141 or Math 142 for specific rules. • Directions : Click on the link above for your class, then choose Unit 1 Practice Exam . After the deadline has passed, this exam will be available in practice mode. 
Make sure that you have completed the following items to complete Unit 1:

## 7.E: Trigonometric Functions (Exercises)

∫ x tan² x dx = ∫ x (sec² x - 1) dx
= x tan x - ∫ tan x dx - ∫ x dx
= x tan x - ∫ (sin x/cos x) dx - ∫ x dx
= x tan x + log (cos x) - (x²/2) + C   (since ∫ (sin x/cos x) dx = -log (cos x))

∫ x cos² x dx = (1/2)[(x²/2) + (x/2) sin 2x + (1/4) cos 2x] + C

Now we are going to apply the trigonometric formula 2 cos A cos B = cos (A + B) + cos (A - B):

∫ x cos 5x cos 2x dx = (2/2) ∫ x cos 5x cos 2x dx = (1/2) ∫ x · 2 cos 5x cos 2x dx

∫ x² cos 3x dx = (x² sin 3x/3) - ∫ (sin 3x/3) · 2x dx
= (x²/3) sin 3x - (2/3) ∫ x sin 3x dx
= (x²/3) sin 3x - (2/3)[x (-cos 3x/3)] - (2/3) ∫ (cos 3x/3) dx
= (x²/3) sin 3x + (2/9)[x cos 3x] - (2/9) ∫ cos 3x dx
= (x²/3) sin 3x + (2/9)[x cos 3x] - (2/27) sin 3x + C

∫ cosec³ x dx = ∫ cosec x (cosec² x) dx

Now we apply integration by parts:

= (cosec x)(-cot x) - ∫ (-cot x)(-cosec x cot x) dx
= -cosec x cot x - ∫ cosec x cot² x dx
= -cosec x cot x - ∫ cosec x (cosec² x - 1) dx
= -cosec x cot x - ∫ cosec³ x dx + ∫ cosec x dx

Moving ∫ cosec³ x dx to the left-hand side and using ∫ cosec x dx = log tan (x/2):

∫ cosec³ x dx + ∫ cosec³ x dx = -cosec x cot x + log tan (x/2) + C
2 ∫ cosec³ x dx = -cosec x cot x + log tan (x/2) + C
∫ cosec³ x dx = (1/2)[-cosec x cot x + log tan (x/2)] + C
∫ cosec³ x dx = (1/2)[-cosec x cot x] + (1/2)[log tan (x/2)] + C

∫ e^(ax) cos bx dx = (cos bx)(e^(ax)/a) - ∫ (e^(ax)/a)(-b sin bx) dx
= (cos bx)(e^(ax)/a) + (b/a) ∫ e^(ax) sin bx dx --------(1)

∫ e^(ax) sin bx dx = (sin bx)(e^(ax)/a) - ∫ (e^(ax)/a)(b cos bx) dx
= (sin bx)(e^(ax)/a) - (b/a) ∫ e^(ax) cos bx dx

Substituting this into (1):

∫ e^(ax) cos bx dx = (cos bx)(e^(ax)/a) + (b/a)[(sin bx)(e^(ax)/a) - (b/a) ∫ e^(ax) cos bx dx]
∫ e^(ax) cos bx dx + (b²/a²) ∫ e^(ax) cos bx dx = (cos bx)(e^(ax)/a) + (b/a)(sin bx)(e^(ax)/a)
(1 + (b²/a²)) ∫ e^(ax) cos bx dx = (cos bx)(e^(ax)/a) + (b/a)(sin bx)(e^(ax)/a)
((a² + b²)/a²) ∫ e^(ax) cos bx dx = (cos bx)(e^(ax)/a) + (b/a)(sin bx)(e^(ax)/a)
∫ e^(ax) cos bx dx = [a²/(a² + b²)] [(cos bx)(e^(ax)/a) + (b/a)(sin bx)(e^(ax)/a)] = e^(ax)(a cos bx + b sin bx)/(a² + b²) + C
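Each of the antiderivatives above can be sanity-checked by differentiating it with a computer algebra system. Here is a brief SymPy sketch (added for verification, not part of the original worksheet) for two of the results:

```python
import sympy as sp

x, a, b = sp.symbols('x a b', positive=True)

# claim: integral of x*tan(x)**2 is x*tan(x) + log(cos(x)) - x**2/2 + C
F1 = x * sp.tan(x) + sp.log(sp.cos(x)) - x**2 / 2
print(sp.simplify(sp.diff(F1, x) - x * sp.tan(x)**2))  # prints 0

# claim: integral of exp(a*x)*cos(b*x) is exp(a*x)*(a*cos(b*x) + b*sin(b*x))/(a**2 + b**2) + C
F2 = sp.exp(a * x) * (a * sp.cos(b * x) + b * sp.sin(b * x)) / (a**2 + b**2)
print(sp.simplify(sp.diff(F2, x) - sp.exp(a * x) * sp.cos(b * x)))  # prints 0
```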
# Union of Indexed Family of Sets Equal to Union of Disjoint Sets

## Theorem

Let $\{E_n\}_{n \in \mathbb N}$ be a countable indexed family of sets where at least two $E_n$ are distinct.

Then there exists a countable indexed family of disjoint sets $\{F_n\}_{n \in \mathbb N}$ defined by:

$F_k = E_k \setminus \left( \bigcup_{j = 0}^{k - 1} E_j \right)$

satisfying:

$\bigsqcup_{n \in \mathbb N} F_n = \bigcup_{n \in \mathbb N} E_n$

where $\bigsqcup$ denotes disjoint union.

### Corollary

The countable family $\{F_k\}_{k \in \mathbb N}$ can also be constructed by:

$F_k = \bigcap_{j = 0}^{k - 1} \left( E_k \setminus E_j \right)$

### General Result

Let $I$ be a set which can be well-ordered by a well-ordering $\preccurlyeq$.

Let $\{E_\alpha\}_{\alpha \in I}$ be an indexed family of sets indexed by $I$ where at least two $E_\alpha$ are distinct.

Then there exists an indexed family of disjoint sets $\{F_\alpha\}_{\alpha \in I}$ defined by:

$F_\beta = E_\beta \setminus \left( \bigcup_{\alpha \prec \beta} E_\alpha \right)$

satisfying:

$\bigsqcup_{\alpha \in I} F_\alpha = \bigcup_{\alpha \in I} E_\alpha$

where $\bigsqcup$ denotes disjoint union, and $\alpha \prec \beta$ denotes that $\alpha \preccurlyeq \beta$ and $\alpha \ne \beta$.

## Proof

Denote:

$E = \bigcup_{k \in \mathbb N} E_k, \qquad F = \bigcup_{k \in \mathbb N} F_k, \qquad \text{where } F_k = E_k \setminus \left( \bigcup_{j = 0}^{k - 1} E_j \right)$

We first show that $E = F$.

Let $x \in E$. By the Well-Ordering Principle there is a smallest $k \in \mathbb N$ with $x \in E_k$; then $x \notin E_j$ for all $j < k$, so $x \in F_k$ and hence $x \in F$. Thus $E \subseteq F$.

Conversely:

$x \in \bigcup_{k \in \mathbb N} F_k \implies \exists k \in \mathbb N: x \in F_k \implies \exists k \in \mathbb N: x \in E_k \land x \notin \left( E_0 \cup E_1 \cup \cdots \cup E_{k-1} \right) \implies \exists k \in \mathbb N: x \in E_k$

by the Rule of Simplification, so $F \subseteq E$.

Thus $E = F$ by definition of set equality.

To show that the sets $F_k$ are pairwise disjoint, choose any distinct $\ell, m \in \mathbb N$ and suppose without loss of generality that $\ell < m$. Then:

$x \in F_\ell \implies x \in E_\ell$

$x \in F_m \implies x \notin E_\ell$

since $E_\ell$ is one of the sets removed in the construction of $F_m$. Hence no element lies in both, so $F_\ell$ and $F_m$ are disjoint.

Thus $F$ is the disjoint union of sets equal to $E$:

$\bigcup_{k \in \mathbb N} E_k = \bigsqcup_{k \in \mathbb N} F_k$

$\blacksquare$
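The construction used in the theorem is easy to express directly in code. Here is a small Python sketch (illustrative only; the names are my own) that builds the disjoint family F_k from a finite list of sets and checks the two claimed properties:

```python
def disjointify(E):
    """Given sets E_0, E_1, ..., return F_k = E_k minus (E_0 ∪ ... ∪ E_{k-1}).
    The F_k are pairwise disjoint and have the same union as the E_k."""
    F, seen = [], set()
    for Ek in E:
        F.append(set(Ek) - seen)  # strip out everything already covered
        seen |= set(Ek)
    return F

E = [{1, 2, 3}, {2, 3, 4}, {4, 5}]
F = disjointify(E)
print(F)  # [{1, 2, 3}, {4}, {5}]
assert set().union(*E) == set().union(*F)  # same union
assert all(F[i].isdisjoint(F[j]) for i in range(len(F)) for j in range(i))  # pairwise disjoint
```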
# Is the injectivity of the operator equivalent to the surjectivity of its adjoint Let $X$ and $Y$ be two normed linear spaces. Let $T:X \to Y^*$ be a linear operator (not necessarily continuous) and let $T^*$ be its adjoint, i.e. $T^*:Y \to X^*$ is defined by $\langle T^*y,x \rangle := \langle Tx,y \rangle ,$ where $\langle \cdot , \cdot \rangle$ is a proper duality pairing (i.e. between $X^*$ and $X$ on the left hand side and between $Y^*$ and $Y$ on the right hand side). Let $R(T)$ denote the range of $T$. Is it true that: $T$ is injective on $R(T)$ if and only if $T^*$ is surjective onto $X^*$? The full equivalence does not hold in general. Rather it holds: $$T$$ is injective if and only if $$R(T^*)$$ is weak-star dense in $$X^*$$. If $$T^*$$ is surjective, then $$T$$ is injective: Suppose $$Tx=0$$, then $$0 = \langle Tx,y\rangle_{Y^*,Y} = \langle T^*y,x\rangle_{X^*,X},$$ and since $$R(T^*)=X^*$$, it follows $$0= \langle x^*,x\rangle_{X^*,X}$$ for all $$x^*\in X^*$$. Hence $$x=0$$. Here, it would have been sufficient to suppose that $$R(T^*)$$ is dense in $$X^*$$. Now, let $$T$$ be injective, assume that $$R(T^*)$$ is not weak-star dense in $$X^*$$. Then there is $$x^*\in X^* \setminus \overline{R(T^*)}^{w^*}$$, where closure is taken with respect to the weak-star topology. By Hahn-Banach separation theorem, there is $$x\in X$$, $$t\in\mathbb R$$, such that $$\langle x^*,x\rangle_{X^*,X} < t \le \langle T^*y,x\rangle_{X^*,X} \quad \forall y\in Y.$$ Since $$T^*$$ is linear, it follows $$\langle T^*y,x\rangle_{X^*,X}=0$$ for all $$y\in Y$$, which is equivalent to $$\langle Tx,y\rangle_{Y^*,Y}=0$$ for all $$y$$. Hence $$Tx=0$$ and by injectivity $$x=0$$ follows, which is a contradiction. As a counter-example, which shows that injectivity of $$T$$ does not implies surjectivity of $$T^*$$, you can choose any compact and injective $$T$$, with $$X,Y$$ being infinite-dimensional. Then $$T^*$$ is compact as well, but $$R(T^*)$$ cannot be closed. • Two comments for this application of Hahn-Banach separation theorem: 1. The theorem states that there exist a $x^{**} \in X^{**}$ s.t. ... Hence you have additionally assumed the reflexivity of $X$. 2. The theorem states that there exists a $t\in \mathbb{R}$ s.t. the separation property holds, not necessarily $t=0$. (see H-B theorem for instance here en.wikipedia.org/wiki/…) – Wojtek Feb 9 '15 at 11:11 • 1) H-B works in topological vector spaces as well, so we can really choose $x\in X$. 2) see edit: inserted $t$. – daw Feb 9 '15 at 12:31 • Thanks for answering my comment. – Wojtek Feb 10 '15 at 9:37 • Thanks for answering my comment. 1) Unless there is a version of H-B dealing with predual spaces I am not aware of, I don't agree with your explanation. The theorem, considering $X^*$ as a topological vector space, only gives you an element $x^{**}\in X^{**}$ such that $\langle x^{**}, x^* \rangle < t \leq \langle x^{**}, T^* y \rangle$. Representing this $x^{**}$ as some $x\in X$, as stated above, is nothing else but saying that $X$ is reflexive. 2) If we have $t$ instead of $0$ then it is not true that $\langle T^*y,x \rangle =0$ for all $y\in Y$ (take, for instance, $t>0$). – Wojtek Feb 10 '15 at 9:45 • 1) Talking about topological vector spaces, the dual of $X^*$ ($X^*$ equipped with weak-star topology) is $X$. 2) $y$ is arbitrary from the vector space $Y$. If $\langle T^*y,x\rangle$ would be non-zero for some $y$, then you can take $sy$, $s\in \mathbb R$ to obtain a contradiction. – daw Feb 10 '15 at 11:54
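The counter-example above relies on infinite dimensions. In the finite-dimensional case the equivalence does hold: a matrix $T$ is injective iff it has full column rank, and its adjoint (the transpose) is surjective iff it has full row rank, which is the same condition since $\operatorname{rank} T = \operatorname{rank} T^*$. Here is a small NumPy sketch of that finite-dimensional statement, added purely as an illustration (the function names are my own):

```python
import numpy as np

def is_injective(T):
    """A linear map given by a matrix is injective iff it has full column rank."""
    return np.linalg.matrix_rank(T) == T.shape[1]

def adjoint_is_surjective(T):
    """The adjoint of T: R^n -> R^m is the transpose T^T: R^m -> R^n;
    it is surjective iff its rank equals n, the dimension of its codomain."""
    return np.linalg.matrix_rank(T.T) == T.shape[1]

T1 = np.array([[1.0, 0.0], [0.0, 2.0], [3.0, 0.0]])  # injective 3x2 map
T2 = np.array([[1.0, 2.0], [2.0, 4.0], [0.0, 0.0]])  # rank 1, not injective
for T in (T1, T2):
    print(is_injective(T), adjoint_is_surjective(T))  # the two flags agree in finite dimensions
```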
## My Math Forum: What percentage should be removed?

Elementary Math: Fractions, Percentages, Word Problems, Equations, Inequations, Factorization, Expansion

**#1** (Newbie, joined May 2019, London):

The coat was \$20 and it dropped down to \$15. By how much, in percent, did it reduce? We did this problem when we were doing percentages, and I still can't figure out the answer. Can someone give me a hand please? Thanks.

**#2** (Global Moderator):

The reduction was \$5, which was a quarter of the original price. As a percentage, a quarter is 25%.

Alternatively, treat the original price as 100%. As the reduced price was 100% × 15/20 = 75%, the reduction was 25%.
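The same computation, expressed as a short Python check (a trivial sketch added here, not part of the original thread):

```python
def percent_decrease(old_price, new_price):
    """Percentage reduction relative to the original price."""
    return (old_price - new_price) / old_price * 100

print(percent_decrease(20, 15))  # 25.0 -- the coat's price dropped by 25%
```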