https://academy.lucedaphotonics.com/training/topical_training/mzm/heater.html | # 4.3. Thermo-optic phase shifter (Heater)
A thermo-optic phase shifter works by varying the temperature of the waveguide. This modulates the refractive index of the waveguide material through the thermo-optic effect. This, in turn, modulates the effective index and the phase of the light at the end of the waveguide.
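To first order, the induced phase shift scales linearly with the temperature change of the heated section; as a rough illustration (generic textbook relation, not taken from the SiFab documentation): $$\Delta\varphi = \frac{2\pi}{\lambda}\,\frac{\mathrm{d}n_{\mathrm{eff}}}{\mathrm{d}T}\,\Delta T\,L,$$ where $$L$$ is the heated length and $$\mathrm{d}n_{\mathrm{eff}}/\mathrm{d}T$$ is the thermo-optic coefficient of the waveguide mode (on the order of $$10^{-4}\ \mathrm{K}^{-1}$$ for silicon).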
## 4.3.1. Layout
SiFab contains a heater that is used in this tutorial. This heated waveguide can be used like any other waveguide in IPKISS.
from si_fab import all as pdk
from ipkiss3 import all as i3

# Heater
ht = pdk.HeatedWaveguide(
    contact_pitch=0.6,
    heater_width=0.6,
    heater_offset=1.0,
    m1_width=1.0,
    length_via_section=3.0,
    via_pitch=1.0,
)
ht_lv = ht.Layout(shape=[(0.0, 0.0), (10.0, 0.0)])
ht_lv.visualize(annotate=True)
xs = ht_lv.cross_section(cross_section_path=i3.Shape([(1.0, -8.0), (1.0, 8.0)]))
xs.visualize()
## 4.3.2. Model
The phase shift is proportional to the power dissipated in the heater. Because the thermal dynamics are much slower than the electro-optic effects in the phase shifter, we chose an instantaneous model: the simulated phase reaches its steady-state value immediately, without modelling the slow thermal transient. The heater in SiFab has a simulation recipe, which is used to ramp up the voltage and check the phase variation.
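As a minimal sketch of what such an instantaneous model does (the function and parameter names below are illustrative only, not SiFab's actual API):

import numpy as np

def heater_phase(v_bias, resistance=1e3, p_pi=30e-3):
    """Instantaneous thermo-optic phase model: phase is proportional to dissipated power.

    v_bias     -- applied voltage (V)
    resistance -- heater resistance (Ohm), illustrative value
    p_pi       -- power needed for a pi phase shift (W)
    """
    power = v_bias ** 2 / resistance  # dissipated electrical power
    return np.pi * power / p_pi       # phase shift in radians

print(heater_phase(1.0))  # phase reached at 1 V for this toy heater

The actual simulation recipe below drives the SiFab heater model in the same spirit, sweeping the bias voltage and recording the optical phase.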
from si_fab import all as pdk
from si_fab.components.heater.simulate.simulate import simulate_heater
from si_fab.components.heater.pcell.cell import r_sheet, j_max
import pylab as plt
import numpy as np
import os

# Phase Shifter
name = "heater_sweep"
results_array = []
length = 100
ht = pdk.HeatedWaveguide(
    contact_pitch=0.6,
    heater_width=0.6,
    heater_offset=1.0,
    m1_width=1.0,
    length_via_section=3.0,
    via_pitch=1.0,
)
ht_lv = ht.Layout(shape=[(0.0, 0.0), (length, 0.0)])
P_pi = 30e-3  # W
p_pi_sq = r_sheet * P_pi
ht.CircuitModel(p_pi_sq=p_pi_sq)
results = simulate_heater(
    cell=ht,
    v_bias_start=0,
    v_bias_end=1,
    nsteps=500,
    center_wavelength=1.5,
    simulate_sources=True,
    simulate_circuit=True,
    debug=False,
)
times = results["timesteps"]
results_array.append(results)
def phase_unwrap_normalize(results):
    unwrapped = np.unwrap(np.angle(results))
    return (unwrapped - unwrapped[0]) / np.pi
outputs = ["in", "elec1", "elec2", "out", "current_density"]
process = [np.real, np.real, np.real, phase_unwrap_normalize, np.real]
axis_y = ["V", "V", "V", "[rad/pi]", "A/um^2"]
fig, axs = plt.subplots(nrows=len(outputs), ncols=1, figsize=(6, 10))
for cnt, (f, pn) in enumerate(zip(process, outputs)):
    axs[cnt].set_title(pn)
    axs[cnt].plot(times, f(results[pn]), label="length: {}".format(length))
    axs[cnt].set_ylabel(axis_y[cnt])
axs[len(process) - 1].axhline(y=j_max)
plt.tight_layout()
fig.savefig(os.path.join("{}.png".format(name)), bbox_inches='tight')
1. Open topical_training/mzm/explore_heater/example_heaterwaveguide.py.
2. Open topical_training/mzm/explore_heater/example_heater_simulation.py.
3. With the heater having a $$P_{\pi} = 30\ \mathrm{mW}$$, find the length of the heater that would lead to a tuning range of $$\pi/2$$ under a voltage swing of 0 to 1 V.
Solution
http://openstudy.com/updates/510da7c4e4b0d9aa3c4779b9 | ## gabie1121: find the values for c such that the function is continuous at all x. FUNCTION BELOW
1. gabie1121
$f(x) = \frac{ \sin(cx) }{ x } \quad \text{if } x < 0$
2. gabie1121
$f(x) = e^{x} - 2c \quad \text{if } x \ge 0$
3. gabie1121
those go together. I just couldn't figure out how to get them all in at once
4. zepdrix
In order for this function to be continuous, we need to pick a $$c$$ value that makes the two pieces connect together nicely at $$x=0$$. Imagine railroad tracks, the track needs to be continuous, and can't have any sharp corners or the train will fly off the tracks.
5. gabie1121
Yep, that i get. So do i set both functions equal to 0 and solve for c or something?
6. zepdrix
We want to look at them in the limit. We want to see what is happening when we get closer and closer to 0 from the left side. And also what is happening when we get closer and closer to 0 from the right side. For this function to be continuous, they must be approaching the same value. So we'll set these limits equal to one another.
7. zepdrix
$\large \lim_{x \rightarrow 0^-}f(x) \quad = \quad \lim_{x \rightarrow 0^-} \frac{\sin(cx)}{x}$
8. zepdrix
Understand why our function is sin(cx)/x when we're approaching from the left?
9. zepdrix
The tiny negative (that looks like an exponent) is letting us know we're approaching from the left.
10. gabie1121
yep cuz it says less than 0
11. zepdrix
Ok cool :) so we'll set that equal to the other piece.
12. zepdrix
And solve for c.
13. gabie1121
do we make the other side a limit as well?
14. zepdrix
$\large \lim_{x \rightarrow 0^-}f(x) \qquad \qquad = \qquad \qquad \lim_{x \rightarrow 0^+}f(x)$$\large \lim_{x \rightarrow 0^-} \frac{\sin(cx)}{x} \qquad = \qquad \lim_{x \rightarrow 0^+}e^x-2c$
15. zepdrix
Yes :) the limits need to agree in order for this function to be continuous.
16. gabie1121
alrighty, well the left side is equal to just c, right?
17. zepdrix
Yes very good ^^
18. gabie1121
because if i multiply by C , i can use the rule that says sinx/x=1
19. zepdrix
you remembered your identity i take it hehe
20. gabie1121
yep! the right side is giving me fits though lol
21. zepdrix
Or would it be -1 since we're coming from the left? Hmm I didn't think about that. lemme check real quick.
22. zepdrix
Nah it's still 1, my bad.
23. gabie1121
well would the right just be e^x-2? Or can i not do that cuz i'm thinking of derivative rules?
24. gabie1121
well the derivative is the limit as x approaches 0, so shouldn't e^x, stay e^x?
25. zepdrix
no we're not thinking of this as a derivative :) We're looking at the limit and saying to ourselves, "If I plug x=0 directly into this function, does it cause a problem?" If the answer is no, then we can do just that!
26. gabie1121
oh! well then we're left with $c=2c$ ?
27. zepdrix
Woops! Recall that if we have a 0 in the exponent, what will that change our base to?
28. zepdrix
Not 0 silly! :O
29. gabie1121
oh well i just plugged in e^0 in my calculator and it gave me 1
30. zepdrix
hah XD that's a way to do it i guess! :D
31. zepdrix
yah 1 :3
32. gabie1121
lol so its actually c=1-2c?
33. zepdrix
Yah looks good c:
34. zepdrix
Do you by chance have an answer key that we can check this against? This is one of those annoying problems that it's easy to make a mistake on c: lol
35. gabie1121
c=1/3! , and I don't have one yet. I will monday so its not a huge deal
36. gabie1121
It looks logical enough to me. Thanks, AGAIN lol
37. zepdrix
This type of problem becomes a little bit harder when they throw 2 unknown constants at you. Because then you have to also look at the limits of their derivatives. But this was a good problem to get a feel for the concept c:
38. gabie1121
well i just started calc1 this semester, so i haven't learned much yet. Just dipping my toes in the water. I have to take all the way through calculus 3 though, bleh
39. zepdrix
Hmm you'll do quite well, I can tell. You seem quite smart. You're very quick on remember how to do little steps. Calc 2 is a doozy!! Power Series made me want to rip my hair out! :O
40. gabie1121
Yeah, i'm told i'll want to murder myself with Calc 2. Not looking forward to it, but thanks! That makes me feel a little better
41. gabie1121
i might be hunting you down again! lol
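For reference, the computation the thread converges on: $\lim_{x \to 0^-} \frac{\sin(cx)}{x} = c$ and $\lim_{x \to 0^+} \left(e^{x} - 2c\right) = 1 - 2c$, so continuity at $x = 0$ requires $c = 1 - 2c$, i.e. $c = 1/3$.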
https://www.classes.cs.uchicago.edu/archive/2017/winter/12200-1/install-guide/index-flash-drive.html | # Install Guide for CS122
This guide provides instructions on how to set up the VirtualBox software and load the VirtualBox image that you will need to do the assignments in this course.
## OS X
1. Plug in the flash drive, open a finder window and look for the device labeled CMSC121-A16. Click on the name of the device and then open (by double-clicking) the VirtualBox folder and then open the OSX folder.
2. Open the file named VirtualBox-5.1.6-110634-OSX.dmg. This action will open a new window.
3. Double click on the icon that looks like an open shipping box. It looks like the screenshot here.
4. Press Continue when presented with a security message.
5. Press Continue again on the introductory screen.
6. Press the Install button to accept the standard installation.
8. Press Close once the installation completes.
9. In your web browser, go to https://mit.cs.uchicago.edu/ and log in with your CNet ID and password.
10. Open a finder window for the device labeled CMSC121-A16. Click on the name of the device and then open the file named CMSC121-a16-v2.ova. This step will open a VirtualBox window and a prompt. Within this prompt, click Import. This process may take as much as 5-10 minutes.
11. Click Start.
12. Log in. The username is student and password is uccs.
13. An empty desktop should appear, with a dock on the left. At the top, you may see one or more messages with Xs to dismiss them on the right. Dismiss these so that you can see “Ubuntu Desktop” at the top left of the display-within-a-window.
14. Click on the terminal icon, which is fourth from the top in the dock and looks like a screen. You now have a Linux command line window open.
15. You can switch back and forth between programs running on your computer, and programs running within the VirtualBox window. But, you should do all your work (such as writing code) within the VirtualBox environment so that the files are in the right place. You can do other course-related tasks using your standard applications (e.g. using a web browser to visit Piazza).
You can copy text from your browser on the OSX-side in the usual way (Command-c) and then paste it in the terminal window in the VM using Shift-Control-v. Make sure you have selected the terminal window on the VM (just click on it) before you do the paste.
16. Run the following commands in the terminal window.
update-cs-software
You will be asked for the password for the student account when you run these commands. The password is uccs. Note that you will not see any characters echoed back as you type the password.
17. You are now ready to set up your Git repository. Git is a software version control system that we will be using for labs and programming assignments. Run the following commands:
cd
cs-setup-script cs122-win-17
Setting up your Git repository...
Setting up chisubmit...
chisubmit has been set up. You can use chisubmit commands inside /home/student/cs122-win-17-username
where username has been replaced by your CNetID.
18. If you want to suspend your VirtualBox, click the red close button in the top left of the window. We recommend choosing “Save the machine state”. That way, you can always pick up where you left off.
Note
Never select “Power off the machine” when closing your VM, because this option is the equivalent of yanking the power cord on a running desktop machine. This option can result in hard drive corruption and many other problems.
When closing your VM, make sure that you always select either “Save the machine state” (this is the best option, as it will allow you to resume the VM in the exact same state you left it) or “Send the shutdown signal” (which will perform an orderly shutdown of the machine).
If you need to shut down the machine where your VM is running, make sure you close your VM before shutting down your machine. Shutting down your machine while the VM is running can have the same effect as selecting the “Power off the machine” option when closing the VM.
1. When you want to resume a session, relaunch the VirtualBox application (in your Applications folder) and click Start.
2. If you back up your computer, you may want to exclude the large files maintained by VirtualBox from your backups. In particular, when you use your VirtualBox, the state of the machine is saved to your disk, and a large (multi-GB) file is changed each time. This can result in very large backups with each backup saving another copy of this large file due to the changes. Exclude the files that you downloaded, and the folder named “VirtualBox VMs” within your home folder if you wish to avoid this problem.
## Windows
1. Plug in the flash drive. Find the device called “CMSC121-A16” in Explorer, by clicking on “Computer” or “File Explorer” in the Start Menu, or by pressing “Windows+E” on your keyboard.
2. Now right-click (do NOT double-click) the VirtualBox installer executable on the flash drive and select “Run as Administrator”. If you do not see the “Run as Administrator” option because you are on an older version of Windows, you may resort to double-clicking the executable. Note that if you are not on an administrator account on your machine, your VirtualBox installation may fail.
3. If you are presented with a UAC prompt (aka: the screen goes dark and you are asked “are you sure you want to run X?”), select Yes and accept the prompt. Do not select No/Cancel/Reject.
4. You will now be presented with the installer window. Press Next. On the next screen, accept the defaults by pressing Next again. You can customize these settings if you feel you need to, but we do not offer troubleshooting help should you fail to install your VM correctly due to tinkering. Keep pressing Next until you cannot press Next anymore. The installation will begin at this point and may take several minutes to complete.
5. You may be asked to accept the installation of a new network adapter/interface or setting up a new network connection. In case of the former, you should accept the new network adapter/interface by pressing Yes/Accept/Okay. In case of the latter and a new window pops up, you should indicate that it is a public connection and click Okay/Close.
6. When the installation is complete, you will be presented with a shiny new VirtualBox window. Minimize/close this window.
7. In your web browser, go to https://mit.cs.uchicago.edu/ and log in with your CNet ID and password.
8. Go back to the flash drive that you opened in Step 1.
9. Once you are presented with the drive’s contents, double-click to open the file named CMSC121-a16-v2.ova.
10. Give it a few moments. You will eventually be presented with your VirtualBox window and a prompt. Within this prompt, click Import. This process may take as much as 5-10 minutes.
11. Once the previous step is complete, you should see your VirtualBox window. On the left, you should see a VM named “CMSC121-a16”. Click this VM once and then click the Start button at the top of the window. You will now boot into the VM. Be patient as it may take a few moments for anything to happen. Do not close or otherwise tinker with VirtualBox at this point.
12. Once your VM boots up, proceed to log in. The username is student and password is uccs.
13. An empty desktop should appear, with a dock on the left. At the top, you may see one or more messages with Xs to dismiss them on the right. Dismiss these so that you can see “Ubuntu Desktop” at the top left of the display-within-a-window.
14. Click on the terminal icon, which is fourth from the top in the dock and looks like a screen. You now have a Linux command line window open.
15. You can switch back and forth between programs running on your computer, and programs running within the VirtualBox window. But, you should do all your work (such as writing code) within the VirtualBox environment so that the files are in the right place. You can do other course-related tasks using your standard applications (e.g. using a web browser to visit Piazza).
You can copy text from your browser on the Windows-side in the usual way (Control-c) and then paste it in the terminal window in the VM using Shift-Control-v. Make sure you have selected the terminal window on the VM (just click on it) before you do the paste.
16. Run the following commands in the terminal window.
update-cs-software
You will be asked for the password for the student account when you run these commands. The password is uccs. Note that you will not see any characters echoed back as you type the password.
17. You are now ready to set up your Git repository. Git is a software version control system that we will be using for labs and programming assignments. Run the following commands:
cd
cs-setup-script cs122-win-17
Setting up your Git repository...
Setting up chisubmit...
chisubmit has been set up. You can use chisubmit commands inside /home/student/cs122-win-17-username
where username has been replaced by your CNetID.
18. If you want to suspend your VirtualBox, click the red close button in the top left of the window. We recommend choosing “Save the machine state”. That way, you can always pick up where you left off.
Note
Never select “Power off the machine” when closing your VM, because this option is the equivalent of yanking the power cord on a running desktop machine. This option can result in hard drive corruption and many other problems.
When closing your VM, make sure that you always select either “Save the machine state” (this is the best option, as it will allow you to resume the VM in the exact same state you left it) or “Send the shutdown signal” (which will perform an orderly shutdown of the machine).
If you need to shut down the machine where your VM is running, make sure you close your VM before shutting down your machine. Shutting down your machine while the VM is running can have the same effect as selecting the “Power off the machine” option when closing the VM.
1. When you want to resume a session, relaunch the VirtualBox application (in your Applications folder. Do not re-open either of the files you previously downloaded) and click Start.
2. If you back up your computer, you may want to exclude the large files maintained by VirtualBox from your backups. In particular, when you use your VirtualBox, the state of the machine is saved to your disk, and a large (multi-GB) file is changed each time. This can result in very large backups with each backup saving another copy of this large file due to the changes. Exclude the files that you downloaded, and the folder named “VirtualBox VMs” within your home folder if you wish to avoid this problem.
https://mathoverflow.net/questions/181771/handelmans-positivstellensatz-for-symmetric-matrix-valued-polynomials | # Handelman's positivstellensatz for symmetric matrix-valued polynomials
For certain classes of sets $S \subseteq \mathbb{R}^n$, there exist algebraic characterizations of real-valued polynomials $p: \mathbb{R}^n \rightarrow \mathbb{R}$ that are positive on $S$. Several of these algebraic characterizations, known as Positivstellensätze, are listed in Wikipedia.
Some of these positivstellensatze can be generalized for matrix polynomials, i.e. a function $P$ from $\mathbb{R}^n$ to the set of all $r \times r$ symmetric matrices whose entries are polynomials. In this case, $P$ is understood to be positive in $S$ if $P(x)$ is positive definite for all $x \in S$.
For instance, in [1], Pólya's positivstellensatz for homogeneous polynomials is generalized to homogeneous matrix polynomials and in [2], Putinar's positivstellensatz is also generalized to matrix polynomials. Stengle's and Schweighofer's Positivstellensätze are also generalized to matrix polynomials in [3].
Does Handelman's positivstellensatz have also a matrix polynomial version?
[1]: Scherer, C. W. "Relaxations for robust linear matrix inequality problems with verifications for exactness." SIAM Journal on Matrix Analysis and Applications 27(2) (2005): 365-395.
[2]: Scherer, C. W. and Hol, C. W. J. "Matrix sum-of-squares relaxations for robust semi-definite programs." Mathematical programming 107(1-2) (2006): 189-211.
[3]: Công-Trình, Lê. "Some Positivstellensätze for polynomial matrices." arXiv preprint arXiv:1403.3783 (2014)
The obvious generalization is false: Consider $A=\begin{pmatrix} 1+x &y \\y & 1-x \end{pmatrix}$ which is a linear matrix polynomial. It defines a compact set in the plane, namely the unit disc. The algebra $\mathbb{R}[A]$ of all polynomial expressions in $A$ is contained in $\mathbb{R}(x,y)[A]$, which is a two-dimensional $\mathbb{R}(x,y)$-vector space since $A^2=2A+(x^2+y^2-1)I$. Therefore $\mathbb{R}[A]$ cannot contain all three of these $\mathbb{R}(x,y)$-linearly independent matrices: $\begin{pmatrix}1 & 0 \\ 0 & 2 \end{pmatrix}$, $\begin{pmatrix}2 & 0 \\ 0 & 1 \end{pmatrix}$, $\begin{pmatrix}2 & 1 \\ 1 & 2 \end{pmatrix}$, which are all positive definite on the unit disc.
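As a sanity check of the relation used above (this snippet is mine, not part of the original answer), the identity $A^2 = 2A + (x^2+y^2-1)I$ can be verified symbolically:

from sympy import symbols, Matrix, eye, expand

x, y = symbols('x y')
A = Matrix([[1 + x, y], [y, 1 - x]])

# should print the 2x2 zero matrix
print((A**2 - (2*A + (x**2 + y**2 - 1)*eye(2))).applyfunc(expand))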
Edit: So Handelman's statement is the following: If the linear forms $l_1, \ldots, l_r$ define a compact set and if the polynomial $f$ is strictly positive on this set, then $f$ can be expressed as a sum of products of $l_1, \ldots, l_r$ with positive coefficients.
Now replace "(linear) polynomials" by "(linear) matrix polynomials" and you get a generalization to matrix polynomials which is false. The matrix $A$ here plays the role of $l_1, \ldots, l_r$.
https://rdrr.io/cran/SteinIV/man/sps.est.html | # sps.est: Semi-parametric Stein-like (SPS) estimator. In SteinIV: Semi-Parametric Stein-Like Estimator with Instrumental Variables
## Description
Computes the SPS estimator for a two-stage structural model, as well as the set of standard errors for each individual estimator, and the sample estimate of the asymptotic variance/covariance matrix.
## Usage
sps.est(y,X,Z,SE=FALSE,ALPHA=TRUE,REF="TSLS",n.bt=100,n.btj=10)
## Arguments
y: Numeric: A vector of observations, representing the outcome variable.
X: Numeric: A matrix of observations, whose number of columns corresponds to the number of predictors in the model, and the number of rows should be conformal with the number of entries in y. This matrix may contain both endogenous and exogenous variables.
Z: Numeric: A matrix of observations representing the instrumental variables (IVs) in the first-stage structural equation. The number of IVs should be at least as large as the number of endogenous variables in X.
SE: Logical: If TRUE, then the function also returns the standard errors of the individual SPS estimators, and a sample (or bootstrap, if JIVE is selected as a reference estimator) estimate of its asymptotic variance/covariance matrix.
ALPHA: Logical: If TRUE, the function returns the value of the sample estimate of the parameter controlling the respective contribution of the reference estimator (by default, this is the TSLS estimator), and the one of the alternative estimator (by default, this is the OLS estimator).
REF: Character: Controls the choice of the reference estimator in the SPS framework. This can accept two values: "TSLS" or "JIVE", with the former being the default option. The alternative estimator is always the OLS estimator.
n.bt: Numeric: The number of bootstrap samples performed, when the sample variance/covariance matrix is estimated using the bootstrap. This automatically occurs whenever the user selects the JIVE as the reference estimator.
n.btj: Numeric: The number of bootstrap iterations performed when computing the SPS estimator, when using the JIVE as reference estimator. This option is only relevant when JIVE has been selected as the reference estimator. These iterations are used to compute the various components entering in the calculation of the SPS estimator.
## Details
The SPS estimator is applied to a two-stage structural model. We here adopt the terminology commonly used in econometrics. See, for example, Cameron and Trivedi (2005), Davidson and MacKinnon (1993), as well as Wooldridge (2002). The second-stage equation is thus modelled as follows,
$$y = X\beta + \varepsilon,$$
in which $y$ is a vector of $n$ observations representing the outcome variable, $X$ is a matrix of order $n \times k$ denoting the predictors of the model, comprised of both exogenous and endogenous variables, $\beta$ is the $k$-dimensional vector of parameters of interest; whereas $\varepsilon$ is an unknown vector of error terms. The first-stage level of the model is given by a multivariate multiple regression. That is, this is a linear model with a multivariate outcome variable, as well as multiple predictors. This first-stage model is represented in this manner,
$$X = Z\Gamma + \Delta,$$
where $X$ is the matrix of predictors from the second-stage equation, $Z$ is a matrix of instrumental variables (IVs) of order $n \times l$, $\Gamma$ is a matrix of unknown parameters of order $l \times k$; whereas $\Delta$ denotes an unknown matrix of order $n \times k$ of error terms.
As for the TSLS estimator, whenever certain variables in $X$ are assumed to be exogenous, these variables should be incorporated into $Z$. That is, all the exogenous variables are their own instruments. Moreover, it is also assumed that the model contains at least as many instruments as predictors, in the sense that $l \geq k$, as commonly done in practice (Wooldridge, 2002). Also, the matrices $X^TX$, $Z^TX$, and $Z^TZ$ are all assumed to be full rank. Finally, both $X$ and $Z$ should comprise a column of ones, representing the intercept in each structural equation.
The formula for the SPS estimator is then obtained as a weighted combination of the OLS and TSLS estimators (using the default options), such that
$$\hat\beta_{SPS}(\alpha) := \alpha\,\hat\beta_{OLS} + (1-\alpha)\,\hat\beta_{TSLS},$$
for every $\alpha$. The proportion parameter, $\alpha$, controls the respective contributions of the OLS and TSLS estimators. (Despite our choice of name, however, note that $\alpha$ need not be bounded between 0 and 1.) This parameter is selected in order to minimize the trace of the theoretical MSE of the corresponding SPS estimator,
$$MSE(\hat\beta_{SPS}(\alpha)) = E[(\hat\beta(\alpha)-\beta)(\hat\beta(\alpha)-\beta)^{T}] = Var(\hat\beta(\alpha)) + Bias^{2}(\hat\beta(\alpha)),$$
where $\beta \in \mathbb{R}^{k}$ is the true parameter of interest and the MSE is a $k \times k$ matrix. It is particularly appealing to combine these two estimators, because the asymptotic unbiasedness of the TSLS estimator guarantees that the resulting SPS is asymptotically unbiased. Thus, the MSE automatically strikes a trade-off between the unbiasedness of the TSLS estimator and the efficiency of the OLS estimator.
## Value
list: A list with one or four elements, depending on whether the user has activated the SE flag and the ALPHA flag. The first element (est) is the SPS estimate of the model in vector format. The second element (se) is the vector of standard errors; the third element (var) is the sample estimate of the asymptotic variance/covariance matrix; the fourth element (alpha) is a real number representing the estimate of the contribution of the OLS to the combined SPS estimator.
## Author(s)
Cedric E. Ginestet <[email protected]>
## References
Judge, G.G. and Mittelhammer, R.C. (2004). A semiparametric basis for combining esti- mation problems under quadratic loss. Journal of the American Statistical Association, 99(466), 479–487.
Judge, G.G. and Mittelhammer, R.C. (2012a). An information theoretic approach to econo- metrics. Cambridge University Press.
Judge, G. and Mittelhammer, R. (2012b). A risk superior semiparametric estimator for over-identified linear models. Advances in Econometrics, 237–255.
Judge, G. and Mittelhammer, R. (2013). A minimum mean squared error semiparametric combining estimator. Advances in Econometrics, 55–85.
Mittelhammer, R.C. and Judge, G.G. (2005). Combining estimators to improve structural model estimation and inference under quadratic loss. Journal of econometrics, 128(1), 1–29.
## Examples
### Generate a simple example with synthetic data, and no intercept.
n <- 100; k <- 3; l <- 3;
Ga <- diag(rep(1,l));
be <- rep(1,k);
Z <- matrix(0,n,l);
for(j in 1:l) Z[,j] <- rnorm(n);
X <- matrix(0,n,k);
for(j in 1:k) X[,j] <- Z[,j]*Ga[j,j] + rnorm(n);
y <- X%*%be + rnorm(n);
### Compute SPS estimator with SEs and variance/covariance matrix.
print(sps.est(y,X,Z))
print(sps.est(y,X,Z,SE=TRUE));
SteinIV documentation built on May 2, 2019, 6:17 a.m.
https://martinralbrecht.wordpress.com/page/2/ | Large Modulus Ring-LWE and Module-LWE
Our paper Large Modulus Ring-LWE ≥ Module-LWE — together with Amit Deo — was accepted at AsiaCrypt 2017. Here’s the abstract:
We present a reduction from the module learning with errors problem (MLWE) in dimension $d$ and with modulus $q$ to the ring learning with errors problem (RLWE) with modulus $q^{d}$. Our reduction increases the LWE error rate $\alpha$ by a quadratic factor in the ring dimension $n$ and a square root in the module rank $d$ for power-of-two cyclotomics. Since, on the other hand, MLWE is at least as hard as RLWE, we conclude that the two problems are polynomial-time equivalent. As a corollary, we obtain that the RLWE instance described above is equivalent to solving lattice problems on module lattices. We also present a self reduction for RLWE in power-of-two cyclotomic rings that halves the dimension and squares the modulus while increasing the error rate by a similar factor as our MLWE to RLWE reduction. Our results suggest that when discussing hardness to drop the RLWE/MLWE distinction in favour of distinguishing problems by the module rank required to solve them.
Our reduction is an application of the main result from Classical Hardness of Learning with Errors in the context of MLWE. In its simplest form, that reduction proceeds from the observation that for $\mathbf{a}, \mathbf{s} \in \mathbb{Z}_{q}^{d}$ with $\mathbf{s}$ small it holds that
$q^{d-1} \cdot \langle{\mathbf{a},\mathbf{s}}\rangle \approx \left(\sum_{i=0}^{d-1} q^{i} \cdot a_{i}\right) \cdot \left(\sum_{i=0}^{d-1} q^{d-i-1} \cdot s_{i}\right) \bmod q^{d} = \tilde{a} \cdot \tilde{s} \bmod q^{d}.$
Thus, if there exists an efficient algorithm solving the problem in $\mathbb{Z}_{q^d}$, we can use it to solve the problem in $\mathbb{Z}_{q}^d$.
In our paper, we essentially show that we can replace integers mod $q$ resp. $q^d$ with the ring of integers $R$ of a Cyclotomic field (considered mod $q$ resp. $q^d$). That is, we get the analogous reduction from $R_{q}^d$ (MLWE) to $R_{q^d}$ (RLWE). The bulk of our paper is concerned with making sure that the resulting error distribution is sound. This part differs from the Classical Hardness paper since our target distribution is in $R$ rather than $\mathbb{Z}$.
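The integer version of this identity is easy to check numerically; the following toy script is my own illustration (small parameters, nothing from the paper):

import random

q, d = 12289, 3
a = [random.randrange(q) for _ in range(d)]
s = [random.randrange(-2, 3) for _ in range(d)]  # small secret

lhs = q ** (d - 1) * sum(ai * si for ai, si in zip(a, s)) % q ** d
a_tilde = sum(q ** i * ai for i, ai in enumerate(a))
s_tilde = sum(q ** (d - 1 - i) * si for i, si in enumerate(s))
rhs = a_tilde * s_tilde % q ** d

diff = (rhs - lhs) % q ** d
if diff > q ** d // 2:  # centre the difference around zero
    diff -= q ** d
# the discrepancy consists only of cross terms of size O(d^2 * q^(d-1) * |s|_inf),
# i.e. it is negligible compared to the modulus q^d
print(diff, q ** d)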
Revisiting the Expected Cost of Solving uSVP and Applications to LWE
Our — together with Florian Göpfert, Fernando Virdia and Thomas Wunderer — paper Revisiting the Expected Cost of Solving uSVP and Applications to LWE is now available on ePrint. Here’s the abstract:
Reducing the Learning with Errors problem (LWE) to the Unique-SVP problem and then applying lattice reduction is a commonly relied-upon strategy for estimating the cost of solving LWE-based constructions. In the literature, two different conditions are formulated under which this strategy is successful. One, widely used, going back to Gama & Nguyen’s work on predicting lattice reduction (Eurocrypt 2008) and the other recently outlined by Alkim et al. (USENIX 2016). Since these two estimates predict significantly different costs for solving LWE parameter sets from the literature, we revisit the Unique-SVP strategy. We present empirical evidence from lattice-reduction experiments exhibiting a behaviour in line with the latter estimate. However, we also observe that in some situations lattice-reduction behaves somewhat better than expected from Alkim et al.’s work and explain this behaviour under standard assumptions. Finally, we show that the security estimates of some LWE-based constructions from the literature need to be revised and give refined expected solving costs.
Our work is essentially concerned with spelling out in more detail and experimentally verifying a prediction made in the New Hope paper on when lattice reduction successfully recovers an unusually short vector.
Denoting by $v$ the unusually short vector in some lattice $\Lambda$ of dimension $d$ (say, derived from some LWE instance using Kannan’s embedding), $\beta$ the block size used for the BKZ algorithm and $\delta_0$ the root-Hermite factor for $\beta$, then the New Hope paper predicts that $v$ can be found if
$\sqrt{\beta/d} \|v\| \leq \delta_0^{2\beta-d} \, {\mathrm{Vol}(\Lambda)}^{1/d},$
under the assumption that the Geometric Series Assumption holds (until a projection of the unusually short vector is found).
The rationale is that this condition ensures that the projection of $v$ orthogonally to the first $d-\beta$ (Gram-Schmidt) vectors (denoted as $\pi_{d-\beta+1}(v)$) is shorter than the expectation for the $d-\beta+1$-th Gram-Schmidt vector $b_{d-\beta+1}^*$ under the GSA and thus would be found by the SVP oracle when called on the last block of size $\beta$. Hence, for any $\beta$ satisfying the above inequality, the actual behaviour would deviate from that predicted by the GSA. Finally, the argument can be completed by appealing to the intuition that a deviation from expected behaviour on random instances — such as the GSA — leads to a revelation of the underlying structural, secret information. In any event, such a deviation would already solve Decision-LWE.
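To make this concrete, here is a small script (my own sketch, using the standard root-Hermite-factor heuristic; it is not code from the paper) that returns the smallest block size predicted to satisfy the inequality above:

from math import log, pi, e

def delta_0(beta):
    # root-Hermite factor heuristic for BKZ with block size beta
    return (beta / (2 * pi * e) * (pi * beta) ** (1.0 / beta)) ** (1.0 / (2 * (beta - 1)))

def predicted_beta(d, norm_v, log_vol):
    """Smallest beta with sqrt(beta/d)*|v| <= delta_0^(2*beta-d) * Vol^(1/d).

    d       -- lattice dimension
    norm_v  -- Euclidean norm of the unusually short vector
    log_vol -- natural logarithm of the lattice volume
    """
    for beta in range(50, d + 1):
        lhs = 0.5 * log(beta / d) + log(norm_v)
        rhs = (2 * beta - d) * log(delta_0(beta)) + log_vol / d
        if lhs <= rhs:
            return beta
    return None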
In our work, we spell out this argument in more detail (e.g. how $v$ is recovered from $\pi_{d-\beta+1}(v)$) and throw 23k core hours at the problem of checking whether the predicted behaviour matches the observed behaviour (see the plots in the post and the paper). The general answer is a clear “yes”.
Pretty Pictures or GTFO
I forgot the most important bit. The behaviour of the BKZ algorithm on uSVP(-BDD) instances can be observed in this video.
You can observe the basis approaching the GSA until the SVP oracle finds the unusually short vector $\pi_{d-\beta+1}(v)$. From $\pi_{d-\beta+1}(v)$, $v$ is then immediately recovered using size reduction. The grey area is the currently worked on block. The notation in the legend isn’t consistent with the plots above or even internally ($n$ v $d$), but the general idea should still be apparent. In case you’re wondering about the erratic behaviour of the tails (which occasionally goes all over the place), this is due to a bug in fpylll which has recently been fixed.
Reading Material on Gender Essentialism
In a memo titled Google’s Ideological Echo Chamber James Damore claims that “the distribution of preferences and abilities of men and women differ in part due to biological causes and that these differences may explain why we don’t see equal representation of women in tech and leadership” with the aim to show that “discrimination to reach equal representation is unfair, divisive, and bad for business.” Soon after the memo went viral, tech sites such as Hacker News started to see supportive statements. Motherboard reports that the verdicts expressed in the memo have some traction amongst the author’s former co-workers. It stands to reason that this agreement is not the privilege of Google employees, or as Alice Goldfuss put it:
I’ve read the Google anti-diversity screed and you should, too. You meaning men. Women have heard this shit before. Why should men read it? Because it’s a 10 page essay that eloquently tears away the humanity of women and non-white men. It uses bullet points and proper spelling and sounds very calm and convincing. And it should, because it was written by one of your peers.
— Alice Goldfuss (@alicegoldfuss) August 5, 2017
While I do not work in (US) “tech” (I’m an academic cryptographer at a British university), I guess the fields are close enough. Besides, gender essentialism is a prevalent idea beyond the confines of STEM disciplines. As mentioned above, the memo offers a bullet point list to support its claim:
1. [The differences between men and women] are universal across human cultures
2. They often have clear biological causes and links to prenatal testosterone
3. Biological males that were castrated at birth and raised as females often still identify and act like males
4. The underlying traits are highly heritable
5. They’re exactly what we would predict from an evolutionary psychology perspective
The memo and its defenders accuse those who disagree with its claims of being ideologically driven moralists, hence the memo's title. Alas, since I read several good critiques and their source material over the last few days, I figured I might attempt to summarise some of these arguments. Initially, my plan was to simply dump a list of books and articles here, but reading around as someone not so familiar with this literature, I found this mode of presentation (“well, my meta-study says your meta-study is full of it”) rather unhelpful. Thus, I opted for spelling out in more detail which arguments I found particularly illuminating.
Postdoc at Royal Holloway on Quantum-Safe Cryptography in Hardware
Together with Carlos Cid, we have a two-year postdoc position available. The position is focused on hardware implementations of quantum-safe cryptography such as lattice-based, code-based, hash-based or mq-based schemes. If you are interested, feel free to get in touch with Carlos or me. If you know of someone who might be interested, we would appreciate it if you could make them aware of this position.
Adventures in Cython Templating
Fpylll makes heavy use of Cython to expose Fplll's functionality to Python. Fplll, in turn, makes use of C++ templates. For example, double, long double, dd_real (http://crd.lbl.gov/~dhbailey/mpdist/) and mpfr_t (http://www.mpfr.org/) are supported as floating point types. While Cython supports C++ templates, we still have to generate code for all possible instantiations of the C++ templates for Python to use/call. The way I implemented these bindings is showing its limitations. For example, here is what attribute access to the dimension of the Gram-Schmidt object looks like:
@property
def d(self):
    """
    Number of rows of B (dimension of the lattice).

    >>> from fpylll import IntegerMatrix, GSO, set_precision
    >>> A = IntegerMatrix(11, 11)
    >>> M = GSO.Mat(A)
    >>> M.d
    11

    """
    if self._type == gso_mpz_d:
        return self._core.mpz_d.d
    IF HAVE_LONG_DOUBLE:
        if self._type == gso_mpz_ld:
            return self._core.mpz_ld.d
    if self._type == gso_mpz_dpe:
        return self._core.mpz_dpe.d
    IF HAVE_QD:
        if self._type == gso_mpz_dd:
            return self._core.mpz_dd.d
        if self._type == gso_mpz_qd:
            return self._core.mpz_qd.d
    if self._type == gso_mpz_mpfr:
        return self._core.mpz_mpfr.d
    if self._type == gso_long_d:
        return self._core.long_d.d
    IF HAVE_LONG_DOUBLE:
        if self._type == gso_long_ld:
            return self._core.long_ld.d
    if self._type == gso_long_dpe:
        return self._core.long_dpe.d
    IF HAVE_QD:
        if self._type == gso_long_dd:
            return self._core.long_dd.d
        if self._type == gso_long_qd:
            return self._core.long_qd.d
    if self._type == gso_long_mpfr:
        return self._core.long_mpfr.d
    raise RuntimeError("MatGSO object '%s' has no core."%self)
In the code above uppercase IF and ELSE are compile-time conditionals, lowercase if and else are run-time checks. If we wanted to add Z_NR&lt;double&gt; to the list of supported integer types (yep, Fplll supports that), then the above Python approximation of a switch/case statement would grow by 50%. The same would have to be repeated for every member function or attribute. There must be a better way.
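One escape route, sketched below as a plain-Python illustration of the idea (not necessarily what fpylll ended up doing), is to generate the dispatch chain from a single list of supported type tags:

# (type_tag, core_attribute, compile-time guard or None)
TYPES = [
    ("gso_mpz_d", "mpz_d", None),
    ("gso_mpz_ld", "mpz_ld", "HAVE_LONG_DOUBLE"),
    ("gso_mpz_dpe", "mpz_dpe", None),
    ("gso_mpz_mpfr", "mpz_mpfr", None),
    ("gso_long_d", "long_d", None),
]

def render_dispatch(attribute):
    """Emit the Cython if/IF chain for a single attribute access."""
    lines = []
    for tag, core, guard in TYPES:
        indent = "    "
        if guard is not None:
            lines.append("    IF %s:" % guard)
            indent = "        "
        lines.append("%sif self._type == %s:" % (indent, tag))
        lines.append("%s    return self._core.%s.%s" % (indent, core, attribute))
    return "\n".join(lines)

print(render_dispatch("d"))

Generating the bindings this way means adding a new numeric type is a one-line change to the table instead of touching every member function by hand.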
CCA Conversions
In Tightly Secure Ring-LWE Based Key Encapsulation with Short Ciphertexts we — together with Emmanuela Orsini, Kenny Paterson, Guy Peer and Nigel Smart — give a tight reduction of Alex Dent's IND-CCA secure KEM conversion (from an OW-CPA scheme) when the underlying scheme is (Ring-)LWE:
Abstract: We provide a tight security proof for an IND-CCA Ring-LWE based Key Encapsulation Mechanism that is derived from a generic construction of Dent (IMA Cryptography and Coding, 2003). Such a tight reduction is not known for the generic construction. The resulting scheme has shorter ciphertexts than can be achieved with other generic constructions of Dent or by using the well-known Fujisaki-Okamoto constructions (PKC 1999, Crypto 1999). Our tight security proof is obtained by reducing to the security of the underlying Ring-LWE problem, avoiding an intermediate reduction to a CPA-secure encryption scheme. The proof technique may be of interest for other schemes based on LWE and Ring-LWE.
16th IMA International Conference on Cryptography and Coding
IMA-CCC is a crypto and coding theory conference biennially held in the UK. It was previously held in Cirencester, so you might have heard of it as the “Cirencester” conference. However, it has been moved to Oxford, so calling it Cirencester now is a bit confusing. Anyway, it is happening again this year. IMA is a small but fine conference with the added perk of being right before Christmas. This is great because around that time of the year Oxford is a fairly Christmas-y place to be.
12 – 14 December 2017, St Catherine’s College, University of Oxford
https://collegephysicsanswers.com/openstax-solutions/even-when-head-held-erect-figure-940-its-center-mass-not-directly-over-0 | Question
Even when the head is held erect, as in Figure 9.40, its center of mass is not directly over the principal point of support (the atlanto-occipital joint). The muscles at the back of the neck should therefore exert a force to keep the head erect. That is why your head falls forward when you fall asleep in the class. (a) Calculate the force exerted by these muscles using the information in the figure. (b) What is the force exerted by the pivot on the head?
Question Image
1. $25\textrm{ N}$
2. $75 \textrm{ N}$
Solution Video
# OpenStax College Physics for AP® Courses Solution, Chapter 9, Problem 32 (Problems & Exercises) (1:42)
Video Transcript
This is College Physics Answers with Shaun Dychko. The atlanto-occipital joint is exerting a force upwards on the skull and there's a center of gravity positioned here which is exerting a torque about a pivot at this joint and it's 2.5 centimeters away and then there's neck muscles pulling straight down on this side exerting a counter-clockwise torque 5.0 centimeters from the pivot and when you say the two torques have to be equal in order for the head to be stationary, and so we can solve for F M by dividing both sides by 5.0 centimeters. So by the way, this is the torque due to the weight of the head, its weight multiplied by its lever arm— 2.5 centimeters— and here's the torque due to the neck muscles, the neck muscle force times 5 centimeters. So the neck muscle force then is the weight times 2.5 centimeters divided by 5.0 centimeters and it's okay to leave the units in centimeters instead of converting them into meters because since we are dividing these lengths, the centimeter units will cancel since they are the same. So we have 50 newtons times 2.5 divided by 5.0 which is 25 newtons is the force due to the muscles and that's downwards. The force due to this joint is the only force upwards and it has to balance the two forces downwards: force due to the muscle and the weight of the head. So it's going to be 25 newtons plus 50 newtons which is 75 newtons and that will be upwards.
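In symbols, the torque balance about the joint described above gives $F_{M}(5.0\textrm{ cm}) = (50\textrm{ N})(2.5\textrm{ cm})$, so $F_{M} = 25\textrm{ N}$, and the force balance then gives $F_{J} = 25\textrm{ N} + 50\textrm{ N} = 75\textrm{ N}$, directed upwards.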
https://mathoverflow.net/questions/310589/an-integral-in-gradshteyn-and-ryzhik | # An integral in Gradshteyn and Ryzhik
Section 3.248 of the 4th edition of the table of integrals by Gradshteyn and Ryzhik contains three entries. They are elementary examples of the beta function. In the 5th edition there are two new entries in this section: 3.248.5 is the integral from 0 to infinity of the function $$\int_0^\infty \frac{1}{(1+x^2)^{3/2}\sqrt{f(x) + \sqrt{f(x)}}}\,dx,$$
where
$$f(x) = 1 + \frac{4x^2}{3(1+x^2)^2}.$$ The answer is given as $\pi/(2 \sqrt{6})$. This is incorrect. The entry was taken out in the 7th edition. Now we know what the correct answer should be (a complicated difference of two elliptic integrals) and we also know if the inner square root is replaced by the 3/2 power, the answer is $\pi/(2 \sqrt{6})$. We would like to know if anyone has information about the origin of this entry.
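A quick numerical check (my own snippet, not part of the original question) makes the comparison easy to reproduce:

import numpy as np
from scipy.integrate import quad

def f(x):
    return 1 + 4 * x**2 / (3 * (1 + x**2) ** 2)

# entry 3.248.5 as printed (inner square root)
orig = quad(lambda x: 1 / ((1 + x**2) ** 1.5 * np.sqrt(f(x) + np.sqrt(f(x)))), 0, np.inf)[0]
# variant with the inner square root replaced by the 3/2 power
variant = quad(lambda x: 1 / ((1 + x**2) ** 1.5 * np.sqrt(f(x) + f(x) ** 1.5)), 0, np.inf)[0]

print(orig, variant, np.pi / (2 * np.sqrt(6)))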
The integral appeared in an edition of the table GR. I became interested in these evaluations and when we contacted the editors, there was no information about the origin of this formula. Since I had done some evaluations involving the double square root function, this entry caught my eye. I was hopeful that by bringing this entry to a wider forum, someone would know about its origin.
Thanks.
• Welcome, Victor! – Igor Rivin Sep 14 at 18:31
• Note: Victor Moll is one of the editors for recent editions of the book. – Gerald Edgar Sep 14 at 19:22
• But we want an exact answer. There is no point in using Mathematica/Maple to evaluate integrals numerically. – Victor Moll Sep 15 at 1:42
https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3504029 | # Gauge Transformations in the Dual Space, and Pricing and Estimation in the Long Run in Affine Jump-Diffusion Models
23 Pages Posted: 13 Jan 2020 Last revised: 26 Mar 2021
## Svetlana Boyarchenko
University of Texas at Austin - Department of Economics
## Sergei Levendorskii
Calico Science Consulting
Date Written: December 14, 2019
### Abstract
We suggest a simple reduction of pricing European options in affine jump-diffusion models to pricing options with modified payoffs in diffusion models. The procedure is based on the conjugation of the infinitesimal generator of the model with an operator of the form $e^{i\Phi(-\sqrt{-1}\partial_x)}$ (gauge transformation in the dual space). A general procedure for the calculation of the function $\Phi$ is given, with examples. As applications, we consider pricing in jump-diffusion models and their subordinated versions using the eigenfunction expansion technique, and estimation of the extremely rare jumps component. The beliefs of the market about yet unobserved extreme jumps and pricing kernel can be recovered: the market prices allow one to see "the shape of things to come".
Keywords: affine jump-diffusions, eigenfunction expansion, long run, estimation, Ornstein-Uhlenbeck model, Vasicek model, square root model, CIR model
JEL Classification: C58, C63, C65, G12
Suggested Citation
Boyarchenko, Svetlana I. and Levendorskii, Sergei Z., Gauge Transformations in the Dual Space, and Pricing and Estimation in the Long Run in Affine Jump-Diffusion Models (December 14, 2019). Available at SSRN: https://ssrn.com/abstract=3504029 or http://dx.doi.org/10.2139/ssrn.3504029
https://m-sciences.com/index.php/jast/citationstylelanguage/get/associacao-brasileira-de-normas-tecnicas?submissionId=57&publicationId=57 | Lakshmi HN, K.; Latha, S. A Note on Sufficient Conditions for Sakaguchi Type Functions of Order $$\beta$$. Journal of Advanced Studies in Topology, [S. l.], v. 3, n. 2, p. 59–65, 2022. Available at: https://m-sciences.com/index.php/jast/article/view/57. Accessed: 5 Dec. 2022.
http://bhomnick.net/cracking-the-taiwan-railways-captcha-python/ | Anyone who's ever spent time on the Taiwanese internet knows it's not exactly a model of modern usability. Take the Taiwan Railways Administration (TRA) online booking system, for example. Someone decided that the only way possible to query whether or not a train has seats available is to actually book a ticket on that train. Someone (correctly) also decided that every script kiddie out there is going to try and automate this because it's surprisingly nice to know which trains have availability before you book, rather than filling out a lengthy form and seeing the "Order Ticket Fail:No seat available in this journey" message over and over again.
Anyone who's ever spent time on the Taiwanese internet also knows it's not exactly a model of modern security. Take the TRA online booking system, for example. Someone decided that to prevent automation they'll throw a captcha on the last step in the booking process--right before they tell you there are no seats available. That someone (also correctly) realized that if they built their own captcha they could bill at least 20 more hours at full rate, because who doesn't want more security?
I took a crack (no pun intended) at this, thinking it would be a fun project to get some non-web python in. Follow along with the code on GitHub.
There are two main pieces to breaking the captcha: isolating characters from the original image, and then guessing what each character is by comparing image data to a sample training set.
To give you an example of what we're dealing with, here are a few examples of captchas I've encountered:
As you can see they're not the greatest captchas in the world. There's some color variation, a bit of background noise, and rotation in the characters, but the real weakness is that only numerical digits are used and every captcha has either 5 or 6 characters. This means we'll only have to guess among 10 possibilities instead of the 30+ you'd expect in a normal alphanumeric captcha. We also know exactly how many characters to expect, which significantly increases the accuracy of our decoder.
The first step is to remove anything that isn't part of a character, so we'll try and get rid of the background color and those weird web-1.0-looking boxes. We can take advantage of the fact both the background and boxes are generally lighter than the characters and filter out colors above a certain pixel value.
from operator import itemgetter

# im.getcolors() returns (count, colour) tuples; keep the n most frequent
# colours whose palette value lies between min and max, then drop the counts
colors = sorted(
    filter(
        lambda x: x[1] < max and x[1] > min,
        im.getcolors()
    ),
    key=itemgetter(0),
    reverse=True
)[:n]
colors = [colour for count, colour in colors]
Here we're returning the n most prominent colors in the image by pixel color frequency, ignoring pixels above (lighter than) max or below (darker than) min. All the images we're working with here have been converted to 256 color palette gifs, so 255 always represents white. Next we'll use this list of colors to build a new monochrome image using only pixels from the original that appear in our list of prominent colors.
sample = Image.new('1', im.size, 1)
width, height = im.size
for col in range(width):
    for row in range(height):
        pixel = im.getpixel((col, row))
        if pixel in colors:
            sample.putpixel((col, row), 0)
If we run this against the top right captcha in the image above the result looks like this:
This is already looking pretty reasonable, but some of the background noise still remains and will cause trouble when we try and break the image into characters. This can be removed by doing some more filtering:
im = im.filter(ImageFilter.RankFilter(n, m))
The rank filter takes windows of n pixels, orders them by value, and sets the center point as the mth value in that list. In this case, I'm using n=3 and m=2, but these numbers were just eyeballed and there's probably a better way to do this. Here's what the filtered image looks like:
Now it's looking pretty obvious where the characters are. Things are a little blurrier from the filter but luckily this doesn't matter as the character shapes are still unique enough for comparison. The last step is to break them up into features.
import numpy
from scipy import ndimage

arr = numpy.asarray(im.convert('L'))
labels, _ = ndimage.label(numpy.invert(arr))
slices = ndimage.find_objects(labels)
features = []
for slice in slices:
    feature = arr[slice]
    features.append((feature, numpy.count_nonzero(feature),
                     slice[1].start))
Here we're taking advantage of scipy's ndimage to determine slices representing contiguous areas of black pixels in our image, pull those features out of the image, and create a list of features along with their pixel count and starting x-coordinate. Since we know there are only 5 or 6 characters in the captcha we only need to look at the 6 largest features.
features = sorted(
    filter(lambda x: x[1] > min_pixels, features),
    key=itemgetter(1),
    reverse=True
)[:max_features]
features = sorted(features, key=itemgetter(2))
Features with pixel counts below min_pixels are filtered out, we only return max_features features (6 in this case), and the features are sorted from left to right. This list, when converted back to images, looks how we'd expect:
Now it's time to take a guess at what these characters are.
## Using the vector space model to compare images
Some research (googling) led me to Ben Boyter's Decoding Captchas book where he uses a vector space model for classifying images and calculating similarity values. It works, the math isn't too complicated, and it sounds cool to talk about things in vector space. That's good enough for me.
The vector space model is something that appears frequently in text search. Documents can be represented as vectors where components are weights of specific terms, and it's then possible to retrieve relevant documents by comparing a query to a list of documents by examining the angle between their respective vectors.
We can do the same thing for images, except instead of words we're going to be identifying images by pixel values. Let's take these three blown-up images for example:
[Three blown-up 3x3 pixel example images, labeled A, B and C]
It's obvious to a human A and C are more similar visually. Now we need to teach our future AI overlords.
We can convert each of these images into vectors by setting each component to a pixel value, either 0 for white or 1 for black, left to right, top to bottom. Doing this yields:
$$\bf{A} = [0, 1, 1, 1, 0, 0, 0, 1, 0]\\ \bf{B} = [0, 0, 1, 0, 1, 1, 1, 1, 0]\\ \bf{C} = [1, 1, 1, 1, 0, 0, 0, 1, 0]\\$$
Now that we have vector space representations of our images, we need to compare them somehow. It turns out the cosine of the angle between two vectors is relatively easy to calculate:
$$\cos\theta=\frac{\bf{A}\cdot\bf{B}}{\vert\vert\bf{A}\vert\vert\:\vert\vert\bf{B}\vert\vert}$$
Remember both the norm of a vector and the dot product of two vectors return scalars, which is what we expect since we're looking for a cosine. Identical images will have no angle between them so $$\theta=0$$ and $$\cos\theta=1$$, and since we're only dealing with positive values, completely unrelated images will be orthogonal, i.e. $$\theta=\pi/2$$ and $$\cos\theta=0$$. Let's do the math for A and B:
$$\bf{A}\cdot\bf{B}=\sum_{i=1}^n\it{A_iB_i}=2\\ \vert\vert\bf{A}\vert\vert\:\vert\vert\bf{B}\vert\vert=\sqrt{\sum_{i=1}^n\it{A}_i^2}\sqrt{\sum_{i=1}^n\it{B}_i^2}=\sqrt{20}\\ \cos\theta=\frac{2}{\sqrt{20}}\approx0.4472$$
Not so close. However, if we do the same for A and C:
$$\cos\theta=\frac{4}{\sqrt{20}}\approx0.8944$$
That's a much better match. Python makes it easy to do this for PIL images:
from itertools import izip  # Python 2; use zip on Python 3
from math import sqrt

def cosine_similarity(im1, im2):
    def _vectorize(im):
        return list(im.getdata())
    def _dot_product(v1, v2):
        return sum([c1*c2 for c1, c2 in izip(v1, v2)])
    def _magnitude(v):
        return sqrt(sum(c**2 for c in v))
    # scale() is defined elsewhere in the project; it is assumed to resize
    # both images to the same dimensions before comparison
    v1, v2 = [_vectorize(im) for im in scale(im1, im2)]
    return _dot_product(v1, v2) / (_magnitude(v1) * _magnitude(v2))
## Making some training data
Armed with a way of isolating characters from the original captcha and a method for comparing image similarity we can start cracking captchas.
The one tricky thing about these captchas is the characters have a random rotation angle. And while we could try to deskew the characters or rotate them and compare to a sample iconset, I thought it would probably be easier just to generate an iconset containing samples already rotated at various angles.
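The post doesn't show that generation script, so here is one plausible way it could be done (a sketch only; the font file, icon size and angle range are my assumptions, and the fillcolor argument needs a reasonably recent Pillow): render each digit with a font over a range of angles and save each result as a monochrome icon.

import os
from PIL import Image, ImageDraw, ImageFont

if not os.path.isdir('iconset'):
    os.makedirs('iconset')
font = ImageFont.truetype('DejaVuSans.ttf', 28)  # any font close to the captcha's
for digit in '0123456789':
    for angle in range(-30, 31, 5):
        icon = Image.new('1', (40, 40), 1)                    # white background
        ImageDraw.Draw(icon).text((10, 5), digit, font=font, fill=0)
        icon = icon.rotate(angle, fillcolor=1)                # keep the corners white
        icon.save('iconset/%s_%d.png' % (digit, angle))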
## Putting it all together
It's a bit brute force, but now all our decoder has to do is compare each isolated character to the sample iconset and pick the character with the greatest similarity:
def guess_character(self, im, min_similarity=0.7, max_guesses=5):
    guesses = []
    for symbol, icon in _get_icons():
        guess = cosine_similarity(im, icon)
        if guess >= min_similarity:
            guesses.append((symbol, guess))
    return sorted(
        guesses,
        reverse=True,
        key=itemgetter(1)
    )[:max_guesses]
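From here the decoder just takes the best guess for each isolated character, left to right (my sketch of the glue code, not something from the repository):

def decode(self, character_images):
    # assumes each character produced at least one guess above min_similarity
    return ''.join(self.guess_character(im)[0][0] for im in character_images)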
## Next steps
My initial goal for the project was 20% accuracy and I was pleasantly surprised that just eyeballing parameters for the image processing yielded a whopping 40% accuracy over my test dataset. Note to developers: that's what happens when you try and roll your own captcha. The biggest trip-ups seem to be overlapping characters and leftover noise artifacts from filtering.
Overlapping characters could be dealt with by considering a maximum pixel size for features. We already have a minimum feature pixel size to filter out leftover background noise, so why not add a maximum pixel feature size, split chunks larger than that horizontally, and hope they're not too weird for the guessing bits? Because I'm lazy, that's why.
Filter settings could probably be optimized further to deal with leftover background noise, but at the end of the day I'm calling 40% squarely in "ain't broke" territory. | 2019-03-19 23:45:12 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 2, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.3205268085002899, "perplexity": 1318.8895751532743}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-13/segments/1552912202161.73/warc/CC-MAIN-20190319224005-20190320010005-00117.warc.gz"} |
https://ncatlab.org/nlab/show/transitive+action | # Contents
## Definition
An action
$\rho : G \times X \to X$
of a group $G$ on a set $X$ is transitive if it has a single orbit, i.e., if for every two points $a,b$ there exists $g\in G$ such that $b = \rho(g,a)$. A set equipped with a transitive action of $G$ (and which is inhabited) is the same thing as a connected object in the category $G Set$.
For $k\ge 0$, an action $G \times X \to X$ is said to be $k$-transitive if the componentwise-action $G \times X^{\underline{k}} \to X^{\underline{k}}$ is transitive, where $X^{\underline{k}}$ denotes the set of tuples of $k$ distinct points (i.e., injective functions from $\{1,\dots,k\}$ to $X$). For instance, an action of $G$ on $X$ is 3-transitive if for any pair of triples $(a_1,a_2,a_3)$ and $(b_1,b_2,b_3)$ of points in $X$, where $a_i \ne a_j$ and $b_i \ne b_j$ for $i\ne j$, there exists $g \in G$ such that $(b_1,b_2,b_3) = (g a_1,g a_2,g a_3)$.
A transitive action that is also free is called regular.
## Examples
• Any group $G$ acts transitively on itself by multiplication $\cdot : G \times G \to G$, which is called the (left) regular representation of $G$.
• The alternating group $A_n$ acts transitively on $\{1,\dots,n\}$ for $n \gt 2$, and in fact it acts $(n-2)$-transitively for all $n \ge 2$.
• The modular group $PSL(2,\mathbb{Z})$ acts transitively on the rational projective line $\mathbb{P}^1(\mathbb{Q}) = \mathbb{Q} \cup \{\infty\}$. The projective general linear group $PGL(2,\mathbb{C})$ acts 3-transitively on the Riemann sphere $\mathbb{P}^1(\mathbb{C})$.
## As an action on cosets
Let $\rho : G \times X \to X$ be a transitive action and suppose that $X$ is inhabited. Then $\rho$ is equivalent to the action of $G$ by multiplication on a coset space $G/H$, where the subgroup $H$ is taken as the stabilizer subgroup
$H = G_x = \{ g \in G \mid g x = x \}$
of some arbitrary element $x \in X$. In particular, the transitivity of $\rho$ guarantees that the $G$-equivariant map $G/H \to X$ defined by $g H \mapsto g x$ is a bijection. (Note that although the subgroup $H = G_x$ depends on the choice of $x$, it is determined up to conjugacy, and so the coset space $G/H$ is independent of the choice of element.)
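For example, $S_3$ acts transitively on $X = \{1,2,3\}$; taking $x = 3$, the stabilizer is $H = G_x = \{e, (1\,2)\} \cong S_2$, and the map $g H \mapsto g x$ identifies the three cosets of $H$ in $S_3$ with the three elements of $X$.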
Revised on May 2, 2016 10:22:38 by Urs Schreiber (131.220.184.222) | 2016-08-25 18:25:35 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 53, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9930015802383423, "perplexity": 81.05121229070761}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-36/segments/1471982293922.13/warc/CC-MAIN-20160823195813-00174-ip-10-153-172-175.ec2.internal.warc.gz"} |
https://socratic.org/questions/a-classmate-claims-that-sodium-gains-a-positive-charge-when-it-becomes-an-ion-be | # A classmate claims that sodium gains a positive charge when it becomes an ion because it gains a proton. What is wrong with the student's claim?
Nov 24, 2016
You mean of course, $\text{apart from being egregiously wrong.........}$
#### Explanation:
The charge of an atom or ion, depends on the excess, the deficiency, or equality of electrons, fundamental particles of negligible mass and of NEGATIVE charge, to protons, fundamental particles of definite mass and of POSITIVE charge.
Our designation of positive and negative electronic charge is arbitrary, and in fact it would have made a lot more sense had chemists designated the electron as the positive particle.
However, the number of protons, fundamental particles of definite mass and of POSITIVE charge (I know I am repeating myself), specifies the identity of the nucleus absolutely, because it gives by definition the atomic number $Z$. $Z = 1$, we have hydrogen, $Z = 2$, we have helium, $Z = 3$, we have lithium,..................$Z = 41$, we have niobium. Elements don't gain protons during any chemical reaction; this is the province of the nuclear physicist, and not the chemist.
You don't have to learn these numbers, because for every exam you ever sit in chemistry and physics, you should and must be supplied with a copy of the Periodic Table. You do have to be able to use the Periodic Table correctly, and, given an atomic number, you have to be able to find the element, and know that this number $Z$ represents the number of nuclear protons, and thus also the number of electrons in the neutral element.
The other thing the Periodic Table supplies is the atomic mass, which for each element is the weighted average of the masses of its isotopes, whose nuclei may contain different numbers of neutrons.
Confused yet? If there is a specific query or question, voice it, and someone will help you. Atomic structure is not as complicated as it seems. | 2019-08-25 04:30:27 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 7, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7845658659934998, "perplexity": 558.6755390770355}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-35/segments/1566027323067.50/warc/CC-MAIN-20190825042326-20190825064326-00041.warc.gz"} |
https://stats.stackexchange.com/questions/78079/confidence-interval-of-rmse | # Confidence interval of RMSE
I have taken a sample of $n$ data points from a population. Each of these points has a true value (known from ground truth) and an estimated value. I then calculate the error for each sampled point and then calculate the RMSE of the sample.
How can I then infer some sort of confidence interval around this RMSE, based upon the sample size $n$?
If I was using the mean, rather than the RMSE, then I wouldn't have a problem doing this as I can use the standard equation
$m = \frac{Z \sigma}{\sqrt{n}}$
but I don't know whether this is valid for RMSE rather than the mean. Is there some way that I can adapt this?
(I have seen this question, but I don't have issues with whether my population is normally-distributed, which is what the answer there deals with)
• What specifically are you computing when you "calculate the RMSE of the sample"? Is it the RMSE of the true values, of the estimated values, or of their differences?
– whuber
Nov 29, 2013 at 18:29
• I'm calculating the RMSE of the differences, that is, calculating the square root of the mean of the squared differences between the true and estimated values. Nov 29, 2013 at 18:30
• If you know the 'ground truth' (though I am not sure what that actually means), why would you need the uncertainty in RMSE? Are you trying to construct some kind of inference about cases where you don't have the ground truth? Is this a calibration issue? Dec 2, 2013 at 16:11
• @Glen_b: Yup, that's exactly what we're trying to do. We don't have the ground truth for the entire population, just for the sample. We are then calculating an RMSE for the sample, and we want to have the confidence intervals on this as we are using this sample to infer the RMSE of the population. Dec 2, 2013 at 19:19
• Possible duplicate of SE of RMSE in R Dec 3, 2013 at 10:48
I might be able to give an answer to your question under certain conditions.
Let $$x_{i}$$ be your true value for the $$i^{th}$$ data point and $$\hat{x}_{i}$$ the estimated value. If we assume that the differences between the estimated and true values have
1. mean zero (i.e. the $$\hat{x}_{i}$$ are distributed around $$x_{i}$$)
2. follow a normal distribution
3. and all have the same standard deviation $$\sigma$$
in short:
$$\hat{x}_{i}-x_{i} \sim \mathcal{N}\left(0,\sigma^{2}\right),$$
then you really want a confidence interval for $$\sigma$$.
If the above assumptions hold true $$\frac{n\mbox{RMSE}^{2}}{\sigma^{2}} = \frac{n\frac{1}{n}\sum_{i}\left(\hat{x_{i}}-x_{i}\right)^{2}}{\sigma^{2}}$$ follows a $$\chi_{n}^{2}$$ distribution with $$n$$ (not $$n-1$$) degrees of freedom. This means
\begin{align} P\left(\chi_{\frac{\alpha}{2},n}^{2}\le\frac{n\mbox{RMSE}^{2}}{\sigma^{2}}\le\chi_{1-\frac{\alpha}{2},n}^{2}\right) = 1-\alpha\\ \Leftrightarrow P\left(\frac{n\mbox{RMSE}^{2}}{\chi_{1-\frac{\alpha}{2},n}^{2}}\le\sigma^{2}\le\frac{n\mbox{RMSE}^{2}}{\chi_{\frac{\alpha}{2},n}^{2}}\right) = 1-\alpha\\ \Leftrightarrow P\left(\sqrt{\frac{n}{\chi_{1-\frac{\alpha}{2},n}^{2}}}\mbox{RMSE}\le\sigma\le\sqrt{\frac{n}{\chi_{\frac{\alpha}{2},n}^{2}}}\mbox{RMSE}\right) = 1-\alpha. \end{align}
Therefore, $$\left[\sqrt{\frac{n}{\chi_{1-\frac{\alpha}{2},n}^{2}}}\mbox{RMSE},\sqrt{\frac{n}{\chi_{\frac{\alpha}{2},n}^{2}}}\mbox{RMSE}\right]$$ is your confidence interval.
Here is a python program that simulates your situation
from scipy import stats
from numpy import *
s = 3
n = 10
c1, c2 = stats.chi2.ppf([0.025, 1-0.025], n)
y = zeros(50000)
for i in range(len(y)):
    y[i] = sqrt(mean((random.randn(n)*s)**2))
print "1-alpha=%.2f" % (mean((sqrt(n/c2)*y < s) & (sqrt(n/c1)*y > s)),)
Hope that helps.
If you are not sure whether the assumptions apply or if you want to compare what I wrote to a different method, you could always try bootstrapping.
• I think you are wrong - he wants CI for RMSE, not $\sigma$. And I want it too :) Dec 3, 2013 at 10:54
• I don't think I am wrong. Just think about it like this: The MSE is actually the sample variance since $\mbox{MSE} = \hat\sigma^2 = \frac{1}{n}\sum_{i=1}^n (x_i-\hat x_i)^2$. The only difference is that you divide by $n$ and not $n-1$ since you are not subtracting the sample mean here. The RMSE would then correspond to $\sigma$. Therefore, the population RMSE is $\sigma$ and you want a CI for that. That's what I derived. Otherwise I must completely misunderstand your problem. Dec 3, 2013 at 16:37
• Your assumption of an unbiased estimator is quite strong. Moreover, your confidence interval should be with $n-1$.
– Sam
Jun 26, 2020 at 15:03
• I encoded an example using this technique in R: gist.github.com/brshallo/7eed49c743ac165ced2294a70e73e65e Mar 17, 2021 at 18:48
• The link to the Purdue university website is no longer valid, so an edit has removed it. If you happen to recall which resource you were referring to, please edit a full citation into the post. Thanks!
– Sycorax
Dec 20, 2021 at 22:27
The reasoning in the answer by fabee seems correct if applied to the STDE (standard deviation of the error), not the RMSE. Using similar nomenclature, $i=1,\,\ldots,\,n$ is an index representing each record of data, $x_i$ is the true value and $\hat{x}_i$ is a measurement or prediction.
The error $\epsilon_i$, BIAS, MSE (mean squared error) and RMSE are given by: $$\epsilon_i = \hat{x}_i-x_i\,,\\ \text{BIAS} = \overline{\epsilon} = \frac{1}{n}\sum_{i=1}^{n}\epsilon_i\,,\\ \text{MSE} = \overline{\epsilon^2} = \frac{1}{n}\sum_{i=1}^{n}\epsilon_i^2\,,\\ \text{RMSE} = \sqrt{\text{MSE}}\,.$$
Agreeing on these definitions, the BIAS corresponds to the sample mean of $\epsilon$, but MSE is not the biased sample variance. Instead: $$\text{STDE}^2 = \overline{(\epsilon-\overline{\epsilon})^2} = \frac{1}{n}\sum_{i=1}^{n}(\epsilon_i-\overline{\epsilon})^2\,,$$ or, if both BIAS and RMSE were computed, $$\text{STDE}^2 = \overline{(\epsilon-\overline{\epsilon})^2}=\overline{\epsilon^2}-\overline{\epsilon}^2 = \text{RMSE}^2 - \text{BIAS}^2\,.$$ Note that the biased sample variance is being used instead of the unbiased, to keep consistency with the previous definitions given for the MSE and RMSE.
Thus, in my opinion the confidence intervals established by fabee refer to the sample standard deviation of $\epsilon$, STDE. Similarly, confidence intervals may be established for the BIAS based on the z-score (or t-score if $n<30$) and $\left.\text{STDE}\middle/\sqrt{n}\right.$.
• You are right, but missed a part of my answer. I basically assumed that BIAS=0 (see assumption 1). In that case, $RMSE^2 = STDE^2$ as you derived. Since both $RMSE^2$ and $BIAS^2$ are $\chi^2$ and there exists a close form solution for the sum of two $\chi^2$ RVs, you can probably derive a close form confidence interval for the case when assumption 1 is dropped. If you do that and update your answer, I'll definitely upvote it. Nov 14, 2015 at 3:25
Following Faaber 1999, the uncertainty of RMSE is given as $$\sigma (\hat{RMSE})/RMSE = \sqrt{\frac{1}{2n}}$$ where $n$ is the number of datapoints.
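For example, with $n = 1000$ datapoints this gives a relative uncertainty of $\sqrt{1/2000} \approx 0.022$, i.e. the standard error of the RMSE is roughly 2% of the RMSE itself.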
Borrowing code from @Bryan Shalloway's link (https://gist.github.com/brshallo/7eed49c743ac165ced2294a70e73e65e, which is in the comment in the accepted answer), you can calculate this in R with the RMSE value and the degrees of freedom, which @fabee suggests is n (not n-1) in this case.
The R function:
# needs the tibble package (or the tidyverse) loaded for tibble()
rmse_interval <- function(rmse, deg_free, p_lower = 0.025, p_upper = 0.975){
  tibble(.pred_lower = sqrt(deg_free / qchisq(p_upper, df = deg_free)) * rmse,
         .pred_upper = sqrt(deg_free / qchisq(p_lower, df = deg_free)) * rmse)
}
A practical example: If I had an RMSE value of 0.3 and 1000 samples were used to calculate that value, I can then do
rmse_interval(0.3, 1000)
which would return:
# A tibble: 1 x 2
.pred_lower .pred_upper
<dbl> <dbl>
1 0.287 0.314 | 2022-05-25 14:25:36 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 14, "wp-katex-eq": 0, "align": 1, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6773292422294617, "perplexity": 646.3381396749413}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-21/segments/1652662587158.57/warc/CC-MAIN-20220525120449-20220525150449-00750.warc.gz"} |
https://math.stackexchange.com/questions/2396947/a-question-of-trigonometry-on-how-to-find-minimum-value | # A question of trigonometry on how to find minimum value. [duplicate]
Find the minimum value of $$2^{\sin^2 \alpha} + 2^{\cos^2 \alpha}.$$
## marked as duplicate by lab bhattacharjee (trigonometry) Aug 19 '17 at 1:13
$f(x)=2^x$ is a convex function.
Thus, by Jensen: $$2^{\sin^2\alpha}+2^{\cos^2\alpha}\geq2\cdot2^{\frac{\sin^2\alpha+\cos^2\alpha}{2}}=2\sqrt2.$$ The equality occurs for $\sin^2\alpha=\cos^2\alpha$, that is for $\alpha=45^{\circ}$, which says that we got a minimal value.
Done!
Also, we can use $(x+y)^2\geq4xy$, which is $(x-y)^2\geq0$: $$2^{\sin^2\alpha}+2^{\cos^2\alpha}=\sqrt{\left(2^{\sin^2\alpha}+2^{\cos^2\alpha}\right)^2}\geq$$ $$\geq\sqrt{4\cdot2^{\sin^2\alpha}\cdot2^{\cos^2\alpha}}=\sqrt{4\cdot2^{\sin^2\alpha+\cos^2\alpha}}=\sqrt8=2\sqrt2$$
• could you give me a more "beginner-type" solution? – Lokesh Sangewar Aug 17 '17 at 15:09
• @Lokesh Sangewar My solution for beginner. Draw graph of $f(x)=2^x$ and two points $A\left(\sin^2\alpha,2^{\sin^2\alpha}\right)$ and $B\left(\cos^2\alpha,2^{\cos^2\alpha}\right)$. Then graph of $f$ placed behind the segment $AB$. Now, the inequality is obviuous! – Michael Rozenberg Aug 17 '17 at 15:15
• @Lokesh Sangewar I added something. See now. – Michael Rozenberg Aug 17 '17 at 15:21
HINT: By $AM-GM$ we have $$\frac{2^{\sin(x)^2}+2^{\cos(x)^2}}{2}\geq \sqrt{2^{\sin(x)^2+\cos(x)^2}}=...$$
Hint: Write $\cos(\alpha)^2=1-\sin(\alpha)^2$, so that
\begin{align} 2^{\sin(\alpha)^2}+2^{\cos(\alpha)^2}&=2^{\sin(\alpha)^2}+2^{1-\sin(\alpha)^2} \\&=2^{\sin(\alpha)^2}+\frac{2}{2^{\sin(\alpha)^2}} \end{align}
With $t={\sin(\alpha)^2}$, can you minimize the expression above?
• How should I minimize it? – Lokesh Sangewar Aug 17 '17 at 15:07
• We know $t\in[0,1]$, so $2^t\in[1,2]$. Let $x=2^t$. The question is hence equivalent to minimizing $x+\frac2x$. This can be done in several different ways -- you can use calculus, AM-GM inequality, and if you can picture the graph of $f(x)=x+\frac2x$, you can even do it via quadratic equations (when does a quadratic polynomial have a single real root?). – Fimpellizieri Aug 17 '17 at 15:10
$f'(\alpha)=2\log 2 \sin \alpha \cos \alpha \left[2^{\sin ^2\alpha}- 2^{\cos ^2\alpha}\right]$
$f'(\alpha)=0 \to 2\sin\alpha\cos\alpha=0$ or $2^{\sin ^2\alpha}- 2^{\cos ^2\alpha}=0$
$\sin 2\alpha=0\to 2\alpha=k\pi\to\alpha=\dfrac{k\pi}{2}$
$2^{\sin ^2\alpha}- 2^{\cos ^2\alpha}=0\to 2^{\sin ^2\alpha}= 2^{\cos ^2\alpha}$
$\sin^2\alpha=\cos^2\alpha\to |\sin\alpha|=|\cos\alpha|\to \alpha=\dfrac{\pi}{4}+k\dfrac{\pi}{2}$
Now it's easy to see that $\alpha=\dfrac{\pi}{4}$ etc leads to the minimum
$2^{\sin^2 \alpha} + 2^{\cos^2 \alpha}=2^{\frac12}+2^{\frac12}=2\sqrt 2$
hope this helps
$x:= \sin^2(\alpha)$; $1-x = \cos^2(\alpha)$ .
Then: $f(x):= 2^x + \frac{2}{2^x}$, $0 \le x \le1$.
$z := 2^x$ ;
$g(z): = z + \frac{2}{z} , 1 \le z \le 2$.
AM GM inequality:
$(1/2) ( z + \frac{2}{z}) \ge (z \frac{2}{z})^{1/2} =$
$\sqrt{2}$.
$g(z) = z + \frac{2}{z} \ge 2 \sqrt{2}$.
Equality for $z = √2$.
Back substitution: $2^x = 2^{1/2}$.
Minimum at $x= 1/2$, I.e
$x = \sin^2(\alpha) = 1/2 = \cos^2(\alpha)$ ,
hence $\alpha = 45°$.
$\min(f(x)) = f(x =1/2) = 2√2$. | 2019-10-16 02:00:38 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 1, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6201372146606445, "perplexity": 3127.9730692395806}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-43/segments/1570986661296.12/warc/CC-MAIN-20191016014439-20191016041939-00558.warc.gz"} |
https://picongpu.readthedocs.io/en/0.3.1/usage/workflows/numberOfCells.html | # Setting the Number of Cells¶
Together with the grid resolution in grid.param, the number of cells in our .cfg files determine the overall size of a simulation (box). The following rules need to be applied when setting the number of cells:
Each GPU needs to:
1. contain an integer multiple of supercells
2. contain at least three supercells
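For example, with the default 8x8x4 supercell, a per-GPU volume of 256x256x128 cells contains 32x32x32 supercells and satisfies both rules, whereas 250x256x128 cells would be invalid because 250 is not an integer multiple of 8.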
Supercell sizes in terms of number of cells are set in memory.param and are by default 8x8x4 for 3D3V simulations on GPUs. For 2D3V simulations, 16x16 is usually a good supercell size, however the default is simply cropped to 8x8, so make sure to change it to get more performance. | 2019-12-12 23:31:50 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.33542969822883606, "perplexity": 1694.33801313699}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-51/segments/1575540547536.49/warc/CC-MAIN-20191212232450-20191213020450-00152.warc.gz"} |
https://algorithm-essentials.soulmachine.me/dfs/additive-number/ | # Additive Number
### Description
Additive number is a string whose digits can form additive sequence.
A valid additive sequence should contain at least three numbers. Except for the first two numbers, each subsequent number in the sequence must be the sum of the preceding two.
For example:
"112358" is an additive number because the digits can form an additive sequence: 1, 1, 2, 3, 5, 8.
1 + 1 = 2, 1 + 2 = 3, 2 + 3 = 5, 3 + 5 = 8
"199100199" is also an additive number, the additive sequence is: 1, 99, 100, 199.
1 + 99 = 100, 99 + 100 = 199
Note: Numbers in the additive sequence cannot have leading zeros, so sequence 1, 2, 03 or 1, 02, 3 is invalid.
Given a string containing only digits '0'-'9', write a function to determine if it's an additive number.
Follow up:
How would you handle overflow for very large input integers?
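A common way to handle this (not used in the code below, which parses the substrings into long) is to avoid fixed-width integers entirely: either add the two substrings digit by digit as strings, or perform the addition with java.math.BigInteger.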
### Code
// Additive Number
// multi-entry DFS
// time complexity O(n^3), space complexity O(1)
public class Solution {
    public boolean isAdditiveNumber(String num) {
        for (int i = 1; i <= num.length() / 2; ++i) {
            if (num.charAt(0) == '0' && i > 1) continue;
            for (int j = i + 1; j < num.length(); ++j) {
                if (num.charAt(i) == '0' && j - i > 1) continue;
                if (dfs(num, 0, i, j)) return true;
            }
        }
        return false;
    }

    // check whether, starting from [i, j) and [j, k), we can walk to the end of the string
    private static boolean dfs(String num, int i, int j, int k) {
        long num1 = Long.parseLong(num.substring(i, j));
        long num2 = Long.parseLong(num.substring(j, k));
        final String addition = String.valueOf(num1 + num2);
        if (!num.substring(k).startsWith(addition)) return false;
        if (k + addition.length() == num.length()) return true;
        return dfs(num, j, k, k + addition.length());
    }
} | 2023-04-02 08:42:07 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.45119914412498474, "perplexity": 1812.00504134766}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296950422.77/warc/CC-MAIN-20230402074255-20230402104255-00709.warc.gz"}
https://brilliant.org/problems/sometimes-coefficients-are-scary/ | # Do you know how to multiply?
Algebra Level 4
$\large \displaystyle \prod_{n=1}^{100} (x+n)$
Find the coefficient of $$x^{98}$$ in the above product.
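Solution sketch (not given on the original page): the coefficient of $$x^{98}$$ is the second elementary symmetric polynomial of $$1, 2, \ldots, 100$$, so it equals $$\frac{(1 + 2 + \cdots + 100)^2 - (1^2 + 2^2 + \cdots + 100^2)}{2} = \frac{5050^2 - 338350}{2} = 12582075.$$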
× | 2018-01-22 10:26:56 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.574849009513855, "perplexity": 947.0435923003122}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-05/segments/1516084891277.94/warc/CC-MAIN-20180122093724-20180122113724-00699.warc.gz"} |
https://www.gradesaver.com/textbooks/science/chemistry/chemistry-molecular-science-5th-edition/chapter-4-energy-and-chemical-reactions-questions-for-review-and-thought-topical-questions-page-189d/61d | ## Chemistry: The Molecular Science (5th Edition)
The reaction that is the most exothermic will be the one with the most negative $\Delta H$. Using the calculations from part c, we find that this is the reaction with HF. | 2019-08-18 03:42:23 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.376518577337265, "perplexity": 964.7316986445567}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-35/segments/1566027313589.19/warc/CC-MAIN-20190818022816-20190818044816-00251.warc.gz"} |
https://www.middleprofessor.com/files/applied-biostatistics_bookdown/_book/plotting-models.html | # Chapter 4 Plotting Models
So, along the lines of Sarah Susanka's "Not So Big House," Kolbert asks the group, "What would a Pretty Good House look like?" – Michael Maines
Plots should be the focus of both the reader and researcher. Instead of mindless plotting, a researcher should ask a series of questions of every plot
1. What is the point of each element in a plot?
2. Are these the points that I most want to communicate?
3. Are there better practices for communicating these points?
4. Are there points that I want to communicate that are not covered by these elements?
The answer to these questions should inform what is and what is not plotted. The result is a pretty good plot. The idea of a pretty good plot is borrowed from the “pretty good house” concept that grew out of a collaborative group of builders and architects in Northern New England. The “pretty good house” combines best practices for building an earth friendly, high performance home at a reasonable cost. There is no pretty good house governing body that awards certificates of achievement but, instead, a set of metrics and a collection of building practices that can achieve these.
A typical pretty good plot contains some combination of
1. Modeled effects with confidence intervals. “Effects” are differences between groups in response to treatment – the raison d’etre of an experiment.
2. Modeled means and confidence intervals.
3. Individual data points or a summary distribution of these.
## 4.1 Pretty good plots show the model and the data
The data to introduce best practices in plotting come from Figure 2d and Figure 2e from "ASK1 inhibits browning of white adipose tissue in obesity", introduced in the introductory chapter (Analyzing experimental data with a linear model).
### 4.1.1 Pretty good plot component 1: Modeled effects plot
Biologists infer the biological consequences of a treatment by interpreting the magnitude and sign of treatment “effects”, such as the differences in means among treatment levels. Why then do we mostly plot treatment level means, where effect magnitude and sign can only be inferred indirectly, by mentally computing differences in means? A pretty good plot directly communicates treatment effects and the uncertainty in the estimates of these effects using an effects plot.
Figure ?? is an effects plot of the linear model fit to the glucose tolerance data. The effects plot is "flipped". The y-axis is the categorical variable – it contains the labels identifying the pair of groups in the contrast and the direction of the difference. In addition to the pairwise comparisons, I include the interaction effect on the y-axis. The x-axis is the continuous variable – it contains the simple effects, which is the difference in means between the two groups identified by the y-axis labels. Additionally, the y-axis includes the estimate of the $$diet \times genotype$$ interaction effect. The bars are 95% confidence intervals of the effects (either simple effects or interaction effect), which is the range of values that are compatible with the observed data at the 95% level.
We can use the effects and CIs of the effects to evaluate the treatment effects. For example, when on a high fat diet (HFD), the mean, post-baseline plasma glucose level in the ASK1$$\Delta$$adipo is 3.5 mmol/L less than that for the control (ASK1F/F). Differences greater than 5.3 mmol/L below ASK1F/F levels, or less than 1.7 mmol/L below ASK1F/F levels, are not very compatible with the data. It is up to the research community to decide if 1.7 mmol/L or 3.5 mmol/L differences are physiologically meaningful effects.
### 4.1.2 Pretty good plot component 2: Modeled mean and CI plot
The response plot in Figure 4.2 “shows the model” – by this I mean the plot shows the modeled means, represented by the large circles, the modeled 95% confidence intervals of each mean, represented by the error bars, and the model-adjusted individual response values, represented by the small colored dots. What do I mean by modeled means, modeled error intervals, and model-adjusted responses?
| ask1 | diet | N | Sample mean | Sample sigma | Sample SE | Model mean | Model sigma | Model SE |
|---------|------|----|-------------|--------------|-----------|------------|-------------|----------|
| ASK1F/F | chow | 10 | 14.35 | 1.48 | 0.47 | 14.35 | 1.85 | 0.59 |
| ASK1F/F | HFD | 13 | 19.40 | 3.60 | 1.00 | 18.63 | 1.85 | 0.52 |
1. The modeled means and error intervals are estimated from the statistical model. Many published plots show the sample means and sample error intervals, which are computed within each group independently of the data in the other groups and are not adjusted for any covariates or for any hierarchical structure to the data.
2. A modeled mean will often be equal to the raw mean, but this will not always be the case. Here, the modeled means for the non-reference groups in Figure ?? do not equal the sample means because the modeled means are adjusted for the baseline measures of glucose (Table ??) (specifically, the modeled means are conditional on the baseline being equal to the mean of the baseline in the reference group).
3. For most of the analyses in this text, modeled error intervals are not the same as the sample error intervals and are commonly conspicuously different. For the glucose tolerance data, the modeled error intervals are calculated from a pooled estimate of $$\sigma$$ while the sample error intervals are estimated from sample-specific estimates of $$\sigma$$ (Table ??). For example, the model SE of the ASK1F/F chow group in the table above is the pooled estimate divided by the square root of that group's sample size, $$1.85/\sqrt{10} \approx 0.59$$, whereas the sample SE uses that group's own standard deviation, $$1.48/\sqrt{10} \approx 0.47$$.
4. Model-adjusted responses are responses that are adjusted for covariates in the model. If there are no covariates in the model, the model-adjusted responses are the same as the raw response. In the glucose tolerance data, the model-adjusted responses are the modeled, individual response measures if all individuals had the same baseline glucose (the covariate).
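To make item 4 concrete, here is a minimal sketch of how covariate-adjusted values can be computed in R. The object names (m_cov, baseline, dat) are placeholders, not objects created in this chapter; the chapter's plotting functions do this adjustment internally.

# sketch: "adjust" each response to a common value of the covariate by adding
# the observed residual to the model prediction at that common covariate value
nd <- dat
nd$baseline <- mean(dat$baseline)          # everyone gets the same baseline
dat$y_adjusted <- predict(m_cov, newdata = nd) + residuals(m_cov)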
Modeled means, error intervals, and responses are not commonly plotted but it is these values that are consistent with our inferences from the statistical model. There are many data sets in experimental biology where a plot of sample means, error intervals, and responses give a very distorted view of inference from the model.
The response plot in Figure 4.2 also “shows the data” by plotting response values as “jittered” dots. Showing the data
1. allows the reader to get a sense of the underlying sample size and distribution including outliers, which can be used to mentally model check the published statistical analysis. Adding a box plot, violin plot, or dot plot augments the communication of the distributions if there are enough data to justify the addition.
2. allows a reader to see the overlap in individual responses among groups and to evaluate the biological consequences of this overlap.
### 4.1.3 Combining Effects and Modeled mean and CI plots – an Effects and response plot.
Combining the effects and response plots into a single plot is an easy solution to issues that arise if only one or the other is used. What are these issues?
While a response plot like that in Figure 4.2 is standard in biology, it fails to show the effects, and the uncertainty in the effects, explicitly. To infer the effects from the plot, a reader must perform mental math – either compute the difference or the ratio between pairs of means. This mental math is easy enough if the comparisons are between individual treatment levels but much harder if the comparisons are between pooled sets of treatment levels, for example in a factorial experimental design. The mental math that is excessively difficult is the reconstruction of some kind of error interval of the contrasts, for example the 95% confidence intervals in Figure ?? and it is these intervals that are necessary for a researcher to infer the range of biological consequences that are compatible with the experiment’s results. The inclusion of the p-values for all pairwise comparisons in a response plot gives the significance level of these contrasts, but of the kinds of summary results that we could present (contrasts, error intervals, p-values), the p-values are the least informative.
Effects plots are very uncommon in most of biology outside of meta-analysis and clinical medicine more generally. An effects plot alone fails to communicate anything about the sample size or conditional distribution of the data. Equally important, response values are often meaningful and researchers working in the field should be familiar with usual and unusual values. This can be useful for interpreting biological consequences of treatment effects but also for researchers and readers to asses the credibility of the data (for example, I have twice, once in my own data and once in a colleagues data, found mistakes in the measurement of an entire data set of response variable because the plotted values weren’t credible).
### 4.1.4 Some comments on plot components
1. Several recent criticisms of bar plots have advocated box plots or violin plots as alternatives. Box plots and violin plots are useful alternatives to jittered dots if there are sufficient data to capture the distribution but I wouldn’t advocate replacing the plot of modeled means and confidence intervals with box or violin plots, as these communicate different things. More importantly, box and violin plots do not communicate the treatment effects.
2. Almost all plots in biology report the error bars that represent the sample standard error. As described above, sample standard error bars do not reflect the fit model and can be highly misleading, at least if they are interpreted as if they do reflect the model. Also, sample standard error bars can explicitly include absurd values or imply absurd confidence intervals. For example, I sometimes see standard error bars cross $$y=0$$ for a response that cannot be negative, such as a count. Even if the standard error bar doesn't cross zero, it is common to see standard error bars that imply (but do not explicitly show) 95% confidence intervals that cross zero, again for responses that cannot be negative. A standard error bar or confidence interval that crosses zero implies that negative means are compatible with the data. This is an absurd implication for responses that cannot have negative values (or are "bounded by" zero). Explicit or implicit error bars that cross zero are especially common for count responses with small means. If a researcher plots confidence intervals, these should be computed using a method that avoids absurd implications; such methods include the bootstrap and generalized linear models (a minimal bootstrap sketch follows this list).
3. Significance stars are okay, the actual p-value is better, effects plots are best. Many researchers add star symbols to a plot indicating the level of significance of a particular paired comparison. Stars are okay in the sense that there is no inferential difference between $$p = 0.015$$ and $$p = 0.045$$. There’s also no inferential difference between $$p = 0.0085$$ and $$p = 0.015$$, which highlights the weakness of -chotomizing any continuous variable. For this reason, a better, alternative would be to add the actual p-value (as above). A more serious criticism of stars is that it encourages researchers and readers to focus on statistical significance instead of effect size and uncertainty. A more valuable alternative, then, is to report the effects and uncertainty in an effects plot or a combined effects-and-response plot.
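To illustrate the bootstrap option mentioned in point 2, here is a minimal percentile-bootstrap interval for a single group mean (a sketch only; y stands for one group's non-negative responses and is not an object defined in this chapter):

set.seed(1)
boot_means <- replicate(10000, mean(sample(y, replace = TRUE)))
quantile(boot_means, probs = c(0.025, 0.975))  # percentile interval; cannot cross zero if y >= 0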
## 4.2 Working in R
A reasonable goal of any research project should be a script to generate the final plots entirely within the R environment and not rely on external drawing software to add finishing features. This section covers some of the basics of using R packages to create plots. Later chapters cover some of the details that are specific to the analyses in that chapter.
ggplot2 is one of the major plotting environments in R and the one that seems to have the strongest following, especially among new R users. ggplot2 has the ability to generate extremely personalized and finished plots. However, ggplot2 has a long learning curve and until one is pretty comfortable with its implementation of the grammar of graphics, creating a plot with multiple layers (mean points, error intervals, raw data points, p-values, text annotations) and modified aesthetics (axis text, point colors) can often require many hours of googling.
ggpubr is an extension to ggplot2 (it calls ggplot2 functions under the hood) and provides many canned functions for producing the kinds of ggplots that are published in biological journals. With one line of script, a researcher can generate a publishable plot.
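For example, a single ggpubr call along these lines (a sketch using the fig2e data imported below, not a line from the original text) produces a serviceable plot:

library(ggpubr)
# box plot of glucose AUC by treatment with the individual points added
ggboxplot(fig2e, x = "treatment", y = "glucose_auc", add = "jitter")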
ggplot_the_model and related functions in this chapter are my attempts to create a simple function for creating publication ready plots that highlight effect size and uncertainty.
Some of the basics for using ggpubr, ggplot2, and ggplot_the_model are outlined here. More specific examples are in each chapter.
### 4.2.1 Source data
The source data are that for Figure 2E. The response is $$glucose\_auc$$ the “area under the curve” of repeated measures of blood glucose during the 120 minutes of a glucose tolerance test. Glucose AUC is a measure of glucose tolerance, the higher the area, the higher the blood glucose over the two hours, and the worse the physiological response to a sudden rise in blood glucose. There are two treatment factor variables: 1) $$Diet$$, with levels “chow” and “HFD”, where “chow” is normal mouse chow and “HFD” is a high fat diet, and 2) $$ask1$$, with levels “ASK1F/F” and “ASK1Δadipo” where “ASK1F/F” is the control level and “ASK1Δadipo” is the ASK1 adipose-deletion mouse described in Chapter 1.
#### 4.2.1.1 Import
# packages assumed loaded in a setup chunk: here, readxl, data.table, magrittr
# (data_folder is also defined in the setup chunk)
data_from <- "ASK1 inhibits browning of white adipose tissue in obesity"
file_name <- "41467_2020_15483_MOESM4_ESM.xlsx"
file_path <- here(data_folder, data_from, file_name)

# the data are in "transposed" format -- each row contains the n
# measures of a treatment level. Read, then transpose
# to make the treatment levels the columns
fig2e_wide <- read_excel(file_path,
                         sheet = "Source Date_Figure 2",
                         range = c("A233:O236"), # lot of NA
                         col_names = FALSE) %>%
  data.table %>%
  transpose(make.names = 1) # turn data onto its side

# melt the four columns into a single "glucose_auc" column
# and create a new column containing treatment level.
y_cols <- colnames(fig2e_wide)

# melt
fig2e <- melt(fig2e_wide,
              measure.vars = y_cols,
              value.name = "glucose_auc",
              variable.name = "treatment")

# create two new columns that are the split of treatment
fig2e[, c("ask1", "diet") := tstrsplit(treatment,
                                       " ",
                                       fixed = TRUE)]

# since glucose_auc is the only response variable in this
# data.table, omit all rows with any NA
fig2e <- na.omit(fig2e)
# View(fig2e)
### 4.2.2 How to plot the model
The steps throughout the text for plotting the model fit to experimental data are
1. fit the statistical model
2. use the fit model to estimate the modeled means and confidence limits using emmeans from the emmeans package.
3. use the emmeans object to estimate the contrasts of interests using the contrast function from emmeans.
4. Plot the individual points. If covariates are in the model, use the fit model from step 1 to plot the adjusted values of the points.
5. Use the values from step 2 to plot the modeled means and error intervals.
6. If including p-value brackets, use the values from step 3.
If you are using ggplot_the_model functions, then steps 4-6 are done for you.
Here, I fit a linear model to the fig2e data and use the emmeans and contrast functions without comment. The details of these functions are in the chapters that follow. The fit model and construction of the plots are simplified from those above.
#### 4.2.2.1 Fit the model
The response is the glucose AUC, which is the area under the curve of the data from the glucose tolerance test. The model is a factorial linear model with ask1 genotype and diet as the two factors.
# glucose_auc is the AUC of the glucose tolerance curves computed using trapezoidal algorithm
m1 <- lm(glucose_auc ~ ask1 * diet,
         data = fig2e)
#### 4.2.2.2 Compute the modeled means table of estimated means and confidence intervals
Modeled means are computed by passing the model object (m1) to the emmeans function and specifying the columns containing the groups using the specs = argument.
m1_emm <- emmeans(m1, specs = c("ask1", "diet"))
m1_emm
## ask1 diet emmean SE df lower.CL upper.CL
## ASK1F/F chow 1691 83.4 41 1523 1859
## ASK1F/F HFD 2257 73.1 41 2110 2405
##
## Confidence level used: 0.95
#### 4.2.2.3 Compute the contrasts table of estimated effects with confidence intervals and p-values
Contrasts among levels, or combinations of levels, are computed by passing the emmeans object (m1_emm) to the contrast function. There are many important variations of this step. This text advocates computing planned comparisons, which requires expert knowledge and forethought. Here I limit the computation to the four simple effects (the effects of one factor in each of the levels of the other factor).
m1_simple <- contrast(m1_emm,
                      method = "revpairwise",
                      simple = "each",
                      combine = TRUE) %>%
  summary(infer = TRUE)
m1_simple
## diet ask1 contrast estimate SE df lower.CL upper.CL
## . ASK1F/F HFD - chow 566.5 111 41 342.48 790
## t.ratio p.value
## -0.429 0.6703
## -3.808 0.0005
## 5.107 <.0001
## 1.997 0.0525
##
## Confidence level used: 0.95
Notes
1. I often use m1_pairs as the name of the contrast table. Here I use m1_simple to remind me that I’ve limited the comparison to the four simple effects. If I only compute planned comparisons, I might use m1_planned.
### 4.2.3 Be sure ggplot_the_model is in your R folder
If you skipped Create an R Studio Project for this textbook, then download and move the file ggplot_the_model.R into the R folder in your Project folder.
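If the file is not already sourced in the chapter's setup chunk, something along these lines (my sketch, using the here package as in the import code above) makes the functions available:

source(here::here("R", "ggplot_the_model.R"))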
### 4.2.4 How to use the Plot the Model functions
The philosophy underneath these functions is to use the model fitted by the researcher to make the plots. The functions require information from three objects:
1. the data frame containing the modeled data
2. the modeled means and CIs from emmeans
3. the modeled effects, CIs and p-values from emmeans::contrast.
This philosophy strikes a balance between functions in which all of the statistical modeling is hidden and the researcher only sees the output and manually building ggplots. Actually, I strongly encourage researchers to learn how to build these plots and to not rely on canned functions, and I outline this after introducing the ggplot_the_model functions
These functions require the fit model object (m1), the emmeans object of modeled means (m1_emm) and the contrast object of modeled effects (m1_simple).
#### 4.2.4.1 ggplot_the_response
For the response plot only, use ggplot_the_response.
m1_response_plot <- ggplot_the_response(
fit = m1,
fit_emm = m1_emm,
fit_pairs = m1_simple,
palette = pal_okabe_ito_blue,
y_label = expression(paste("mmol ", l^-1, " min")),
g_label = "none"
)
m1_response_plot
ggplot_the_response arguments:
fit model object from lm, lmer, nlme, glmmTMB
fit_emm object from ‘emmeans’. Or, a data frame that looks like this object, with modeled factor variables in columns 1 and 2 (if a 2nd factor is in the model), a column of means with name “emmean”, and columns of error intervals named “lower.CL” and “upper.CL”
fit_pairs object from emmeans::contrast. Or, a data frame that looks like this object.
wrap_col = NULL Not used at the moment
x_label = “none” A character variable used for the X-axis title.
y_label = “Response (units)” A character variable used for the Y-axis title. Use expression(paste()) method for math.
g_label = NULL A character variable used for the grouping variable (the 2nd factor) title. Use “none” to remove
dots = “sina” controls the plotting of individual points. sina from ggforce package. Alternatives are “jitter” and “dotplot”
dodge_width = 0.8 controls spacing between group means for models with a 2nd factor (the grouping variable)
adjust = 0.5 controls spread of dots if using dots = "sina"
contrast_rows = “all” controls which rows of fit_pairs to use for p-value brackets. Use “none” to hide.
y_pos = NULL manual control of the y-coordinates for p-value brackets
palette = pal_okabe_ito allows control of the color palette. The default pal_okabe_ito palette is a color blind palette.
legend_position = “top” controls position of the legend for the grouping variable (the 2nd factor in a two-factor model)
flip_horizontal = FALSE controls the orientation of the axes.
group_lines = FALSE used for plotting lines connecting group means. Not yet implemented.
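As an illustration of these optional arguments, here is a hypothetical call that uses jittered dots, shows only two of the four p-value brackets, sets their heights manually, and flips the axes. The argument values are illustrative, not recommendations; the fitted objects (m1, m1_emm, m1_simple) are the same as above.
# illustrative only -- selected optional arguments
m1_response_plot_v2 <- ggplot_the_response(
  fit = m1,
  fit_emm = m1_emm,
  fit_pairs = m1_simple,
  dots = "jitter",           # instead of the default "sina"
  contrast_rows = c(2, 4),   # brackets for rows 2 and 4 of fit_pairs only
  y_pos = c(3300, 3450),     # manual y-coordinates for the two brackets
  flip_horizontal = TRUE,
  palette = pal_okabe_ito_blue,
  y_label = expression(paste("mmol ", l^-1, " min")),
  g_label = "none"
)
m1_response_plot_v2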
#### 4.2.4.2 ggplot_the_effects
For the effects plot only, use ggplot_the_effects.
m1_effects_plot <- ggplot_the_effects(
fit = m1,
fit_pairs = m1_simple,
effect_label = expression(paste("Effect (mmol ", l^-1, " min)"))
)
m1_effects_plot
ggplot_the_effects arguments
fit model object from lm, lmer, nlme, glmmTMB
fit_pairs object from emmeans::contrast. Or, a data frame that looks like this object.
contrast_rows = “all” controls which rows of fit_pairs to include in plot.
show_p = TRUE controls show/hide of p-values
effect_label = “Effect (units)” character variable for the title of the effects axis title.
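Similarly, a hypothetical call that plots only a subset of the contrasts and hides the p-value text (the values are illustrative, using the same objects as above):
m1_effects_plot_v2 <- ggplot_the_effects(
  fit = m1,
  fit_pairs = m1_simple,
  contrast_rows = c(2, 4),  # plot rows 2 and 4 of fit_pairs only
  show_p = FALSE,           # hide the p-values
  effect_label = expression(paste("Effect (mmol ", l^-1, " min)"))
)
m1_effects_plot_v2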
#### 4.2.4.3 ggplot_the_model
For the combined response and effects plot, use ggplot_the_model.
m1_plot <- ggplot_the_model(
fit = m1,
fit_emm = m1_emm,
fit_pairs = m1_simple,
palette = pal_okabe_ito_blue,
y_label = expression(paste("mmol ", l^-1, " min")),
g_label = "none",
effect_label = expression(paste("Effect (mmol ", l^-1, " min)"))
)
m1_plot
ggplot_the_model arguments
fit same as for ggplot_the_response
fit_emm same as for ggplot_the_response
fit_pairs same as for ggplot_the_response
wrap_col = NULL same as for ggplot_the_response
x_label = “none” same as for ggplot_the_response
y_label = “Response (units)” same as for ggplot_the_response
g_label = NULL same as for ggplot_the_response
effect_label = “Effect (units)” same as for ggplot_the_effects
dots = “sina” same as for ggplot_the_response
dodge_width = 0.8 same as for ggplot_the_response
adjust = 0.5 same as for ggplot_the_response
contrast_rows = “all” same as for ggplot_the_response
y_pos = NULL same as for ggplot_the_response
palette = pal_okabe_ito same as for ggplot_the_response
legend_position = “bottom” Except for default, same as for ggplot_the_response
flip_horizontal = TRUE Except for default, same as for ggplot_the_response
group_lines = FALSE used for plotting lines connecting group means. Not yet implemented.
rel_heights = c(1,1) used to control relative heights of the effects and response plot
#### 4.2.4.4 ggplot_the_treatments
ggplot_the_treatments adds a grid of treatment levels underneath the x-axis of an existing response plot. First, build the table of levels:
x_levels <- rbind(
Diet = c("chow", "chow", "HFD", "HFD")
)
# this is the same code as above but hiding
# legend position
m1_response_plot_base <- ggplot_the_response(
fit = m1,
fit_emm = m1_emm,
fit_pairs = m1_simple,
palette = pal_okabe_ito_blue,
y_label = expression(paste("mmol ", l^-1, " min")),
g_label = "none",
legend_position = "none"
)
m1_response_plot2 <- ggplot_the_treatments(
m1_response_plot_base,
x_levels = x_levels,
text_size = 3.5,
rel_heights = c(1, 0.1)
)
m1_response_plot2
If you prefer plus and minus symbols, use minus <- "\u2013" for the minus sign instead of the hyphen “-”
minus <- "\u2013" # good to put this in the setup chunk
x_levels <- rbind(
Δadipo = c(minus, "+", minus, "+"),
HFD = c(minus, minus, "+", "+")
)
m1_response_plot2 <- ggplot_the_treatments(
m1_response_plot_base,
x_levels = x_levels,
text_size = 3.5,
rel_heights = c(1, 0.1)
)
m1_response_plot2
### 4.2.5 How to generate a Response Plot using ggpubr
Steps 1-3 were completed above.
#### 4.2.5.1 Step 4: Plot the individual points
Using ggplot
I’m going to show how to create the initial, base plot of points using ggplot2 in order to outline very briefly how ggplot2 works.
m1_response <- ggplot(
data = fig2e,
aes(x = treatment, # these 2 lines define the axes
y = glucose_auc,
color = ask1 # define the grouping variable
)) +
# surprisingly, the code above doesn't plot anything
# this adds the points as a layer
geom_jitter(width = 0.2) +
# change the title of the y-axis
ylab(expression(paste("AUC (mmol ", l^-1, " min)"))) +
# change the theme
theme_pubr() +
# these theme modifications need to be added after re-setting
# the theme
theme(
legend.position = "none", # remove legend
axis.title.x = element_blank() # remove the x-axis title
) +
# change the colors palette for the points
scale_color_manual(values = pal_okabe_ito_blue) +
NULL
m1_response
Notes
1. The ggplot function requires a data frame passed to data containing the data to plot and an aesthetic (aes), which passes the column names that set the x and y axes. These column names must be in the data frame passed to data. color = is an aesthetic that sets the grouping variable used to assign the different colors.
2. The x-axis is discrete, but its underlying coordinates are numeric: the x-axis values are 1, 2, 3, 4. Instead of using these numbers as the labels for the x-axis values, ggplot uses the names of the groups (the four values of the column “treatment”); a quick check of this mapping is shown below.
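This check assumes fig2e$treatment is a factor (it needs to be, so that the level order is under your control):
# the level order determines the x position (1, 2, 3, 4) of each group
levels(fig2e$treatment)
# the integer codes are the x coordinates ggplot uses under the hood
head(as.integer(fig2e$treatment))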
Using ggpubr
m1_response <- ggstripchart(
data = fig2e,
x = "treatment",
y = "glucose_auc",
xlab = "",
ylab = expression(paste("AUC (mmol ", l^-1, " min)")),
palette = pal_okabe_ito_blue,
legend = "none"
)
m1_response
#### 4.2.5.2 Step 5: Plot the modeled means and 95% error intervals
To add points and error bars to m1_response, we need to tell ggplot the x-axis positions (or coordinates). These positions are the values of the “treatment” column in fig2e. The modeled means and 95% CIs are in the m1_emm object, but there is no “treatment” column, or any column with these values. We therefore have to make this column before we can add the modeled means and CIs to the plot.
# convert m1_emm to a data.table
m1_emm_dt <- summary(m1_emm) %>%
data.table()
# create treatment column
# make sure it matches values in the two factor columns
# (the treatment labels are the ask1 and diet values pasted together, e.g. "ASK1Δadipo HFD")
m1_emm_dt[, treatment := paste(ask1, diet)]
Now add the modeled means and CIs
m1_response <- m1_response +
  geom_point(data = m1_emm_dt,
             aes(y = emmean),
             size = 3) +
  # add layer containing error bars
  geom_errorbar(data = m1_emm_dt,
                aes(y = emmean,
                    ymin = lower.CL,
                    ymax = upper.CL),
                width = 0.05) +
  NULL
m1_response
Notes
1. m1_response generated by ggpubr::stripchart is a ggplot2 object. This means modifications of the plot are implemented by adding these with the “+” sign.
2. The modeled means are in the column “emmean” in the data frame m1_emm_dt. We need to tell geom_point where to find the data using the data = argument. geom_point() (and other geoms) assumes that the points that we want to plot are defined by the same x and y column names used to create the plot – if these don’t exist, we need to state the x and y column names in the aesthetic function aes. Since we created a “treatment” column in m1_emm_dt that contains the x-axis coordinates (1, 2, 3, 4), we do not need to tell ggplot where to find the x-values. But there is no “glucose_auc” column in m1_emm_dt, so we need to tell geom_point() where to find the y values using aes(y = emmean).
3. Adding the modeled error intervals using geom_errorbar follows the same logic as adding the modeled means. Importantly, and interestingly, y = emmean has to be passed even though no information from this column is used to plot the error bars.
4. Note that column names passed to a ggpubr function must be in quotes but column names passed to a ggplot2 function cannot be in quotes
#### 4.2.5.3 Step 6: Adding p-values
p-value brackets are added to a response plot using stat_pvalue_manual from the ggpubr package. This function needs a column of p-values, and a pair of columns that define the left and right x-axis positions of the bracket.
# convert m1_simple to a data.table
m1_simple_dt <- data.table(m1_simple)
# create group1 -- column containing x-position
# of the left side of the bracket
# need to look at m1_simple_dt to construct this.
# create group2 -- column containing x-position
# of the right side of the bracket
# need to look at m1_simple_dt to construct this.
m1_simple_dt[, p_rounded := p_round(p.value,
digits = 2)]
m1_simple_dt[, p_pretty := p_format(p_rounded,
digits = 2,
accuracy = 1e-04,
add.p = TRUE)]
# simply assigning this to a new plot with new name
# because I want to re-use the base plot in the next chunk
m1_response_p <- m1_response +
stat_pvalue_manual(
data = m1_simple_dt,
label = "p_pretty",
y.position = c(3300, 3300, 3450, 3600),
tip.length = 0.01)
m1_response_p
Notes on adding p-values to the plot:
1. The y.position argument in stat_pvalue_manual() contains the position on the y-axis for the p-value brackets. I typically choose these values “by eye”. Essentially, I look at the maximum y-value on the plot and then choose a value just above this for the first bracket. This may take some trial-and-error to position the brackets satisfactorily (a sketch for automating this follows these notes).
2. Use base R indexing to specify a subset. For example
m1_response_p <- m1_response +
stat_pvalue_manual(
data = m1_simple_dt[c(2,4), ], # only rows 2, 4
label = "p_pretty",
y.position = c(3300, 3450),
tip.length = 0.01)
m1_response_p
3. ggpubr::stat_compare_means automates the process somewhat, but the function is too limited for statistics on anything but the simplest experiments. I don't advocate its use.
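One way to reduce the trial-and-error described in note 1 is to compute the bracket heights from the data. A sketch (the multipliers are arbitrary):
# start just above the tallest point and step upward for the later brackets
y_max <- max(fig2e$glucose_auc, na.rm = TRUE)
y_positions <- y_max * c(1.05, 1.05, 1.12, 1.19) # one value per bracket
# then pass y.position = y_positions to stat_pvalue_manual()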
#### 4.2.5.4 A variation for factorial models
The experiment in Fig2e has a factorial design and was analyzed (here, not in the original paper) using a factorial model. The factorial design can be represented in the plot by clustering the levels.
dodge_width = 0.4
jitter_width = 0.2
m1_simple_dt[, xmin := c(1-dodge_width/4,
2-dodge_width/4,
1-dodge_width/4,
1+dodge_width/4)]
m1_simple_dt[, xmax := c(1+dodge_width/4,
2+dodge_width/4,
2-dodge_width/4,
2+dodge_width/4)]
m1_response_fac <- ggstripchart(
  data = fig2e,
  x = "diet",
  y = "glucose_auc",
  color = "ask1", # grouping variable -- needed for the dodge and the palette
  ylab = expression(paste("AUC (mmol ", l^-1, " min)")),
  palette = pal_okabe_ito_blue,
  # position = position_dodge(width = dodge_width)
  position = position_jitterdodge(dodge.width = dodge_width,
                                  jitter.width = jitter_width)
) +
rremove("xlab") + #ggpubr function
  geom_point(data = m1_emm_dt,
             aes(y = emmean),
             size = 3,
             position = position_dodge(width = dodge_width)) +
  # add layer containing error bars
  geom_errorbar(data = m1_emm_dt,
                aes(y = emmean,
                    ymin = lower.CL,
                    ymax = upper.CL),
                width = 0.05,
                position = position_dodge(width = dodge_width)) +
stat_pvalue_manual(
data = m1_simple_dt,
label = "p_pretty",
xmin = "xmin",
xmax = "xmax",
y.position = c(3300, 3300, 3450, 3600),
tip.length = 0.01,
size = 3) +
NULL
m1_response_fac
#### 4.2.5.5 How to add treatment combinations to a ggpubr plot
Many researchers in bench biology insert a grid of treatments below the x-axis of a plot. This is time consuming if it is done by pasting the plot into external software. A much leaner workflow is to add the treatment grid in the same step as generating the plot itself. Here is a somewhat kludgy way to do this using a ggpubr plot. A much more elegant method is described below.
##### 4.2.5.5.1 Variant 1 – Fake axes
use_this <- FALSE # if false use +/- symbols
if(use_this == TRUE){
x_levels <- rbind(
"Diet:" = c("chow", "chow", "HFD", "HFD")
)
}else{
minus <- "\u2013" # good to put this in the setup chunk
x_levels <- rbind(
"Δadipo:" = c(minus, "+", minus, "+"),
"HFD: " = c(minus, minus, "+", "+")
)
}
x_levels_text <- apply(x_levels, 2, paste0, collapse="\n")
x_levels_title <- paste(row.names(x_levels), collapse="\n")
x_axis_min <- 0.4
x_axis_max <- 4.5
y_plot_min <- 1200
y_axis_min <- 1350
y_axis_max <- 3500
y_breaks <- seq(1500, 3500, by = 500)
y_labels <- as.character(y_breaks)
m1_response_final <- m1_response_p +
coord_cartesian(ylim = c(y_plot_min, y_axis_max)) +
scale_y_continuous(breaks = y_breaks) +
theme(
axis.line = element_blank(), # remove both x & y axis lines
axis.text.x = element_blank(), # remove x-axis labels
axis.ticks.x = element_blank() # remove x-axis ticks
) +
# add shortened y-axis line that doesn't extend to treatments
geom_segment(aes(x = x_axis_min + 0.01,
y = y_axis_min,
xend = x_axis_min + 0.01,
yend = y_axis_max),
size = 0.5) +
# add x-axis line above treatments
geom_segment(aes(x = x_axis_min,
y = y_axis_min,
xend = x_axis_max,
yend = y_axis_min),
size = 0.5) +
annotate(geom = "text",
x = 1:4,
y = y_plot_min,
label = x_levels_text) +
annotate(geom = "text",
x = x_axis_min,
y = y_plot_min,
label = x_levels_title,
hjust = 0) +
NULL
m1_response_final
Notes
1. ggplot_the_treatments should work with any ggplot, including those generated by ggpubr functions, with categorical values on the x-axis.
2. Here, I add the treatment combinations manually. The table of combinations is added inside the plot area (inside the axes). To make this look nice, and not weird, the x and y axes are removed and new lines are inserted to create new axis lines. The bottom of the new y-axis line starts above the treatment table. The new x-axis line is inserted above the treatment table.
3. To include the treatment combination (group) names, add this as the first row of x_levels.
4. Note that if the plot has a grid, this grid will extend into the area occupied by the treatment table using this method.
### 4.2.6 How to generate a Response Plot with a grid of treatments using ggplot2
Above, I wrote a short script for generating the base response plot using ggplot. In this plot, the x-axis variable (“treatment”) is categorical and ggplot uses the integers 1, 2, 3, 4 as the coordinate values for the position of the four groups on the x-axis. Understanding this mapping is important for adding lines to a plot or annotating a plot with text, both of which requires x,y coordinates to position the feature. While the x-axis is discrete – I cannot add a tick at x = 1.5 – the horizontal dimension of the plot area itself is continuous and I can add points or lines or text at any x coordinate within the plot area.
Here I generate the same plot using a continuous x-axis. This requires that I explicitly create a numeric column in the data with the x-axis position for each group. This is easy. The advantage is that I now have a continuous x-axis that is easier to manipulate than a discrete x-axis. I use this to add treatment levels below the plot.
#### 4.2.6.1 First, wrangle the data
# make sure treatment is a factor with the levels in the
# desired order
fig2e[, treatment_i := as.integer(treatment)]
# convert m1_emm to a data.table
m1_emm_dt_i <- summary(m1_emm) %>%
data.table()
# create treatment_i column
m1_emm_dt_i[, treatment_i := 1:4]
# convert m1_simple to a data.table
m1_simple_dt_i <- data.table(m1_simple)
# create group1 -- column containing x-position
# of the left side of the bracket
# need to look at m1_simple_dt to construct this.
m1_simple_dt_i[, group1 := c(2, 4, 3, 4)]
# create group2 -- column containing x-position
# of the right side of the bracket
# need to look at m1_simple_dt to construct this.
m1_simple_dt_i[, group2 := c(1, 3, 1, 2)]
m1_simple_dt_i[, p_rounded := p_round(p.value,
digits = 2)]
m1_simple_dt_i[, p_pretty := p_format(p_rounded,
digits = 2,
accuracy = 1e-04,
add.p = TRUE)]
Notes
1. The first line simply converts the categorical variable treatment into a numeric variable with values 1-4 assigned to the four treatment levels.
2. As above, m1_emm_dt is created and a column with the x-variable name is added
3. As above, m1_simple_dt is created and group1 and group2 columns with the x-axis positions of the two groups in the contrast are created. Here these columns are integers and not group names.
#### 4.2.6.2 Second, generate the plot
m1_response_2 <- ggplot(
data = fig2e,
aes(x = treatment_i, # these 2 lines define the axes
y = glucose_auc,
color = ask1 # define the grouping variable
)) +
geom_jitter(width = 0.2) +
  geom_point(data = m1_emm_dt_i,
             aes(y = emmean),
             size = 3) +
  # add layer containing error bars
  geom_errorbar(data = m1_emm_dt_i,
                aes(y = emmean,
                    ymin = lower.CL,
                    ymax = upper.CL),
                width = 0.05) +
stat_pvalue_manual(
data = m1_simple_dt_i[c(2,4), ], # only rows 2, 4
label = "p_pretty",
y.position = c(3300, 3450),
tip.length = 0.01) +
# change the title of the y-axis
ylab(expression(paste("AUC (mmol ", l^-1, " min)"))) +
# change the theme
theme_pubr() +
# these theme modifications need to be added after re-setting
# the theme
theme(
legend.position = "none", # remove legend
axis.title.x = element_blank() # remove the x-axis title
) +
# change the colors palette for the points
scale_color_manual(values = pal_okabe_ito_blue) +
NULL
m1_response_2
Notes
1. This code is exactly the same as that above but uses an x-variable that is numeric instead of a factor.
#### 4.2.6.3 Third, add the grid of treatments below the x-axis
minus <- "\u2013" # good to put this in the setup chunk
x_levels <- rbind(
c("HFD: ", minus, minus, "+", "+")
)
x_breaks_text <- apply(x_levels, 2, paste0, collapse="\n")
x_breaks <- c(0.5, 1:4)
m1_response_2 <- m1_response_2 +
coord_cartesian(xlim = c(0.5, 4.5)) +
scale_x_continuous(breaks = x_breaks,
labels = x_breaks_text,
expand = c(0, 0)) +
theme(axis.ticks.x = element_blank()) + # remove x-axis ticks
NULL
m1_response_2
Notes
1. This is a good plot. A pretty good plot would include the effects.
### 4.2.7 How to generate an Effects Plot
The effects plot is built using ggplot2 instead of ggpubr because the plot is “flipped” – the y-axis is the categorical variable and the x-axis is the continuous variable. The base plot with the estimates (but not the error bars) could be made using ggpubr (see below) but the subsequent functions to modify the plot are awkward because of inconsistencies in the designation of the x and y-axis (old or new?).
# use the m1_simple_dt object created above
# create nice labels for the contrasts
# (reconstructed labels -- check that they match the row order of m1_simple_dt)
m1_simple_dt[, contrast_pretty := c("HFD - Control (ASK1F/F)",
                                    "HFD - Control (ASK1Δadipo)",
                                    "Δadipo - Control (chow)",
                                    "Δadipo - Control (HFD)")]
# make sure the levels of contrast_pretty are in the order of that in
# m1_simple_dt
m1_simple_dt[, contrast_pretty := factor(contrast_pretty,
                                         levels = contrast_pretty)]
m1_effects <- ggplot(data = m1_simple_dt,
aes(x = estimate,
y = contrast_pretty)) +
geom_point() +
xlab("Effects (mmol/L)") +
geom_errorbar(aes(x = estimate,
xmin = lower.CL,
xmax = upper.CL),
width = 0.02) +
geom_vline(xintercept = 0,
linetype = "dashed",
color = pal_okabe_ito_blue[1]) +
annotate(geom = "text",
x = m1_simple_dt$estimate + c(-30,0,0,0), y = 1:4 + 0.2, label = m1_simple_dt$p_pretty,
size = 3) +
theme_pubr() +
rremove("ylab") + #remove y-axis, ggpubr function
NULL
m1_effects
Notes
1. c(-30,0,0,0) is added to the x coordinate to shift the first p-value to the left of the zero-effect line.
How would this be done with ggpubr?
# use the modified m1_simple_dt object created in the previous chunk
# use ggpubr::ggerrorplot with no error
m1_effects_pubr <- ggerrorplot(data = m1_simple_dt,
y = "estimate",
x = "contrast_pretty",
ylab = "Effects (mmol/L)",
desc_stat = "mean", # mean only!
orientation = "horizontal") +
# remove y-axis title
rremove("ylab") + #ggpubr function
# add error bars using ggplot function
# note using original (not horizontal) axis designation
geom_errorbar(aes(y = estimate,
ymin = lower.CL,
ymax = upper.CL),
width = 0.02) +
# add zero effect line using ggplot function
# note using original (not horizontal) axis designation
geom_hline(yintercept = 0,
linetype = "dashed",
color = pal_okabe_ito_blue[1]) +
# note using original (not horizontal) axis designation
annotate(geom = "text",
y = m1_simple_dt$estimate + c(-30,0,0,0), x = 1:4 + 0.3, label = m1_simple_dt$p_pretty,
size = 3) +
NULL
m1_effects_pubr
### 4.2.8 How to combine the response and effects plots
#### 4.2.8.1 Using the ggpubr response plot
The cowplot::plot_grid() function is used generally to arrange multiple plots into a single figure. Here I use it to combine the response and effects subplots into a single plot. The response plot is m1_response created using ggpubr.
# modify the response and effects plots for consistency
# change group names for consistency
groups_pretty <- c("Control",
                   "HFD",
                   "Δadipo",
                   "Δadipo + HFD") # one label per treatment level, in level order -- adjust as needed
m1_bottom <- m1_response +
scale_x_discrete(labels = groups_pretty) +
coord_flip() # rotate to horizontal
m1_top <- m1_effects + # assign to new plot object
scale_x_continuous(position="top") # move x-axis to "top"
m1_plot <- plot_grid(m1_top,
m1_bottom,
nrow = 2,
align = "v",
axis = "lr",
rel_heights = c(1, 1))
m1_plot
Notes
1. The response plot was flipped in this code. If you know that you want it in this orientation, simply build it with this orientation to avoid the kinds of axis designation ambiguities highlighted above with the ggpubr effects plot.
2. the align and axis arguments are used to force the plot areas to have the same width.
3. rel_heights = c() adjusts the relative heights of the top and bottom plot. This typically requires fiddling.
4. The placement of the p-values looks different than it does in the standalone effects plot. To improve the look in the combined plot, fiddle with the placement argument (the x and y positions) in the chunk that generates the effects plot.
#### 4.2.8.2 Using a response plot with a treatment grid
Here, I re-build the response plot in a horizontal orientation
tab <- "\u0009" # good to put this in the setup chunk
minus <- "\u2013" # good to put this in the setup chunk
# reconstruction: one row per treatment factor plus a final column of row labels;
# columns are collapsed with a tab so the two factors stack on each axis label
x_levels <- rbind(
  c(minus, "+", minus, "+", "Δadipo:"),
  c(minus, minus, "+", "+", "HFD:")
)
x_breaks_text <- apply(x_levels, 2, paste0, collapse = tab)
x_breaks <- c(1:4, 4.5)
m1_response_horiz <- ggplot(
data = fig2e,
aes(y = treatment_i, # these 2 lines define the axes
x = glucose_auc,
color = ask1 # define the grouping variable
)) +
  geom_jitter(height = 0.2) + # jitter along the categorical (y) axis, not along the response
  geom_point(data = m1_emm_dt_i,
             aes(x = emmean),
             size = 3) +
  # add layer containing error bars
  geom_errorbar(data = m1_emm_dt_i,
                aes(x = emmean,
                    xmin = lower.CL,
                    xmax = upper.CL),
                width = 0.05) +
  # change the title of the x-axis (the response is now on the horizontal axis)
xlab(expression(paste("AUC (mmol ", l^-1, " min)"))) +
# change the theme
theme_pubr() +
# these theme modifications need to be added after re-setting
# the theme
theme(
legend.position = "none", # remove legend
axis.title.y = element_blank(), # remove the y-axis title
axis.ticks.y = element_blank() # remove y-axis ticks
) +
# change the colors palette for the points
scale_color_manual(values = pal_okabe_ito_blue) +
coord_cartesian(ylim = c(0.5, 4.5)) +
scale_y_continuous(breaks = x_breaks,
labels = x_breaks_text,
expand = c(0, 0)) +
NULL
m1_response_horiz
m1_plot <- plot_grid(m1_top,
m1_response_horiz,
nrow = 2,
align = "v",
axis = "lr",
rel_heights = c(1, 1))
m1_plot
### 4.2.9 How to add the interaction effect to response and effects plots
In the experiment for Figure 2E, a good question to ask is, is the effect of ASK1Δadipo conditional on diet? For example, a scenario where ASK1Δadipo lowers AUC about the same amount in both Chow and HFD mice (that is, the effect of ASK1Δadipo is not conditional on diet) has a different underlying biological explanation of the control of browning than a scenario where ASK1Δadipo lowers AUC only in HFD mice. To pursue this, we need an estimate of the interaction effect. In general, if an experiment has a factorial design, we always want estimates of the interaction effects
#### 4.2.9.1 Adding interaction p-value to a response plot
Wrangle
m1_coef <- coef(summary(m1))
# the interaction coefficient is the last row of the coefficient table in a 2 x 2 factorial model
p_ixn <- m1_coef[nrow(m1_coef), "Pr(>|t|)"] %>%
  p_round(digits = 3) %>%
  p_format(digits = 3, accuracy = 1e-04)
p_ixn_text <- paste0("interaction p = ", p_ixn)
m1_response_ixn <- m1_response_fac +
# add line connecting group means
# comment out all lines to remove
  geom_line(data = m1_emm_dt,
            aes(y = emmean),
            position = position_dodge(width = dodge_width)) +
annotate(geom = "text",
x = 1.5,
y = 2300,
label = p_ixn_text) +
NULL
m1_response_ixn
#### 4.2.9.2 Adding interaction effect to effects plot
##### 4.2.9.2.1 Add the interaction effect to the contrast table
# convert m1_simple to data.table
m1_simple_dt <- data.table(m1_simple)
# get interaction p-value using emmeans::contrast()
m1_ixn <- contrast(m1_emm,
interaction = c("revpairwise"),
by = NULL) %>%
summary(infer = TRUE) %>%
data.table()
setnames(m1_ixn,
old = names(m1_ixn)[1:2],
new = names(m1_simple_dt)[1:2])
# note that column order does not need to be the same
m1_effects_dt <- rbind(m1_simple_dt,
m1_ixn)
# five labels, one per row of m1_effects_dt
# (reconstructed -- check the row order of the four simple effects before relabeling)
m1_effects_dt[, contrast_pretty := c("HFD - Control (ASK1F/F)",
                                     "HFD - Control (ASK1Δadipo)",
                                     "Δadipo - Control (chow)",
                                     "Δadipo - Control (HFD)",
                                     "Interaction")]
m1_effects_dt[, p_round := p_round(p.value, digits = 2)]
m1_effects_dt[, p_pretty := p_format(p_round,
                                     digits = 2,
                                     accuracy = 1e-04,
                                     add.p = TRUE)]
# don't forget this imperative step!
# otherwise your p-values and factor labels won't match!
m1_effects_dt[, contrast_pretty := factor(contrast_pretty,
levels = contrast_pretty)]
m1_effects <- ggplot(data = m1_effects_dt,
aes(x = estimate,
y = contrast_pretty)) +
geom_point() +
xlab("Effects (mmol/L)") +
geom_errorbar(aes(x = estimate,
xmin = lower.CL,
xmax = upper.CL),
width = 0.02) +
geom_vline(xintercept = 0,
linetype = "dashed",
color = pal_okabe_ito_blue[1]) +
annotate(geom = "text",
x = m1_effects_dt$estimate + c(-30,0,0,0,0), y = 1:5 + 0.2, label = m1_effects_dt$p_pretty,
size = 3) +
theme_pubr() +
rremove("ylab") + #remove y-axis, ggpubr function
NULL
m1_effects
m1_top <- m1_effects +
scale_x_continuous(position="top") # move x-axis to "top"
m1_plot <- plot_grid(m1_top,
m1_response_horiz,
nrow = 2,
align = "v",
axis = "lr",
rel_heights = c(1, 0.9))
m1_plot
Notes
1. This is a pretty good plot.
# 3. Basic Operations and Numerical Descriptions
We look at some of the basic operations that you can perform on lists of numbers. It is assumed that you know how to enter data or read data files which is covered in the first chapter, and you know about the basic data types.
## 3.1. Basic Operations
Once you have a vector (or a list of numbers) in memory most basic operations are available. Most of the basic operations will act on a whole vector and can be used to quickly perform a large number of calculations with a single command. There is one thing to note: if you perform an operation on more than one vector, it is often necessary that the vectors all contain the same number of entries.
Here we first define a vector which we will call “a” and will look at how to add and subtract constant numbers from all of the numbers in the vector. First, the vector will contain the numbers 1, 2, 3, and 4. We then see how to add 5 to each of the numbers, subtract 10 from each of the numbers, multiply each number by 4, and divide each number by 5.
> a <- c(1,2,3,4)
> a
[1] 1 2 3 4
> a + 5
[1] 6 7 8 9
> a - 10
[1] -9 -8 -7 -6
> a*4
[1] 4 8 12 16
> a/5
[1] 0.2 0.4 0.6 0.8
We can save the results in another vector called b:
> b <- a - 10
> b
[1] -9 -8 -7 -6
If you want to take the square root, find e raised to each number, the logarithm, etc., then the usual commands can be used:
> sqrt(a)
[1] 1.000000 1.414214 1.732051 2.000000
> exp(a)
[1] 2.718282 7.389056 20.085537 54.598150
> log(a)
[1] 0.0000000 0.6931472 1.0986123 1.3862944
> exp(log(a))
[1] 1 2 3 4
By combining operations and using parentheses you can make more complicated expressions:
> c <- (a + sqrt(a))/(exp(2)+1)
> c
[1] 0.2384058 0.4069842 0.5640743 0.7152175
Note that you can do the same operations with vector arguments. For example to add the elements in vector a to the elements in vector b use the following command:
> a + b
[1] -8 -6 -4 -2
The operation is performed on an element by element basis. Note this is true for almost all of the basic functions. So you can bring together all kinds of complicated expressions:
> a*b
[1] -9 -16 -21 -24
> a/b
[1] -0.1111111 -0.2500000 -0.4285714 -0.6666667
> (a+3)/(sqrt(1-b)*2-1)
[1] 0.7512364 1.0000000 1.2884234 1.6311303
You need to be careful of one thing. When you do operations on vectors they are performed on an element by element basis. One ramification of this is that all of the vectors in an expression must be the same length. If the lengths of the vectors differ then you may get an error message, or worse, a warning message and unpredictable results:
> a <- c(1,2,3)
> b <- c(10,11,12,13)
> a+b
[1] 11 13 15 14
Warning message:
longer object length
is not a multiple of shorter object length in: a + b
As you work in R and create new vectors it can be easy to lose track of what variables you have defined. To get a list of all of the variables that have been defined use the ls() command:
> ls()
[1] "a" "b" "bubba" "c" "last.warning"
[6] "tree" "trees"
Finally, you should keep in mind that the basic operations almost always work on an element by element basis. There are rare exceptions to this general rule. For example, if you look at the minimum of two vectors using the min command you will get the minimum of all of the numbers. There is a special command, called pmin, that may be the command you want in some circumstances:
> a <- c(1,-2,3,-4)
> b <- c(-1,2,-3,4)
> min(a,b)
[1] -4
> pmin(a,b)
[1] -1 -2 -3 -4
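A companion command, pmax, returns the element-wise maximum in the same way:
> pmax(a,b)
[1] 1 2 3 4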
## 3.2. Basic Numerical Descriptions
Given a vector of numbers there are some basic commands to make it easier to get some of the basic numerical descriptions of a set of numbers. Here we assume that you can read in the tree data that was discussed in a previous chapter. It is assumed that it is stored in a variable called tree:
> tree <- read.csv(file="trees91.csv",header=TRUE,sep=",");
> names(tree)
[1] "C" "N" "CHBR" "REP" "LFBM" "STBM" "RTBM" "LFNCC"
[9] "STNCC" "RTNCC" "LFBCC" "STBCC" "RTBCC" "LFCACC" "STCACC" "RTCACC"
[17] "LFKCC" "STKCC" "RTKCC" "LFMGCC" "STMGCC" "RTMGCC" "LFPCC" "STPCC"
[25] "RTPCC" "LFSCC" "STSCC" "RTSCC"
Each column in the data frame can be accessed as a vector. For example the numbers associated with the leaf biomass (LFBM) can be found using tree$LFBM:
> tree$LFBM
[1] 0.430 0.400 0.450 0.820 0.520 1.320 0.900 1.180 0.480 0.210 0.270 0.310
[13] 0.650 0.180 0.520 0.300 0.580 0.480 0.580 0.580 0.410 0.480 1.760 1.210
[25] 1.180 0.830 1.220 0.770 1.020 0.130 0.680 0.610 0.700 0.820 0.760 0.770
[37] 1.690 1.480 0.740 1.240 1.120 0.750 0.390 0.870 0.410 0.560 0.550 0.670
[49] 1.260 0.965 0.840 0.970 1.070 1.220
The following commands can be used to get the mean, median, quantiles, minimum, maximum, variance, and standard deviation of a set of numbers:
> mean(tree$LFBM)
[1] 0.7649074
> median(tree$LFBM)
[1] 0.72
> quantile(tree$LFBM)
    0%    25%    50%    75%   100%
0.1300 0.4800 0.7200 1.0075 1.7600
> min(tree$LFBM)
[1] 0.13
> max(tree$LFBM)
[1] 1.76
> var(tree$LFBM)
[1] 0.1429382
> sd(tree$LFBM)
[1] 0.3780717
Finally, the summary command will print out the min, max, mean, median, and quantiles:
> summary(tree$LFBM)
Min. 1st Qu. Median Mean 3rd Qu. Max.
0.1300 0.4800 0.7200 0.7649 1.0080 1.7600
The summary command is especially nice because if you give it a data frame it will print out the summary for every vector in the data frame:
> summary(tree)
C N CHBR REP LFBM
Min. :1.000 Min. :1.000 A1 : 3 Min. : 1.00 Min. :0.1300
1st Qu.:2.000 1st Qu.:1.000 A4 : 3 1st Qu.: 9.00 1st Qu.:0.4800
Median :2.000 Median :2.000 A6 : 3 Median :14.00 Median :0.7200
Mean :2.519 Mean :1.926 B2 : 3 Mean :13.05 Mean :0.7649
3rd Qu.:3.000 3rd Qu.:3.000 B3 : 3 3rd Qu.:20.00 3rd Qu.:1.0075
Max. :4.000 Max. :3.000 B4 : 3 Max. :20.00 Max. :1.7600
(Other):36 NA's :11.00
STBM RTBM LFNCC STNCC
Min. :0.0300 Min. :0.1200 Min. :0.880 Min. :0.3700
1st Qu.:0.1900 1st Qu.:0.2825 1st Qu.:1.312 1st Qu.:0.6400
Median :0.2450 Median :0.4450 Median :1.550 Median :0.7850
Mean :0.2883 Mean :0.4662 Mean :1.560 Mean :0.7872
3rd Qu.:0.3800 3rd Qu.:0.5500 3rd Qu.:1.788 3rd Qu.:0.9350
Max. :0.7200 Max. :1.5100 Max. :2.760 Max. :1.2900
RTNCC LFBCC STBCC RTBCC
Min. :0.4700 Min. :25.00 Min. :14.00 Min. :15.00
1st Qu.:0.6000 1st Qu.:34.00 1st Qu.:17.00 1st Qu.:19.00
Median :0.7500 Median :37.00 Median :18.00 Median :20.00
Mean :0.7394 Mean :36.96 Mean :18.80 Mean :21.43
3rd Qu.:0.8100 3rd Qu.:41.00 3rd Qu.:20.00 3rd Qu.:23.00
Max. :1.5500 Max. :48.00 Max. :27.00 Max. :41.00
LFCACC STCACC RTCACC LFKCC
Min. :0.2100 Min. :0.1300 Min. :0.1100 Min. :0.6500
1st Qu.:0.2600 1st Qu.:0.1600 1st Qu.:0.1600 1st Qu.:0.8100
Median :0.2900 Median :0.1700 Median :0.1650 Median :0.9000
Mean :0.2869 Mean :0.1774 Mean :0.1654 Mean :0.9053
3rd Qu.:0.3100 3rd Qu.:0.1875 3rd Qu.:0.1700 3rd Qu.:0.9900
Max. :0.3600 Max. :0.2400 Max. :0.2400 Max. :1.1800
NA's :1.0000
STKCC RTKCC LFMGCC STMGCC
Min. :0.870 Min. :0.330 Min. :0.0700 Min. :0.100
1st Qu.:0.940 1st Qu.:0.400 1st Qu.:0.1000 1st Qu.:0.110
Median :1.055 Median :0.475 Median :0.1200 Median :0.130
Mean :1.105 Mean :0.473 Mean :0.1109 Mean :0.135
3rd Qu.:1.210 3rd Qu.:0.520 3rd Qu.:0.1300 3rd Qu.:0.150
Max. :1.520 Max. :0.640 Max. :0.1400 Max. :0.190
RTMGCC LFPCC STPCC RTPCC
Min. :0.04000 Min. :0.1500 Min. :0.1500 Min. :0.1000
1st Qu.:0.06000 1st Qu.:0.2000 1st Qu.:0.2200 1st Qu.:0.1300
Median :0.07000 Median :0.2400 Median :0.2800 Median :0.1450
Mean :0.06648 Mean :0.2381 Mean :0.2707 Mean :0.1465
3rd Qu.:0.07000 3rd Qu.:0.2700 3rd Qu.:0.3175 3rd Qu.:0.1600
Max. :0.09000 Max. :0.3100 Max. :0.4100 Max. :0.2100
LFSCC STSCC RTSCC
Min. :0.0900 Min. :0.1400 Min. :0.0900
1st Qu.:0.1325 1st Qu.:0.1600 1st Qu.:0.1200
Median :0.1600 Median :0.1800 Median :0.1300
Mean :0.1661 Mean :0.1817 Mean :0.1298
3rd Qu.:0.1875 3rd Qu.:0.2000 3rd Qu.:0.1475
Max. :0.2600 Max. :0.2800 Max. :0.1700
## 3.3. Operations on Vectors
Here we look at some commonly used commands that perform operations on lists. The commands include the sort, min, max, and sum commands. First, the sort command can sort the given vector in either ascending or descending order:
> a = c(2,4,6,3,1,5)
> b = sort(a)
> c = sort(a,decreasing = TRUE)
> a
[1] 2 4 6 3 1 5
> b
[1] 1 2 3 4 5 6
> c
[1] 6 5 4 3 2 1
The min and the max commands find the minimum and the maximum numbers in the vector:
> min(a)
[1] 1
> max(a)
[1] 6
Finally, the sum command adds up the numbers in the vector:
> sum(a)
[1] 21
# Script isn't updating from bool value?
RookPvPz -27
3 months ago
Edited by incapaxx 3 months ago
Hello,
I'm making an alarm script; however, the script will not update from the Settings folder and the BoolValue setup that tells it whether the normal alarm or the gas alarm is on.
while true do
wait(0.1)
if game.Workspace.Settings.Alarms.Value == true then
wait(0.1)
script.Parent.BottomLight.Enabled = true
script.Parent.TopLight.Enabled = true
script.Parent.Spinner.Disabled = false
else
wait(0.1)
script.Parent.BottomLight.Enabled=false
script.Parent.TopLight.Enabled=false
script.Parent.Spinner.Disabled=true
script.Parent.Orientation = Vector3.new(90, -90, 0)
if game.Workspace.Settings.Gas.Value == true then
wait(0.1)
script.Parent.Parent.Lamp.BrickColor = BrickColor.new("Really blue")
script.Parent.TopLight.Color = Color3.new(0, 0, 255)
script.Parent.BottomLight.Color = Color3.new(0, 0, 255)
wait(0.1)
script.Parent.Orientation = Vector3.new(90, -90, 0)
script.Parent.BottomLight.Enabled = true
script.Parent.TopLight.Enabled = true
workspace.Settings.Alarms.Value = false
else
if game.Workspace.Settings.Gas.Value == false then
wait(0.1)
script.Parent.Parent.Lamp.BrickColor = BrickColor.new("CGA brown")
script.Parent.TopLight.Color = Color3.new(170, 85, 0)
script.Parent.BottomLight.Color = Color3.new(170, 85, 0)
script.Parent.BottomLight.Enabled = false
script.Parent.TopLight.Enabled = false
script.Parent.Orientation = Vector3.new(90, -90, 0)
end
end
end
end
Where is the bool value located and is this script local? Gameplay28 69 — 3mo
No, this script is server, and the bool value is in workspace > Settings. RookPvPz -27 — 3mo
put it in a code block this time LoganboyInCO 150 — 3mo
I better be getting compensation for fixing your unindented code. incapaxx 3361 — 3mo
Edited 3 months ago
Ok, I see two main issues with your script.
1. You don't use any variables
2. You use a while loop instead of the useful events Roblox has provided you with
Your bug is probably stemming from the second issue that I pointed out, but solving the first issue can make your life as a programmer so much easier, and your programs could become more efficient as a result of using variables.
Let's dive right in! I notice that you use script.Parent a significant number of times. Do your hands never get tired? There's a solution that everyone knows about, but never uses. It's called a variable. Now, the issue I notice a lot is that YouTubers and tutorials explain how to use a variable, but never when. As a general rule, if you're writing the same code twice, it should be in a variable or a function depending on what you are doing. In your case, you can make a variable, such as par(which would stand for parent) and use that instead of script.Parent. Once you create the variable, you can save yourself some typing time by using Ctrl + h to replace all occurrences of script.Parent with your variable name (which was par in my example).
Ok, on to fixing the actual issue. You're using a while loop to detect changes, which is not optimal nor is it recommended by any experienced programmer on Roblox. The reason it's not recommended is that there are two ways that allow you to detect changes with little to no work. They are Changed and GetPropertyChangedSignal("PropertyName"). Here's an example with both in use:
local boolVal = Instance.new("BoolValue")
boolVal.Value = false
-- first way:
boolVal.Changed:Connect(function()
print(boolVal.Value)
end)
boolVal.Value = true
-- output --> 'true'
-- other (I recommend this one) way:
boolVal:GetPropertyChangedSignal("Value"):Connect(function()
print(boolVal.Value)
end)
boolVal.Value = false
-- no output because the value did not change
boolVal.Value = true
-- output --> 'true'
I'm assuming you can figure it out from here.
If you use these methods and it still doesn't work, then it's probably because you are making the changes manually from the client, and the server cannot detect those changes (even in studio) because of filtering enabled. The way to fix this for testing is to click the button in studio that says current server (or something like that) so that you can make the changes from the server. Or, you can write a different script to make the changes in order to test.
One final note: I noticed that the indentation in your script was pretty poor. I suggest indentation because it improves readability and ease in debugging. You can read more on that here.
I hope this helps! If you have any additional questions, feel free to leave them in the comment section below.
My discussion about script.Parent was an example. I encourage variables for your other repeated code as well. Roblox has already made a global variable 'workspace' for your convenience. You can use that instead of game.Workspace. AetherProgrammer 75 — 3mo
Deriving the expected value of the normal distribution via a substitution
I am trying to compute the expected value, $$\text{E}[x]$$, of a random variable $$X\sim\mathcal{N}(\mu,\sigma^2)$$. The density function of the normal distribution is $$f_X(x)=\frac{1}{\sigma \sqrt{2\pi}}\exp\left(-\left(\frac{x-\mu}{\sigma\sqrt{2}}\right)^2\right), \ \ \ -\infty<x<\infty.$$
I am attempting to use the following substitution to help find the expected value: $$y=\frac{x-\mu}{\sigma\sqrt{2}}\implies dx=\sigma\sqrt{2} dy \tag{1}.$$
The expected value is computed as \begin{align} \text{E}[x]&=\int_{-\infty}^{\infty} x f_X(x) \ dx \\ &=\frac{1}{\sigma \sqrt{2\pi}}\int_{-\infty}^{\infty} x\exp\left(-\left(\frac{x-\mu}{\sigma\sqrt{2}}\right)^2\right) \ dx \\ &=\frac{1}{\sqrt{\pi}}\int_{-\infty}^{\infty}(\mu+\sigma\sqrt{2} y)\exp(-y^2)\ dy \ \ \ \ \text{(using substitution (1))} \\ &=\frac{\mu}{\sqrt{\pi}}\int_{-\infty}^{\infty} \exp(-y^2) \ dy \ + \ \sigma\sqrt{\frac{2}{\pi}}\int_{-\infty}^{\infty} y\exp(-y^2) \ dy \\ &=\sigma\sqrt{\frac{2}{\pi}}\left(\left[-\frac{1}{2}\exp(-y^2)\right]_{-\infty}^{\infty}+\frac{1}{2}\int_{-\infty}^{\infty}\frac{\exp(-y^2)}{y} \ dy\right). \end{align} I am unsure what this simplifies to (specifically how to deal with the final integral). I've noticed that the integrand is odd (does the integral simply cancel?).
• Try – BruceET 2 days ago
• @BruceET Thanks. I have read the solution and agree with the result. However, I'm wondering if my method will work. It seems the most intuitive to me. – M B 2 days ago
Your substitution is terrible but it must work as well! The natural substitution is
$$Y=\frac{X-\mu}{\sigma}$$
Using this (in this case you are standardizing your Gaussian) the result is very easy.
Taking your procedure as valid, and reading your second-to-last passage, the sum is the following
$$\frac{\mu}{\sqrt{\pi}}\cdot \sqrt{\pi}+0=\mu$$
This is because the first integral is the Gaussian integral (the link also gives the easy proof) and the second is the integral of an odd function over a domain that is symmetric around zero.
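For completeness, the whole computation with this substitution ($$y=\frac{x-\mu}{\sigma}$$, so $$dx=\sigma\,dy$$) reads $$\text{E}[X]=\frac{1}{\sqrt{2\pi}}\int_{-\infty}^{\infty}(\mu+\sigma y)e^{-y^2/2}\,dy=\mu\underbrace{\frac{1}{\sqrt{2\pi}}\int_{-\infty}^{\infty}e^{-y^2/2}\,dy}_{=1}+\frac{\sigma}{\sqrt{2\pi}}\underbrace{\int_{-\infty}^{\infty}y\,e^{-y^2/2}\,dy}_{=0\ (\text{odd integrand})}=\mu.$$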
• Would you suggest that I instead use the substitution $$Y=\frac{X-\mu}{\sigma}?$$ If it's easier, it's probably a good idea. It never occurred to me. – M B 2 days ago
• @MB the substitution I suggested you is the more natural but the passages are more or less the same as yours – tommik 2 days ago
https://www.math.kyoto-u.ac.jp/dynamics/2019.html | # 2019年度 京都力学系セミナー
## 2019 Kyoto Dynamical Systems seminar
English
from 14:00, every Friday
Room 609 in Building no. 6 South Wing, at Facalty of Science, Kyoto University (Map)
January 24 (Fri): Takato Uehara (Okayama University) (postponed to the next academic year)
• January 17 (Fri)
Weixiao Shen (Fudan University)
Primitive tuning via quasiconformal surgery
Abstract:
Using quasiconformal surgery, we prove that any primitive, postcritically-finite hyperbolic polynomial can be tuned with an arbitrary generalized polynomial, generalizing a result of Douady-Hubbard for quadratic polynomials to the case of higher degree polynomials. This solves affirmatively a conjecture by Inou and Kiwi on surjectivity of the renormalization operator on higher degree polynomials in one complex variable. This is a joint work with Wang Yimin.
• December 13 (Fri), from 14:30
Jan Kiwi (Pontificia Universidad Católica de Chile)
Indeterminacy Loci of Iterate Maps in Moduli Space
Abstract:
We study the action of the iteration maps on moduli spaces of complex rational maps. The tools employed emerge from considering dynamical systems acting on the Berkovich projective line over an appropriate non-Archimedean field.
The moduli space $\operatorname{rat}_d$ of rational maps in one complex variable of degree $d \ge 2$ has a natural compactification by a projective variety $\overline{\operatorname{rat}}_d$ provided by geometric invariant theory. Given $n \ge 2$, the iteration map $\Phi_n : \operatorname{rat}_d \to \operatorname{rat}_{d^n}$, defined by $\Phi_n: [f] \mapsto [f^n]$, extends to a rational map $\Phi_n: \overline{\operatorname{rat}}_d \dashrightarrow\overline{\operatorname{rat}}_{d^n}$. We characterize the elements of $\overline{\operatorname{rat}}_d$ which lie in the indeterminacy locus of $\Phi_n$. This is a joint work with Hongming Nie (Hebrew University of Jerusalem).
• December 6 (Fri)
Fabrizio Bianchi (Université de Lille)
Stability and bifurcation in families of polynomial skew products
Abstract:
(Quadratic) polynomial skew-products are maps of the form $F (z, w) = (p(z), q(z, w))$, where $p$ and $q$ are polynomials of degree 2. These maps give the simplest non trivial examples of endomorphisms of P2(C). In this talk we investigate the natural parameter space of these maps, with emphasis on the stability-bifurcation dichotomy (that we will review in the beginning of the the seminar). In particular, we describe the geometry of the bifurcation current near infinity, and we give a partial classification of hyperbolic components. One of the tools we use is a generalisation to this setting of the one-dimensional equidistribution of some dynamically defined hypersurfaces of the parameter space towards the bifurcation current.
This is a joint work with Matthieu Astorg, Orléans.
• November 22 (Fri) (joint with the Applied Mathematics Seminar)
Tomoyuki Miyaji (Kyoto University)
A billiard problem arising from nonlinear and nonequilibrium systems
Abstract:
Certain self-propelled particles confined to a bounded domain repeatedly move in straight lines through the interior and are reflected at the boundary. They look like billiard balls, yet they change direction without actually hitting the boundary, and their reflection rule does not appear to be perfectly elastic, so the behavior differs from that of the mathematical billiard problem. Such motion is observed in a variety of systems: disk-shaped camphor particles floating on water, droplets bouncing on a vertically vibrating liquid surface, and spot solutions in planar reaction-diffusion systems and in mathematical models of certain nonlinear optical resonators. In this talk, using a mathematical model for the motion of a camphor disk and the ordinary differential equation model derived from it by center-manifold reduction, I will discuss the properties of this motion and its orbits in rectangular domains.
• November 8 (Fri), from 15:30
Walter Bergweiler (Christian–Albrechts–Universität zu Kiel)
Hyperbolic entire functions with bounded Fatou components
Abstract: (PDF)
The Eremenko-Lyubich class $B$ consists of all transcendental entire functions $f$ for which the set $\text{sing}(f^{-1})$ of critical and (finite) asymptotic values is bounded. A function $f\in B$ is called hyperbolic if every point of the closure of $\text{sing}(f^{-1})$ is contained in an attracting periodic basin. We show that if a hyperbolic map $f\in B$ has no asymptotic value and every Fatou component of $f$ contains at most finitely many critical points, then every Fatou component of $f$ is bounded. Moreover, the Fatou components are quasidisks in this case. If, in addition, there exists $N$ such that every Fatou component contains at most $N$ critical points, then the Julia set of $f$ is locally connected.
For hyperbolic maps in $B$ with only two critical values and no asymptotic value we find that either all Fatou components are unbounded, or all Fatou components are bounded quasidisks.
We illustrate the results by a number of examples. In particular, we show that there exists a hyperbolic entire function $f\in B$ with only two critical values and no asymptotic value for which all Fatou components are bounded quasidisks, but the Julia set is not locally connected.
The results are joint work with Núria Fagella and Lasse Rempe-Gillen.
• November 1 (Fri)
Jan Kiwi (Pontificia Universidad Católica de Chile) (postponed)
Shigehiro Ushiki
Invariant curves in complex surface automorphisms
Abstract:
Automorphisms of complex surfaces can have various invariant curves. In this talk, we consider a family of rational surface automorphisms with an invariant cuspidal cubic curve.
Such a rational automorphism can have, at the same time, an invariant line, or an invariant quadratic curve, or a pair of lines intersecting at a point.
Dynamics on the invariant curves are studied.
• October 25 (Fri)
Matthieu Arfeux (Pontificia Universidad Católica de Valparaíso)
Trees and holomorphic dynamics
Abstract:
Some years ago Mitsuhiro Shishikura showed how certain special trees can be useful for studying the existence of special holomorphic dynamical systems. Later, the same kind of trees was introduced by DeMarco-McMullen to study some hyperbolic components of the space of polynomial dynamical systems. After the work of Jan Kiwi on Berkovich space, I introduced in my thesis a vocabulary to propose a definition of dynamical systems between trees of spheres in order to unify all of these points of view. With this vocabulary, Jan Kiwi's results have been reproved, and I showed with Cui Guizhen how to improve our knowledge of a special kind of behavior when taking sequences of rational maps that diverge in the naturally associated moduli space.
• October 18 (Fri)
James A. Yorke (University of Maryland), in Room 110, Building 3 (note the room change)
Period Doubling Cascades - The big unsolved problem
Abstract:
Review of cascades results from the point of view of continuation theory and a description of the biggest problem that remains.
• October 11 (Fri)
Shogo Yamanaka (Kyoto University)
Integrability of two-dimensional systems of differential equations and convergence of transformations to the Poincaré-Dulac normal form
Abstract:
Near an equilibrium, the standard analytical tools are the Birkhoff normal form for Hamiltonian systems and the Poincaré-Dulac normal form in the general case. In general, the transformation to the normal form is given by a power series that need not converge, but Zung (2002) showed that if the system is analytically integrable, then it can be normalized by a transformation that converges in a neighborhood of the equilibrium. By this result, integrability near an equilibrium can be studied through the existence of a convergent transformation to the normal form together with the integrability of the normal form itself. Applying this idea, we give a necessary and sufficient condition for integrability of a certain class of two-dimensional systems of differential equations. In particular, the Borel transform is used to prove convergence of the transformation to the normal form. We also use a recent result of the speaker stating that, for differential equations with an equilibrium, Poincaré-Dulac normal forms of resonance degree at most one are always integrable.
• October 4 (Fri)
Hiroki Takahasi (Keio University)
Existence of a large deviations rate function for any S-unimodal map
Abstract:
We show that S-unimodal maps on the interval, and in particular quadratic maps, satisfy the large deviation principle (LDP). Previously, the speaker and collaborators had (essentially) established the LDP, restricted to the deepest renormalization cycle, for maps that are at most finitely renormalizable and have no attracting periodic point. The present result establishes the LDP for every quadratic map, with no restriction on initial conditions. It is interesting that the LDP always holds even though extremely complicated bifurcations occur.
The content of this talk is contained in the preprint arXiv:1908.07716.
• Monday, August 5, from 10:30, in Seminar Room 809, Building 6
Isao Ishikawa (RIKEN / Keio University)
On the boundedness of composition operators on reproducing kernel Hilbert spaces determined by analytic positive definite functions
Abstract:
Composition operators (Koopman operators) are classical objects of study in the complex-analytic setting, but in recent years they have attracted considerable attention in engineering applications such as machine learning and the analysis of time-series data generated by nonlinear dynamical system models. Reproducing kernel Hilbert spaces determined by analytic positive definite functions are likewise widely used in engineering and statistics. The relationship between the properties of a map and the mathematical properties of the composition operator it defines (boundedness, compactness, and so on) is a mathematically interesting problem and is also important for giving theoretical guarantees for engineering applications; however, there are still not many cases in which this relationship is known. In this work we show that, in several important cases, boundedness of the composition operator implies that the defining map is a contractive affine map, and I will outline the idea of the proof. This is joint work with Masahiro Ikeda (RIKEN/Keio) and Yoshihiro Sawano (Tokyo Metropolitan University).
• July 26 (Fri)
Vadim Kaloshin (University of Maryland)
Birkhoff Conjecture for convex planar billiards
Abstract:
G.D.Birkhoff introduced a mathematical billiard inside of a convex domain as the motion of a massless particle with elastic reflection at the boundary. A theorem of Poncelet says that the billiard inside an ellipse is integrable, in the sense that the neighborhood of the boundary is foliated by smooth closed curves and each billiard orbit near the boundary is tangent to one and only one such curve (in this particular case, a confocal ellipse). A famous conjecture by Birkhoff claims that ellipses are the only domains with this property. We show a local version of this conjecture - namely, that a small perturbation of an ellipse has this property only if it is itself an ellipse.
• July 19 (Fri)
Andrzej J. Maciejewski (University of Zielona Góra)
Global residue theorem and integrability of homogeneous potentials
Abstract:
I will present an overview of my works connected with the integrability of natural Hamiltonian systems with homogeneous potentials. An application of differential Galois methods to such systems is effective because particular solutions are known for them. These solutions are defined by an algebraic set $\mathcal{D}$ in a complex projective space. It appears that the residues of certain differential forms taken at points of $\mathcal{D}$ restrict the necessary conditions for integrability deduced from the differential Galois theory.
• June 21 (Fri)
Reimi Irokawa (Tokyo Institute of Technology / RIKEN)
Activity measures of dynamical systems over non-archimedean fields
Abstract:
Toward an understanding of bifurcation phenomena of dynamics on the Berkovich projective line over non-archimedean fields, we study the stability (or passivity) of critical points of families of polynomials parametrized by an analytic curve. We construct the activity measure of a critical point of a family of polynomials, and study its properties, such as equidistribution and its relation to the Mandelbrot set.
• June 14 (Fri)
Eiko Kin (Osaka University)
Entropies of hyperbolic surface bundles over the circle as branched double covers of 3-manifolds
Abstract:
The branched virtual fibering theorem by Makoto Sakuma states that every closed orientable 3-manifold M with a Heegaard surface of genus g has a branched double cover which is a genus g surface bundle over the circle.
It was proved by Brooks and Montesinos that such a surface bundle can be chosen to be hyperbolic, i.e., the monodromy map of the surface bundle can be chosen to be pseudo-Anosov. So it makes sense to talk about the topological entropies of hyperbolic surface bundles over the circle as branched double covers of M.
I discuss some properties of entropies of those hyperbolic surface bundles.
In joint work with Susumu Hirose, we prove that when M is the 3-sphere S^3, the minimal entropy over all hyperbolic, genus g surface bundles as branched double covers of S^3 behaves like 1/g.
If time permits, I will introduce some questions related to the branched virtual fibering theorem.
• Friday, May 24
• Pieter Allaart (University of North Texas)
The pointwise Holder spectrum of self-affine functions
Abstract:
We study general self-affine functions on an interval, which include the Takagi function and Okamoto's functions. We show that the pointwise Holder spectrum of these functions can be completely determined. In most cases, the Holder spectrum is given by the multifractal formalism, but there is an important class of exceptions. In fact, it is possible to give exact (but complicated) expressions for the pointwise Holder exponent of any self-affine function at any point. The proofs of these results use a variety of techniques: divided differences, constrained optimization, and general Hausdorff measure estimates. This is joint work with S. Dubuc.
• Kiko Kawamura (University of North Texas)
Revolving Fractals
Abstract:
Davis and Knuth in 1970 introduced the notion of revolving sequences, as representations of a Gaussian integer. Later, Mizutani and Ito pointed out a close relationship between a set of points determined by all revolving sequences and a self-similar set, which is called the Dragon from the viewpoint of symbolic dynamical systems. We will show how their result can be generalized by a completely different approach. The talk will be presented with a lot of pictures; accessible for graduate students. A few open problems will be introduced as well. This is a joint work with Drew Allen (UNT).
• Friday, May 17
Mao Shinoda (Kyoto University)
Intrinsic ergodicity for factors of ($-\beta$)-shift
Abstract:
We prove that every subshift factor of the ($-\beta$)-shift is intrinsically ergodic when $\beta$ is greater than the golden ratio and the ($-\beta$)-expansion of $-1$ is not periodic with odd period. Moreover, the unique measure of maximal entropy satisfies a certain Gibbs property. This is an application of the technique established by Climenhaga and Thompson to prove intrinsic ergodicity beyond specification. We also prove that there exists a factor of the $(-\beta)$-shift which is not intrinsically ergodic in the cases other than the above. This is joint work with Kenichiro Yamamoto at Nagaoka University of Technology.
• Friday, May 10
Ippei Obayashi (RIKEN AIP)
Persistent homology: Data analysis by algebraic topology
Abstract:
Topological data analysis, a field of data analysis that makes use of concepts from topology, has been developing over the past 10 to 20 years, and the notion of persistent homology has become particularly important. Historically, the idea of computing Betti numbers on a computer for data analysis is an old one, but it suffered from a lack of robustness to noise and from the fact that purely topological information carries too little information. Persistent homology resolved these problems by considering homology over a filtration.
In this talk I will mainly introduce two recent works of the speaker:
* Inverse analysis of persistent homology (volume-optimal cycles)
* Combining persistent homology with machine learning
I will also discuss how powerful the combination of these two is. Both works are realized by combining mathematics (algebraic topology) with computer science (optimization, machine learning, and so on).
• Friday, April 26
Vassilis Rothos (Aristotle University of Thessaloniki)
Discrete and Continuous Nonlocal NLS Equation
Abstract:
In the first part, we study the existence and bifurcation results for quasi periodic traveling waves of discrete nonlinear Schrödinger equations with nonlocal interactions and with polynomial type potentials. We employ variational and topological methods to prove the existence of traveling waves in the nonlocal DNLS lattice. Next, we examine the combined effects of cubic and quintic terms of the long range type in the dynamics of a double well potential (nonlocal NLS). While in the case of cubic and quintic interactions of the same kind (e.g. both attractive or both repulsive) only a symmetry breaking bifurcation can be identified, a remarkable effect that emerges e.g. in the setting of repulsive cubic but attractive quintic interactions is a "symmetry restoring" bifurcation. Namely, in addition to the supercritical pitchfork that leads to a spontaneous symmetry breaking of the anti-symmetric state, there is a subcritical pitchfork that eventually reunites the asymmetric daughter branch with the anti-symmetric parent one. The relevant bifurcations, the stability of the branches and their dynamical implications are examined both in the reduced (ODE) and in the full (PDE) setting. The model is argued to be of physical relevance, especially so in the context of optical thermal media.
• Friday, April 19
Shigehiro Ushiki
Dynamical systems on real sections of complex surfaces and complex Salem numbers
Abstract:
It is known that Salem numbers appear as eigenvalues of the action on cohomology induced by the dynamics of Coxeter-type automorphisms of complex surfaces. Restricting such complex dynamical systems to sections along the real axis induces dynamical systems on real surfaces.
As eigenvalues of the action on homology induced by these dynamical systems, special algebraic integers analogous to Salem numbers appear.
These complex eigenvalues are connected, via the Lefschetz fixed point theorem, to the behaviour of the saddles of the dynamical system.
Kitashirakawa Oiwake-cho, Sakyo-ku, Kyoto 606-8502, Japan
https://codegolf.stackexchange.com/questions/137382/solving-secret-swapping-sequences/137401 | # Solving Secret Swapping Sequences
This is a challenge, the cops thread can be found here.
This is the robbers thread, your job here is to take submissions on the cops thread and try to find the hidden sequences. If you find any sequence that can be substituted into the original code to compute that sequence, that is a valid crack. Please notify the cops of your cracks as they happen so they can update their answers.
## Scoring
Your score will be the number of successful cracks you have made, with more cracks being better.
• Why not just let robbers comment the sequence # in the cops thread? – Lynn Aug 3 '17 at 16:08
• @Lynn I think that robbers should be able to get upvotes for their work in cracking answers. I prefer the two thread format for that reason. – Wheat Wizard Aug 3 '17 at 16:09
# Japt, Shaggy, A000290
p#A000290uCnG
Try it online!
# Python 3: Mr. Xcoder, A010709
n=int(input())
print(sum(1for i in"A010709"if i>"0")*-~n//-~n)
Try it online!
Additionally, here's a golfed version of the original. :P
lambda n:sum(1for i in"A017016"if i>"0")*-~n//-~n
• Well done... I knew it would be cracked soon – Mr. Xcoder Aug 3 '17 at 15:14
# Python 3, pppery
A018226
The original code put the sequence name in a comment. Since the comment probably can't affect the code I figured the hidden sequence had to be some sub-sequence of the original. A quick search of the first couple terms brought up A018226. Since it is a sub-sequence the code works for both. A018226 is even listed on the original sequence's page if you look back
One way to generalize the magic number sequence in A018226.
• That was the intended solution. I had the idea of trying to make people think it was impossible by putting the sequence in a comment. – pppery Aug 3 '17 at 18:17
• @ppperry The comment was what gave it away :). I figured it had to be a sub-sequence. Good fun anyway! – Wheat Wizard Aug 3 '17 at 18:18
• Maybe I could have hid that better, but still is an interesting twist compared to the typical answers to this sort of thing; about numbers, rather than code. – pppery Aug 3 '17 at 18:20
# C#, TheLethalCoder
A000578 (Cubes)
An easy one - it was also posted here.
• Of course, people should stop posting the answers from the other challenge :) – Mr. Xcoder Aug 3 '17 at 14:50
A000007: The characteristic function of 0: a(n) = 0^n.
# C#, TheLethalCoder, A000244
Also works with A000079 (powers of two).
A000244 (Powers of 3)
This no longer works, the OP updated the example after I posted this.
# dc, Bruce Forte
Cracked with A027480.
• Well done! What gave it away? – ბიმო Aug 4 '17 at 13:42
• The modulus operations limit the number of sequences generated. In this case 8 × 9 = 72. So plugged the formula into a spreadsheet and generated all of them. Only a handful of sequences produced all integers for all terms, and of those made a guess that only sequences with all positive terms would be of interest. Then it was a matter of searching the sequences and plugging the reference number back in. Searched five, three had corresponding entries, the third one matched outputs for all inputs. – user15259 Aug 4 '17 at 14:01
• If only I had not divided by 2 ;P – ბიმო Aug 4 '17 at 14:13
# Python 2: officialaimm, A055642
lambda x:len(x**(sum(map(int,'A055642'[1:]))==22))
Try it online!
It took me a while to find the sequence... Mostly 'cause OEIS search is super slow for me. o0
• Nicely done. (y) – officialaimm Aug 3 '17 at 17:29
# Python 3, ppperry, A000027 -> A004526
f=lambda a,n=((int("A004526",11)-0x103519a)%100%30+1)/2:a//(14-n)
Try it online! (prints first few terms of both. Note the two sequences have offsets of 1 and 0 respectively, so the first has a leading zero - it threw me a little!)
Cracked with A055642.
# Python 3.6, RootTwo
Original is A005843
Cracked with A001107
Try it online
The eval'd code of the original (minus comments) is n*2, of the cracked version is 4*n*n-n*3.
After filtering out syntax errors, undeclared variables, zero divisions, etc, it didn't take too long to run through the remaining list. There were a few false positives (like A004917) that I had to filter out by hand due to only checking the first few numbers, but it wasn't too common.
Also, A040489 tries to calculate n**3436485154-n, which slowed me down a bit. :P
• Congrats. That's it. Did you brute force it? I tried to make a few incorrect sequence ID's result in valid Python to slow things down, but not enough I guess. – RootTwo Aug 10 '17 at 3:38
• @RootTwo I did mostly brute force it. I had some other heuristics in there too, but nothing very complex. Took a couple mins to find 1107, about 8 to get up to 5843. Out of curiousity, I went up to 50000. No other matches in that range. I'd guess 15-20% were valid python. – Phlarx Aug 10 '17 at 4:04
# Chip, Phlarx
Cracked with A060843. On a hunch, guessed the sequence was going to be short!
• You got it! Good job – Phlarx Aug 10 '17 at 4:06 | 2020-09-22 23:52:02 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.47726374864578247, "perplexity": 2248.622100376357}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-40/segments/1600400208095.31/warc/CC-MAIN-20200922224013-20200923014013-00750.warc.gz"} |
https://infoscience.epfl.ch/record/88844 | Infoscience
Journal article
# The Rayleigh law in piezoelectric ceramics
The direct longitudinal piezoelectric effect (d(33)) in lead zirconate titanate, barium titanate, bismuth titanate and strontium bismuth titanate ceramics was investigated with respect to the dependence on the amplitude of an alternating pressure. At low alternating pressure amplitudes, the behaviour of the piezoelectric charge and the piezoelectric coefficient may be explained in terms of the Rayleigh law originally discovered for magnetization and magnetic permeability in ferromagnetic materials. The charge versus pressure hysteresis loops measured for piezoelectric ceramics may similarly be described as Rayleigh loops. The results presented show that the Rayleigh law can be applied to irreversible displacement of several types of non-180 degrees ferroelectric domain walls and imply universal validity of the Rayleigh law for displacement of ferromagnetic, ferroelastic and ferroelectric domain walls.
https://bird.bcamath.org/handle/20.500.11824/16 | ### Recent Submissions
• #### Wildland fire propagation modeling: fire-spotting parametrisation and energy balance
(Proceedings of the 17th International Conference on Computational and Mathematical Methods in Science and Engineering, CMMSE 2017, pp. 805 - 813, 2017-07-04)
Present research concerns the physical background of a wild-fire propagation model based on the split of the front motion into two parts - drifting and fluctuating. The drifting part is solved by the level set method and ...
• #### Wildland fire propagation modelling
(MODELLING FOR ENGINEERING AND HUMAN BEHAVIOUR 2017 Extended abstract, 2017-12)
Wildfire propagation modelling is a challenging problem due to its complex multi-scale multi-physics nature. This process can be described by a reaction- diffusion equation based on the energy balance principle. Alternative ...
• #### Front Curvature Evolution and Hydrodynamics Instabilities
(Proceedings/Extended Abstract Book (6 pages) of the XXXX Meeting of the Italian Section of the Combustion Institute, Rome, Italy, 2017-06-07)
It is known that hydrodynamic instabilities in turbulent premixed combustion are described by the Michelson-Sivashinsky (MS) equation. A model of the flame front propagation based on the G-equation and on stochastic ...
• #### Topology of Spaces of Valuations and Geometry of Singularities
(Transactions of the AMS - American Mathematical Society, 2017-11-11)
Given an algebraic variety X defined over an algebraically closed field, we study the space RZ(X,x) consisting of all the valuations of the function field of X which are centered in a closed point x of X. We concentrate ...
• #### A proof of the integral identity conjecture, II
(Comptes Rendus Mathematique, 2017-10-31)
In this note, using Cluckers-Loeser’s theory of motivic integration, we prove the integral identity conjecture with framework a localized Grothendieck ring of varieties over an arbitrary base field of characteristic zero.
• #### Universal bounds for large determinants from non-commutative Hölder inequalities in fermionic constructive quantum field theory
(Mathematical Models and Methods in Applied Sciences (M3AS), 2017-08-02)
Efficiently bounding large determinants is an essential step in non-relativistic fermionic constructive quantum field theory to prove the absolute convergence of the perturbation expansion of correlation functions in terms ...
• #### Numerical valuation of two-asset options under jump diffusion models using Gauss-Hermite quadrature
(Journal of Computational and Applied Mathematics, 2017-04-19)
In this work a finite difference approach together with a bivariate Gauss–Hermite quadrature technique is developed for partial integro-differential equations related to option pricing problems on two underlying asset ...
• #### From G - Equation to Michelson - Sivashinsky Equation in Turbulent Premixed Combustion Modelling
(Proceedings/Extended Abstract Book (6 pages) of the XXXIX Meeting of the Italian Section of the Combustion Institute, Naples, Italy, 2017-06-20)
It is well known that the Michelson-Sivashinsky equation describes hydrodynamic instabilities in turbulent premixed combustion. Here a formulation of the flame front propagation based on the G-equation and on stochastic ...
• #### Darrieus-Landau instabilities in the framework of the G-equation
(Digital proceedings of the 8th European Combustion Meeting, 18-21 April 2017, Dubrovnik, Croatia, 2017-04)
We consider a model formulation of the flame front propagation in turbulent premixed combustion based on stochastic fluctuations imposed to the mean flame position. In particular, the mean flame motion is described by ...
• #### Representation of surface homeomorphisms by tête-à-tête graphs
(2017-06-21)
We use tête-à-tête graphs as defined by N. A'Campo and extended versions to codify all periodic mapping classes of an orientable surface with non-empty boundary, improving work of N. A'Campo and C. Graf. We also introduce ...
• #### Non-normal affine monoids, modules and Poincaré series of plumbed 3-manifolds
(Acta Mathematica Hungarica, 2017-05-18)
We construct a non-normal affine monoid together with its modules associated with a negative definite plumbed 3-manifold M. In terms of their structure, we describe the $H_1(M,\mathbb{Z})$-equivariant parts of the topological ...
• #### On intersection cohomology with torus actions of complexity one
(Revista Matemática Completense, 2017-05-20)
The purpose of this article is to investigate the intersection cohomology for algebraic varieties with torus action. Given an algebraic torus T, one of our result determines the intersection cohomology Betti numbers of ...
• #### Euler reflexion formulas for motivic multiple zeta functions
(Journal of Algebraic Geometry, 2017-05-14)
We introduce a new notion of $\boxast$-product of two integrable series with coefficients in distinct Grothendieck rings of algebraic varieties, preserving the integrability of and commuting with the limit of rational ...
• #### Isotropic Bipolaron-Fermion-Exchange Theory and Unconventional Pairing in Cuprate Superconductors
(2017-05-03)
The discovery of high-temperature superconductors in 1986 represented a major experimental breakthrough (Nobel Prize 1987), but their theoretical explanation is still a subject of much debate. These materials have many ...
• #### Logarithmic connections on principal bundles over a Riemann surface
(arxiv, 2017)
• #### A Short Survey on the Integral Identity Conjecture and Theories of Motivic Integration
(Acta Mathematica Vietnamica, 2017-04-04)
In Kontsevich-Soibelman’s theory of motivic Donaldson-Thomas invariants for 3-dimensional noncommutative Calabi-Yau varieties, the integral identity conjecture plays a crucial role as it involves the existence of these ...
• #### The M-Wright function as a generalization of the Gaussian density for fractional diffusion processes
(Fractional Calculus and Applied Analysis, 2013-12-31)
The leading role of a special function of the Wright-type, referred to as M-Wright or Mainardi function, within a parametric class of self-similar stochastic processes with stationary increments, is surveyed. This class ...
• #### Two-particle anomalous diffusion: Probability density functions and self-similar stochastic processes
(Philosophical Transactions of the Royal Society A: Mathematical, Physical and Engineering Sciences, 2013-12-31)
Two-particle dispersion is investigated in the context of anomalous diffusion. Two different modeling approaches related to time subordination are considered and unified in the framework of self-similar stochastic processes. ...
• #### Homogeneous singularity and the Alexander polynomial of a projective plane curve
(2017-12-10)
The Alexander polynomial of a plane curve is an important invariant in global theories on curves. However, it seems that this invariant and even a much stronger one the fundamental group of the complement of a plane curve ... | 2018-04-23 15:09:53 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.48648273944854736, "perplexity": 1642.1197764361762}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-17/segments/1524125946077.4/warc/CC-MAIN-20180423144933-20180423164933-00530.warc.gz"} |
http://learning.maxtech4u.com/computers/ | # Journey of Computers
July 27, 2018
“If you want to progress, you have to evolve.” Today not only humans but computers have evolved as well, in terms of performance, usability, and stability. In the early days, when the computer was invented, it was meant to do basic tasks like computations; today almost all of our daily work is handled by computers. The journey of computers from a single-task calculator to a game changer has been tremendous, and that journey is the focus of this article.
### What is a Computer?
A computer is nothing but an electronic device that performs calculations and operations from given instructions [1]. Computation consists of three stages: input, process, and output. The input is the set of instructions that we provide to the computer through input devices such as a keyboard, mouse, or joystick. On receiving the input, the computer hardware starts processing it, and after completing the process it presents the output to the user through output hardware such as a monitor or printer.
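As a toy illustration of this input-process-output cycle (a sketch only; the prompt text and the word-count step are arbitrary choices, not part of the original article):

```python
# Minimal sketch of the input -> process -> output cycle described above.
sentence = input("Type a sentence: ")        # input (keyboard)
word_count = len(sentence.split())           # process (computation)
print(f"You typed {word_count} words.")      # output (monitor)
```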
### How did it start?
Who first thought about the computer? Who invented this ultra-smart device? The mechanical computer was conceived in the 19th century by Charles Babbage. In the beginning, a computer was about the size of a room. In those early days it performed simple, primary tasks such as computation; over time the size of computers shrank and their performance increased.
### Generations of Computers:
The journey of the computer from the size of a room to a palm-sized device is divided into generations. With each generation, computers have become smarter, more reliable, and faster, much as humans evolve over generations, and the future possibilities of computing are vast. In the first generation computers were massive in size; now we carry computers in our pockets.
The five generations of computers are as follows:
1. First Generation
2. Second Generation
3. Third Generation
4. Fourth Generation
5. Fifth Generation
#### First Generation (Vacuum tubes) [2]:
In the first generation, a computer was about the size of a room and was therefore very costly. The setup was tricky and messy, and if even a small component of the system failed, the entire system failed. The systems developed in the first generation were unreliable and unstable. The operations that first-generation computers could perform were single-level operations such as basic computation. First-generation vacuum-tube computers date from the 1940s.
#### Second Generation (Transistors) [2]:
The use of transistors was a great replacement for vacuum tubes: the size of second-generation computers was reduced and their performance increased in comparison to the bulky first-generation machines. The reduction in size simplified the design of the computer and made it more reliable.
#### Third Generation (Integrated Circuits) [2]:
In third-generation computers, the transistors were replaced by integrated circuits, or ICs. The idea of replacing transistors with ICs succeeded because a single IC contains many transistors, capacitors, and resistors. The most visible change in the third generation was the shape of the computer: its size shrank drastically in comparison to the first generation. Third-generation machines began to take on the shape of the computers we use today.
#### Fourth Generation (Microprocessor) [2]:
The computers that we are using today are fourth-generation computers. They are built around the microprocessor, which turned out to be a game changer in terms of size, performance, and reliability. With microprocessors, the size of computers has shrunk to fit in our palms, and work that used to be handled by a single processor can be divided among multiple processing units, so performance has improved greatly. Fourth-generation computers are considered the most stable.
#### Fifth Generation(Artificial Intelligence) [2]:
The development of futuristic computers has already begun: voice recognition and face recognition are just small steps towards ultra-smart computers and a future of vast possibilities. Virtual reality is one concept of futuristic computing, in which we can capture and view moments in 360 degrees.
### References:
[1] “What is Computer,” techopedia, [Online]. Available: https://www.techopedia.com/definition/4607/computer. [Accessed 27 July 2018]. [2] V. Beal, “The Five Generations of Computer,” webopedia, 06 February 2018. [Online]. Available: https://www.webopedia.com/DidYouKnow/Hardware_Software/FiveGenerations.asp. [Accessed 27 July 2018]. [3] “Computers Generations,” tutorialspoint, [Online]. Available: https://www.tutorialspoint.com/computer_fundamentals/computer_generations.htm. [Accessed 27 July 2018].
https://brilliant.org/problems/area-24/ | # Area
Geometry Level pending
Find the area of the region in the $$xy$$ plane consisting of all points in the set $$\{(x,y) \mid x^2+y^2 \leq 144 \}$$ that satisfy the inequality $$\sin(2x+3y) \leq 0$$.
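One way to see the answer, which is not part of the original problem statement: the point reflection (x, y) -> (-x, -y) maps the disk onto itself and flips the sign of sin(2x + 3y), so the region where the sine is non-positive covers exactly half of the disk, giving an area of 72π ≈ 226.19. A rough Monte Carlo sketch (an illustrative check, not a proof):

```python
# Monte Carlo sanity check of the area of {x^2 + y^2 <= 144, sin(2x + 3y) <= 0}.
import math
import random

def estimate(n=1_000_000, r=12.0):
    hits = 0
    for _ in range(n):
        x = random.uniform(-r, r)
        y = random.uniform(-r, r)
        if x * x + y * y <= r * r and math.sin(2 * x + 3 * y) <= 0:
            hits += 1
    return hits / n * (2 * r) ** 2   # scale by the bounding square's area

print(estimate(), math.pi * 144 / 2)   # both should be close to ~226.19
```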
https://www.expii.com/t/converting-units-of-time-9111 | Expii
# Converting Units of Time - Expii
Time can be measured in seconds, minutes, hours, days, or even months. Unlike metric units, the conversion factors are not uniform powers of ten: 60 seconds make a minute, 60 minutes make an hour, 24 hours make a day, and months vary in length.
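A small sketch of such a conversion using divmod (months and years are deliberately left out of the example because their lengths vary):

```python
# Sketch: split a number of seconds into days, hours, minutes and seconds.
# The factors 60, 60 and 24 are fixed; larger units (months, years) are not.
def split_seconds(total_seconds):
    minutes, seconds = divmod(total_seconds, 60)
    hours, minutes = divmod(minutes, 60)
    days, hours = divmod(hours, 24)
    return days, hours, minutes, seconds

print(split_seconds(200_000))   # (2, 7, 33, 20)
```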
https://ask.openstack.org/en/questions/110556/revisions/ | Trying to figure out which one to use and why? They seem very similar and both have ansible directories with the same files and folders. I just want to simply deploy openstack newton services on a few different nodes to have an internal openstack environment. | 2019-08-17 23:44:22 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.33721327781677246, "perplexity": 741.536931279694}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-35/segments/1566027313501.0/warc/CC-MAIN-20190817222907-20190818004907-00216.warc.gz"} |
https://www.hackmath.net/en/math-problem/3695 | # Holidays - on pool
A child's ticket to the swimming pool costs x €; an adult ticket is €2 more expensive. There were m children in the swimming pool and one third as many adults. How many euros did the treasurer take in for pool entry?
Result
e = mx + (x+2)m/3 Eur
#### Solution:
$e=mx+(x+2)m/3$
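As a quick sanity check of the formula, here is a small sketch with made-up values for x and m (the problem itself leaves both symbolic):

```python
# Evaluate the treasurer's takings e = m*x + (x + 2)*m/3 for sample values.
def pool_takings(x, m):
    children = m * x              # m children paying x euros each
    adults = (m / 3) * (x + 2)    # one third as many adults, each paying x + 2 euros
    return children + adults

print(pool_takings(x=3, m=30))    # 30*3 + 10*5 = 140.0 euros
```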
## Next similar math problems:
1. Profit gain
If 5% more is gained by selling an article for Rs. 350 than by selling it for Rs. 340, the cost of the article is:
2. Trucks
Three lorries carried bricks. One carried n bricks at once, the second m fewer bricks than the first, and the third 300 bricks more than the first lorry. The first lorry went 4 times a day, the largest went 3 times a day, and the smallest 5 times a day. How many bricks b
3. Coach
Average age of 24 players and a coach of one team is 24 years. The average age of players without a coach is 23 years. How old is the coach?
4. Expression with powers
If x - 1/x = 5, find the value of x^4 + 1/x^4
5. Expression
Solve for a specified variable: P=a+4b+3c, for a
6. Algebra
X+y=5, find xy (find the product of x and y if x+y = 5)
7. Coefficient
Determine the coefficient of this sequence: 7.2; 2.4; 0.8
8. Simplify
Simplify the following problem and express as a decimal: 5.68-[5-(2.69+5.65-3.89) /0.5]
9. Summerjob
The temporary workers planted new trees. Of the total number of 500 seedlings, they managed to plant 426. How many percents did they meet the daily planting limit?
10. Sale discount
The product was discounted so that eight products at a new price cost just as five products at an old price. How many percents is the new price lower than the old price?
11. Sales off
The price has decreased by 20%. How many percents do I have to raise the new price to be the same as before the cut?
12. Determine AP
Determine the difference of the arithmetic progression if a3 = 7, and a4 + a5 = 71
13. Quotient
Determine the quotient and the second member of the geometric progression where a3 = 10, a1 + a2 = -1.6, a1 - a2 = 2.4.
14. Sale off
The product cost 95 euros before the sale. After the discount, it cost 74 euros and 10 cents. By about what percentage did the product become cheaper?
15. Finding the base
Find unknown base of percent: 12.5 percent of what = 16 ?
16. Voltmeter range
We have a voltmeter which in the original set measures voltage to 10V. Calculate the size of the ballast resistor for this voltmeter, if we want to measure the voltage up to 50V. Voltmeter's internal resistance is 2 kiloohm/Volt.
17. Unknown mixed number
Find the number that is smaller than 5 5/12 by as much as 2 2/13 is smaller than 6 1/6 | 2020-03-31 23:27:14 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 1, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.39893898367881775, "perplexity": 2420.556094827313}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-16/segments/1585370504930.16/warc/CC-MAIN-20200331212647-20200401002647-00254.warc.gz"} |
https://papers.neurips.cc/paper/2017/file/84b20b1f5a0d103f5710bb67a043cd78-Reviews.html | NIPS 2017
Mon Dec 4th through Sat the 9th, 2017 at Long Beach Convention Center
### Reviewer 1
Paper Summary: The main idea is that the advantages of Nesterov's acceleration method and of Stochastic Gradient Descent (SGD) are used to solve sparse and dense optimization problems with high dimensions by using an improved GCD (Greedy Coordinate Descent) algorithm. First, by using a greedy rule, an $l_1$-square-regularized approximate optimization problem (find a solution close to $x^*$ within a neighborhood $\epsilon$) can be reformulated as a convex but non-trivial to solve problem. Then, the same problem is solved as an exact problem by using the SOTOPO algorithm. Finally, the solution is improved by using both the convergence rate advantage of Nesterov's method and the "reduced-by-one-sample" complexity of SGD. The resulting algorithm is an improved GCD (ASGCD = Accelerated Stochastic Greedy Coordinate Descent) with a convergence rate of $O(\sqrt{1/\epsilon})$ and complexity reduced by one sample compared to the vanilla GCD.

Originality of the paper: The proposed SOTOPO algorithm takes advantage of the l1 regularization term to investigate the potential values of the sub-gradient directions and sorts them to find the optimal direction without having to calculate the full gradient beforehand. The combination of Nesterov's advantage with the SGD advantage and the GCD advantage is less impressive. Bonus for making an efficient and rigorous algorithm despite the many pieces that had to be put together.

Contribution:
- Reduces complexity and increases convergence rate for large-scale, dense, convex optimization problems with sparse solutions (+),
- Uses existing results known to improve performance and combines them to generate a more efficient algorithm (+),
- Proposes a criterion to reduce the complexity by identifying the non-zero directions of descent and sorting them to find the optimal direction faster (+),
- Full computation of the gradient beforehand is not necessary in the proposed algorithm (+),
- There is no theoretical way proposed for the choice of the regularization parameter $\lambda$ as a function of the batch size. The choice of $\lambda$ seems to affect the performance of the ASGCD in both batch choice cases (-).

Technical Soundness:
- All proofs of Lemmas, Corollaries, Theorems and Propositions used are provided in the supplementary material (+),
- Derivations are rigorous enough and solid. In some derivations, further reference to basic optimization theorems or Lemmas could be more enlightening to researchers outside optimization (-).

Implementation of Idea: The algorithm is complicated to implement (especially the SOTOPO part).

Clarity of presentation:
- Overall presentation of the paper is detailed, but the reader is not helped to keep in mind the bigger picture (might be lost in the details). Perhaps reminders of the goal/purpose of each step throughout the paper would help the reader understand why each step is necessary (-),
- Regarding the order of application of different known algorithms or parts of them to the problem: it is explained, but could be clearer with a diagram or pseudo-code (-),
- Notation: in equation 3, $g$ is not clearly explained, and in Algorithm 1 there are two typos in referencing equations (-),
- Given the difficulty of writing such a mathematically incremental paper, the clarity is decent (+).
Theoretical basis:
- All Lemmas and transformations are proved thoroughly in the supplementary material (+),
- Some literature results related to convergence rate or complexity of known algorithms are not referenced (lines 24, 25, 60, 143; and line 73 was not explained until equation 16, which brings some confusion initially). Remark 1 could have been referenced/justified so that it does not look completely arbitrary (-),
- A comparison of the theoretical solution accuracy with the other pre-existing methods would be interesting to the readers (-),
- In the supplementary material, in line 344, a $d\theta_t$ is missing from one of the integrals (-).

Empirical/Experimental basis:
- The experimental results verify the performance of the proposed algorithm with respect to the ones chosen for comparison. Consistency in the data sets used between the different algorithms supports a valid experimental analysis (+),
- A choice of a better smoothing constant $T_1$ is provided in line 208 (+), but please make it clearer to the reader why this is a better option in the case of batch size $b=n$ (-),
- The proposed method is under-performing (when the batch size is 1) compared to Katyusha for small regularization $10^{-6}$ on the test case Mnist, while for Gisette it is comparable to Katyusha. There might be room for improvement in these cases, or if not, it would be interesting to show which regularization value is the threshold and why. The latter means that the proposed algorithm is more efficient for large-scale problems with potentially a threshold in sparsity (minimum regularization parameter) that the authors have not theoretically explored. Moreover, there seems to be a connection between the batch size (1 or n, in other words the stochastic or deterministic case) and the choice of regularization value that makes the ASGCD outperform other methods, which is not discussed (-).

Interest to NIPS audience [YES]: This paper compares the proposed algorithm with well-established algorithms or performance improvement schemes and therefore would be interesting to the NIPS audience. Interesting discussion might arise related to whether or not the algorithm can be simplified without compromising its performance.
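As background for readers unfamiliar with the baseline discussed in this review, here is a minimal illustrative sketch of plain greedy coordinate descent for an l1-regularized least-squares problem. It is a stand-in for context only, not the paper's ASGCD (which adds Nesterov acceleration, stochastic gradients and the SOTOPO sub-problem); the greedy rule, step sizes and data below are assumptions made for the example:

```python
# Illustrative sketch: vanilla greedy coordinate descent for
#   min_x 0.5*||Ax - b||^2 + lam*||x||_1
# NOT the reviewed paper's ASGCD; for background only.
import numpy as np

def soft_threshold(z, t):
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def greedy_cd_lasso(A, b, lam, iters=500):
    n, d = A.shape
    x = np.zeros(d)
    L = (A ** 2).sum(axis=0)       # coordinate-wise Lipschitz constants (assumes nonzero columns)
    r = A @ x - b                  # residual, kept up to date incrementally
    for _ in range(iters):
        g = A.T @ r                # full gradient of the smooth part (the cost vanilla GCD pays)
        x_cand = soft_threshold(x - g / L, lam / L)
        j = int(np.argmax(np.abs(x_cand - x)))   # greedy rule: largest coordinate move
        r += A[:, j] * (x_cand[j] - x[j])
        x[j] = x_cand[j]
    return x

# tiny synthetic example with a sparse ground truth
rng = np.random.default_rng(0)
A = rng.normal(size=(50, 20))
x_true = np.zeros(20)
x_true[:3] = [2.0, -1.0, 0.5]
b = A @ x_true + 0.01 * rng.normal(size=50)
print(np.round(greedy_cd_lasso(A, b, lam=0.1), 2))
```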
http://astrobites.com/2012/05/24/the-supernova-early-warning-system/ | The Supernova Early Warning System
Studying supernovae is a very tricky business. They occur so rarely in our galaxy that nobody knows when and where they will happen. The current strategy involves patiently waiting for a “new star” to appear in the sky. The problem with this strategy is that the entire sky cannot be monitored constantly, so astronomers don’t get to see the supernova as it happens. Instead, they stumble upon it hours or days later, after the initial explosion has happened. This leaves a major gap in our understanding of supernova processes that astronomers are desperate to close. But what are we to do without the telescopic manpower to monitor every arc-second of the sky all the time?
Enter SNEWS—the SuperNova Early Warning System—a world-wide network of observatories that looks for the sign that a galactic supernova is about to become visible and alerts the astronomical community so they can point their telescopes towards the supposed location and watch the show. The telltale sign SNEWS is looking for is a shower of neutrinos that are produced when an unfortunate star’s core takes its proverbial last breath and collapses.
Neutrinos Production in Supernovae
Neutrinos have no charge and are incredibly light-weight. In fact, they were believed to have no mass at all for quite some time. For this reason, neutrinos barely interact with anything at all. They just go. Photons, on the other hand, get held up in traffic despite having no mass. The material within a star is dense enough so that light is continuously absorbed and reemitted in a random direction. This causes the light to take a random walk on its way from the core to the surface. So although light moves faster than neutrinos do, it gets caught up within the stellar envelope and neutrinos are the first ones out the door.
Neutrino Detectors Worldwide
Neutrinos are infamously non-interactive, but it is still possible to detect them. The most common method involves detecting not the neutrinos themselves, but the products of their interactions with matter. A very common reaction involving neutrinos is inverse beta decay wherein an electron antineutrino interacts with a proton to create a neutron and a positron ( $\bar{\nu}_e + p \rightarrow n + e^+$ ). The trick is then to detect the neutrino-induced positron. For this reason, most detectors contain an ample amount of protons to maximize positron production.
Figure 1: The Sudbury Neutrino Observatory (SNO) is a heavy-water Cherenkov detector.
Scintillation Detectors — These detectors are large volumes of organic hydrocarbons that produce light when charged particles (like positrons) within them lose energy. The light produced then enters the photomultiplier tubes (PMTs), which are extremely sensitive photon detectors. The PMTs generate an amplified current that is sent to computers for analysis. Unfortunately, scintillators emit light in random directions, so there is no directional information contained in the created photons. Examples include: Large Volume Detector (LVD) in Italy, Mini-BooNE in the US, KamLAND in Japan, and Borexino in Italy.
Water Cherenkov Detectors — When neutrinos interact with water, the resultant charged particles are travelling faster than the speed of light in water and emit Cherenkov radiation. This radiation is again detected by PMTs. The light emitted by this process is directional and allows us to know which direction the neutrinos are coming from. Example: Super-Kamiokande (Super-K) in Japan.
Long String Water Cherenkov Detectors — These detectors are very similar to water detectors. The difference is that they’re arranged on long strings that are embedded within water or ice. They are designed to detect high-energy neutrinos, but can still be used to look for the signs of inverse beta decay. Example: IceCube in Antarctica.
High-Z / Neutron Detectors — Lead and iron are commonly used in these detectors. They make use of another interaction between neutrinos of all flavors (there are three types of neutrinos: electron, tau, and muon) and the nuclei of the detector material. This interaction scatters neutrons that are then detected and sent for analysis. Example: The Helium and Lead Observatory (HALO) in Canada.
Detectors ASSEMBLE!
Some of these detectors are part of the SuperNova Early Warning System. Like Marvel’s Avengers, these detectors form a team that quietly wait for the signal that a supernova is about to become visible, and then they save the day by alerting the world, telling them where to look.
Figure 2: Locations of SNEWS-affiliated neutrino detectors. Mini-BooNE–Chicago (Orange), SNOLAB–Sudbury (Pink), LVD/Borexino–Gran Sasso (Blue), Super-K–Hida (Red), IceCube–South Pole (Green).
The network of detectors reports to a centralized computer (referred to as the “coincidence server”) located at the Brookhaven National Laboratory. This server listens for triggers from any of the SNEWS detectors. False alarms are an issue, however, so no single detector can issue an alert. A coincident signal between multiple detectors must be provided before the coincidence server issues an alert to the community. Even so, the possibility of a false alarm still exists and SNEWS has designed its coincidence algorithms so that the rate of false alarms is approximately 1 per century.
Once a coincident signal has been received by the server, it designates the alert as either GOLD or SILVER depending on the confidence level. A GOLD alert is automatically sent to the public. A SILVER alert is sent to the experiments and requires human checking before more widespread dissemination. After all, it is important to prevent false alerts from being issued; we don’t want a ‘cry wolf’ sort of situation to arise and end up missing out on some serious science. To ensure the accuracy and utility of SNEWS, the system was designed with “the three P’s” in mind.
Figure 3: A flow chart describing how the coincidence server handles triggers and issues alerts.
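To make the coincidence logic in the flow chart concrete, here is a purely illustrative toy sketch; it is not the actual SNEWS software, and the detector names, 10-second window and two-experiment threshold are assumptions made for the example:

```python
# Toy coincidence check -- NOT the real SNEWS server code.
from datetime import datetime, timedelta

def coincident(triggers, window=timedelta(seconds=10), min_experiments=2):
    """triggers: list of (experiment_name, utc_datetime) tuples."""
    triggers = sorted(triggers, key=lambda t: t[1])
    for i, (_, t0) in enumerate(triggers):
        # distinct experiments whose triggers fall within `window` of trigger i
        names = {name for name, t in triggers[i:] if t - t0 <= window}
        if len(names) >= min_experiments:
            return True
    return False

alerts = [("Super-K", datetime(2024, 1, 1, 12, 0, 0)),
          ("IceCube", datetime(2024, 1, 1, 12, 0, 4))]
print(coincident(alerts))   # True: two experiments within 10 seconds
```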
The Three P’s:
1. Prompt — The alert must be quick. Once the neutrinos reach the detectors, it is only a matter of hours before the early stages of the shock are visible. The automated process that produces a GOLD warning takes approximately 5 minutes. However, a SILVER warning needs human input to determine whether it’s worth alerting astronomers. This checking process could take 20 minutes or more.
2. Pointing — The usefulness of SNEWS is severely diminished if we don’t know which direction in the sky we should look. It’s important to have some directional information provided by the detectors. The only detector capable of pointing is the Super-K in Japan. Technically a triangulation technique is possible by using timing information from multiple detectors, but it is not very accurate at this time.
3. Positive — “There must be no false supernova alerts to the astronomical community.” No single experiment can decrease the false alarm rate to zero. However, requiring a coincidence between experiments effectively can.
Since SNEWS has been developed, there have been no galactic supernovae and thus no alerts sent out. Supernovae occur in the Milky Way approximately once in a 30-year period, so it's only a matter of time before SNEWS gets put to the test. And when it does, you can be part of the action. Anyone can sign up to be alerted by SNEWS when a supernova is detected. Amateur astronomers are an integral part of the world-wide observing network. Sign up to receive alerts here.
http://www.physicsforums.com/showthread.php?p=4160171 | # How can I solve this without using integrating factor?
by ktklam9
Tags: factor, integrating, solve
Let the Wronskian between the functions f and g be $3e^{4t}$. If $f(t) = e^{2t}$, then what is g(t)? The Wronskian setup is pretty easy: $W(t) = fg' - f'g = 3e^{4t}$, with $f = e^{2t}$ and $f' = 2e^{2t}$. Plugging it in I get $e^{2t}g' - 2e^{2t}g = 3e^{4t}$, which results in $g' - 2g = 3e^{2t}$. How can I solve for g without using an integrating factor? Is it even possible? Thanks :)
It's always possible to solve "some other way" but often much more difficult. This particular example, however, is a "linear equation with constant coefficients" which has a fairly simple solution method. Because it is linear, we can add two solutions to get a third, so start by looking at $g' - 2g = 0$. $g' = 2g$ gives $dg/g = 2dt$ and, integrating, $\ln(g) = 2t + c$. Taking the exponential of both sides, $g(t) = e^{2t+c} = e^{2t}e^c = Ce^{2t}$ where C is defined as $e^c$. Now, we can use a method called "variation of parameters" because we allow the "C" in the previous solution to be a variable: let $g = v(t)e^{2t}$. Then $g' = v'(t)e^{2t} + 2v(t)e^{2t}$ so the equation becomes $g' - 2g = v'(t)e^{2t} + 2v(t)e^{2t} - 2v(t)e^{2t} = v'(t)e^{2t} = 3e^{2t}$. We can cancel the "$e^{2t}$" terms to get $v'(t) = 3$ and, integrating, $v(t) = 3t + C$. That gives the solution $g(t) = v(t)e^{2t} = 3te^{2t} + Ce^{2t}$ where "C" can be any number.
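For anyone who wants to double-check the result, here is a small SymPy sketch (not part of the original thread) that solves the same equation and recomputes the Wronskian:

```python
# Verify that g' - 2g = 3*exp(2t) gives g = (C + 3t)*exp(2t),
# and that the Wronskian with f = exp(2t) is 3*exp(4t).
import sympy as sp

t, C = sp.symbols('t C')
g = sp.Function('g')

sol = sp.dsolve(sp.Eq(g(t).diff(t) - 2*g(t), 3*sp.exp(2*t)), g(t))
print(sol)                                   # expected: Eq(g(t), (C1 + 3*t)*exp(2*t))

f = sp.exp(2*t)
gt = (C + 3*t)*sp.exp(2*t)
W = sp.simplify(f*sp.diff(gt, t) - sp.diff(f, t)*gt)
print(W)                                     # expected: 3*exp(4*t)
```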
http://mathoverflow.net/questions/84522/is-there-a-simplicial-volume-definition-of-chern-simons-invariants?sort=newest | # Is there a simplicial volume definition of Chern Simons invariants?
Suppose we have some compact hyperbolic 3-manifold $M=\Gamma\backslash\mathbb H^3$. Now we know that the hyperbolic volume of $M$ can be defined as (a constant times) the simplicial volume of the fundamental class $[M]\in H_3(M,\mathbb Z)$, which is a homotopy invariant.
Now the hyperbolic volume and Chern-Simons invariant $M$ are connected by the following definition: $$i(\operatorname{Vol}(M)+i\operatorname{CS}(M))=\frac 12\int_M\operatorname{tr}(A\wedge dA+\frac 32A\wedge A\wedge A)\in\mathbb C/4\pi^2\mathbb Z$$ where $A$ is any flat connection on the trivial principal $\operatorname{SL}(2,\mathbb C)$-bundle over $M$ whose monodromy is the isomorphism $\pi_1(M)=\Gamma$. This corresponds to a particularly natural homomorphism (based on a dilogarithm) in $H_3(\operatorname{SL}(2,\mathbb C),\mathbb Z)\to\mathbb C/4\pi^2\mathbb Z$ (see work of Neumann and Zickert).
This close connection between the two invariants $\operatorname{Vol}(M)$ and $\operatorname{CS}(M)$ motivates the following question:
Is there a definition of $\operatorname{CS}(M)$ within the framework of simplicial volume?
http://mkyongtutorial.com/money-and-happiness-an-enriching-relationship-6 | # Money and Happiness: An Enriching Relationship? Needless to say, earnings on it’s own isn’t the determinant that is sole
The idea that money cannot buy happiness is one of the most popular beliefs to come out of psychological research. Recently, however, new research has challenged this presumption: scientists found that having more income has a direct relationship with greater overall life satisfaction.
Nevertheless, this relationship is not a straight line. As income increases beyond a certain point, its effect on happiness tends to diminish. Those with little or no money felt happier with an increase in income, and even after basic needs were covered, a rise in earnings still enhanced life satisfaction.
## Can Cash Purchase Joy?
Money is not the only determinant of how happy people are, and excessive materialism has negative ethical and psychological implications. However, the impact of money on happiness cannot be neglected.
An often-misinterpreted study by Princeton University researchers agreed that increased income does increase "emotional well-being" up to a point. From a daily survey of 1,000 US residents, the study found an annual income of $75,000 to be the point beyond which a further rise in income did not guarantee further emotional well-being. This stands to reason, because people who earn less than $75,000 tend to stress over covering basic needs such as food, rent, and clothing, problems that money can easily take care of. However, once the basic necessities are covered, the issues that come up are not ones that can be resolved simply by throwing more money at them.
In other words, wealthier individuals have problems that are not related to a lack of money.
## What Exactly is "Happiness"?
For Deaton and Kahneman, the study's authors, happiness can be defined in terms of "emotional well-being" and "life evaluation." Emotional well-being refers to the day-to-day feelings an individual experiences, such as joy, sadness, anger, or stress, while life evaluation chiefly concerns the feelings people have about their life when reflecting on it.
The scientists concluded that only emotional well-being is the part of happiness that tops out at $75,000. That is not the case for life evaluation, which they found to keep rising with increased income. Consequently, they summarized that more money buys life satisfaction but not happiness, while low income is associated with both low emotional well-being and low life evaluation. Put simply, when people earn well above $75,000, they feel more satisfied with how their life has turned out, but it does not stop them from being cranky and irritable every now and then.
In yet another study of the correlation between money and happiness, the average life satisfaction of people living in wealthier countries was found to be higher than that of people in poorer nations. On the other hand, other research has found that those in poorer countries tend to find more meaning in life than their wealthier counterparts, which is attributed to the fervent religiosity of those countries, something often missing in more affluent nations.
## More Findings Concerning the Link Between Money and Joy
A report by Elizabeth W. Dunn, Lara B. Aknin, and Michael I. Norton, published in Science, concluded that money does buy happiness, but only when it is spent on someone else. In the first study, the researchers found a direct correlation between the amount people spent on gifts for others and on charity and an increase in their feelings of satisfaction. This held true even when income was controlled for.
For their second study, the team surveyed employees at a company who had just been paid profit-sharing bonuses. The amount of the bonus the employees spent on others predicted happiness 6 to 8 days down the line, while the portion of the bonus they spent on themselves had no impact on their happiness.
In their third study, the team gave research participants $5 or $20 and directed them to spend the money either on themselves or on others; their happiness was then measured. The study found, unsurprisingly, that those who spent the money on others were happier than those who did not.
Finally, the researchers asked additional participants to state what they thought would make them happier. Interestingly, they mistakenly believed that spending the whole $20 on themselves would. If only they knew.

If you put together all the results of these studies, is it then safe to say that the more expensive the gifts to others are, the happier the giver will be? Well, Dunn et al. didn't test that idea. Another relevant question is whether the results would stay the same if participants were asked to make these contributions with their own money rather than with a small amount of money provided by the researchers.

Study after study has been carried out to understand the complex relationship between these two. While scientists have approached this age-old question from several perspectives, what is generally accepted from all that research is that being happy is not so much about how much you make, but rather about how you choose to spend it. Can we then say that happiness can be bought with money? Yes, provided you spend it right. After all, plenty of high-net-worth individuals still struggle with alcohol or drug addiction and depression, and some have even ended their own lives. From your own experience, you may have noticed that getting a raise or bonus didn't make you happier in the long term; the initial euphoria quickly dissipates as you get used to the new pay. Likewise, buying a new smartphone or the latest trending gadget didn't do much for your happiness. Here are a few ways to spend money, supported by research, that are likely to give you more lasting pleasure.

## Purchasing More Hours

A UCLA study of 4,400 Americans indicated that people who value time over money tend to be happier than those who don't. This is why you should consider getting a virtual assistant to handle the mundane tasks that keep you bogged down. You may be better off paying for a housekeeper, a grocery delivery service, or another service that frees up your time for the things that really matter, like spending time with family and friends or simply wandering outside and watching the sunset. Your well-being will certainly improve.

## Paying for a Great Experience

Come to think of it, buying a $400 game console that you will continue to own in the future should make you happier than, say, buying a ticket to see your favourite musician perform, where your money is gone once the show is over.
No, not necessarily.
https://www.physicsforums.com/threads/proof-for-a-sequence-convergence.660321/ | # Proof for a Sequence Convergence
We need to prove that the sequence $a_{n} = n^{2}/2^{n}$ converges to 0.

Consider the sequence $\{n/2^{n}\} = \{0, 1/2, 1/2, 3/8, 1/4, 5/32, \dots\}$. The terms get smaller and smaller.

We can easily show that $n/2^{n} \le 1/n$ for all $n > 3$, from the fact that $n^{2} \le 2^{n}$ (insert proof by induction here). Then for $\epsilon > 0$ choose $N = \max\{3, 1/\epsilon\}$. The idea is that for all $n > N$, we will have $1/n < 1/N < \epsilon$.

The problem is that I think $N$ needs to be greater than the max, i.e. $N > \max\{3, 1/\epsilon\}$, for this to work; otherwise we'll end up with a case where $1/N = \epsilon$.
LCKurtz
So use ##\epsilon/2## to begin with, or let ##N = \max\{3,1/\epsilon\}+1##.
OK. So my concern was correct: putting $N = \max\{3,1/\epsilon\}$ was incorrect.
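A quick numeric illustration of the boundary case (the value of epsilon is arbitrary):

```python
# Show why N = max{3, 1/eps} can fail while N = max{3, 1/eps} + 1 works.
eps = 0.25
N_bad = max(3, 1 / eps)        # 4.0 -> 1/N_bad equals eps, not strictly less
N_good = max(3, 1 / eps) + 1   # 5.0 -> 1/N_good is strictly below eps
print(1 / N_bad < eps)    # False
print(1 / N_good < eps)   # True
```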
I found this argument on a website and wasn't sure.
Thanks.
https://clickchemistrytools.com/1455-instructions/ | # AFDye 488 Antibody Labeling Kit
### Product No. 1455
#### Introduction
The AFDye 488 Antibody Labeling Kit provides all of the necessary reagents to perform labeling of small amounts of antibodies or other proteins (except IgM antibodies) with green-fluorescent AFDye 488 (Alexa Fluor® 488 equivalent). Simply mix your antibody with the reaction additive and pre-measured dye provided, followed by a brief incubation, and the conjugate is ready for a simple purification step or for staining without further purification (see Note 1). The antibody will be covalently labeled with an average of 4–6 dye molecules per antibody molecule. This kit is optimized for labeling 100 μg of antibody per reaction at 1 mg/mL.
This kit utilizes 2,3,5,6-tetrafluorophenyl (TFP) esters instead of succinimidyl ester (SE or NHS) often used in conventional labeling kits. TFP is another type of carboxylic acid derivative that reacts with primary amines to form covalent amide bonds. The amine linkage bond is identical to the one formed by the reaction between primary amines and NHS esters or sulfo-NHS esters. However, in most cases, TFP ester displays much better stability toward hydrolysis in aqueous media resulting in more efficiency and better reproducibility in labeling of biopolymers. As a result of improved efficiency, very little of the non-reactive (hydrolyzed) dye is left in the labeling mixture, which allows staining without further purification.
AFDye 488 produces pH insensitive (pH 3-10), more photostable, and brighter protein conjugates compared to the previous generation of dyes (fluorescein/FITC). AFDye 488 labeled antibodies can be used for different applications, such as flow cytometry, fluorescent microscopy, ELISA, and Western blotting.
#### Kit Contents
| Component | Amount | Storage |
| --- | --- | --- |
| AFDye 488 TFP Ester (Component A) | 2 vials | –20 °C to 4 °C, protect from light |
| Sodium Hydrogen Carbonate (Component B) | 1 vial | 4 °C (powder), –20 °C (solution) |
| Desalting Columns (Component C) | 2 columns | 4 °C |
#### Protein Preparation
For the most effective reaction, the protein should be in a buffer that is free of primary amines and ammonium ions, as they compete with the amine groups of the protein for the reactive dye. The presence of low concentrations of sodium azide (<3 mM or 0.02%) or thimerosal (<1 mM or 0.04%) will not interfere with the reaction. Antibodies stabilized with bovine serum albumin (BSA) or gelatin will not label well. In such cases, for purification of antibodies, one can use commercially available kits such as Abcam Antibody Purification Kit (Protein A) (ab102784), GOLD antibody purification kit (ab204909), Antibody Purification Kit (Protein G) (ab128747), or the BSA Removal Kit (ab173231).
#### Antibody Labeling
1. Add 0.5 mL of deionized water to the vial of sodium hydrogen carbonate to prepare a 1 M solution. Vortex until fully dissolved. This solution can be stored and remain stable at –20 °C for a long period of time or at +4 °C for two weeks.
2. Dilute the antibody to be labeled to 1 mg/mL with a suitable buffer and then add 1/10 v/v of 1 M sodium hydrogen carbonate.
If the antibody is lyophilized from a suitable buffer, prepare a 1 mg/mL solution of it by adding a suitable amount of 0.1 M sodium hydrogen carbonate.
Note: Sodium hydrogen carbonate, pH 8-9, is added to raise the pH of the reaction mixture, as TFP esters react efficiently only at alkaline pH.
3. Transfer 100 μl of the protein solution to the tube of reactive dye. Pipet the solution up and down or cap the tube and gently invert it a few times until the dye is fully dissolved. Do not vortex, as vortexing results in protein denaturation.
4. Incubate the solution for 1 hour at room temperature. It is recommended that the vial be gently inverted several times every 15–20 minutes to increase the labeling efficiency and to prevent overlabeling.
#### Purification of the Labeled Antibody
To remove the unbound dye from the dye-conjugated antibody and reduce the fluorescent background in further applications, the following purification steps can be performed:
1. Place a desalting spin column in a 2 ml centrifuge tube.
2. Centrifuge at 1500x g for 2 minutes and discard flow-through.
3. Wash the column with 200 μl of PBS for 2 minutes at 1500 x g. Discard flow-through.
4. Repeat step 3 two more times.
5. Load the 100 μl reaction volume to the desalting spin column. Allow the resin to adsorb the solution.
6. Place the spin column into an empty centrifuge tube and centrifuge for 3 minutes at 1500 x g. Discard the spin column.
7. After centrifugation, the collection tube will contain the labeled antibody. If one needs to store the antibody for a long period of time, we suggest stabilizing it with 20–50% v/v Glycerol, 0.05% sodium azide, and 0.05–1% BSA.
#### Measuring the Degree of Labeling (DOL)
1. Dilute a small amount of the purified conjugate in PBS or other suitable buffer and measure the absorbance in a cuvette with a 1-cm pathlength at 280 nm (A280) and at 494 nm (A494).
Calculate the concentration of protein in the sample:

$$\text{protein concentration (M)}={\text{A280 } \times \text{ dilution factor} \over \text{203,000}}$$

where 203,000 is the molar extinction coefficient (ε) in cm⁻¹ M⁻¹ of a typical IgG, IgA, IgD, and IgE at 280 nm.
2. Calculate the degree of labeling:
$$\text{DOL}={\text{A494 } \times \text{ dilution factor} \over \text{71,000 } \times \text{ protein concentration (M)}}$$
where 71,000 is the molar extinction coefficient (ε) in cm⁻¹ M⁻¹ of AFDye 488 at 494 nm.
3. If using a NanoDrop®, the nominal pathlength is 1 mm. For the DOL calculation, use 2,030,000 and 710,000 instead of 203,000 and 71,000.
4. If using a cuvette of a pathlength smaller than 1 cm, multiply the 203,000 and 71,000 by the ratio of the cuvette pathlength per 1 cm (10 mm). For example, if using a cuvette with a 2 mm pathlength, (10 mm / 2 mm) = 5. Multiply the numbers by 5.
An optimal degree of labeling for whole IgG is 4–9.
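As a convenience, the calculation above can be wrapped in a small helper. This is only a sketch: the absorbance values are invented, a 1-cm pathlength is assumed, and no correction for dye absorbance at 280 nm is applied.

```python
# Sketch of the DOL arithmetic described above.
EPS_PROTEIN_280 = 203_000   # M^-1 cm^-1, typical IgG/IgA/IgD/IgE at 280 nm
EPS_DYE_494 = 71_000        # M^-1 cm^-1, AFDye 488 at 494 nm

def degree_of_labeling(a280, a494, dilution_factor=1.0):
    protein_molar = (a280 * dilution_factor) / EPS_PROTEIN_280
    dol = (a494 * dilution_factor) / (EPS_DYE_494 * protein_molar)
    return protein_molar, dol

conc, dol = degree_of_labeling(a280=0.14, a494=0.35, dilution_factor=10)
print(f"protein concentration ~ {conc:.2e} M, DOL ~ {dol:.1f}")
```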
#### Storage of prepared conjugates
Store the labeled antibody at +4 °C, protected from light. If it is necessary to store the antibody for a long period of time, we suggest stabilizing it with 20–50% v/v glycerol, 0.05% Sodium azide, and 0.05–1% BSA. Then, aliquot and store at –20 °C. Avoid repeated freezing and thawing. Protect from light.
#### Troubleshooting
##### Underlabeling
If the calculations indicate that the protein is labeled at a significantly lower than optimal DOL, repeat the labeling using the second tube of the provided reactive dye. This might be caused by a number of conditions:
• Trace amounts of primary amines in the buffer. If the antibody has been dissolved in primary amine-containing buffers (Tris, glycine, BSA), purify it by dialysis versus PBS before labeling or use commercially available antibody purification kits.
• Low concentration of antibody. If the concentration of antibody solution is less than 0.5 mg/mL, the reaction of labeling will not proceed efficiently.
• Low pH. If the antibody is strongly buffered at a low pH, the addition of sodium hydrogen carbonate will not raise the pH to the optimal level. If so, more sodium hydrogen carbonate can be added or the buffer can be exchanged either to PBS, or to 0.1 M sodium bicarbonate, pH 8.3.
• Due to the unique properties of each protein, the standard protocol may not always result in optimal labeling. To increase the DOL, the labeling procedure can be repeated using a second vial of the reactive dye and underlabeled sample following the same protocol. For some proteins, better labeling can be achieved with overnight incubations at +4 °C after an initial incubation of one hour at room temperature.
##### Overlabeling
If the calculations indicate that the DOL of the antibody-dye conjugate is significantly higher than optimal, the protein is most likely overlabeled. For some applications, conjugates with a high DOL may be acceptable for use; however, overlabeling can cause aggregation of the antibodies and reduced specificity for the antigen, both of which can lead to nonspecific staining. In addition, overlabeling often results in fluorescence quenching of the conjugate. To reduce the DOL, add more protein to the reaction, use a smaller amount of reactive dye (for example, dissolve the reactive dye in 20 μL of water and add only 10 μL of it to the protein solution), or allow the reaction to proceed for a shorter period of time (15–30 minutes).
##### Notes
We have tested several labeled secondary antibodies prepared with and without purification by desalting for immunocytochemical staining. For most antibody-dye conjugates, the standard protocol with column purification produced slightly higher signal-to-noise ratios. We encourage researchers to consider whether the column purification step would significantly alter the outcome of their experiments. In addition, adding a signal enhancer such as Image-iT® FX (available from Thermo Fisher Scientific, cat# I36933) to the cells prior to staining with the labeled protein conjugate reduced the slight background fluorescence due to the presence of free dye, producing results that were nearly indistinguishable from those obtained with a column-purified conjugate.
Importantly, we found that even without the column purification step, labeling kits that utilize TFP ester produced fluorescent conjugates that were far superior to those of other one-step labeling kits tested, in terms of signal strength and background fluorescence. Thus, with improved labeling efficiency and simplified workflow, our labeling kits provide one step labeling convenience with high yields and bright results.
Alexa Fluor and Image-iT FX are registered trademarks of Thermo Fisher Scientific.
http://clay6.com/qa/28895/density-of-a-gas-is-found-to-be-5-46g-dm-3-at-27-c-at-2-bar-pressure-what-w | Browse Questions
# Density of a gas is found to be $5.46g/dm^3$ at $27^{\large\circ}C$ at 2 bar pressure. What will be its density at STP?
$(a)\;3g/dm^3\qquad(b)\;4g/dm^3\qquad(c)\;5g/dm^3\qquad(d)\;6g/dm^3$
Given
d(g) = 5.46 $g/dm^3$
T = 300K
P = 2 bar
d(g) = ?
T = 273K
P = 1 bar
Since $\large\frac{d_1}{d_2} = \large\frac{P_1m_1}{RT_1}\times\large\frac{RT_2}{P_2m_2}$
($m_1 = m_2$ for same gas)
$\large\frac{5.46}{d_2} = \large\frac{2\times273}{1\times300}$
$1\times300\times5.46 = 2\times273\times d_2$
$\large\frac{300\times5.46}{2\times273} = d_2$
$d_2 = 3g/dm^3$
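The same calculation in a few lines of Python (values exactly as given in the problem):

```python
# Ideal-gas relation for density: d2 = d1 * (P2/P1) * (T1/T2).
d1, T1, P1 = 5.46, 300.0, 2.0   # g/dm^3, K, bar
T2, P2 = 273.0, 1.0             # K, bar (STP as used here)
d2 = d1 * (P2 / P1) * (T1 / T2)
print(round(d2, 2))             # 3.0 g/dm^3
```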
https://mathoverflow.net/questions/302076/two-questions-on-foliation-by-geodesics | # Two questions on “foliation by geodesics”
I would appreciate it if you would consider the following two questions on $1$-dimensional foliations whose leaves are geodesics.
1) Assume that $M$ is a Riemannian manifold which is either an open manifold or a compact manifold with zero Euler characteristic. Does $M$ admit a foliation by geodesics?
2) Assume that $M$ is a Riemannian surface which admits at least one foliation by geodesics. Does there necessarily exist a foliation of $M$ by geodesics which satisfies the "isocline local property"?
The Isocline local property is defined as follows:
For every $x\in M$ there is locally a geodesic $\alpha$ which is transverse to the foliation and intersects all leaves at the same angle.
No to the first question. Let $M$ be a Riemannian $2$-manifold whose universal cover is the hyperbolic plane $H$, and whose fundamental group is not cyclic. For any foliation of $H$ by geodesics (lines), it seems to me that the endpoints of these lines on the boundary circle of $H$ will fill up the whole circle except for two points. These two points will have to be preserved by deck transformations, so any discrete group of isometries of $H$ that preserves the foliation must be cyclic.
• @Thiku: The $M$ in this answer is an open manifold. – Lee Mosher Jun 16 '18 at 17:59
• To be honest I cannot understand this answer. For example, consider $M=H\setminus \{p,q\}$. Then is not $H$ the universal covering space of $M$? But $M$ admits a foliation by geodesics: the vertical foliation. Am I mistaken? – Ali Taghavi Jun 17 '18 at 10:54
• This $M$ is not complete, as it would have to be if its universal covering space were to be metrically isomorphic to $H$. – Tom Goodwillie Jun 17 '18 at 11:44
https://www.gradesaver.com/textbooks/math/other-math/CLONE-547b8018-14a8-4d02-afd6-6bc35a0864ed/chapter-4-decimals-review-exercises-page-323/53 | ## Basic College Mathematics (10th Edition)
$3.5^{2}+8.7(1.95) = 3.5\times3.5 + 8.7\times1.95 = 12.25 + 16.965 = 29.215$
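A one-line check of the arithmetic (illustrative only):

```python
# Exponent first, then the multiplication, then the addition.
result = 3.5**2 + 8.7 * 1.95
print(round(result, 3))   # 29.215
```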
https://newbedev.com/what-happens-to-the-units-when-squaring-a-variable | # What happens to the units when squaring a variable?
Yes. If you square a variable, its unit of measurement is also squared. In the case of a speed $v$ in $m/s$ ($ms^{-1}$), $v^2$ is expressed in $m^2s^{-2}$. This is true for all physical variables (or constants).
Yes. Consider the equation for kinetic energy (KE):
$${\rm KE} = \frac{1}{2} mv^{2}$$
the dimensions of KE are:
$${\rm mass} \times {\rm velocity}^{2}=\frac{{\rm mass} \times {\rm length}^{2}}{{\rm time}^{2}}$$
or with SI units:
$$1\,{\rm J} = 1\,{\rm kg}\,{\rm m}^{2}\,{\rm s}^{-2}$$
Yes. The unit of $(\text{velocity})^2$ is $[\frac{\text{m}}{\text{s}}]^2$. This is true for all calculations for any physical quantity. On squaring a physical quantity, its dimension gets squared. As a result, the unit is also squared.
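A small symbolic sketch (assuming SymPy is available) that treats the unit symbols as ordinary variables and shows them squaring along with the number:

```python
# Squaring a quantity squares both its numerical value and its units.
import sympy as sp

m, s = sp.symbols('m s', positive=True)
v = 3 * m / s      # a speed of 3 m/s
print(v**2)        # 9*m**2/s**2
```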
https://support.atlas-sys.com/hc/en-us/article_attachments/1500007788622/Script-03a_-_The_Tree.html | # The Tree and Ordered Records¶
## The Tree
The structure that unites resources and archival objects into the archival hierarchy.
Please note that there are some deprecated tree endpoints as of this presentation; I am using the newer ones.
### Explore the Morris Canal Collection tree
http://localhost:8080/resources/1#tree::resource_1
These demonstrations used the tree endpoints, such as [:GET] /repositories/:repo_id/resources/:id/tree/node. Dealing with the tree endpoints is not intuitive. I consider them an intermediate-to-advanced skill in your ArchivesSpace toolbox.
By demonstrating the tree endpoints I want your takeaway to be two things, the first of which is more important:
1. The hierarchy essential to archival description is represented in the data by this concept called a tree.
2. There are tree endpoints, but they are not the only way to see and use tree information. Just don’t think you need to use the tree endpoints because I showed them to you in this presentation. Make a distinction between the concept of the tree versus the specific endpoints that say "tree" in them.
Here are other ways to see and use tree information, including one of my favorite endpoints ever.
## /repositories/:repo_id/resources/:id/ordered_records
Get the list of URIs of this published resource and all published archival objects contained within. Ordered by tree order (i.e. if you fully expanded the record tree and read from top to bottom).
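A rough illustration of calling this endpoint from Python with the requests library is below. The host, port, repository id, resource id, and session token are placeholder assumptions (a session token normally comes from the backend login endpoint and is passed in the X-ArchivesSpace-Session header), and the response handling is only sketched since the exact shape may vary by version.

```python
# Hypothetical call to the ordered_records endpoint.
import requests

HOST = "http://localhost:8089"          # backend API port (assumed)
SESSION = "REPLACE_WITH_SESSION_TOKEN"  # from the backend login endpoint
repo_id, resource_id = 2, 1

resp = requests.get(
    f"{HOST}/repositories/{repo_id}/resources/{resource_id}/ordered_records",
    headers={"X-ArchivesSpace-Session": SESSION},
)
resp.raise_for_status()
for entry in resp.json().get("uris", []):
    print(entry.get("ref"), entry.get("level"))
```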
https://socratic.org/questions/how-do-you-solve-absolute-value-inequalities-with-absolute-value-variables-on-bo#136260 | # How do you solve absolute value inequalities with absolute value variables on both sides abs(2x)<=abs(x-3)?
Apr 7, 2015
All values of x in the interval [-3,2]
To remove the absolute value signs, square both sides, so that both sides are non-negative. Accordingly,
$4x^{2} \le (x-3)^{2}$
$4x^{2} \le x^{2} - 6x + 9$
$3x^{2} + 6x - 9 \le 0$
$x^{2} + 2x - 3 \le 0$
$(x+3)(x-2) \le 0$.
Now, there are two possible cases for this inequality to hold.
Case 1: $x+3$ is non-negative, that is $x \ge -3$, and $x-2$ is non-positive, that is $x \le 2$.
Case 2: $x+3$ is non-positive, that is $x \le -3$, and $x-2$ is non-negative, that is $x \ge 2$. This case is impossible, since $x$ cannot be both $\le -3$ and $\ge 2$.
To understand the solution, mark the above inequalities on the number line. The solution to the inequality is $x \ge -3$ and $x \le 2$; in interval notation, it is $[-3, 2]$.
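As a cross-check, the factored inequality can be handed to SymPy (assuming SymPy is available):

```python
# Solve the factored form of the squared inequality and report the interval.
import sympy as sp

x = sp.symbols('x', real=True)
solution = sp.solve_univariate_inequality((x + 3)*(x - 2) <= 0, x, relational=False)
print(solution)   # Interval(-3, 2)
```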
http://nrich.maths.org/public/leg.php?code=71&cl=4&cldcmpid=1946 | # Search by Topic
#### Resources tagged with Mathematical reasoning & proof similar to Halving the Triangle:
### There are 186 results
##### Stage: 4 Challenge Level:
Four jewellers possessing respectively eight rubies, ten sapphires, a hundred pearls and five diamonds, presented, each from his own stock, one apiece to the rest in token of regard; and they. . . .
### Golden Eggs
##### Stage: 5 Challenge Level:
Find a connection between the shape of a special ellipse and an infinite string of nested square roots.
### Plus or Minus
##### Stage: 5 Challenge Level:
Make and prove a conjecture about the value of the product of the Fibonacci numbers $F_{n+1}F_{n-1}$.
### Pair Squares
##### Stage: 5 Challenge Level:
The sum of any two of the numbers 2, 34 and 47 is a perfect square. Choose three square numbers and find sets of three integers with this property. Generalise to four integers.
### Thousand Words
##### Stage: 5 Challenge Level:
Here the diagram says it all. Can you find the diagram?
### Continued Fractions II
##### Stage: 5
In this article we show that every whole number can be written as a continued fraction of the form k/(1+k/(1+k/...)).
##### Stage: 5 Challenge Level:
Find all positive integers a and b for which the two equations: x^2-ax+b = 0 and x^2-bx+a = 0 both have positive integer solutions.
### Target Six
##### Stage: 5 Challenge Level:
Show that x = 1 is a solution of the equation x^(3/2) - 8x^(-3/2) = 7 and find all other solutions.
### Pent
##### Stage: 4 and 5 Challenge Level:
The diagram shows a regular pentagon with sides of unit length. Find all the angles in the diagram. Prove that the quadrilateral shown in red is a rhombus.
### The Golden Ratio, Fibonacci Numbers and Continued Fractions.
##### Stage: 4
An iterative method for finding the value of the Golden Ratio with explanations of how this involves the ratios of Fibonacci numbers and continued fractions.
### Areas and Ratios
##### Stage: 4 Challenge Level:
What is the area of the quadrilateral APOQ? Working on the building blocks will give you some insights that may help you to work it out.
### Pythagorean Triples II
##### Stage: 3 and 4
This is the second article on right-angled triangles whose edge lengths are whole numbers.
### Janine's Conjecture
##### Stage: 4 Challenge Level:
Janine noticed, while studying some cube numbers, that if you take three consecutive whole numbers and multiply them together and then add the middle number of the three, you get the middle number. . . .
### Proof of Pick's Theorem
##### Stage: 5 Challenge Level:
Follow the hints and prove Pick's Theorem.
### Whole Number Dynamics I
##### Stage: 4 and 5
The first of five articles concentrating on whole number dynamics, ideas of general dynamical systems are introduced and seen in concrete cases.
### Whole Number Dynamics IV
##### Stage: 4 and 5
Start with any whole number N, write N as a multiple of 10 plus a remainder R and produce a new whole number N'. Repeat. What happens?
### Whole Number Dynamics III
##### Stage: 4 and 5
In this third of five articles we prove that whatever whole number we start with for the Happy Number sequence we will always end up with some set of numbers being repeated over and over again.
### Whole Number Dynamics II
##### Stage: 4 and 5
This article extends the discussions in "Whole number dynamics I". Continuing the proof that, for all starting points, the Happy Number sequence goes into a loop or homes in on a fixed point.
### Napoleon's Hat
##### Stage: 5 Challenge Level:
Three equilateral triangles ABC, AYX and XZB are drawn with the point X a moveable point on AB. The points P, Q and R are the centres of the three triangles. What can you say about triangle PQR?
### Pythagorean Triples I
##### Stage: 3 and 4
The first of two articles on Pythagorean Triples which asks how many right angled triangles can you find with the lengths of each side exactly a whole number measurement. Try it!
### Square Mean
##### Stage: 4 Challenge Level:
Is the mean of the squares of two numbers greater than, or less than, the square of their means?
### There's a Limit
##### Stage: 4 and 5 Challenge Level:
Explore the continued fraction: 2+3/(2+3/(2+3/2+...)) What do you notice when successive terms are taken? What happens to the terms if the fraction goes on indefinitely?
### Long Short
##### Stage: 4 Challenge Level:
A quadrilateral inscribed in a unit circle has sides of lengths s1, s2, s3 and s4 where s1 ≤ s2 ≤ s3 ≤ s4. Find a quadrilateral of this type for which s1= sqrt2 and show s1 cannot. . . .
### Whole Number Dynamics V
##### Stage: 4 and 5
The final of five articles which containe the proof of why the sequence introduced in article IV either reaches the fixed point 0 or the sequence enters a repeating cycle of four values.
### Polite Numbers
##### Stage: 5 Challenge Level:
A polite number can be written as the sum of two or more consecutive positive integers. Find the consecutive sums giving the polite numbers 544 and 424. What characterizes impolite numbers?
### Proof: A Brief Historical Survey
##### Stage: 4 and 5
If you think that mathematical proof is really clearcut and universal then you should read this article.
### Binomial
##### Stage: 5 Challenge Level:
By considering powers of (1+x), show that the sum of the squares of the binomial coefficients from 0 to n is 2nCn
### Big, Bigger, Biggest
##### Stage: 5 Challenge Level:
Which is the biggest and which the smallest of $2000^{2002}, 2001^{2001} \text{and } 2002^{2000}$?
### Recent Developments on S.P. Numbers
##### Stage: 5
Take a number, add its digits then multiply the digits together, then multiply these two results. If you get the same number it is an SP number.
### Contrary Logic
##### Stage: 5 Challenge Level:
Can you invert the logic to prove these statements?
### Direct Logic
##### Stage: 5 Challenge Level:
Can you work through these direct proofs, using our interactive proof sorters?
### Mind Your Ps and Qs
##### Stage: 5 Short Challenge Level:
Sort these mathematical propositions into a series of 8 correct statements.
### Iffy Logic
##### Stage: 4 and 5 Challenge Level:
Can you rearrange the cards to make a series of correct mathematical statements?
### Notty Logic
##### Stage: 5 Challenge Level:
Have a go at being mathematically negative, by negating these statements.
### The Clue Is in the Question
##### Stage: 5 Challenge Level:
This problem is a sequence of linked mini-challenges leading up to the proof of a difficult final challenge, encouraging you to think mathematically. Starting with one of the mini-challenges, how. . . .
### Dodgy Proofs
##### Stage: 5 Challenge Level:
These proofs are wrong. Can you see why?
### Advent Calendar 2011 - Secondary
##### Stage: 3, 4 and 5 Challenge Level:
Advent Calendar 2011 - a mathematical activity for each day during the run-up to Christmas.
### A Long Time at the Till
##### Stage: 4 and 5 Challenge Level:
Try to solve this very difficult problem and then study our two suggested solutions. How would you use your knowledge to try to solve variants on the original problem?
### The Great Weights Puzzle
##### Stage: 4 Challenge Level:
You have twelve weights, one of which is different from the rest. Using just 3 weighings, can you identify which weight is the odd one out, and whether it is heavier or lighter than the rest?
### Symmetric Tangles
##### Stage: 4
The tangles created by the twists and turns of the Conway rope trick are surprisingly symmetrical. Here's why!
### Sums of Squares and Sums of Cubes
##### Stage: 5
An account of methods for finding whether or not a number can be written as the sum of two or more squares or as the sum of two or more cubes.
### Leonardo's Problem
##### Stage: 4 and 5 Challenge Level:
A, B & C own a half, a third and a sixth of a coin collection. Each grab some coins, return some, then share equally what they had put back, finishing with their own share. How rich are they?
### Magic Squares II
##### Stage: 4 and 5
An article which gives an account of some properties of magic squares.
### Picturing Pythagorean Triples
##### Stage: 4 and 5
This article discusses how every Pythagorean triple (a, b, c) can be illustrated by a square and an L shape within another square. You are invited to find some triples for yourself.
### Transitivity
##### Stage: 5
Suppose A always beats B and B always beats C, then would you expect A to beat C? Not always! What seems obvious is not always true. Results always need to be proved in mathematics.
### Impossible Sandwiches
##### Stage: 3, 4 and 5
In this 7-sandwich: 7 1 3 1 6 4 3 5 7 2 4 6 2 5 there are 7 numbers between the 7s, 6 between the 6s etc. The article shows which values of n can make n-sandwiches and which cannot.
### Sprouts Explained
##### Stage: 2, 3, 4 and 5
This article invites you to get familiar with a strategic game called "sprouts". The game is simple enough for younger children to understand, and has also provided experienced mathematicians with. . . .
### Water Pistols
##### Stage: 5 Challenge Level:
With n people anywhere in a field each shoots a water pistol at the nearest person. In general who gets wet? What difference does it make if n is odd or even?
### Unit Interval
##### Stage: 4 and 5 Challenge Level:
Take any two numbers between 0 and 1. Prove that the sum of the numbers is always less than one plus their product?
https://paperswithcode.com/paper/fair-near-neighbor-search-independent-range | # Fair Near Neighbor Search: Independent Range Sampling in High Dimensions
5 Jun 2019
Similarity search is a fundamental algorithmic primitive, widely used in many computer science disciplines. There are several variants of the similarity search problem, and one of the most relevant is the $r$-near neighbor ($r$-NN) problem: given a radius $r>0$ and a set of points $S$, construct a data structure that, for any given query point $q$, returns a point $p$ within distance at most $r$ from $q$. In this paper, we study the $r$-NN problem in the light of fairness. We consider fairness in the sense of equal opportunity: all points that are within distance $r$ from the query should have the same probability to be returned. In the low-dimensional case, this problem was first studied by Hu, Qiao, and Tao (PODS 2014). Locality sensitive hashing (LSH), the theoretically strongest approach to similarity search in high dimensions, does not provide such a fairness guarantee. To address this, we propose efficient data structures for $r$-NN where all points in $S$ that are near $q$ have the same probability to be selected and returned by the query. Specifically, we first propose a black-box approach that, given any LSH scheme, constructs a data structure for uniformly sampling points in the neighborhood of a query. Then, we develop a data structure for fair similarity search under inner product that requires nearly-linear space and exploits locality sensitive filters. The paper concludes with an experimental evaluation that highlights (un)fairness in a recommendation setting on real-world datasets and discusses the inherent unfairness introduced by solving other variants of the problem.
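As a concrete reference point for the fairness requirement (equal probability for every point within distance $r$ of the query), here is a brute-force sketch in Python. It is not the paper's data structure; it simply scans the whole point set to pin down the sampling behavior that the proposed LSH-based structures are designed to reproduce efficiently.

```python
# Brute-force "fair" r-near-neighbor query: collect every point within
# distance r of q and return one uniformly at random (toy Euclidean example).
import math
import random

def fair_r_near_neighbor(points, q, r, rng=random.Random(0)):
    neighbors = [p for p in points if math.dist(p, q) <= r]
    return rng.choice(neighbors) if neighbors else None

points = [(0.0, 0.0), (0.5, 0.1), (0.9, 0.0), (3.0, 3.0)]
print(fair_r_near_neighbor(points, q=(0.0, 0.0), r=1.0))
```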
https://mathematica.stackexchange.com/questions/213575/understanding-code-generated-by-microcontrollerembedcode/213578 | # Understanding code generated by MicrocontrollerEmbedCode
I've been using MicrocontrollerKit for a while now to generate code for little projects, inverse pendulums, filters, etc. Generally it works well.
I am currently trying to design an LQI (linear quadratic integrator) controller, or at least trying to understand them and test one on an already working system that has a steady-state error.
The design and simulation seems to work well, so now it comes to generating the code and trying it out.
lineargains = {26.8592, 2.85176, -0.0189932};
controlforce = -lineargains.{\[Theta][t] - \[Pi], \[Theta]'[t], \[Phi]'[t]};
integrator = TransferFunctionModel[ki/z, z]
ssm = NonlinearStateSpaceModel[{{}, {controlforce}}, {}, {\[Theta][t], \[Theta]'[t], \[Phi]'[t]}]
lqid = SystemsConnectionsModel[{ToDiscreteTimeModel[integrator /. ki -> 56, 0.001], ToDiscreteTimeModel[ssm, 0.001]}, {{1, 1} -> {2, 1}}, {{1, 1}, {2,2}, {2, 3}}, {2, 1}] // SystemsModelMerge
\[ScriptCapitalM]1 = MicrocontrollerEmbedCode[lqid, <|"Target" -> "ArduinoUno", "Inputs" -> {"A0" -> "Analog", "A1" -> "Analog", "A2" -> "Analog"},"Outputs" -> {"Serial"}|>, <|"ConnectionPort" -> None|>, <|"Language" -> "Wiring"|>]
\[ScriptCapitalM]1["SourceCode"]
At this point, I believe anyways, I have properly set up the system I want.
What I don't understand is the generated Wiring code and how it applies to the discrete LQI controller. Here is a snippet of the relevant parts, and a pastebin for the full code.
double u[3] = {0, 0, 0};
double y[1] = {84.38063590995105};
void update_nssm(double *u, double *y)
{
static double x[1] = {0};
double x1[1];
y[0] = 84.38063590995105 - 0.22561720114594416*x[0] - 0.11280860057297208*u[0] - 2.8517595773639597*u[1] + 0.018993162921974713*u[2];
x1[0] = x[0] + u[0];
x[0] = x1[0];
}
void loop()
{
u[0] = uu0;
u[1] = uu1;
u[2] = uu2;
update_nssm(u, y);
Serial.write(19);
Serial.print(y[0], 2);
Serial.write(17);
delay(1);
}
There is a mysterious ~84 within the generated code for y[1] that is never called... and within update_nssm(), the state-space controller itself, an 84 that I can't imagine would ever result in a value that makes sense (the controller should create a signal between -2 and 2 to control the system).
This also never appears when generating the code for the simpler LQR controller.
As seen here:
\[ScriptCapitalM] = MicrocontrollerEmbedCode[ToDiscreteTimeModel[ssm, 0.001], <|"Target" -> "ArduinoUno", "Inputs" -> {"A0" -> "Analog", "A1" -> "Analog", "A2" -> "Analog"},"Outputs" -> {"Serial"}|>, <|"ConnectionPort" -> None|>, <|"Language" -> "Wiring"|>]
\[ScriptCapitalM]["SourceCode"]
And its code:
void update_nssm(double *u, double *y)
{
y[0] = -26.8591906126124*(-M_PI + u[0]) - 2.8517595773639597*u[1] + 0.018993162921974713*u[2];
}
When comparing the two, one could assume that the ~84 should be some kind of subtraction involving $$\pi$$... however, thus far I have never seen Mathematica output $$\pi$$ in anything but radians, so I can't work out whether the generated code is correct, or how this code is generated from the given lqid StateSpaceModel.
In the simpler LQR case, I can directly read and understand what was generated (and have used successfully), however in the lqid this doesn't seem to add up.
How is MicrocontrollerEmbedCode generating this code?
I fully admit I may be the one making the mistakes here...in which case, what would be the proper way to generate such a controller?
The value 84.38063590995105 (that is, 26.8592*π) is a residue that pops up if the equilibrium value is not set correctly.
There are a few things that need to be changed to straighten things out.
You need to specify ssm with the correct equilibrium value of $$\pi$$ for $$\theta$$.
ssm = NonlinearStateSpaceModel[{{}, {controlforce}}, {}, {{θ[t], N@π}, θ'[t], ϕ'[t]}]
That way when $$\theta$$ is $$\pi$$, controlforce is 0.
(The N@π is needed due to a bug that fails to generate the correct code for π.)
Next, if you use SystemsModelMerge to merge the connections, the equilibrium value gets blown away. SystemsModelMerge can preserve the equilibrium values of states and of the net inputs and outputs, but the equilibrium value of any intermediate connection is lost in the merge.
To prevent that you need to keep it in the unreduced state.
lqid = SystemsConnectionsModel[{ToDiscreteTimeModel[integrator /. ki -> 56, 0.001],
ToDiscreteTimeModel[ssm, 0.001]}, {{1, 1} -> {2, 1}}, {{1, 1}, {2,2}, {2, 3}}, {2, 1}];
With these changes the relevant generated code will be as follows, and when u[0] (which is u_1[0]) is π, y[0] will also be 0.
• Thank you for the quick answer and solution. The solution works and I will try the generated code on the real thing when I get the chance! – morbo Jan 28 '20 at 10:49
http://johnhawks.net/research/cochran-harpending-hawks-overdominance-2011/ | ### john hawks weblog
#### Introduction
The last decade has seen an explosion of interest in ongoing and recent natural selection in human populations. The HapMap project and other surveys of SNPs in human populations revealed many regions in the genome where one SNP allele is surrounded by large regions of linkage disequilibrium while the other is on a more heterogeneous background (accel; Pickrell:2009; Voight:2006; Wang:2006). The inference has been that the former were undergoing or had undergone a selective sweep, and had experienced an increase in frequency too great to be explained by genetic drift. There have been attempts to construct bottlenecked population histories that would explain these patterns, but the elevated concentration of this pattern in genic regions apparently makes such explanations unlikely (accel; Pickrell:2009).
A puzzling finding of this line of research is that the sweeping SNPs are disproportionately at intermediate frequencies rather than near fixation. Part of the explanation is that the technology for discovering these regions usually relies on contrasting the linkage disequilibrium around the putative sweeping SNP with that around the other allele, so both kinds of chromosomes must be present. There is also a "shortage" of fixed genomic regions that show high disequilibrium as would be produced by complete sweeps.
Why are there so many incomplete sweeps? Some have suggested that most phenotypic adaptation occurs in the form of "soft sweeps", in which evolution of a phenotype occurs because of slight changes in frequency of many standing genetic variants (Pritchard:soft:2010). But soft sweeps of standing genetic variants do not explain the pattern of long-range LD haplotypes of intermediate frequency. These appear to be young haplotypes that have risen to intermediate frequencies quite rapidly, not old haplotypes that have changed in frequency marginally. One good possibility is what we have called the "stooge effect" - after the Three Stooges all trying to get through a doorway at the same time. Selection following an environmental change will favor any allele (either standing variation or new mutation) that provides increased fitness in the new environment, perhaps leading to change at many loci that decreases the fitness advantage of any one advantageous mutation. For example, the fitness advantage of sickling hemoglobin must have been much greater long ago when there were no high frequency genetic responses to malaria. The competition among adaptive alleles may slow down and perhaps stop the sweep of any one new mutant.
We know that some sweeping alleles, for example some of the hemoglobinopathies, are loss-of-function mutations, broken versions of the ancestral allele. They can have strong negative effects in homozygotes, who have no working copy of the gene. Such overdominant mutations are easy to recognize and understand.
Our analysis (based on Fisher’s geometric model) suggests that overdominance is common in newly originated beneficial alleles of large effect, even when the mutation changes gene function, rather than reducing or eliminating it. This is a natural consequence of a high degree of pleiotropy. Such overdominant alleles will never fix. While homozygote fitness will be lower than heterozygote fitness in these cases, usually it will not be extremely low - will not cause death or obvious disease, and so will not be easily observed.
#### Fisher's geometric model
Fisher considered adaptation as an aspect of an organism’s phenotype.
An organism is regarded as adapted to a particular situation, or to the totality of situations which constitute its environment, only in so far as we can imagine an assemblage of slightly different situations, or environments, to which the animal would on the whole be less well adapted; and equally only in so far as we can imagine an assemblage of slightly different organic forms, which would be less well adapted to that environment [Fisher:1930].
Fisher defined the organism’s fitness as its capacity for intrinsic population growth, the ‘Malthusian parameter’. He imagined an optimal phenotype from which no change in the phenotype, however slight, could increase the organism’s fitness—in some particular environment, obviously. A well-adapted organism would be one whose phenotype was very close to that optimum. In his discussion, he did not distinguish between the case where the optimal form is the phenotype of an individual, or the average phenotype of a population. Fisher used this model as part of his argument that most evolutionary changes are small, suggesting that most individuals would be very near the average phenotype of the population most of the time.
Fitness can thus be considered as a function of position in a multidimensional phenotype space. Fisher assumed that the distribution of fitness in that phenotype space has a single optimum (point O). Coordinates are normalized so that fitness is a function of the distance from O and declines monotonically as that distance increases.
Consider a non-optimal phenotype, which we model as a point A in this phenotype space, at a distance d/2 from O. Fisher considered this distance as the radius of a multidimensional sphere centered on O. All the points on that sphere have the same suboptimal fitness; you might say that they are all equally poorly adapted.
Now imagine a mutation that shifts the phenotype a distance r in some random direction, moving it to a new position B. If B is inside the hypersphere, it will be closer to O than is the initial point A, and so the mutation will be favored by selection. If B is on the hypersphere, the mutation will have equal fitness to the wild type; if B is outside the hypersphere, the mutation is deleterious relative to the wild type. Hence, the probability that the mutation is adaptive is the same as the probability that B is inside the hypersphere.
We can derive this probability by considering the boundary condition in which the new position B lies exactly on the hypersphere: that is, when the distance |AO| = |BO|. The angle θ’ between AO and AB in that case determines the maximum angle of a change that improves fitness: θ’ = arccos(r/d).
[Figure 1 here]
If the angle between AB and AO is less than θ', B is closer to the optimum than A and fitness increases. If the angle is greater, B is farther from the optimum than A and fitness decreases.
In order to determine the probability that a random change of size r will increase fitness, we need to find what fraction of the surface of a hypersphere of dimension n lies within the cap with half-angle θ'. Hartl and Taubes [Hartl:Taubes:1998] gave an exact expression for this probability:
$$\frac{\int_0^{\theta'} \sin^{n-2}\theta \, d\theta}{\int_0^{\pi} \sin^{n-2}\theta \, d\theta}$$
Fisher argued that n, the effective dimensionality of the phenotype space, was likely to be large in real cases because many different traits influence biological success. He proceeded to develop a large-n approximation for this probability integral that gives considerable insight. Back in 1930, it must also have been considerably easier to calculate than the exact integral.
$$\frac{1}{\sqrt{2 \pi}} \int_{x}^{\infty} e^{-t^2/2}\,dt, \qquad x = \frac{r\sqrt{n}}{d}$$
One can see from this expression that the probability of a favorable change is close to 1/2 when r is small, while decreasing rapidly as the size of the change becomes larger than d/√n - which one might call the “standard magnitude” of change. Fisher concluded that mutations with small favorable effects are the main players in adaptive evolution.
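These two expressions are easy to compare numerically. Below is a minimal sketch (not from the original article) that evaluates the exact Hartl-Taubes integral and Fisher's approximation, assuming the convention used in this text (wild type at distance d/2 from the optimum, so θ' = arccos(r/d)); the function names and the example values of r, d and n are illustrative choices, not the authors'.

```python
import numpy as np
from scipy import integrate, stats

def prob_beneficial_exact(r, d, n):
    """Exact fraction of an n-dimensional hypersphere's surface lying inside the
    cap of half-angle theta' = arccos(r/d) -- the Hartl-Taubes integral above."""
    theta_crit = np.arccos(r / d)
    numerator, _ = integrate.quad(lambda t: np.sin(t) ** (n - 2), 0.0, theta_crit)
    denominator, _ = integrate.quad(lambda t: np.sin(t) ** (n - 2), 0.0, np.pi)
    return numerator / denominator

def prob_beneficial_fisher(r, d, n):
    """Fisher's large-n normal approximation, with x = r * sqrt(n) / d."""
    return stats.norm.sf(r * np.sqrt(n) / d)

# Illustrative values only: a change of size r = 0.1 (in units of d),
# in phenotype spaces of increasing dimension.
for n in (16, 64, 256):
    print(n, prob_beneficial_exact(0.1, 1.0, n), prob_beneficial_fisher(0.1, 1.0, n))
```

As expected, the approximation tracks the exact integral more closely as n grows.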
But there are other factors that Fisher did not consider in his analysis. Kimura [Kimura:1957] considered an additional aspect of selective dynamics that influences the effect size of adaptive phenotypic changes: the fact that new mutations that confer small increases in fitness are likely to be lost by chance, almost as likely as a neutral mutation. The probability of success of a beneficial mutation increases linearly with its fitness benefit [Haldane:5:1927]. Thus, although larger phenotypic changes are less likely to be adaptive, changes of large effect that are adaptive are more likely to persist in the population. Kimura showed that a population is most likely to undergo adaptive changes that are intermediate in effect: large enough to survive genetic drift, but small enough that they remain relatively likely to move the phenotype closer to the optimum.
Orr [Orr:2003] considered not only the first adaptive change, but the entire series of adaptive changes as a population approaches a phenotypic optimum. He showed that Kimura's relation held for the first adaptive change, bringing the population a considerable distance toward the optimum. Subsequent changes are likely to be smaller. As the phenotype nears the optimum, large phenotypic changes are less and less likely to approach it more closely, so the entire sequence is dominated by small changes. Considering the process as a whole, Orr showed that the effect sizes of adaptive changes will be exponentially distributed. The exponential distribution is also the expectation drawn from extreme value theory of the effect sizes of beneficial mutations.
#### Diploid genotypes and Fisher's model
Fisher's geometric analogy holds up well for a haploid, because the phenotypic change induced by a mutation may be thought of as a vector, just as in Fisher's model. The difference between individuals and populations need not be strictly defined in the model, because the effect of a mutation on the fitness of an individual will be the same as on the fitness of a population. Given the amount of effort put into extending Fisher's phenotype model to mutational effects, even in diploid organisms like Drosophila, it seems remarkable that nobody has observed that the analogy does not hold for diploid genotypes. Diploids are difficult to treat with this geometric model, because each genotype may have a distinct phenotypic effect. Unlike the haploid case, we cannot assume that the effect of a mutation in an individual is the same as the effect of a substitution in the population. For an autosomal mutation to proceed to fixation in the population (thereby becoming a substitution), a mutant homozygote must have fitness equal to or greater than that of a heterozygote. Even if a heterozygote has a phenotype that is intermediate between original-allele and mutant homozygotes, its fitness may not be.
For simplicity, we assume that phenotypic change is a linear function of gene dosage. In that case, two copies of the mutant allele result in exactly twice the change in phenotype caused by one copy. We designate the original phenotype as point A and the phenotype resulting from one copy of the mutation as point B, as before. We will designate the phenotype in mutant homozygotes as point C. This means that the distance AC is exactly double AB (that is, 2r in Fisher's model). Given this constraint, we can find the angle φ at which the heterozygote and homozygotes have equal fitness, that is, where |BO| = |CO|. Since the displacement is larger, φ, the critical angle for homozygotes, is smaller than θ', the critical angle for heterozygotes.
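Spelling out this geometry (a short derivation consistent with the construction above, not given explicitly in the text): with |AO| = d/2 and B and C at distances r and 2r from A along a direction making angle φ with AO, the law of cosines gives

$$|BO|^2 = \frac{d^2}{4} + r^2 - dr\cos\varphi, \qquad |CO|^2 = \frac{d^2}{4} + 4r^2 - 2dr\cos\varphi$$

so |BO| = |CO| exactly when dr cos φ = 3r², that is, at φ = arccos(3r/d). Since 3r/d > r/d, this φ is indeed smaller than the heterozygote critical angle θ' = arccos(r/d).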
[Figure 2 here]
We can use this expression to calculate the probability that individuals who are homozygotes for a beneficial mutation of effect size r are fitter than heterozygotes. If r is small, 50% of mutations increase heterozygote fitness, almost all of which confer even higher fitness in homozygotes. If r is large, most mutations will not increase heterozygote fitness, and even those that do are unlikely to increase fitness further in homozygotes.
Table 1. Probability that fitness increases in heterozygotes

| N   | \|r\| = 0.01 | \|r\| = 0.1 | \|r\| = 0.25 | \|r\| = 0.5 |
|-----|--------------|-------------|--------------|-------------|
| 16  | 0.4924 | 0.4244 | 0.3163 | 0.1666 |
| 32  | 0.4890 | 0.3911 | 0.2441 | 0.0803 |
| 64  | 0.4842 | 0.3462 | 0.1606 | 0.0223 |
| 128 | 0.4776 | 0.2868 | 0.7918 | 0.0021 |
Table 2. Probability that homozygotes are fitter than heterozygotes

| N   | \|r\| = 0.01 | \|r\| = 0.1 | \|r\| = 0.25 | \|r\| = 0.5 |
|-----|--------------|-------------|--------------|-------------|
| 16  | 0.9846 | 0.8277 | 0.5266 | 0.1230 |
| 32  | 0.9775 | 0.7411 | 0.3289 | 0.0190 |
| 64  | 0.9675 | 0.6181 | 0.1388 | 0.0005 |
| 128 | 0.9532 | 0.4524 | 0.0270 | 0.0000004 |
In other words, there are three tests that an adaptive mutation must pass in order to reach fixation. First, it must increase fitness in heterozygotes, which, as Fisher showed [Fisher:1930], is unlikely if its phenotypic effect is large. Second, it must avoid stochastic loss when rare. Haldane showed that a favorable mutation's probability of avoiding stochastic loss is about 2s, where s is the selective advantage. This means that a new allele will probably be lost if its effect size is small, because a small phenotypic effect leads to a small selective advantage. Third, it must confer higher fitness in homozygotes than in heterozygotes, which is unlikely if it has a large phenotypic effect. The first two tests are considered in most treatments of the genetics of natural selection, but the third has seldom been discussed.
Our assumption of linearity is optimistic. If the phenotypic change induced in homozygotes is nonlinear and in a significantly different direction in phenotype space than the change in heterozygotes, fitness will almost certainly be lower than in heterozygotes, since favorable changes are possible only in a narrow range of angles in a high-dimensional space.
True loss-of-function mutations, in which the gene's function is eliminated rather than merely reduced, make up an important class of alleles with nonlinear effects. Such changes are usually nonsense mutations, frameshifts, radical amino acid changes, and the like. Many such alleles have moderate phenotypic effects in single dose, effects that can increase fitness, while causing drastic fitness decreases in homozygotes, which have no working copy of the gene. Many are lethal. Such alleles cause some of the most common human genetic diseases, such as cystic fibrosis (the ΔF508 mutation) and the β0 thalassemias.
Generally speaking, this kind of direction-changing nonlinearity should become more and more likely as effect size increases. If it is significant, the phenotypic change in homozygotes will be essentially unrelated to the beneficial change in heterozygotes; it will not even be in the same general direction in phenotype space, and so such mutations are almost always deleterious in homozygotes.
Lower fitness in homozygotes than in heterozygotes doesn't necessarily imply that homozygote fitness is extremely low or obviously depressed. For example, consider a case in which one copy of a new allele increased fitness in past environments by 5%, while two copies increased fitness by 2%. The new allele would never go to fixation: it would eventually approach an equilibrium frequency of about 62%. But, quite possibly, none of these genotypes (0, 1, or 2 copies) would be obviously ill or seek medical attention. This is particularly the case in modern environments, which are less harsh in many ways than those our ancestors experienced. For example, alleles that conferred protection against famine or smallpox might have reached high frequencies in modern populations, but their advantages would be unnoticeable and effectively unmeasurable today. We would have next to no chance of determining small differences in the fitness effects of that allele in heterozygotes and homozygotes.
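As a quick check (a standard one-locus overdominance calculation, not shown in the original): with genotype fitnesses 1, 1.05 and 1.02 for carriers of zero, one and two copies, the allele stops changing frequency when the two alleles have equal marginal fitness,

$$p(1.02) + (1-p)(1.05) = p(1.05) + (1-p)(1.00) \quad\Rightarrow\quad \hat{p} = \frac{0.05}{0.08} \approx 0.62$$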
#### The X chromosome
The fate of an adaptive variant that appears on the X chromosome is quite different in eutherian mammals (humans, for example). Males have only a single copy of the X, so their gene dosage does not vary. Females have two copies, but only one copy of the X chromosome is active in each cell, while the other copy is condensed and inactive [Carrel:Willard:2005]. Upwards of 85% of all genes on the condensed chromosome are inactive, except in the pseudoautosomal regions. Since one of the X chromosomes is randomly inactivated in each cell, female heterozygotes will have the wild-type allele in some cells and the adaptive variant in others, while female homozygotes will have the same effective gene dosage as males with their one copy.
So, if a variant on the X chromosome increases fitness in males, it is likely to have the same effect in females with two copies. The effective dose of the new allele will be half as large in heterozygous females, but if the phenotypic effects are linear with gene dosage, heterozygote fitness should still be higher than wild-type: in Fisher's model, a fitness increase for a given displacement in a particular direction implies a fitness increase for any smaller displacement in that same direction.
Under these assumptions, most X-chromosome gene variants that increase fitness in males should go to fixation, as long as they escape stochastic loss. Interestingly, the tight regulation of gene dosage on the X chromosome implies that changes in dosage do indeed influence fitness — why else would X-inactivation have evolved?
Such X-linked beneficial recessive alleles sweep more slowly than an autosomal allele with the same selective advantage, since only one third (those in males) manifest the advantage when the allele is rare. However, the elimination of the requirement that the new variant confer higher fitness in homozygotes than heterozygotes is more important, and should result in a disproportionate number of completed sweeps on the X chromosome. As it happens, that is exactly what we see in humans.
SNPs with high allele frequency differences are relatively rare in humans, but they are particularly common on the X chromosome [Lambert:punctuated:2010; Casto:2010]. Out of the 3.2 million SNPs in the HapMap data set, only 479 have FST greater than or equal to 0.90. Of those 479 high-FST SNPs, 379 are on the X chromosome. The majority of those highly differentiated SNPs cluster into five distinct regions. Those five regions apparently correspond to six selective sweeps. Two of these sweeps happened near the centromere (in different populations). Of the six sweeps, five are in populations outside sub-Saharan Africa, with the swept alleles reaching fixation in East Asia, present at lower frequencies in Europe and West Asia, and rare in sub-Saharan Africa. One of the two sweeps near the centromere has the opposite pattern: essentially fixed in sub-Saharan Africa and rare outside. That second pattern is unusual: there are few high-FST SNPs for which the derived alleles are near fixation in sub-Saharan Africa. The best-known example is the mutation responsible for the Duffy Fy*O blood type.
#### Conclusion
Our conclusion is that most mutations with strong effects that increase fitness in heterozygotes confer lower fitness in homozygotes — that is, are overdominant.
This effect may not matter much in a well-adapted, stable species. Over many generations, overdominant alleles that partially solve some adaptive problem should eventually be replaced by alleles that confer high fitness in both heterozygotes and homozygotes and so go to fixation. This could occur through the evolution of modifier loci and by rare favorable mutations that are essentially additive. In steady-state, there should be relatively few common overdominant alleles, except for cases of frequency-dependent selection.
It may, however, play an important role in species that have experienced strong selection, ones whose environment has changed drastically. As it happens, this is the case for a number of species of interest: we would put humans and most domesticated species in this category. | 2017-10-22 15:26:43 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 2, "x-ck12": 0, "texerror": 0, "math_score": 0.6397868990898132, "perplexity": 1434.1105601149193}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-43/segments/1508187825308.77/warc/CC-MAIN-20171022150946-20171022170946-00673.warc.gz"} |
https://8chan.se/t/res/8151.html | /t/ - Technology
Discussion of Technology
Hydrus Network General #4 Anonymous Board volunteer 04/16/2022 (Sat) 17:14:57 No. 8151
This is a thread for releases, bug reports, and other discussion for the hydrus network software. The hydrus network client is an application written for Anon and other internet-fluent media nerds who have large image/swf/webm collections. It browses with tags instead of folders, a little like a booru on your desktop. Advanced users can share tags and files anonymously through custom servers that any user may run. Everything is free, privacy is the first concern, and the source code is included with the release. Releases are available for Windows, Linux, and macOS. I am the hydrus developer. I am continually working on the software and try to put out a new release every Wednesday by 8pm EST. Past hydrus imageboard discussion, and these generals as they hit the post limit, are being archived at >>>/hydrus/ . If you would like to learn more, please check out the extensive help and getting started guide here: https://hydrusnetwork.github.io/hydrus/
Is there a way to set the media viewer to use integer scaling (I think that's what it's called) rather than fitting the view to the window, so that hydrus chooses the highest zoom where all pixels are the same size and the whole image is still visible. My understanding is that nearest neighbor is a lossless scaling algorithm when the rendered view size is a multiple of the original, otherwise you get a bunch of jagged edges from the pixels being duplicated unevenly. It looks like Hydrus only has options to use "normal zooms" (what you set manually in the options? I'm confused by this), always choosing 100% zoom, or scaling to canvas size regardless of if that's with a weird zoom level (like 181.79%) that causes nearest-neighbor to create jagged edges.
When I deleted a file in Hydrus, how sure can I be that it is COMPLETELY gone? Are there any remnants that are left behind?
>>8156 yeah all the metadata for the file (tags and urls and such) are still there. There isn't currently a way to remove that stuff.
>>8154 Yeah, under options->media, and the filetype handling list, on the edit dialog is 'only permit half and double zooms'. That locks you to 50%, 100%, 200%, 400% etc... It works ok for static gifs and some pngs, if you have a ton of pixel art, but I have never really liked it myself. Set the 'scale to the canvas size' options to 'scale to the largest regular zoom that fits', I think that'll work with the 50/100/200/400 too. Let me know if it doesn't.

>>8156 >>8157 Once the file is out of your trash, it will be sent to your OS's recycle bin, unless you have set in options->files and trash to permanently delete instead. Its thumbnail is permanently deleted. In terms of the file itself, it is completely gone from hydrus and you are then left with the normal issues of deleting files permanently from a disk. If you really need to remove traces of it from the drive, you'll need a special program that repeatedly shreds your empty disk sectors.

In terms of metadata, hydrus keeps all other metadata it knows about the file. Information like the file's hash (basically its name), its resolution, filesize, a perceptual hash that summarises how it looked, and tags it has, ratings you gave it, URLs it knows the file is at, and when it was deleted. It may have had some of this information before it was imported (e.g. its hash and tags on the PTR) if you sync with the public tag repository. Someone who accessed your database and knew how hydrus worked would probably be able to reconstruct that you once imported this file. There are no simple ways to tell the client 'forget everything you ever knew about this file' yet. Hydrus keeps metadata because that is useful in many situations. Deletion records, for instance, help the downloader know not to re-import something you previously deleted. That said, I am working on a system that will be able to purge file knowledge on command, and other related database-wide cleanup of now-useless definition records, but it will take time to complete. There are hundreds of tables in the database that may refer to certain definitions.

If you are concerned about your privacy (and everyone should be!), I strongly recommend putting your hydrus database inside an encrypted container, like with veracrypt or ciphershed or similar software. If you are new to the topic, do some searching around on how it works and try some experiments.

If you are very desperate to hide that you once had a file, I can show you a basic hack to obscure it using SQLite. Basically, if you know the file's hash, you go into your install_dir/db folder, run the sqlite3 executable, and then do this: (MAKE A BACKUP FIRST IN CASE THIS GOES WRONG)

.open client.master.db
update hashes set hash = x'0123456789abcdef0123456789abcdef0123456789abcdef0123456789abcdef' where hash = x'06b7e099fde058f96e5575f2ecbcf53feeb036aeb0f86a99a6daf8f4ba70b799';
.exit

That first hash, "0123456789abcdef0123456789abcdef0123456789abcdef0123456789abcdef", should be 64 characters of random hex. The second should be the hash of the file you want to obscure. This isn't perfect, but it is a good method if you are desperate.
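For anyone who would rather script that than type it into the sqlite3 shell, here is a rough Python equivalent of the session above (not from the dev's post). It assumes only the schema those commands imply (a hashes table with a BLOB hash column in client.master.db), and the target hash below is a placeholder. The same warning applies: close the client and make a backup first.

```python
import os
import sqlite3

# Placeholder: hex SHA-256 of the file you want to obscure.
target_hash_hex = "06b7e099fde058f96e5575f2ecbcf53feeb036aeb0f86a99a6daf8f4ba70b799"

# 32 random bytes = 64 hex characters, to overwrite the old hash with junk.
random_hash = os.urandom(32)

# Run this from install_dir/db with the client closed, against a backed-up database.
db = sqlite3.connect("client.master.db")
with db:  # commits on success, rolls back on error
    db.execute(
        "UPDATE hashes SET hash = ? WHERE hash = ?;",
        (random_hash, bytes.fromhex(target_hash_hex)),
    )
db.close()
```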
I just updated to the latest version, and there seems to be a serious (well, seriously annoying, but not dangerous) bug where frames/panels register mouse clicks as being higher up when you scroll down, as if you didn't scroll down. It's happening with the main tag search box drop down menu, and also in the tag edit window where tags are displayed and you can click on them to select them. I'm on Linux.
>>8159 Sorry, yeah, I messed something up last week doing some other code cleaning. I will fix it for next week and add a test to make sure it doesn't happen again. Sorry for the trouble. I guess I don't scroll and click much when I dev or use the client IRL.
>>8159 >on Linux I confirm that.
>>8159 I've got this problem on windows as well. Also, am I the only one experiencing extremely slow PTR uploads? Now instead of uploading 100 every 0.1 seconds, it is more like 1-4 every 0.1s
Apologies if the answer is already somewhere on the /hydrus/ board somewhere, I hadn't been able to quite find it, yet. I'm wondering how to make hydrus be able to download pictures from 8chan (using hydrus companion) when direct access results in a 404? I was assuming some fuckery with cookies but sending the cookies from 8chan trough hydrus companion to hydrus client seemingly made no difference
>>8166 afaik there's no way to import directly from urls of "protected" boards, but I'd love to be proven wrong.
>>>/hydrus/17585 >Is there a way to automatically add a file's filename to the "notes" of a Hyrdrus file when importing? Some of the files have date info or window information if they are screenshots and I'd like to store that information somehow. If not, is there some other way to store the filenames so that they can be easily accessible after importing? >>>/hydrus/17586 >>notes >I think notes are for when there's a region of an image that gets a label (think gelbooru translations), it's not the best thing for your usecase. The best way would be to have them under a "filename" namespace. I'm not either of these people, but a filename namespace is useless if the filename cares about case. Hydrus will just turn it all into lowercase. In those scenarios I've had to manually add the filename to the notes for each one... painful. Also, somewhat related: hydrus strips the key from mega.nz urls, so I have to manually add those to notes as well. More pain. >>8166 Have you tried giving hydrus your user-agent http header as well as the cookies?
>>8174 >Have you tried giving hydrus your user-agent http header as well as the cookies? No I haven't, however I'm still quite inexperienced when it comes to using hydrus so I don't really know how I'd be able to do that. Using the basic features of hydrus companion is pretty much as far as my skillset goes atm. Would you please kindly explain how I might do what you had described?
Trying to add page tags to my imported files is turning out to be an even bigger headache than I expected. The page namespace doesn't specify what it is a page of, so you can end up with multiple contradictory page tags. For example, an artist uploads sets of 1-3 images frequently to his preferred site, but posts larger bundles less frequently to another site. Or he posts a few pages at a time of a manga in progress, and when it's finished he aggregates all the pages in a single post for our convenience. Either way, you can end up with images that have two different page tags, both of which are technically correct for a given context, but the tags themselves don't contain enough information to tell which context they're correct in. If I wanted to be really thorough, I could make a separate namespace for each context a page can exist in, but then I'd be creating an even bigger headache for myself whenever I want to sort by pages. The best I can imagine would be some kind of nested tag system, so you can specify the tags "work:X" and "page:Y(of work:X)", and then sort by "work-page(of work)". As an added bonus, it would make navigation a lot smoother in a lot of contexts. For example, if you notice an image tagged with chapter:1 and you want to see the rest of that particular chapter.
>>8183 Hydrus sucks at organizing files that are meant to be a sequential series. This has been a known problem for a long time unfortunately.
>>8183 >For example, if you notice an image tagged with chapter:1 and you want to see the rest of that particular chapter. You may use kinda nested namespaces: 1 - namespace:whatever soap opera you want (to identify the group) 2 - namespace:chapter 1 (to identify the sub-group) 3 - namespace:chapter 1 - page 01 (to identify the order) So searching for "whatever soap opera you want" will bring you all related files, then add the chapter to narrow the files, and then sort those files by namespace number. Done.
>>8190 >So searching for "whatever soap opera you want" will bring you all related files, then add the chapter to narrow the files, and then sort those files by namespace number. At that point you're basically navigating folders in a file explorer, just more clumsy. That's exactly what I was trying to get away from when I installed hydrus.
I had a great week of simple work. I fixed some bugs--including the scrolled taglist selection issue--and improved some quality of life. The release should be as normal tomorrow.
>>8192 >At that point you're basically navigating folders in a file explorer What are you talking about? In Hydrus all files are in a centralized directory and searched with a database. I understand the hassle to tag manually, but not software is clairvoyant and reads your mind about what exactly you are searching for.
>>8813 if ordered sets are important to you installing danbooru is an option, they do put their source up on github. Last I tried it it was a pain in the ass to get working but I did eventually get it. Though it did lack a number of hydrus features I've gotten used to.
>>8183 Hydrus works off of individual files. It can adapt it to multi-file works, but the more robust of a solution you need the more you’ll butt up against Hydrus’ core design. The current idiomatic solution of generic series, title, chapter, page, etc. namespaces works for 90% of things (with another 9% of things being workable by ignoring all but one context), but if you need a many to many relationship the best you can do is probably use bespoke namespaces for each collection (e.g. “index of X:1” “index of Y:2”) and then use the custom namespace sort to view the files in whatever context you've defined. I guess an ease of use that could get added would be an entry in the tag context menu to sort by namespace. That way you wouldn't need to type it out every time.
>>8197 >That way you wouldn't need to type it out every time. In the future drag and drop tags may be the solution.
I want to remove the ptr from my database. Is there a way to use the tag migration feature to migrate tag relationships only for tags used in my files? You can do it with the actual tags, but I don't see an option to do something similar for relationships, and I'd rather not migrate over thousands of parents/children and siblings for tags I'll never see.
>>8166 Looks like you need to send the referral URL with your request. The 8chan.moe thread downloader that comes with hydrus already takes care of that, so I assume you're trying to download individual files or something? I think the proper thing here would be for the hydrus companion to attach the thread you found the image in as the referral URL, but I'm not sure if the hydrus API even supports that at the moment. So failing that, you can give 8chan.moe files an URL class and force hydrus to use https://8chan.moe/ as the referral URL for them when no other referral URL is provided. Hopefully this won't get you banned or anything.
I hope collections will be expanded upon in the future. It's very nice to be able to group together images in a page, but often I want an overview of the individual images of a group. Right now I have to right click a group and pick open->in a new page, which is awkward. Here's a quick mock-up of how I'd like it to work. Basically, show all images, but visually group them together based on the selected namespaces.
>>8210 The png I posted contains the URL class. Just go to network > downloaders > import downloaders and drag and drop the image from >>8203
Any way to stop hydrus from running maintenance (in my case ptr processing) while it's downloading subscriptions? I think that should prevent maintenance mode from kicking in. It always happen when I start Hydrus and leave it to download subs, because i have idle at 5 minutes. The downloads slow to a crawl because ptr processing is hogging the database. I could raise the time to idle but i still want it that low once hydrus has finished downloading subs...
Is any way to export the notes, like the file and tags? Something like: File: test.jpg Tags: test.jpg.txt Notes: test.jpg.notes.txt
>>8219 I get the impression that notes are a WIP feature. Personally I'm hoping we'll get the option to make the content parser save stuff as notes soon.
>>8212 Bruh
Are there plans to add dns over https support to hydrus? Most browsers seem to have that feature now, so it'd be cool if hydrus did too.
How do I enable a web interface for my Hydrus installation, so others can use it by my external IP? I need something simple like hydrus.app, but unfortunately it refuses to work with my external IP, only accepts the localhost, even though I enabled non-local access in API and entering my external IP in browser opens the same API welcome page as with localhost. Who runs that app, anyway, where do I see support for it?
>>8209 Thanks. Yeah, this is exactly what I want to do too. I am in the midst of a long rewrite to clean up some bad decisions I made when first making the thumbnail grid, and as I go I am adding more selection and display tools. Once things are less tangled behind the scenes, I will be able to write a 'group by' system like this, both the data structure behind and the new display code needed. Unfortunately it will take time, but I agree totally. >>8216 There's no explicit way at the moment. I have generally been comfortable with both operations working at the same time, since I'm generally ok if subs run at, say, 50% speed. I designed subs to be a roughly background activity and don't mean for them to run as fast as possible. If your machine really struggles to do both at once though, maybe I can figure out a new option. I think your best shot in the meantime, since PTR processing only works in idle time but subs can run any time, is to tweak the other idle mode options. The mouse one might work, if you often watch your subs come in live, or the 'consider the system busy if CPU above' might work, as that stops PTR work from starting if x cores are busy. If you are tight on CPU time anyway, that could be a good test for other situations too. You can also just turn off idle PTR processing and control it manually with 'process now' in services->review services. I don't like suggesting this solution as it is a bit of a sledgehammer, but you might like to play with it. >>8219 >>8220 Yeah, not yet, but more import/export options will come. If you know scripting, the Client API can grab them now: https://hydrusnetwork.github.io/hydrus/client_api.html https://hydrusnetwork.github.io/hydrus/developer_api.html
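For >>8219 and anyone else comfortable with scripting, a rough sketch of the Client API route (not an official example), using the documented /get_files/search_files and /get_files/file_metadata endpoints from the links above. The access key, port and tag are placeholders, and whether and how notes appear in the metadata depends on your client version, so treat the 'notes' field below as an assumption and check the current API docs.

```python
import json
import requests

API_URL = "http://127.0.0.1:45869"  # default Client API port
HEADERS = {"Hydrus-Client-API-Access-Key": "YOUR_ACCESS_KEY_HERE"}  # placeholder key

# Find file ids matching a placeholder tag.
search = requests.get(
    f"{API_URL}/get_files/search_files",
    params={"tags": json.dumps(["my example tag"])},
    headers=HEADERS,
)
file_ids = search.json()["file_ids"]

# Pull the metadata records for those files.
metadata = requests.get(
    f"{API_URL}/get_files/file_metadata",
    params={"file_ids": json.dumps(file_ids)},
    headers=HEADERS,
).json()["metadata"]

for record in metadata:
    # 'notes' is an assumption -- older clients may not expose notes here at all.
    print(record["hash"], record.get("notes", {}))
```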
>>8223 For advanced technical stuff like that, I am limited by the libraries I use. My main 'go get stuff' network library is called 'requests', a very popular python library https://docs.python-requests.org/en/latest/ although for actual work I think it uses the core urllib3 python library https://pypi.org/project/urllib3/ . So my guess is when python supports it and we upgrade to that new version of python, this will happen naturally, or it will be a flag I can set. I searched a bit, and there might be a way to hack it in using an external library, but I am not sure how well that would work. I am not a super expert in this area. Is there a way of hacking this in at the system level? Can you tell your whole OS to do DNS lookups on https with the new protocol in the same way you can override which IP to use for DNS? If this is important to you, that might be a way to get all your software to work that way. If you discover a solution, please let me know, I would be interested. Otherwise, I think your best simple solution for now is to use a decent VPN. It isn't perfect, but it'll obscure your DNS lookups to smellyfeetbooru.org and similar from your ISP.
>>8232 The various web interfaces are all under active development right now. All are in testing phases, and I am still building out the Client API, so I can't promise there are any 'nice' solutions available right now. All the Client API tools are made by users. Many hang out on the discord, if you are comfortable going there. https://discord.gg/wPHPCUZ The best place to get support otherwise is probably on the gitlab/github/whatever sites the actual projects are hosted on, if they have issue trackers and etc.. For Hydrus.app I think that's here https://github.com/floogulinc/hydrus-web I'm not sure why your external IP access isn't working. If your your friend can see the lady welcome page across the internet, they should be able to see the whole Client API and do anything else. Sometimes http vs https can be a problem here.
>>8233 >If you make a nice URL Class for Mega, I'd be interested in seeing it--it would probably be a good thing to add to the defaults, just on its own as an URL the client recognises out of the box. Is it even possible to download mega links through hydrus? I've been using mega.py for automating mega downloads, and looking at the code for that, it seems quite a bit more complicated than just sending the right http request. https://github.com/odwyersoftware/mega.py/blob/master/src/mega/mega.py#L695 I'd love to be proven wrong, but looks to me like this is a job for an external downloader. Speaking of which, any plans to let us configure a fallback options for URLs that hydrus can't be configured to handle directly? At very least, I want to be able to save URLs for later processing.
>>8238 My problem is that some of the galleries I subscribe to might occasionally contain external links. For example, some artists uploading censored images, but also attaching a mega or google drive link containing the uncensored versions. I can easily set up the parser to look for these URLs in the message body and pursue them, but if hydrus itself doesn't know how to handle them, they get thrown out. Would be nice if these URLs could be stored in my inbox in some way, so I can check if I want to download them manually or paste them into some other program. Even after you implement a way to send the URL to an external program (which sounds great), it would be useful to see what URLs hydrus found but didn't know what to do with, so the user can know what URL classes they need to add.
>>8233 >For Mega URLs, try checking the 'keep fragment when normalising' checkbox in the URL Class dialog. That should keep the #blah bit. Oh wow, I never knew what that option did. Thanks! I made url classes. Note: one of the mega url formats (which I think is an older format) has no parameters at all, it's just "https://mega.nz/#blah". So if you just give it the url "https://mega.nz/" it will match that url. Kind of weird, but not really a huge issue. >>8184 I mean, that's not really particular to hydrus. It's true for almost any booru.
Hey, after exiting the duplicate filter I was greeted with two 'NoneType' object has no attribute 'GetHash' errors:

v482, linux, source
AttributeError
'NoneType' object has no attribute 'GetHash'
Traceback (most recent call last):
  File "/opt/hydrus/hydrus/core/HydrusPubSub.py", line 138, in Process
    callable( *args, **kwargs )
  File "/opt/hydrus/hydrus/client/gui/canvas/ClientGUICanvas.py", line 3555, in ProcessApplicationCommand
    self._GoBack()
  File "/opt/hydrus/hydrus/client/gui/canvas/ClientGUICanvas.py", line 3120, in _GoBack
    for hash in ( first_media.GetHash(), second_media.GetHash() ):
AttributeError: 'NoneType' object has no attribute 'GetHash'

v482, linux, source
AttributeError
'NoneType' object has no attribute 'GetHash'
Traceback (most recent call last):
  File "/opt/hydrus/hydrus/core/HydrusPubSub.py", line 138, in Process
    callable( *args, **kwargs )
  File "/opt/hydrus/hydrus/client/gui/canvas/ClientGUICanvas.py", line 3555, in ProcessApplicationCommand
    self._GoBack()
  File "/opt/hydrus/hydrus/client/gui/canvas/ClientGUICanvas.py", line 3120, in _GoBack
    for hash in ( first_media.GetHash(), second_media.GetHash() ):
AttributeError: 'NoneType' object has no attribute 'GetHash'

I'm running the AUR version, if you need any more info let me know.
could the downloading black/white list be adjusted to work on matching a search, rather than just specific tags? There's a lot of kinds of posts I'd rather not download, but most of the time they aren't simple enough to be accurately described with a single tag.
I was ill for the start of the week and am short on work time. Rather than put out a slim release, I will spend tomorrow doing some more normal work and put the release off a week. 483 should be on the 4th of May. Thanks everyone! >>8246 Sorry, I messed up some duplicate logic that will trigger on certain cases where it wants to back up a pair! This is fixed in 483 along with more duplicate filter code cleanup, please hang in there.
>>8260 Get well anon.
Is there an (easy) way to extract the data used to make the file history chart into a CSV? I'd like to play around with that data myself.
Minor bug report: hovering over tags while in the viewer and scrolling with the mouse wheel causes the viewer to move through files as if you were scrolling on the image itself. May be related to the bug from a few weeks ago.
I had a good couple of weeks. There are a variety of small fixes and quality of life improvements and the first version of 'multiple local file services' is ready for advanced users to test. The release should be as normal tomorrow.
>>8326 hello mr dev I just found out about this software and from reading the docs I have only this to say: based software based dev long live power users
Hey h dev, moveing to a new os soon, along with whatever happened recently in hydrus made video more stable so I can parse it. I know I asked about this a while ago, having a progress bar permanently under the video as an option, im wondering if that ever got implemented as an option or if it's something you haven't gotten to yet? I run into quite a few 5 second gifs next to 3 minute long webm's and me hovering the mouse over them takes up a non insignificant amount of the video, at least enough that I have to move the mouse off of it just to move it back to scrub. thanks in advance for any response.
just want to confirm the solution for broken mpv from my half sloppy debian install like in this issue: https://github.com/hydrusnetwork/hydrus/issues/1130 as suggested, copying just the system libgmodule-2.0.so to Hydrus directory helps although the path may be different, because I have such files at /usr/lib/x86_64-linux-gnu/
>>8333 sounds great, with this I will be able to have
Inbox
Seen to parse
Parse nsfw
Parse sfw
Archive nsfw
Archive sfw
if i'm able to search across everything, I get unfiltered results, but refine it down to specific groups outside of just a rating filter, that would be great
>>8333 Does copying between local file services duplicate the file in the database?
Is it just me or is there a bug preventing files from being deleted in v483? I can send them to trash but trying to "physically delete" them doesn't work. Hitting delete with files selected does nothing, neither does right clicking and hitting "delete physically now".
>>8317 Not an easy way, but attached is the original code that a user made to draw something very similar in matplotlib. If you adjust this, you could pipe it to another format, or look through the SQL to see how to extract what you want manually. My code is a bit complicated and interconnected to easily extract. The main call is here-- https://github.com/hydrusnetwork/hydrus/blob/master/hydrus/client/db/ClientDB.py#L3098 --but there's a ton of advanced bullshit there that isn't easy to understand. If you have python experience, I'd recommend you run the program from source and then pipe the result of the help->show file history call to another location, here: https://github.com/hydrusnetwork/hydrus/blob/master/hydrus/client/gui/ClientGUI.py#L2305 I am also expecting to expand this system. It is all hacked atm, but as it gets some polish, I expect it could go on the Client API like Mr Bones recently did. Would you be ok pulling things from the Client API, like this?: https://hydrusnetwork.github.io/hydrus/developer_api.html#manage_database_mr_bones
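To illustrate the Client API option mentioned above (a sketch, not an official recipe): the linked /manage_database/mr_bones endpoint returns a JSON summary, which can be flattened straight into a CSV without assuming any particular key names. The port and access key are placeholders.

```python
import csv
import requests

API_URL = "http://127.0.0.1:45869"  # default Client API port
HEADERS = {"Hydrus-Client-API-Access-Key": "YOUR_ACCESS_KEY_HERE"}  # placeholder key

response = requests.get(f"{API_URL}/manage_database/mr_bones", headers=HEADERS)
response.raise_for_status()
stats = response.json()

def flatten(d, prefix=""):
    """Walk nested dicts and yield (dotted key, value) rows."""
    for key, value in d.items():
        name = f"{prefix}{key}"
        if isinstance(value, dict):
            yield from flatten(value, prefix=f"{name}.")
        else:
            yield name, value

with open("mr_bones.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["stat", "value"])
    writer.writerows(flatten(stats))
```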
>>8330 Awesome, thank you. I will update the help to reference this specifically. >>8335 Yeah, I think my next step here is to make these sorts of operations easier. You can set up a 'search everything' right now by clicking 'multiple locations' in the file domain selector and then hitting every checkmark, but it should be simpler than that. ~Maybe~ even favourite domain unions, although that seems a bit over-engineered, so I'll only do it if people actually want it. Like I have 'all local files', which is all the files on your hard disk, I need one that is all your media domains in a nice union. Also want some shortcuts so people like you will be able to hit shift+n or whatever and send a file from inbox to your parse-nsfw domain super easy. As you get into this, please let me know what works well and badly for you. All the code seems generally good, just some stupid things like a logic problem when trying to open 'delete files' on trash, so now I just need to make the UI and workflow work well. >>8340 No, it only needs one copy of the file in storage. But internally, in the database, it now has two file lists. >>8356 Yes, sorry! Thank you for the report. This is just an accidental logic bug that is stopping some people from opening the dialog on trash--sorry for the trouble! I can reproduce it and will fix it. If you really want to delete from trash, the global 'clear trash' button on review services still works, and if you have the advanced file deletion dialog turned on, you can also short-circuit by hitting shift+delete to undelete and then deleting again and choosing 'permanently delete'.
First of all, thank you for all your hard work HydrusDev I have small feature request, now that we have multiple local services For the Archive/Delete filter, there should be keyboard shortcuts for "Move/Copy to Service X" as well as "Move to Trash with reason X" and "Delete Permanently with reason X" The latter two would be nice because having to bring up the delete dialog every time is kind of clunky
>>8361 >Is this feature to chase up links after SauceNao something on Hydrus Companion or similar? Yes, it is from Hydrus Companion, I forgot that it was a separate program since I started using it at the same time that I started using Hydrus. Now that I think about it though, just avoiding Pixiv probably isn't the best solution either, since there's plenty of content that can only be found on Pixiv. If there is a way to download the English translations of the tags, then that would mostly solve the issue, since I could then use parent/sibling tagging to align them with the other tags. I don't know how doable that would be though, so for now the best solution is probably to import a sibling tag file that changes all the Japanese pixiv tags to their English tags, assuming that someone has already made this.
>>8330 I was able to get it working by copying libmpv.so.1 and libcdio.so.18 from my old installation (still available on my old drive) to the hydrus installation folder.
I entered the duplicate filter, and after a certain point it wouldn't let me make decisions any more. I'd press the "same quality duplicate" button and it just did nothing. I exited the filter, then the client popped up a bunch of "list index out of range" errors. Here's the traceback for one of them:

v483, linux, frozen
IndexError
list index out of range
  File "hydrus/client/gui/ClientGUIShortcuts.py", line 1223, in eventFilter
    shortcut_processed = self._ProcessShortcut( shortcut )
  File "hydrus/client/gui/ClientGUIShortcuts.py", line 1163, in _ProcessShortcut
    command_processed = self._parent.ProcessApplicationCommand( command )
  File "hydrus/client/gui/canvas/ClientGUICanvas.py", line 3548, in ProcessApplicationCommand
    self._MediaAreTheSame()
  File "hydrus/client/gui/canvas/ClientGUICanvas.py", line 3149, in _MediaAreTheSame
    self._ProcessPair( HC.DUPLICATE_SAME_QUALITY )
  File "hydrus/client/gui/canvas/ClientGUICanvas.py", line 3259, in _ProcessPair
    self._ShowNextPair()
  File "hydrus/client/gui/canvas/ClientGUICanvas.py", line 3454, in _ShowNextPair
    self._ShowNextPair() # there are no useful decisions left in the queue, so let's reset
  File "hydrus/client/gui/canvas/ClientGUICanvas.py", line 3432, in _ShowNextPair
    while not pair_is_good( self._batch_of_pairs_to_process[ self._current_pair_index ] ):

I reentered the duplicate filter, and I got through a few more pairs before it stopped letting me continue again. It seems like it was on the same file as last time too. Could this bug have corrupted my file relationships?
>>8359 >Python script That'll help a lot, thanks! >Would you be ok pulling things from the Client API, like this? Yeah, definitely.
>>8361 a 3 pixel tall scan bar... that honestly wont be a bad option, my only concern would be the immediate visibility of it, and i'm not sure there is a good way to do that... would it be possible to have custom colors for it, both when its small and when its large? when its large that light grey with dark grey isn't a bad option, but small it would kind of be a constantly moving needle in the haystack. but if for instance, I had the background of the smaller bar be black with a marginally thin red strip, I would only see that red strip move, this may not be a great option for everyone, but I could see various different colors for higher contrast being a good thing especially when its 3 pixels big. yea I think it's a great idea, it would make it readily available from the preview how long the video is and it would be so out of the way that nothing is massively covered up. if its an option would the size it is be changeable/user settable? its currently 60 pixels if my counting is right, but I could see something maybe 15 or so being something I could leave permanently visible, if it can't then it doesn't matter, but if its possible to make it an option I think this would be a fantastic middle ground till you give it a serious pass. anyway, whatever you decide on will help no matter what path it is.
API "file_metadata" searches seem to be giving the wrong timestamp for the "time_modified" of some files, conflating it with time imported. The client itself displays the correct time modified, regardless of whether the file was imported straight from the disc or whether the metadata of a previously imported file had its source time updated after being downloaded again from a booru. Querying the API's search_files method by "modified time" does give the correct file order (presumably because the list of ID's from the client is correct), but the timestamp in the metadata is still equal to "import time". For some reason, this doesn't always happen, but unfortunately I haven't been able to determine why.
Sorry for the double post. Verification was acting up.
>>8367 This issue isn't just with the one pair now. It's happened with multiple pair when trying to go through the filter. And it's not just happening when I mark them as same quality. It also happens when I mark them as alternates. I also noticed that when this bug happens, the number in the duplicate filter (the one that's like "13/56") jumps up a bunch
I had an ok week. I fixed some bugs (including non-working trash delete, and an issue with the new duplicate filter queue auto-skipping badly), improved some quality of life, and integrated the new multi-service 'add/move file' commands into the shortcuts system and the media viewer. The release should be as normal tomorrow. >>8367 >>8396 Thank you for this report, and sorry for the trouble! Should be fixed tomorrow, please let me know if you still have any problems with it.
Are sorting/collection improvements on the to-do list? I sometimes have to manually sort video duplicates out and being able to collect by duration/resolution ratio and sort by duration and then by resolution ratio would be extremely helpful. Sorting pages by total filesize or by smallest/largest subpage could have some uses as well, but that might be too autistic for other users.
>>8409 >>8377 Nice, the scan bar is far more visible than I thought it may have been. I think there is the possibility that other colors may also help legibility but for me its just fine as is.
ok h dev, probably my last question for a while, I have so far parsed through about 5000-10000 "must be pixel dups" I have yet to find one where I have ever decided 'lets keep the one with the larger file size' I have decided, at least for the function of exact dupes, i'm willing to trust the program's judgement is there any automation in the program for these yet? from what I can see a few of my subscriptions are generating a hell of alot of these, and even then, I had another 50000 to go through, if there is a way to just keep the smaller file and yeet the larger with the same settings I have assigned to 'this is better' this would be amazing. I dont recall if anything has been added to hydrus yet, I would never trust this for any speculative match as I constantly get dups that require hand parsing with that, but holy shit is it mind numbing to go through pixel dups... scratch that, when I have all files, I have 325k must be pixel dupes (2 million something potential dups, so this isn't a case of the program lagging behind options)
Can't seem to do anything with these files. I can't delete them, and setting a job to remove missing or invalid files doesn't touch them. They don't have URLs so I can't easily redownload them either. What do?
>>8418 Note, they do have tags, sha256sums, and file IDs, but nothing else as far as I can tell. If I manage to redownload one by searching for each file manually based off the tags it appears and can be deleted. Maybe I could do some sqlite magic and remove the records via the file IDs using the command line, but I don't know how. The weird thing is how they appear in searches. They don't show up when I search only system:everything, but do show up when searching for tags that the missing file is tagged with. I tried adding a dummy tag to all of my working files and searching with -dummy, and the missing files didn't show up. If I search some tag that matches a missing file and use -dummy, the missing files that are tagged with whatever other tag I used to search do show up. Luckily all of these files had a tag in common so I can easily make a page with all of the missing files, 498 total. I can open the tag editor for these, and adding tags works but I cannot search for tags that only exist on missing files (I tried adding a 'missing file' tag, can't search it). Nothing interesting in the logs, unless I try to access one which either gives KeyError 101 or a generic missing file popup. Hydev, if you're interested in a copy of my database folder, I could remove most of the large working files and upload a copy somewhere if you want to mess with it. I'm open to trying whatever you want me to if that's more convenient though.
Got this error after updating. (I definitely jumped multiple versions, not sure how many.) Manually checking my files, it seems that all of them are fine. It's just that hydrus can't seem to make sense of them for some reason...? FYI my files are on a separate hdd and my hydrus installation is on an ssd. Neither is on the same drive as my OS.
>>8363 Thanks. I agree. I figured out the move/add internal application commands for 484, so they are ready to be integrated. 'Delete with reason x' will need a bit of extra work, but I can figure it out, and then I will have a think about how to integrate it into archive/delete and what the edit UI of this sort of thing looks like. Ideally, although I doubt I will have time, it would be really nice to have multiple archive/delete filters. >>8364 Yeah, this sounds tricky. Although it is complex, I think your best bet might be to personally duplicate and then edit the redirection scripts or tag parsers involved here. You may be able to edit the hydrus pixiv parser to grab the english tags (I know we used to have this option as an alternate parser, but I guess it isn't available any more? maybe pixiv changed how this worked?), or change whatever is parsing SauceNao, although I guess that is part of Hydrus Companion. EDIT: Actually, if your only solid problem with pixiv is you don't want its japanese tags, hit up network->downloaders->manage default tag import options, scroll down to 'pixiv file page api' and 'pixiv manga_big page' and set specific defaults there that grab no tags. Any hydrus import page that has its tag import options set to 'use the current defaults' will then default to those, and not grab any tags. >>8366 Thank you! >>8376 Thanks. I'll make a job to expose this data on the Client API.
>>8377 >>8413 I'm glad. I am enjoying it too in my IRL use. I thought it would be super annoying, but after a bit of use, it just blends into my view and is almost unconsciously useful. Just FYI: The options are a ugly debug/prototype, but you can edit the scanbar colours now. Hit up install_dir/static/qss and duplicate 'default_hydrus.qss'. Then edit your duplicate so the 'qproperty' values under HydrusAnimationBar have different hex colour values. Load up the client, switch your style to your duplicated qss file, and the scanbar should change colour! If you already use a QSS style, then you'll want to copy the custom HydrusAnimationBar section to a duplicate of the QSS style file you use and edit that. >>8379 Thank you, I will investigate this. I was actually going to try exposing all the modified timestamps on the Client API and the client, not just the aggregate value, so I will do this too, and that will help to figure out what is going on here. >>8408 I would like to do this. It can sometimes be tricky, but that's ok--the main problem is I have a lot of really ugly UI code behind the scenes that I need to clean up before I can sanely extend these systems, and then when I extend them I will also have to update the UI to support more view types. It will come, but it will have to wait for several rounds of code cleaning all across the program before I dive properly back in here. Please keep reminding me. Sorting pages themselves should be easier. You can already do a-z name and num_files, so adding total_filesize should be ok to do. I'll make a job. >>8417 Thanks. There is no automation yet, but this will be the first (optional) automated module I add to the duplicate filter, and I strongly expect to have it done this year. I will make sure it is configurable so you can choose to always get rid of the larger. Ideally, this will process duplicates immediately upon detection, so the client will negotiate it and actually delete the 'worse' file as soon as file imports happen.
>>8446 Missing files anon here, it said "my files". I should have mentioned this in my first post, but I had to restore my database from a backup a while back and these first appeared then. I'm assuming they were in the database when I backed it up, but had been deleted in between making the backup and restoring it. I fucked around with file maintenance jobs and managed to fix it. It didn't work the first time because "all media files" and/or "system:everything" wasn't matching the missing files. The files did all have a tag in common that I didn't care to remove from my working files, and for some reason this tag would match the missing files when searched for. I ran the maintenance search on that tag and did the job, and now they're gone.
>>8446 >>8447 Actually, scratch that. The job was able to match the files and reported them as missing, put their sha256sums into a file in the database folder, and made them vanish from the page that had the tag searched, but refreshing it shows that they weren't actually removed and I still encounter them when searching for other tags. Not sure what to do now.
Hello. Is there a way to make sure that when scraping tags, the images that were deleted aren't going to be downloaded again?
Can someone help me? Since the last 3 releases Hydrus has been pretty much unusable for me. After having it open for a while it ends up (not responding), and it can stay that way for hours or until I force close it. I asked on the discord but no one has replied to me (I can't complain tho, they have helped me a lot in the past). I have a pretty decent PC: R7 1700, 32GB of RAM, and I have the main files on an NVMe drive and the rest on a 4TB HDD. Please help, I haven't been able to use Hydrus for almost a month.
Trying to download my Pixiv bookmarks, but every time I enter the url "https://www.pixiv.net/en/users/numbers/bookmarks/artworks" I get an error saying "The parser found nothing in the document". Only trying to grab public bookmarks, and I've got Hydrus Companion set up with the API key. Not sure what I'm doing wrong, unless there's some alternate URL I'm supposed to use for bookmarks.
could you change the behavior of importing siblings from a text file so that if a pair would create a loop with siblings you already have, it just asks if you want to replace those pairs you already have that would be part of the loop with the ones from the file? The way it works now, there's no way to replace those siblings with the ones from the file except for manually going through each one yourself, but that defeats the purpose of importing from a file. This would be an exception in the case of you clicking "only add pairs, don't remove" but that's okay because the dialog window would ask you first. As it is right now, the feature is unfortunately useless for my purposes, which is a shame because I thought I finally found a solution for an issue with siblings I've been having for a while. A real bummer.
I had a good simple week. I cleaned some code, improved some quality of life, and made multiple local file services ready for all users. The release should be as normal tomorrow.
I'm pretty new to using this but, is there a way to tag a media with a gang of niggers tag without including its parent tags?
I'm looking to use an android app (or equivalent) that lets me manage (archive/delete) my collection hosted on a computer within a local network, so say if I had no internet I could still use it. Is this a thing? Is there a program that will do this? The available apps out there are a bit confusing as to what their limitations or features are.
Is it possible to download pics from Yandex Images with Hydrus, or can someone suggest a good program that can? Thanks.
is there a setting to make it so hydrus adds filenames as tags by default, such as when importing local files?
>>8453 Isn't that the default behavior of downloaders? Make sure "exclude previously deleted files" is checked. Or are you trying to add tags to files you've already deleted without redownloading them? I don't know if you can do that. >>8468 If you want to give something a tag without including its parent tags, it sounds like that tag shouldn't have those parent tags in the first place. >>8487 Import folders can do that. You can just have a folder somewhere that you can dump files in, and you can set hydrus to periodically check it and do things like add the filename or directory as tags.
>>8446 The cloning process seems to have worked in the sense that the integrity checks now pass. However now I get this message when I boot up hydrus. Is it safe to proceed or am I in deeper shit?
>>8491 It seems I already had "all local files" on, but changing it back to just "my files" seems to have no effect. I tried "clear orphan file records" and it nearly instantly completed without finding any.
>>8493
>For now, your best bet is the Client API
Managed to figure it out, thanks. I used gallery-dl to download the metadata for all the files, gathered the md5 and tags from the metadata, searched up the md5 in the API and got the sha256, then added the tags to the sha256.
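For anyone wanting to do the same thing, here is a rough sketch of that flow in Python. It is only a sketch: the endpoint paths, the md5 system-predicate syntax and the 'my tags' service name are assumptions you should check against the Client API documentation for your client version, and the access key and metadata file are placeholders.

```python
# Sketch: push tags from gallery-dl metadata into hydrus via the Client API.
# Endpoint paths, the md5 system predicate and the tag service name are
# assumptions - verify them against the Client API docs for your version.
import json
import requests

API_URL = "http://127.0.0.1:45869"  # default Client API address
HEADERS = {"Hydrus-Client-API-Access-Key": "YOUR_ACCESS_KEY_HERE"}

def sha256_for_md5(md5):
    # Ask the client for files matching this md5, then read the sha256
    # out of the returned metadata.
    r = requests.get(
        f"{API_URL}/get_files/search_files",
        headers=HEADERS,
        params={"tags": json.dumps([f"system:hash = {md5} md5"])},
    )
    r.raise_for_status()
    file_ids = r.json().get("file_ids", [])
    if not file_ids:
        return None
    r = requests.get(
        f"{API_URL}/get_files/file_metadata",
        headers=HEADERS,
        params={"file_ids": json.dumps(file_ids)},
    )
    r.raise_for_status()
    return r.json()["metadata"][0]["hash"]

def add_tags(sha256, tags):
    body = {"hashes": [sha256], "service_names_to_tags": {"my tags": tags}}
    r = requests.post(f"{API_URL}/add_tags/add_tags", headers=HEADERS, json=body)
    r.raise_for_status()

with open("metadata.json") as f:  # whatever you collected from gallery-dl
    for entry in json.load(f):
        sha256 = sha256_for_md5(entry["md5"])
        if sha256 is not None:
            add_tags(sha256, entry["tags"])
```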
Hi, I didn't use Hydrus (Linux version) for three months, and after updating to the latest version I noticed the following: when you start a selection in the file view (e.g. hold shift and repeatedly press → to select multiple files), the image preview freezes at the start of the selection, but the tag list reflects your movements. The old behavior was that both the preview and the tag list changed synchronously.
>>8475 >>8493 Okay, thanks for the response. When the development is finished, I assume there will be an announcement. I had considered the VNC option. I'm not sure who's developing the app, if it's you or someone else, but do you know if it will be like a remote control of hydrus on a host computer, if it'll be a kind of a port of existing hydrus, or if it'll have functionality of both options? I'm also curious as of an approximate timeframe as well.
>>8455 I got it to work via a URL like this through the Hydrus url import page: https://www.pixiv.net/ajax/user/YOURPIXIVID/illusts/bookmarks?lang=en&limit=48&offset=96&rest=show&tag= I didn't try to change the limit key (was afraid of a ban), so the whole process was page by page - increasing the offset by 48 for every URL I entered.
>>8505 update: Hydrus finally booted, thank god, however it's completely empty. All the files are still on my HDD I can check, hydrus just seems to have forgotten about them. I suspect it might have also forgotten about pretty much all other settings as well, such as my thumbnails and files drive location. (thumbnails on ssd, files on hdd, originally, as suggested)
>>8515 Would I be able to do a "restore from a database backup", select my old, now seemingly "unlinked"/"forgotten" db, and proceed?
>>8518 Alright lemme give just a little more context to the current state of things then. This is how my setup [b]used[/b] to be set up client.exe in (SSD) E:\Hydrus Network\ thumbnails in (SSD) E:\Hydrus Network\thumbnails\ files in (HDD) F:\Hydrus Network Files\files\ (from f00 to fff) after this whole fuckery happened, I manually checked and all files remained in their place and continue to be fully intact and viewable from the file explorer, and also able to be opened and viewed without a fuss. Coming home from work I checked and it seems my suspicions were right. All my settings were reset to default, including the default file locations so for example were I to save a picture from 8chan it would by default put it in: E:\Hydrus Network\db\client_files\ There are currently no files actually saved in this location. It's empty. To clarify I didn't "create a backup" before this, but since my previous files in (F:) still remain there completely fine and viewable I was wondering if I could simple instruct hydrus to "look here for pictures" basically. At this point I don't care about tags, watches, and all that stuff, I'm just glad my files are safe and I want to get hydrus back in shape where it's useable for me.
>>8520 PS.: It's as if hydrus had uninstalled then reinstalled itself. Quite bizarre...
Can Hydrus get support for WavPack (.wv) audio files, even just for storing, not playback? That would be a good addition to the already available .flac and .tta.
down the line this will probably be obsolete, but before then it will help quite a bit. with duplicates, when they are pixel matches, is there a way to set the lower file size one to be green and the bigger one to be red? it's already this way with jpeg vs png ones, but same type vs same type just has both as blue, and with pixel duplicates there would never be a reason to choose the larger file size. I want the duplicate deciding process to be as speedy as possible, at least with these exact duplicate ones, and I have been watching things while doing this. however, and this may be my monitor, unless I'm staring straight at the numbers they kind of blend, making 56890 all kind of look alike, requiring me to sit up and look at it straight on. I think if the lower number was green on exact dupes it would speed the process up significantly, at least until an auto discard for exact dupes (which hopefully takes the smaller file as the better of the pair) gets implemented and we no longer have to deal with exacts. I don't know if this would be simple to implement, but if it is, it would be much appreciated.
I'm trying to download a thread from archived.moe and archiveofsins.com but it keeps giving errors with a watcher and keeps failing with a simple downloader. it seems like manually clicking on the page somehow redirects to a different link than when hydrus does it.
>>8158
>In terms of metadata, hydrus keeps all other metadata it knows about the file.
If there is no URL data (e.g. I imported it from my hard drive), and I remove the tags from the files before deletion, and then use the option to make Hydrus forget previously deleted files, would I "mostly" be OK? Also, what does telling Hydrus to forget previously deleted files actually remove if it still keeps the files' hashes? I don't feel comfortable (or desperate) enough to use the method you gave, but I also don't want to go through the trouble of exporting all my files, deleting the database, reinstalling Hydrus, and then importing and tagging the files all over again.
My autocompleted tag list displays proper tag counts, but when I search them I get dramatically less images. I can still find these images in the database through system:* searches and they're still properly tagged. My tag siblings and parents aren't working for some tags either. But all the database integrity checks say everything is okay. What's my next step?
Still getting some errors in the duplicate filter, I think it happens when I'm choosing to delete images.

v485, win32, frozen
IndexError
list index out of range
File "hydrus\client\gui\ClientGUIShortcuts.py", line 1223, in eventFilter
  shortcut_processed = self._ProcessShortcut( shortcut )
File "hydrus\client\gui\ClientGUIShortcuts.py", line 1163, in _ProcessShortcut
  command_processed = self._parent.ProcessApplicationCommand( command )
File "hydrus\client\gui\canvas\ClientGUICanvas.py", line 3598, in ProcessApplicationCommand
  command_processed = CanvasWithHovers.ProcessApplicationCommand( self, command )
File "hydrus\client\gui\canvas\ClientGUICanvas.py", line 2776, in ProcessApplicationCommand
  command_processed = CanvasWithDetails.ProcessApplicationCommand( self, command )
File "hydrus\client\gui\canvas\ClientGUICanvas.py", line 1581, in ProcessApplicationCommand
  self._Delete()
File "hydrus\client\gui\canvas\ClientGUICanvas.py", line 2928, in _Delete
  self._SkipPair()
File "hydrus\client\gui\canvas\ClientGUICanvas.py", line 3488, in _SkipPair
  self._ShowNextPair()
File "hydrus\client\gui\canvas\ClientGUICanvas.py", line 3442, in _ShowNextPair
  while not pair_is_good( self._batch_of_pairs_to_process[ self._current_pair_index ] ):
>>8494 I have had a report from another user about a situation a bit similar to yours related to the file service that holds repository update files. I am going to investigate it this week, please check the changelog for 487. I can't promise anything, but I may discover a bug where some files aren't being cleanly removed from services at times and have a fix. >>8496 Yes, hit up options->gui pages and check the new preview-click focus options. Note that shift-click is a bit more clever now, too--if you go backwards, you can 'rewind' the selection. >>8499 Yeah, I like to highlight neat new apps in the release posts or changelogs. I do not make any of the apps, but I am thinking of integrating 'do stuff with this other client' tech into the client itself, so you'll be able to browse a rich central client with a dumb thin local client. Timeframe I can't promise. For me, it'll always be long. I'm expecting my 'big' jobs for the next 12-18 months to be a mix of server improvements, smart file relationships, and probably a downloader object overhaul. I'll keep working on Client API improvements in that time in my small work, and I know the App guys are still working, so I just expect the current betas to get better and better over time, a bit like Hydrus, with no real official launch. Check in again on the links in the Client API help page in 4-6 months, is probably a good strategy.
>>8547 >If there is no URL data, (e.g. I imported it from my hard drive), and I remove the tags from the files before deletion, and then use the option to make Hydrus forget previously deleted files, would I "mostly" be OK? It depends on what 'OK' means, I think. If you want to remove the hash record, sure, you can delete it if you like, but you might give yourself an error in two years when some maintenance routine scans all your stuff for integrity or something. Renaming the hash to a random value would be better. Unfortunately I just don't have a scanning routine in place yet to categorise every possible reference to every hash_id in your database to automatically determine when it is ok to remove a hash, and then to extend that to enable a complete 'ok now delete every possible connection so we can wipe the hash' command. Telling hydrus to remove a deletion record only refers to the particular file domain where the file was deleted from. It might still be present in other places, and other services, like the PTR, may still have tags for it. It basically goes to the place in the database where it says 'this file was deleted from my files ten days ago' and removes that row. If you really really need this record removed, please don't rebuild your whole client. Make a backup (which means making a copy of your database), then copy/paste my routine into the sqlite terminal exactly, then try booting the client. If all your files are fucked, revert to the backup, but if everything seems good, then it all went correct. Having a backup means you can try something weird and not worry so much about it going wrong. More info here: https://hydrusnetwork.github.io/hydrus/getting_started_installing.html#backing_up
>>8553 The nuclear way to fix this sort of problem, if it is a miscounting situation, is database->regenerate->tag storage mappings cache (all, deferred...). If the bad tag counts here are on the PTR, this operation could take several hours unfortunately. If the tags are just your 'my tags' or similar, it should only be a couple of minutes. Once done, you'll have to wait for some period for your siblings and parents to recalculate in idle time. But even if that fixes it, it does not explain why you got the miscount in the first place. I think my recommendation is see if you can find a miscounted tag which is on your 'my tags' and not on the PTR in any significant amount. A 'my favourites' kind of tag, if you have one. Then regen the storage cache for that service quickly and see if the count is fixed after a restart. If it is, it is worth putting the time into the PTR too. If it doesn't fix the count, let me know and we can drill more into what is actually wrong here. >>8555 Damn, thank you, I will look into this.
>>8565 This seems to have fixed it, thank you! However, it's left quite a few unknown tags. I guess those tags were broken, which was the problem in both my counts and parent/siblings. Is there any way to restore those "unknown tag" namespaced tags, or is it better to just try to replace them one by one?
>>8563 Here are some samples of WavPack from the web: https://telparia.com/fileFormatSamples/audio/wavPack/ But just in case, I attached a short random laugh compressed with a recent release of the encoder on Linux. The format seems to have the magic number "wvpk", as stated on wikipedia and in the github repo: https://github.com/dbry/WavPack/blob/master/doc/WavPack5FileFormat.pdf
Will it be possible at some point to edit hydrus images without needing to import it as a brand new image? It's annoying opening images in an external editor, making the edit, saving the image, importing said image, transferring all the tags back onto it, and then deleting the old version when all I'm doing usually is cropping part of it.
I had an ok week. I didn't have time to get to the big things I wanted, but I cleared a variety of small bug fixes and quality of life improvements. The release should be as normal tomorrow.
>>8555 Happens to me when I choose to delete one or both pictures of the last pair presented. The picture that is supposed to be deleted stays on screen and the window needs to be closed. Hydrus then spits out errors like "IndexError - list index out of range" or "DataMissing". I believe cloning the database with the sqlite program clears the error until one chooses to delete the last pair of duplicates again. Thanks for the hard work.
How long until duplicates are shown properly? Also, is transitive duplicate sorting (as in, files which aren't possible duplicates of each other but have duplicates in common) on the to-do list?
>>8563 nice, hopefully the rules come soonish, it would make going through them a bit easier. definitely want to check out some things in 487, as they are things I made workarounds for, like pushing the images to a page. I currently have a rating that does something similar for when I want to check a file a bit closer, be it a comic page I want to reverse search or something I want to see where it came from. this may be a better option.
>switch to arch linux from windows
>get hydrus running
>use retarded samba share on nas for the media folder
>permission error from the subscription downloader
>can view and search my images fine otherwise, in both hydrus and file manager
Any idea which permissions would be best to change? I'm retarded when it comes to fstab and perms, but I know not to just run everything as root. I just can't figure out if it's something like the executable's permissions/owner, the files' permissions/owner, or something retarded in how I mount it. Pictured are the error, the fstab entry, the hydrus client's permissions, and what the permissions for everything in the samba share are. The credentials variable in fstab is a file that only root can read, for slight obfuscation of credentials according to the internet. The rest to the right was stuff I added to allow myself to manipulate files in the samba share, again just pulled from random support threads.
>>8618
>Happens to me when I choose to delete one or both pictures of the last pair presented. The picture that is supposed to be deleted stays on screen and the window needs to be closed. Hydrus then spits out errors like "IndexError - list index out of range" or "DataMissing". I believe cloning the database with the sqlite program clears the error until one chooses to delete the last pair of duplicates again. Thanks for the hard work.
Appears fixed for me with v487 - Thanks.
Perhaps another bug: >file>options>files and trash>Remove files from view when they are sent to trash. Checking/unchecking has the desired result with watchers and regular files, but does not seem to work anymore with newly downloaded files published to their respective pages. Here, the files are merely marked with the trash icon but not removed from view, as they had been (for me) until version 484.
>>8627 It seems like I can manipulate files within the samba drive but it spits out an error when moving from the OS drive to there. So I guess it's some kind of samba caching problem.
I have noticed some odd non-responsiveness with the program. It is hosted on an SSD. While in full-screen preview browsing through files to archive or delete, sometimes the program will stop responding for approximately 10 seconds when browsing to the next file (usually a GIF but not always). The next file isn't large or long or anything. I'm not sure what's causing this issue. Is it just the program generating a new list of thumbnails?
>>8641 I also wanted to note this issue is not unique to this most recent update. It has been there for a while.
>>8641 >>8642 I guess I should also reiterate that the program AND the database are both hosted on the same drive (default db location)
well this is a first, a png in a pixel-for-pixel match against a jpeg was the smaller one... I'm guessing that jpeg is hiding something.
>>8618 >>8630 Great, thanks for letting me know. >>8619 I expect to do a big push on duplicates in Q4 this year or Q1 2023. I really want to have better presentation, basically an analogue to how danbooru shows 'hey, this file has a couple of related files here (quicklink) (quicklink)'. Estimating timeframes is always a nightmare, so I'll not do it, but I would like this, and duplicates are a popular feature for the next 'big job'. At the moment, there is a decent amount of transitive logic in the duplicates system. If A-dup-B, and B-dup-C, then A-dup-C is assumed. Basically duplicates in the hydrus database are really a single blob of n files with a single 'best' king, so when you say 'this is better' you are actually merging two blobs and choosing the new king. I have some charts at the bottom of this document if you want to dive into the logic some more. https://hydrusnetwork.github.io/hydrus/duplicates.html#duplicates_advanced But to really get a human feel for this, I agree, we need more UI to show duplicate relationships. It is still super technical, opaque, and not fun to use. >>8627 >>8636 I'm afraid I am no expert on this stuff. The 'utime' bit in that first traceback is hydrus trying to copy the original file's modified time from a file in your temp directory to the freshly imported file in the hydrus file system, so if the samba share has special requirements for that sort of metadata modification, that's the best place to look, I think.
>>8643 >>8647 There is no downloading or synching being done. Client is basically running stock, with no tags or anything (not even allowed to access the internet yet). Think it might be AV? Running Kaspersky on Low (uses very little resources for automated scanning).
>>8648 >>8647 Also, no active running imports. Just an open import window with about 60k files for me to sift through.
>>8649 >>8647 I tried it with an exclusion for the entire Hydrus folder for automated scanning, but the problem persists, so I don't think it's AV related.
Would it be possible to add a sort of sanity check to modified times to prevent obviously wrong ones from being displayed? I've noticed a few files downloaded from certain sites since modified times were added to Hydrus show a modified time of over 52 years, which makes me think that files from sites which don't supply a time are given a 0 epoch second timestamp. In this case I think it would be better to show a string like "Unknown modification time" or none at all.
>>8652 Also, if I try to download the same file from a site that does have modified times, the URL of the new site is added but the modified time stays the incorrect 52 years. Maybe there could be an option to replace modified times for this query/always if new one found/only if none is already known (or set to 1970). I also couldn't find a way to manually change modified time, but maybe I didn't look hard enough.
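Not hydrus code, but to illustrate the kind of sanity check being suggested above, here is a small Python sketch; the cutoff year and the function name are arbitrary choices for illustration only:

```python
# Reject obviously-bogus HTTP modified times, e.g. the near-epoch
# "Thu, 01 Jan 1970 00:00:01 GMT" some hosts send when they have no real date.
from datetime import datetime, timezone
from email.utils import parsedate_to_datetime

SANITY_CUTOFF = datetime(1980, 1, 1, tzinfo=timezone.utc)  # arbitrary cutoff

def parse_modified_time(last_modified_header):
    """Return a datetime, or None if the header is missing or clearly junk."""
    if not last_modified_header:
        return None
    try:
        dt = parsedate_to_datetime(last_modified_header)
    except (TypeError, ValueError):
        return None
    if dt.tzinfo is None:
        dt = dt.replace(tzinfo=timezone.utc)
    if dt < SANITY_CUTOFF or dt > datetime.now(timezone.utc):
        return None  # caller can show 'unknown modification time' instead
    return dt
```

A file whose every source returns None here would simply show no modified time at all.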
>>8647 I would send it to ya but I dumped the trash before I saw your response. so far I have seen a few of these, if I find another I'll send it to ya.
>>8656 Update on this issue: I tried exporting all my parent tags, then deleting all the parent tag configurations and using the database > regenerate > tag storage mapping cache (all), which caused the "maintenance" window to indicate there's no work to do. I then added back in one parent tag from my original set (that only applied to 5 files in the repository) and the "maintenance" window says there's now one parent to sync, but isn't actually processing that one parent.
>>8648 >>8649 >>8650 Hmm, if you have a pretty barebones client, no tags and no clever options, then I am less confident what might be doing this. I've seen some weird SSD driver situations cause superlag. I recommend you run the profile so we can learn more. >>8652 >>8655 Thanks, can you point me to some example URLs for these? I do have a sanity check that is supposed to catch 1970-01-01, but it sounds like it is failing here. The good news is I store a separate modified time for every site you download from, so correcting this retroactively should be doable and not too destructive. I want to add more UI to show the different stored modified times and let you edit them individually in future. At the moment you just get an aggregated min( all_modified_times ) value.
>>8656 >>8662 Damn, this is not good. I'm sorry for the trouble and annoyance. Have you seen very slow boots during this? That thumbnail cache is instantiated during an early stage of boot, so it looks like the sibling/parent sync manager is going bananas as soon as it starts. I have fixed the bug, I think, for tomorrow's release. That may help your other issue, which is the refusal to finish outstanding work, but we'll see. Give tomorrow's release a go, and if it gets to a '95% done' mode again and won't do the last work, please try database->regenerate->tag parents lookup cache. While the 'storage mappings cache' reset will cause the siblings and parents to sync again, the 'lookup' regen actually does the mass structure that holds all the current relationships. It sounds like I have a logical bug there when you switch certain parents around. You don't have to say the exact tags if you don't want, but can you describe the exact structure of the revisions you made here? Was it simply flipped parent-child relationships, so you had 'evangelion->ayanami rei', and it should have been 'ayanami rei->evangelion'? Were there any siblings involved with the tags, and did the parent tags that were edited have any other parent relationships? I'm wondering if there is some weird cousin loop I am not detecting here, or perhaps detecting but not recognising as creating outstanding sync work. Whatever the case, let me know how you get on with this!
I had a good week. I did some simple work to make a clean release before my vacation. The release should be as normal tomorrow.
>>8665 Yes, I did have a few very slow startups: a few times it took like two hours for the UI to show, though I could see the process was indeed started in task manager. Thanks; I'll try tomorrow's release and see if that helps anything. Parent-tag-wise, the process I think I was doing right before it failed was I had a bunch of things tagged with something generic, which had one level of namespacing (e.g. "location:outdoor"), and I decided to make a few more-specific tags (e.g. "location:forest", "location:driving", and "location:beach"; all of which should also get "location:outdoor" as a "parent"). But I first created the parent relationship the wrong way and didn't notice it (so everything that was "outdoor" would now get three additional tags added to it). I saved the parent config and started manually re-tagging (e.g. remove "outdoor" and add "beach" for those that were in that subgroup), and after doing a few I noticed the F3 tagging window wasn't showing the "parent" tag yet (wasn't showing "outdoor" nested under "beach"), and so I went back to the tag manager and realized they were wrong, so deleted the relationship and re-added them the right way and continued re-tagging. After a while I noticed it still hadn't synced, and realized it didn't seem to be progressing any more, and started triaging to see if it was a bug. None of them had siblings defined.
>>8664 >Thanks, can you point me to some example URLs for these? It looks like this is only affecting permanent booru. I'm using pic related posted in one of these threads. Here's a SFW example URL: http://owmvhpxyisu6fgd7r2fcswgavs7jly4znldaey33utadwmgbbp4pysad.onion/post/3742726/bafybeielnomitbb5mgnnqkqvtejoarcdr4h7nsumuegabnkcmibyeqqppa It may be of note that the "direct file URL" is from IPFS, and the following onion gateway URL is added to the file's URLs as well: http://xbzszf4a4z46wjac7pgbheizjgvwaf3aydtjxg7vsn3onhlot6sppfad.onion/ipfs/bafybeielnomitbb5mgnnqkqvtejoarcdr4h7nsumuegabnkcmibyeqqppa The same file is available here with a correct modification time (2022-02-27): https://e621.net/posts/3197238 The modified time in the client shows 52 years 5 months, which is in January 1970. Not sure if there's an easy way to see the exact time.
>>8645
>but I hope we'll have smooth and effective 'copy all metadata from this file to this file' tech
Couldn't you just make a temporary "import these files and use _ as _ to find alternates, then do _ if _" for now? Like "import these files and use the filename as the original file hash, then set imported as better and delete the other if imported is smaller"? I mean it sounds like too much when you write it out like that, but the underlying logic should be pretty simple.
trying to use Hydrus for the first time; is there a way to add a subscription for videos specifically? So that it leaves out photos?
>>8675 Have a nice vacation OP and watch out for fucking normies.
id:6549088 from gelbooru (nsfw), with the download decompression bomb check deactivated. When downloading this specific picture, before it finishes downloading, it makes the program jump to 3 gb of ram until I close it. It opens normally in a browser, but spikes to 3 gb in hydrus, and since I only have 4 gb it makes the pc freeze. Just wanted to report that. Also, not a native english speaker here.
>>8679 forgot, using version 474
>>8668 Reporting in that v488 seems to have fixed both these bugs. There's no longer the thumbnail exception being logged, the startup time to get to a UI window is quicker, and the parent-sync status un-stuck itself. Hooray!
>>8645 This is about what I figured. I pulled the database from a dying hard drive a few months ago. Every integrity scan between now and then ran clean, but I had a suspicion something had gotten fucked up somewhere along the line. Since it's been a minute, any backups are either also corrupted, or too old to be useful. Luckily, re-constructing them hasn't been too painful. I made an "unknown tag:*anything*" search page, then right-click->search individual tags to see what's in them. Most have enough files in to give context to what it used to be, so I'll just replace it. It's been a good excuse to go through old files, clean up inconsistent tags, set new and better parent/sibling relationships, etc, so it's actualy been quite pleasing to my autisms. I had 80k files in with an unknown tag back when I started cleaning up, and now I'm down to just under 40k. I'm sure I've lost some artist/title tags from images with deleted sources, or old filenames, but all in all, it could be much worse.
Thanks man! Have a good vacation!
>>8676 if you're just subscribing to a booru, they will generally have a "video" tag. you can add "video" to the tag search.
>>8703 nope, not a booru. So there isn't a way to filter that. awh.
Is there any way to get Hydrus to automatically tag images with the tags present in the metadata? Specifically the tags metadata field, why whole collection was downloaded using Grabber.
>>8710 my*
>>8709 What website is it? You might be able to add to/alter the parser to spit out the file type by reading the json or file ending, then use a whitelist to only get certain file endings (i.e. videos)
I've been using hydrus for a while now and am in the process of importing all my files. Is there any downside to checking the "add filename? [namespace]" button while importing? I think I've got over 300k images, so it would create a lot of unique tags if that would be a problem.
About how long do you estimate it might take before hydrus will be able to support any file type? I specifically need plaintext files and html files (odd, I know), if that makes a difference. The main thing is just that it'd be nice for me to have all my files together in hydrus instead of needing to keep my html and (especially) my text files separate from the pics and vids. Also, I'm curious: why can't hydrus simply "support" all filetypes by just having an "open externally" button for files that it doesn't have a viewer for? It already does that for things like flash files, after all.
>>8627 >>8636 >>8646 It seems to be working now. not sure what changed, but somehow arch doesn't always mount the samba directory anymore and needs a manual command on boot now, which it didn't before. Maybe it was some hiccup, maybe some package I happened to install as I installed more crap, maybe it was a samba bug that got updated.
Is there a way to reset the file history graph, under Help?
>>8668 >>8681 Great, thanks for letting me know! >>8671 Thank you. The modified date for that direct file was this: Last-Modified: Thu, 01 Jan 1970 00:00:01 GMT I thought my 'this is a stupid date m8' check would catch this, but obviously not, so I will check it! Sorry for the trouble. I'll have ways to inspect and fix these numbers better in future. >>8674 I'm sorry to say I don't understand this: >"import these files and use the filename as the original file hash, then set imported as better and delete the other if imported is smaller" But if you mean broadly that you want some better metadata algebra for mass actions, I do hope to have more of this in future. In terms of copying metadata from one thing to another, I just need to clean up and unify and update the code. It is all a hellish mess from my original write of the duplicates system years ago, and it needs work to both function better and be easier to use
>>8676 >>8703 >>8709 >>8716 In the nearish future, I will add a filetype filter to 'file import options', just like Import Folders have, so you'll be able to do this. Sorry for the trouble here, this will be better in a bit! >>8679 >>8680 I'm sorry, are you sure you have the right id there? gif of the frog girl from boku no hero academia? I don't have any trouble importing or viewing this file, and by it looks it doesn't seem too bloated, although it is a 30MB gif, so I think your memory spike was something else that happened at the same time as (and probably blocked) the import. Normally, decompression bombs are png files, stuff like 12,000x18,000 patreon rewards and similar. I have had several reports of users with gigantic memory spikes recently, particularly related to looking at images in the media viewer. I am investigating this. Can you try importing/opening that file again in your client and let me know if the memory spike is repeatable? If not, please let me know if you still get memory spikes at other times, and more broadly, if future updates help the situation. Actually, now I think of it, if you were on 474, I may have fixed your gigantic memory issue in a recent update. I did some work on more cleanly flushing some database journal data, which was causing memory bloat a bit like you saw here, so please update and then let me know if you still get the problem. >>8688 Good luck!
>>8723 Great, let me know how things go in future! >>8725 What part would you like to 'reset'? All the data it presents is built on real-world stuff in your client, like actual import and archive times. Do you want to change your import times, or maybe clear out your deleted file record?
I had a good week. I did a mix of cleanup and improvements to UI and an important bug fix for users who have had trouble syncing to the PTR. The release should be as normal tomorrow.
when trying to do a file relationship search, is there a way to search for same quality duplicates. I don't see any way to do that, and every time I look at the relationships of a file manually, it's always a better/worse pair. Does Hydrus just randomly assign one of the files as being better when you say that they're the same quality?
>>8743 Yes, 'same quality' actually chooses the current file to be the better, just as if you clicked 'this is better', but with a different set of merge options. The first version of the duplicate system supported multiple true 'these are the same' relationships, but it was incredibly complicated to maintain and didn't lend itself to real world workflows, so in the end I reinvented the system to have a single 'king' that stands atop a blob of duplicates. I have some diagrams here: https://hydrusnetwork.github.io/hydrus/duplicates.html#duplicates_advanced I don't really like having the 'this is the same' ending up being a soft 'this is better', but I think it is an ok compromise for what we actually want, which is broadly to figure out the best of a group of files. If they are the same quality, then it doesn't ultimately matter much which is promoted to king, since they are the same. I may revisit this topic in a future iteration of duplicates, but I'm not sure what I really want beyond much better relationship visibility, so you can see how files are related to each other and navigate those relationships quickly. Can you say more why you wanted to see the same quality duplicate in this situation? Hearing that user story can help me plan workflows in future.
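To make the 'one blob, one king' bookkeeping concrete, here is a toy Python sketch of the logic described above - an illustration only, not hydrus's actual schema or code:

```python
# Toy model: every file belongs to exactly one duplicate group, and each group
# has exactly one 'king'. 'Better' and 'same quality' both merge groups;
# 'same quality' simply keeps the currently shown file as the new king.
class DuplicateGroups:
    def __init__(self):
        self._group_of = {}   # file -> group id
        self._members = {}    # group id -> set of files
        self._king = {}       # group id -> king file

    def _ensure(self, f):
        if f not in self._group_of:
            self._group_of[f] = f
            self._members[f] = {f}
            self._king[f] = f

    def set_better(self, better, worse):
        # Merge the two groups; 'better' becomes king of the merged group.
        self._ensure(better)
        self._ensure(worse)
        g_win, g_lose = self._group_of[better], self._group_of[worse]
        if g_win != g_lose:
            for f in self._members[g_lose]:
                self._group_of[f] = g_win
            self._members[g_win] |= self._members.pop(g_lose)
            del self._king[g_lose]
        self._king[g_win] = better

    def set_same_quality(self, current, other):
        # As described above: a soft 'current is better'.
        self.set_better(current, other)

    def king_of(self, f):
        return self._king[self._group_of[f]]
```

With this shape, marking A-B and B-C as duplicates automatically lands A and C in the same group, which is the transitivity described earlier in the thread.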
>>8151 What do I do for this? I'm just trying to have my folder of 9,215 images tagged.
What installer does Hydrus use? I'm trying to set up an easy updating script with Chocolatey (since whoever maintains the winget repo is retarded).
>>8755 Figured it out, Github artifacts shows InnoSetup. Too bad Chocolatey's docs are half fucking fake and they don't do shit unless you give them money. This command might work, but choco's --install-arguments command doesn't work like the fuckwads claim it does. choco upgrade hydrus-network --ia='"/DIR=C:\x\Hydrus Network"'
>>8756 No, actually, that command doesn't work, because the people behind chocolatey are lying fucking hoebags. Seeing this horseshit, after THEY THEMSELVES purposefully obfuscated this bullshit, is FUCKING INFURIATING.
>>8745 The main thing I wanted to do is compare the number of files that were marked as a lower-quality duplicates across files from different url domains with files that aren't lower-quality duplicates (either kings, or alts, or no relationships) to see which domains tend to give me the highest ratio of files that end up being deleted later as bad-dupes, and which ones give me the lowest, so I know which ones I should be more adamant about downloading from, and which ones I should be more hesitant about. This doesn't really work that well if same-quality duplicates can also be considered "bad dupes" by hydrus, because that means I'm getting a bunch of files in the search that shouldn't be there, since they're not actually worse duplicates, but same-quality duplicates that hydrus just treats as worse arbitrarily. Basically, I was trying to create a ranking of sites that tend to give me the highest percentage of low-quality dupes and ones that give me the lowest. I can't do that if the information that hydrus has about file relationship is inaccurate though. It's also a bit confusing when I manually look at a file's relationships, because I always delete worse duplicates, but then I saw many files that are considered worse duplicates and I thought to myself "did I forget to delete it that time". Now this makes sense, but it still feels wrong to me somehow.
>>8757
>2022 and still using windoze
Time to dump the enemy's backdoor.
>>8753 The good catch-all solution here is to hit up services->review services and click 'refresh account' on the repository page. That forces all current errors to clear out and tries to do a basic network resync immediately. Assuming your internet connection and the server are ok again, it'll fix itself and you can upload again. >>8755 >>8756 >>8757 Yeah, Inno. There's some /silent or something commands I know you can give the installer to do it quietly, and in fact that's one reason the installer now defaults to not checking the 'open client' box on the last page, so some automatic installer a guy was making can work in the background. I'm afraid I am no expert in it though. If I can help you here, let me know what I can do. >>8758 Ah, yeah, sorry--there's no real detailed log kept or data structure made of your precise decisions. If you do always delete worse duplicates though, then I think you can get an analogue for this data you want. Any time you have a duplicate that is still in 'my files', you know that was set as 'same quality', since it wasn't deleted. Any time a duplicate is deleted, you know you set it as 'worse'. If you did something like: 'sort by modified time' (maybe a creator tag to reduce the number of results) system:file relationships: > 0 dupe relationships then you switch between 'my files' and 'all known files' (you need help->advanced mode on to see this), you'll see the local 'worse' (you set same quality) vs also the non-local worse (you set worse-and-delete), and see the difference. In future, btw, I'd like to have thumbnails know more about their duplicates so we can finally have 'sort files by duplicate status' and group them together a bit better in large file count pages. If you are trying to do this using manual database access in SQLite and want en masse statistical results, let me know. The database structure for this is a pain in the ass, and figuring out how to join it to my files vs all known files would be difficult going in blind.
>>8759
>Unironically being that guy
Buddy, you just replied to a reply about easier updating with something that would make it ten times harder. Not to mention that hilariously dated meme.
>>8760
Yeah, Choco passes /verysilent IIRC, and /DIR would work, but Powershell's quote parsing is fucking indecipherable, Choco's documentation on the matter is outright wrong, and I can't 'sudo' in cmd. I'm considering writing a script to just produce update PRs for the Winget repo myself, since it's starting to seem like that would be easier, but I don't want to go through all of Github's API shit.
Pyside is nearly PyPy compatible (see https://bugreports.qt.io/browse/PYSIDE-535). What work would need to be done in Hydrus to support running under PyPy?
https://crypto.stackexchange.com/questions/41308/computational-assumption-for-the-extended-discrete-logarithm

# Computational Assumption For the extended discrete logarithm
Choose $P \in G_1$ and $s, a \in Z_q$ uniformly at random.
Let the attacker know $a, P$ and keep $s$ secret. Also, the following is given: $$sP,\qquad (a+s)^{-1}P$$
Individually,
From $sP$, trying to reveal $s$ is the discrete logarithm problem.
However, I don't know under which computational assumption I can argue that the value of $s$ cannot be revealed from $(a+s)^{-1}P$.
Moreover, is there any computational assumption for proving the secrecy of the value of $s$ given both $sP$ and $(a+s)^{-1}P$ together, rather than individually?
As there is no pairing operation, BDH, xBDH, wBDH cannot be used here.
Let $\mathbb{G}$ be a (multiplicatively written) group of order $q$ and let $g$ be a generator of $\mathbb{G}$. The $r$-SDH assumption (Strong Diffie-Hellman) [BB08] states that given $$g,g^x, g^{x^2}, \dots, g^{x^r}$$ as input, it is hard to compute a pair $(a, g^{1/(x+a)})$ for some $a \in \mathbb{Z}_q$.
Writing group $\mathbb{G}$ additively and letting $P$ be a generator of $\mathbb{G}$, the $r$-SDH assumption is: Given $$P, sP, s^2P, \dots, s^rP$$ as input, it is hard to compute a pair $(a, 1/(s+a)P)$ for some $a \in \mathbb{Z}_q$.
Your assumption is related to the $1$-SDH assumption in a cyclic subgroup of points on an elliptic curve over a finite field. It is however weaker as the attacker is given the value of $a$ (in the SDH assumption, the attacker is free to choose the value of $a$).
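To spell the relationship out (a restatement for clarity, not a reduction proof): the $1$-SDH problem gives the attacker $(P, sP)$ and asks for *any* pair $(a, (s+a)^{-1}P)$, whereas in the question one such pair is already handed over for a fixed, publicly known $a$, and the attacker is asked to recover $s$ itself: $$\text{given } \bigl(P,\; sP,\; a,\; (s+a)^{-1}P\bigr),\ \text{with the inverse taken modulo } q,\qquad \text{compute } s.$$ So the assumption needed is that giving away one valid SDH pair for a known $a$ still does not leak the discrete logarithm $s$.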
[BB08] D. Boneh and X. Boyen, Short Signatures Without Random Oracles and the SDH Assumption in Bilinear Groups, Journal of Cryptology, 21(2), pp. 149-177, 2008.
• How about using $G$ as additive group of order $q$? Can I use same assumption? – myat Nov 7 '16 at 5:25
• @myat: I added a description with additive notation. – user94293 Nov 7 '16 at 5:32
• it is hard to compute a pair $(a, g^{1/(x+a)})$ for some $a \in \mathbb{Z}_q$. In my case, as $a$ is known, can I use it? – myat Nov 7 '16 at 5:32
• In your recommended paper, $1/(x+a)$ value is computed using modulo p. But, here, no modulo value is used. Will this affect the security ? – myat Nov 7 '16 at 5:42
• @myat: In my answer, since $\mathbb{G}$ has order $q$, the value of $1/(x+a)$ is defined modulo $q$. The $r$-SDH problem is to find a pair $(a, [(s+a)^{-1} \bmod q]P)$. – user94293 Nov 7 '16 at 5:48 | 2020-08-07 04:08:37 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9135374426841736, "perplexity": 627.7336963773846}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-34/segments/1596439737152.0/warc/CC-MAIN-20200807025719-20200807055719-00135.warc.gz"} |
https://sudonull.com/post/30770-Types-of-infinities-and-brain-stem

# Types of infinities and brain stem
For this we need ZFC - Zermelo-Fraenkel set theory plus Choice. Choice is the axiom of choice, the most controversial axiom of set theory; it deserves a separate article. It is assumed that you know what the "power" (cardinality) of a set is. If not, google it - it is surely explained better elsewhere than I can manage here. I will only recall some
## Known facts
• The power of the set of integers is denoted by $\aleph_0$. This is the first infinite power; such sets are called countable.
• The power of any infinite subset of the integers - the primes, the even numbers, etc. - is also countable.
• The set of rational numbers, that is, of fractions p/q, is also countable; they can be enumerated by walking through them in a zigzag ("snake") pattern.
• For any power there is the powerset operation - taking the set of all subsets - which always produces a larger power than the original. This operation is often written as raising two to a power: for example, $2^{\aleph_0}$, the powerset of a countable set, has the power of the continuum (see the formulas right after this list).
• The power of the continuum is possessed by finite and infinite intervals, planar and solid figures, and even n-dimensional spaces as a whole.
• For ordinary mathematics, the next power after the continuum is practically never needed; usually all work happens with countable sets and sets of continuum power.
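As a compact summary of the facts above, in standard notation (added here for reference):

$$|\mathbb{N}| = \aleph_0, \qquad |\mathcal{P}(X)| = 2^{|X|} > |X| \ \text{(Cantor's theorem)}, \qquad |\mathbb{R}| = 2^{\aleph_0} = \mathfrak{c}.$$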
Now
## Little known facts
In ZFC, not all collections of elements can be sets. There are collections so large that allowing them to be sets leads to paradoxes. In particular, the "set of all sets" is not a set. However, there are set theories where such collections are allowed.
Further: a theory of sets of ... what objects? Numbers? Apples? Oranges? Oddly enough, ZFC does not need any objects at all. Take the empty set {} and agree that it means 0, denote 1 by {{}}, two by {{{}}}, and so on. {5,2} is {{{{{{{}}}}}}, {{{}}}}. From the integers we can build the reals, and from collections of reals - any shapes we like.
So set theory is ... how to put it ... a hollow theory. It is a theory about nothing - more precisely, about the ways you can nest braces inside each other.
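A toy illustration of this encoding (my own sketch; the helper names are invented), using Python frozensets to play the role of the braces:

```python
# Natural numbers as nothing but nested empty sets: 0 = {}, n+1 = {n}.
# frozenset is used so that sets can themselves be elements of other sets.
def encode_nat(n):
    s = frozenset()            # {} plays the role of 0
    for _ in range(n):
        s = frozenset([s])     # wrap one more pair of braces
    return s

def encode_set_of_nats(ns):
    # {5, 2} becomes {encode(5), encode(2)} - still just braces all the way down.
    return frozenset(encode_nat(n) for n in ns)

print(encode_nat(2))               # frozenset({frozenset({frozenset()})})
print(encode_set_of_nats({5, 2}))  # the {5,2} example from the text
```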
The only relation defined in set theory is $\in$ - the membership symbol. But what about union, difference, equality, etc.? These are all macros, for example:
$$A = B \;\Longleftrightarrow\; \forall x\,(x \in A \leftrightarrow x \in B)$$

That is, translated into plain language: two sets are considered identical when testing any element for membership in them always gives the same result.

Sets are not ordered, but this can be fixed: let the ordered pair (p, v) be {{p}, {p, v}}. Inelegant from a programmer's point of view, but quite enough for a mathematician. Now a set of parameter-value pairs defines a function, and the function is now also a set! Et voila! All of mathematical analysis, which works at the level of second-order languages - since it speaks not of the existence of numbers but of the existence of functions - collapses into a first-order language!
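For reference, the standard facts behind this trick (only sketched in the article): the Kuratowski pair really does behave like an ordered pair,

$$(p,v) := \{\{p\},\{p,v\}\}, \qquad (a,b) = (c,d) \iff a = c \text{ and } b = d,$$

and a function $f : X \to Y$ is identified with its graph $\{(x,y) \in X \times Y : y = f(x)\}$, a plain set of such pairs - so quantifying over functions becomes quantifying over sets.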
Thus, set theory is a spare theory with no objects and a single relation symbol, and yet it has absolutely monstrous power - without any new assumptions it generates from itself formal arithmetic, the real numbers, analysis, geometry and much more. It is a kind of Theory of Everything for mathematics.
## Continuum Hypothesis - CH
Is there a power strictly between $\aleph_0$ and $2^{\aleph_0}$? Cantor could not solve this problem; Hilbert, the "king of mathematicians", stressed its importance, but only later was it proved that this hypothesis can neither be proved nor disproved: it is independent of ZFC.
This means that you can create two different mathematics: one with ZFC + CH, the other with ZFC + (not CH). In fact, even more than two. Suppose we reject CH, that is, we believe that between $\aleph_0$ and $2^{\aleph_0}$ there are additional powers. How many can there be? One, two? Gödel believed only one. But, as it turned out, the assumption that there are 2, 17, or 19393493 of them does not lead to contradictions. Almost anything goes - even infinitely many - though not absolutely anything: some values of the continuum are still ruled out.
When in formal arithmetic we come across an unprovable statement, for certain reasons we know that, nevertheless, this statement, although not provable, is actually either true or false. In set theory this does not work: we really do get different mathematics. How should we feel about this? There are three philosophical approaches:
Formalism: why be surprised, really? We set the rules of a game of symbols; different rules - different results. No need to look for a problem where none exists.
Platonism: but how then to explain that completely different theories, for example ZFC and New Foundations, built on completely different principles, almost always give the same results? Doesn't this mean that behind the formulas there is some kind of reality that we are studying? This view was held, for example, by Gödel.
Multiverse: we can have many axiom systems, sometimes giving the same results, sometimes not. We should perceive the picture as a whole - if a colour is associated with each system of axioms, then mathematics is the coloured tree of their consequences. If something is true everywhere it is white, but there are also coloured branches.
## Higher and higher.
From here on, for simplicity, we will accept the continuum hypothesis, i.e. $2^{\aleph_0} = \aleph_1$ - it is very convenient. In fact, we will also accept a stronger axiom, the generalized continuum hypothesis: between $x$ and powerset($x$) there are never intermediate powers. Now we iterate the powerset and everything is simple: $$\aleph_0,\quad 2^{\aleph_0} = \aleph_1,\quad 2^{\aleph_1} = \aleph_2,\quad 2^{\aleph_2} = \aleph_3,\quad \dots$$
How far can we go? After an infinite number of iterations we get to $\aleph_\omega$ - the power whose index is itself infinite! By the way, its existence was not obvious to Cantor. But wait a second! The powerset function is always defined, therefore $\aleph_\omega$ can't be the last!
To obtain $\aleph_{\omega+3}$, it is necessary to repeat powerset infinity-and-three-more times. Is this starting to blow your mind? It's only the beginning. Because again, having iterated powerset an infinite number of times, we get to $\aleph_{\omega+\omega} = \aleph_{\omega\cdot 2}$, after which, naturally, comes $\aleph_{\omega\cdot 2+1}$ ...
Having repeated "an infinite number of times" itself an infinite number of times, we obtain $\aleph_{\omega\cdot\omega} = \aleph_{\omega^2}$, with infinity in the index. And the indices keep growing: while we iterate powerset, the index runs through the ordinals. Here are the initial ordinals:
$$0,\ 1,\ 2,\ \dots,\ \omega,\ \omega+1,\ \dots,\ \omega\cdot2,\ \dots,\ \omega^2,\ \dots,\ \omega^\omega,\ \dots$$
but there are many, many more. So we will skip all of that right away and take a big step.
## Big step right away
Attention! What is written below may be dangerous for your brain! We iterated powerset a countable number of times, but why not push on to the continuum? Honestly, it unsettles me a bit that a loop can be executed a continuum of times, but set theory requires the existence of the resulting cardinalities.
Next we will go faster:
The innermost aleph of that tower has index zero, but the local LaTeX does not allow it to be typeset: there are too many nesting levels. The main thing is that you got the idea: no matter what monstrous new cardinality we create, we can say "aha, this is just a tower", and put the whole construction as the index of a new aleph. Now the cardinalities grow like a snowball, we cannot be stopped, the pyramid of alephs climbs higher and higher, and we can create any cardinality ... Or can we?
## Inaccessible cardinals
What if there is a cardinality so big that no matter how we try to reach it "from below", building constructions out of alephs, we will never achieve it? It turns out that the existence of such a cardinality, called an inaccessible cardinal, is independent of ZFC. You can accept its existence or not.
I hear the whisper of Occam's razor ... No, no. Mathematicians adhere to the opposite principle, which is called ontological maximalism: let everything that can exist, exist. But there are at least two more reasons why one would want to accept this hypothesis.
• Firstly, this is not the first inaccessible cardinality that we know. The first ... is the familiar countable cardinality $\aleph_0$. Oddly enough, it has all the properties of inaccessibility; it's just not customary to call it that:
• There is no way to get an infinite cardinality "from below": neither adding elements a finite number of times, nor iterating powerset() a finite number of times starting from finite sets as a seed, will ever give you infinity. To get infinity, you must already have it somewhere.
• The existence of an infinite cardinality is introduced by a special axiom, the axiom of infinity. Without it, the existence of an infinite set is unprovable.
Second: if we reject the axiom of infinity, we get FinSet, a simple toy set theory with only finite sets. Let's write down all these sets (the so-called model of the theory):
{}
{{}}
{{{}}, {}}
{{{{}}}}
{{{{}}}, {{}}}
{{{{}}}, {}}
{{{{}}}, {{}}, {}}
...
And we get ... an infinite set of finite sets ... That is, the model of the theory of finite sets is infinite, and it plays the role of the "set of all sets" for that theory. Maybe this helps to understand why a theory cannot talk about the "set of all sets": such a set always exists as a model outside the theory and has different properties from the sets inside it. You cannot add the infinite to the theory of finite sets.
And yes, an inaccessible cardinal plays exactly this role of the "set of all sets" for ZFC itself. In this video, at the end, inaccessible cardinality is explained very beautifully, but we have to move on.
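A quick illustration of the enumeration above (my own sketch, not from the article): generating hereditarily finite sets level by level with frozensets never terminates, even though every individual set stays finite.

```python
from itertools import chain, combinations

def next_level(sets):
    """One powerset step: all subsets of the current finite family of sets."""
    elems = list(sets)
    return {frozenset(c) for c in chain.from_iterable(
        combinations(elems, r) for r in range(len(elems) + 1))}

level = {frozenset()}      # start from the empty set {}
for _ in range(4):
    level = next_level(level)
    print(len(level))      # 2, 4, 16, 65536: the supply of finite sets never runs out
```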
## Even further.
Of course, we can go further by iterating powerset above the inaccessible cardinal. After going through all the steps described above, building huge towers of alephs, we again run into an inaccessible cardinal (but now we do not need a new axiom: with the axiom of the existence of an inaccessible cardinal that we just added, this has become provable). And again and again.
Note that the arrow now means for us not applying the Powerset() function but GetNextInaccessible(). Otherwise everything looks very similar; we have:
Surely now we will be able to reach anything ... Or will we?
## Hierarchy of large cardinals
Yes, with GetNextInaccessible we run into a hyper-inaccessible cardinality. Its existence requires one more axiom. There are hyper-hyper-inaccessible cardinalities, and so on. But there are other ways to define a cardinality, not only through inaccessibility:
As a rule, behind each entry there is a whole endless hierarchy with an arbitrary number of hyper- prefixes and towers. However, the total number of formulas that can define inaccessible cardinals is not that big, because the number of formulas is countable! Therefore, sooner or later they run out. Where they end, a red line is drawn. Everything below this line is defined more shakily, albeit formally.
The red line itself marks the end of Gödel's universe (but do not forget that Gödel created TWO different universes), the universe of sets constructed from below using formulas. Cardinalities above the red line are called, hmm, "small", and those below it large.
The main idea behind them is that the universe of sets becomes so large that it begins to repeat itself, in various senses. Each line, as always, requires a separate axiom, or several. More interestingly, all this is not as useless as you might think. For example, the strongest axiom (rank-into-rank), in the very bottom line, is needed to prove a fact about Laver tables.
Below is a poll; the last answer option is explained here.
## Which point of view is closer to you:
• 25.8% Formalism 72
• 10.4% Platonism 29
• 19% Multiverse 53
• 44.6% Complex-difficult! 124 | 2021-04-16 22:54:22 | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 26, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8133107423782349, "perplexity": 809.5596318250707}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-17/segments/1618038092961.47/warc/CC-MAIN-20210416221552-20210417011552-00236.warc.gz"} |
http://math.stackexchange.com/questions/127838/calculating-int-0-infty-sinx2dx | # Calculating $\int_{0}^{\infty}\sin(x^{2})dx$
I am supposed, in an exercise, to calculate the above integral by integrating $f(z) = e^{-z^{2}}$ on the following countor:
I began by separating the path $\gamma$ into three paths (obvious from the picture), and parametrizing each as follows:
$\gamma_{1} : [0, R] \rightarrow \mathbb{C}$ with $\gamma_{1}(t) = t$
$\gamma_{2} : [0, \frac{\pi}{4}] \rightarrow \mathbb{C}$ with $\gamma_{2}(t) = Re^{it}$
$\gamma_{3} : [0, \frac{\sqrt{2}R}{2}] \rightarrow \mathbb{C}$ with $\gamma_{3}^{-}(t) = t + it$ (with reverse orientation).
Then we can say that $\displaystyle\int_{\gamma} f(z) dz = \displaystyle\int_{\gamma_{1}} f(z) dz + \displaystyle\int_{\gamma_{2}} f(z) dz - \displaystyle\int_{\gamma_{3}^{-}} f(z) dz = 0$ since the path is closed.
Now $\displaystyle\int_{\gamma_{1}} f(z) dz = \displaystyle\int\limits_{0}^{R} e^{-t^{2}} dt$. We also get $\displaystyle\int_{\gamma_{3}^{-}} f(z) dz = -(i + 1) \displaystyle\int\limits_{0}^{\frac{\sqrt{2}R}{2}}e^{-2it^{2}} dt$. After playing around with sine and cosine a bunch to evaluate that last integral, I get:
$$0 = \int\limits_{0}^{R} e^{-t^{2}} dt + \int\limits_{\gamma_{2}} f(z) dz - \frac{i + 1}{\sqrt{2}} \int\limits_{0}^{R} \cos(u^{2}) du + \frac{i - 1}{\sqrt{2}} \int\limits_{0}^{R} \sin(u^{2}) du$$
I could not evaluate the integral along the second path, but I thought it might tend to 0 as $R \rightarrow \infty$. Then taking limits and equating real parts we get
$$\frac{\sqrt{2 \pi}}{2} = \displaystyle\int\limits_{0}^{\infty} \sin(u^{2}) du + \displaystyle\int\limits_{0}^{\infty} \cos(u^{2}) du$$
If I could argue that the integrals are equal, I would have my result.. But how do I?
So I need to justify two things: why the integral along $\gamma_{2}$ tends to zero and why are the last two integrals equal.
-
Just as a comment $$\int\limits_0^\infty {\sin \left( {a{x^2}} \right)\cos \left( {2bx} \right)dx} = \sqrt {\frac{\pi }{{8a}}} \left( {\cos \frac{{{b^2}}}{a} - \sin \frac{{{b^2}}}{a}} \right)$$ $$\int\limits_0^\infty {\cos \left( {a{x^2}} \right)\cos \left( {2bx} \right)dx} = \sqrt {\frac{\pi }{{8a}}} \left( {\cos \frac{{{b^2}}}{a} + \sin \frac{{{b^2}}}{a}} \right)$$ – Peter Tamaroff Apr 4 '12 at 2:41
Your parametrisation of the third integral is rather complicated. Why not just write $\gamma_3:[0,R]\rightarrow \mathbb{C}$, $t\mapsto e^{\pi i/4}t$, traversed in reverse. Then the integral becomes $$\int_R^0 e^{-e^{\pi i/2}t^2}e^{\pi i/4}dt = \int_R^0 e^{-it^2}e^{\pi i/4}dt = e^{\pi i/4} \int_R^0 \cos t^2 - i \sin t^2 \,dt.$$ I am sure you can take it from there.
As for bounding the integral $\int_{\gamma_2}e^{-z^2}dz$, the length of the contour grows linearly with $R$. How fast does the maximum of the integrand decay? It's the standard approach, using the fact that $$\left|\int_\gamma f(z) dz\right|\leq \sup\{|f(z)|: z \in \text{ image of }\gamma\}\cdot \text{length of }\gamma.$$
-
$$\int_0^R e^{-t^2}dt= \frac{\sqrt{\pi}}{2}+o(1). \tag{1}$$
$$t\in\left[0,\frac{\pi}{4}\right] \implies |e^{-z^2}| = e^{-R^{\,2}\cos 2t} \le e^{-R^{\,2}\left(1-\frac{4t}{\pi}\right)} \implies \left|\int_{\gamma_2}e^{-z^2}dz\right|\le \int_0^{\pi/4} R\,e^{-R^{\,2}\left(1-\frac{4t}{\pi}\right)}dt=\frac{\pi}{4R}\left(1-e^{-R^{\,2}}\right)\to0 \tag{2}$$
$$\alpha=\int_0^M e^{it^2}dt \implies \int_0^M \sin(t^2)dt=\frac{\alpha-\overline{\alpha}}{2i}. \tag{3}$$
Try using these deductions for $\gamma_1$, $\gamma_2$ and $\gamma_3$ respectively.
-
An easy way to evaluate $\int_{0}^{\infty}\sin(x^{2})dx$
$$\int_0^{\infty}e^{-ax^2}dx=\frac{\sqrt{\pi}}{2\sqrt{a}}$$ Now replace $a\rightarrow ia$
$$\int\limits_0^\infty \cos \left( {a{x^2}}\right)dx-i \int\limits_0^\infty \sin \left( {a{x^2}}\right)dx= \frac{\sqrt{\pi}}{2\sqrt{a}\sqrt{i}}$$ But
$$\frac{1}{\sqrt{i}}= \frac{1}{\sqrt{2}}-\frac{i}{\sqrt{2}}$$ So
$$\int\limits_0^\infty \cos \left( {a{x^2}}\right)dx=\int\limits_0^\infty \sin \left( {a{x^2}}\right)dx= \frac{\sqrt{\pi}}{2\sqrt{2a}}$$
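As a quick numerical cross-check of that common value (my addition, not part of the original answers), scipy's Fresnel integrals give $\sqrt{\pi/8}\approx 0.6267$ for both $\int_0^\infty\sin(x^2)\,dx$ and $\int_0^\infty\cos(x^2)\,dx$:

```python
import numpy as np
from scipy.special import fresnel

# scipy convention: S(z) = int_0^z sin(pi*t^2/2) dt, C(z) = int_0^z cos(pi*t^2/2) dt.
# Substituting x = t*sqrt(pi/2) gives int_0^inf sin(x^2) dx = sqrt(pi/2) * S(inf).
S, C = fresnel(1e6)              # S and C approach 1/2 as z -> infinity
print(np.sqrt(np.pi / 2) * S)    # ~0.62666  (sine integral)
print(np.sqrt(np.pi / 2) * C)    # ~0.62666  (cosine integral)
print(np.sqrt(np.pi / 8))        # closed form sqrt(pi/8)
```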
- | 2013-05-20 07:46:14 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9809872508049011, "perplexity": 253.4608191267923}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368698554957/warc/CC-MAIN-20130516100234-00098-ip-10-60-113-184.ec2.internal.warc.gz"} |
https://www.physicsforums.com/threads/geometry-problem-involving-dot-cross-product.341248/ | # Geometry problem involving dot/cross product
1. Sep 28, 2009
### hanelliot
1. The problem statement, all variables and given/known data
Let r be a line and pi be a plane with equations
r: P + tv
pi: Q + hu + kw (v, u, w are vectors)
Assume v · (u x w) = 0. Show that either r ∩ pi = zero vector or r belongs to pi.
2. Relevant equations
n/a
3. The attempt at a solution
I get the basic idea behind it but I'm not sure if my "solution" is formally good enough. I know that cross product gives you a normal to a plane, which is u x w here. I also know that the dot product = 0 means that they are perpendicular to each other.. so v and (u x w) are perpendicular to each other. If you draw it out, you can clearly see that r must be either in pi or out of pi. Is this good enough to warrant a good mark? Thanks!
1. The problem statement, all variables and given/known data
2. Relevant equations
3. The attempt at a solution
2. Sep 28, 2009
### lanedance
if r is parallel to pi, but not "in" pi their intersection will not be the zero vector, it will be the empty set
try the separate cases when P is "in" pi, and when P is not "in" pi, and see if you can show whether any arbitrary point on r is in, or not in, pi, knowing that v·(uXw) = 0, i.e. v = a·u + b·w for some constants a and b
3. Sep 29, 2009
### hanelliot
you are right, my mistake
I can show that with intuition and a sketch of plane/line but not sure how I should go about proving it formally.. maybe this Q is that simple and I'm overreacting
4. Sep 29, 2009
### lanedance
as you know
v.(u x w) = 0
then
v = au + bw
for constants a & b - why?
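To see that decomposition concretely (my own numerical sketch, not part of the thread), pick vectors with v · (u x w) = 0 and recover a and b:

```python
import numpy as np

u = np.array([1.0, 0.0, 2.0])
w = np.array([0.0, 1.0, -1.0])
v = 2.0 * u - 3.0 * w                    # lies in the plane spanned by u and w

print(np.dot(v, np.cross(u, w)))         # 0.0: v is perpendicular to the plane's normal

# Recover the coefficients in v = a*u + b*w by least squares
A = np.column_stack([u, w])
(a, b), *_ = np.linalg.lstsq(A, v, rcond=None)
print(a, b)                              # approximately 2.0, -3.0
```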
equation of your line is P + vt
now, i haven't tried, but using your equation of a line & the equation of a plane, considering the following cases, should show what you want:
Case 1 - P is a point in pi. Now show any other point on the line, using the line equation, satisfies the equation defining pi.
Case 2 - P is not a point in pi. Now show any other point on the line, using the line equation, does not satisfy the equation defining pi. | 2018-01-24 09:51:26 | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8255792856216431, "perplexity": 620.2501948871325}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-05/segments/1516084893629.85/warc/CC-MAIN-20180124090112-20180124110112-00589.warc.gz"} |
https://hranalytics101.com/a-simple-model-with-major-insights-on-selection-bias/ | ## A Simple Model with Major Insights on Selection Bias
I mentioned earlier that I have been reading (and rereading) Scott Page’s masterful book “The Model Thinker”. If you are serious about understanding models and data as either a leader or as an analyst, it’s a valuable and accessible read.
Today I want to take an example from Chapter 3 about biased selection processes and how even somewhat moderate biases can quickly compound to yield major differences in long-term outcomes. The example domain is the number of male v. female CEOs but the principles apply broadly.
If you learn something in the following, give credit to Scott Page….and if you see any errors, blame me.
## Female CEOs and Selection Processes
Here is the basic setup:
1. Suppose that to reach the level of CEO, one must be promoted 15 times over the course of 30 years.
2. Further suppose that the probability of a male being promoted in a given two-year period ($$P_m$$) is .5 while that of a female ($$P_f$$) is .4.
Males in this hypothetical process would therefore enjoy a 25% advantage in promotion probability (.5/.4 = 1.25), a non-trivial leg up but perhaps not seen as absurdly high on the surface.
But then we think about the setup some more, realizing that this 50%/40% advantage plays out over and over, once at each of the 15 possible promotion points. If we play out this simplified probabilistic process 15 times over 30 years, we would end up with close to 30 times as many male CEOs as female CEOs.
Really? 30:1 from a 10% difference?
It turns out we can frame this as an $$X^n$$ problem, where $$X$$ is the probability of promotion and $$n$$ is the number of promotion steps. Applied to our hypothetical case we get the following probabilities for reaching CEO:
1. Males: $$.5^{15}$$
2. Females: $$.4^{15}$$
For reference, this is just like the probability of flipping a coin and getting a string of heads: we have a 50% chance of getting heads on the first flip, a 25% chance of getting two heads on the first two flips($$.5^2$$), a 12.5% chance of getting three heads on the first three flips ($$.5^3$$) and so on.
We are just applying this same idea to our respective male and female promotion rates of .5 and .4.
What we get is that the chances of a male becoming a CEO are $$.5^{15}$$ v. females at $$.4^{15}$$.
Independently, those numbers are both tiny (there are very few CEOs after all), but their ratio is roughly 28:1 in favor of males ($$.5^{15}/.4^{15}$$).
The figure below represents this simple process and the emerging differences through eight promotion steps (where area corresponds to the proportion promoted). I stopped at eight rounds to keep the circles visible.
As you can see, a repeated 50% to 40% advantage quickly shifts the total balance of promotions.
In hard numbers, think of it like the following:
• If you start with 1 million males in this process, you’ll end up with 30.5 CEOs at the end of 30 years ($$1,000,000 \times .5^{15} = 30.5$$)
• If you start with 1 million females, you’ll end up with 1.07 CEOs ($$1,000,000 \times .4^{15} = 1.07$$)
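A few lines of Python reproduce the arithmetic above (my own sketch of the calculation, not code from the original post):

```python
# Compounded selection advantage: per-step promotion probabilities, 15 steps to CEO.
p_male, p_female, steps = 0.5, 0.4, 15

print(1_000_000 * p_male ** steps)           # ~30.5 male CEOs per million entrants
print(1_000_000 * p_female ** steps)         # ~1.07 female CEOs per million entrants
print(p_male ** steps / p_female ** steps)   # ~28.4, the roughly 28:1 endpoint ratio
```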
## Time, Hierarchies, and Process
The lesson above is that small differences/biases/preferences can compound over time to create major differences in endpoint outcomes. The challenge for human cognition is that our daily intuitions and smaller-scale thinking make it impossible for us to immediately apprehend that a 28:1 difference can emerge from a repeated 50% to 40% selection advantage.
This is a strong reminder that we need thoughtful caution when trying to understand and address large differences at process endpoints. Said differently, big differences in output don’t necessarily imply big differences in input.
To further our understanding (based on another Page example) consider now a different subset of organizations in which the male-to-female CEO ratio is now 3:1, roughly one tenth of that in the above example.
Clearly the promotion process for this second group is moving toward something more fair/less imbalanced, right?
Not necessarily.
If we still require 15 promotions to be CEO then yes, this 3:1 ratio could only be met by a much smaller difference in promotion rates (here $$(.5/.465) = 1.075$$) which, when raised to the 15th power, gives us 3 (i.e. 3:1 males to females).
But what if our seemingly “fairer” 3:1 organizations are just flatter and therefore require fewer promotions to reach the CEO slot?
What if we keep the same .5/.4 promotion probability advantage for males but instead only require 5 promotions to become CEO instead of 15?
The math is direct: $$.5^{5}/.4^{5} = 3.05$$.
In this case then, our second set of organizations have a less skewed endpoint outcome (3:1 instead of 28:1) but it’s not because they have reduced the bias at each decision step. Rather, the same biased process is just played out fewer times. The results are better, but not for the reasons we expect.
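The same kind of quick check covers this flatter-organization comparison (again my own sketch):

```python
# Two routes to a roughly 3:1 CEO ratio:
print((0.5 / 0.465) ** 15)   # ~2.97: a much smaller per-step advantage over 15 steps
print(0.5 ** 5 / 0.4 ** 5)   # ~3.05: the original 0.5 vs 0.4 advantage over only 5 steps
```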
## Summary
Granting numerous caveats, I would suggest at least the following two lessons from our $$X^n$$ model in the context of possible selection bias/preference analysis:
1. Consider the possible role of compounded $$X^n$$-type processes when evaluating endpoint differences of interest at your organization
2. Consider how the same basic process can play out differently in different organizations or even in different business units. Start with the basic $$X^n$$ model, asking about the $$X$$ value for our groups of interest as well as the $$n$$ value. Remember too that organizations can differ by both $$X$$ and $$n$$ among many other differences.
To this I would add that correctly detecting, let alone solving, any actual problem with a genuinely biased selection process won’t come down to just estimating $$X$$ and $$n$$ and calling it a day.
It will likely be limited in scale.
It will also be subject to real-world contexts, constraints, and caveats.
All of this is absolutely guaranteed to muddy the waters.
And yet our basic $$X^n$$ model is still incredibly useful. Why?
First, it’s a tractable tool that forces us to think concretely about process, measurement, and outcome.
Second, as a result, we can apply the model to our HR data and get some rough insights on any differences in promotion rates, frequency, and steps required to reach a given level.
This then serves as a concrete starting point for discussions with managers, directors, and other leaders about consistency, talent processes, and talent process improvement.
Together this represents a major step towards developing mature HR analytics processes and improving human capital decisions. | 2022-07-01 08:26:44 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.4388850927352905, "perplexity": 1233.8427471328046}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-27/segments/1656103922377.50/warc/CC-MAIN-20220701064920-20220701094920-00475.warc.gz"} |
https://pos.sissa.it/334/203/ | Volume 334 - The 36th Annual International Symposium on Lattice Field Theory (LATTICE2018) - Physics beyond the Standard Model
Composite phenomenology as a target for lattice QCD
T. Degrand* and E. Neil
Full text: pdf
Published on: May 29, 2019
Abstract
Some recent beyond Standard Model phenomenology is based on new strongly interacting dynamics of $SU(N)$ gauge fields coupled to various numbers of fermions. When $N=3$ these systems are analogues of QCD, although the fermion masses are typically different from -- and heavier than -- the ones of real world QCD. Many quantities needed for phenomenology from these models have been computed on the lattice. We are writing a guide for these phenomenologists, telling them about lattice results. We'll tell you (some of) what they are interested in knowing.
DOI: https://doi.org/10.22323/1.334.0203
Open Access | 2023-01-26 22:15:07 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5082360506057739, "perplexity": 1608.6528845739879}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-06/segments/1674764494826.88/warc/CC-MAIN-20230126210844-20230127000844-00348.warc.gz"} |
https://socratic.org/questions/does-this-identity-exist-if-so-what-is-it-equivalent-to-sin-2-x-cos-2-x | # Does this identity exist? If so, what is it equivalent to? -sin^(2)x+cos^(2)x = ?
Mar 18, 2018
$\cos \left(2 x\right)$
#### Explanation:
Yes this is an identity:
$- {\sin}^{2} x + {\cos}^{2} x =$
${\cos}^{2} x - {\sin}^{2} x =$
$\cos x \cos x - \sin x \sin x =$
Look familiar yet?
$\cos x \cos x - \sin x \sin x = \cos \left(2 x\right)$
Mar 18, 2018
$- {\sin}^{2} x + {\cos}^{2} x$
$= {\cos}^{2} x - {\sin}^{2} x$
$= {\cos}^{2} x - \left(1 - {\cos}^{2} x\right)$
$= 2 {\cos}^{2} x - 1$
$= \cos \left(2 x\right)$ | 2019-03-19 16:45:37 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 10, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9420083165168762, "perplexity": 13417.622530348952}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-13/segments/1552912202003.56/warc/CC-MAIN-20190319163636-20190319185636-00307.warc.gz"} |
https://itecnotes.com/electrical/electronic-how-protective-diode-protects-transistor-from-breakdown/ | # Electronic – How protective diode protects transistor from breakdown
diodes, npn, reverse-breakdown, transistors
Please, explain the process of breakdown; how exactly this protective diode protects the transistor?
In the Horowitz & Hill book, "The Art of Electronics", 2nd edition, in "Chapter 2 – Transistors" (page 68), I read the following:
1. Always remember that the base-emitter reverse breakdown voltage for silicon transistors is small, quite often as little as 6 volts. Input swings large enough to take the transistor out of conduction can easily result in breakdown (with consequent degradation of hFE) unless a protective diode is added (Fig. 2.10).
Can't figure out how this diode protects the transistor from breakdown if the current goes only in one direction in this diode.
simulate this circuit – Schematic created using CircuitLab
The diode is in place to protect the transistor from reverse \$V_{BE}\$ breakdown. If you reverse-bias the base-emitter junction by taking the transistor's emitter to ~\$6\rm{V}\$ more positive than the base, it will break down and begin conducting.
This reverse breakdown damages the base-emitter junction, causing a degradation in \$h_{FE}\$.
By placing a diode as shown, the reverse-bias voltage is limited to ~\$0.7\rm{V}\$; attempts to apply more voltage will be futile, since the diode will conduct a lot of current and prevent increased voltage. This protects the transistor's base-emitter junction. | 2023-03-20 19:28:05 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6852763891220093, "perplexity": 4977.402106172773}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296943555.25/warc/CC-MAIN-20230320175948-20230320205948-00071.warc.gz"} |
https://www.physicsforums.com/threads/finding-acceleration-from-distance-and-time.858786/ | # Homework Help: Finding acceleration from distance and time
Tags:
1. Feb 22, 2016
### ajd2000
1. The problem statement, all variables and given/known data
A bullet with a mass of m=15.5 g is shot out of a rifle that has a length of L=1.02 m. The bullet spends t=0.16 s in the barrel.
Write an expression, in terms of given quantities, for the magnitude of the bullet's acceleration, a, as it travels through the rifle's barrel. You may assume the acceleration is constant throughout the motion.
So I'm supposed to write an acceleration equation using L and t but everything I try it says is wrong and the only hints are to use the given information and that theres a 2 somewhere in the numerator.
2. Relevant equations
I'm trying to find the equation. But I assume that v = d/t is used somehow and then a = (v(final) - v(initial)) / t is also used
3. The attempt at a solution
I tried just putting d/t (or L/t) where v goes in the acceleration equation but it says there's a 2 somewhere in the numerator that is also somehow related to L. So I tried squaring L and also L/t but neither of those was right and I also tried putting (2(L/t))/t which was also wrong. I only have one guess left before I just get a 0 on this part of the problem and the rest of the problem requires this formula (I tried to figure out the rest without doing this part but couldn't).
2. Feb 22, 2016
### Staff: Mentor
3. Feb 22, 2016
### Cozma Alex
In a uniformly accelerated motion, the distance travelled by a particle at any time is given by x(t) = (1/2)at^2
With the initial condition: V(initial)= 0 and x(initial)=0
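Following that hint, a short sketch of the final step (my addition, assuming the standard constant-acceleration model with zero initial velocity):

```python
# x(t) = 0.5 * a * t**2 with v_initial = 0, so a = 2 * L / t**2
L = 1.02    # barrel length in metres
t = 0.16    # time spent in the barrel in seconds

a = 2 * L / t**2
print(a)    # ~79.7 m/s^2
```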
Last edited: Feb 22, 2016 | 2018-07-22 23:05:48 | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8075721263885498, "perplexity": 761.1032363387823}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-30/segments/1531676594018.55/warc/CC-MAIN-20180722213610-20180722233610-00536.warc.gz"} |
https://amplifyblog.com/dlvmwfs/f3475b-11th-chemistry-evaluate-yourself-answers-2020 | Please confirm the official website for updated information or related authority. Tamilnadu State Board Samacheer Kalvi 11th Chemistry Book Volume 1 Solutions. CBSE printable worksheets for classes 3 to 8 with answers are also available for download with us. Bookmark this page to get upcoming updates. You can download the question paper solution and check your answers. Answer Key will be released in both the languages i.e. All Chapter 3 - Classification of Elements and Periodicity in Properties Exercises Questions with Solutions to help you to revise complete Syllabus and boost your score more in examinations. In Class 11, Samacheer kalvi books of Chemistry include several important topics like Basic Concepts of Chemistry and Chemical Calculations, Hydrogen, Chemical bonding, etc. Provide a timeline and location for all expected exercise events C. Document evaluator team roles, responsibilities, and assignments D. Provide instructions for controllers and evaluators TN 11th Public Exam Chemistry Answer Key 2020:- Directorate of Examination, Tamil Nadu conducts class 11th exam annually at various exam centers throughout the state.This year class 11th exam for science, arts, and commerce will be conducted from 4th March – 26th March 2020. CBSE practice papers and unit tests for SA1 and SA2. CBSE chapter-wise test papers with solution and answer key for cbse class-9, cbse class-10, cbse class-11 and cbse class-12. XI Chemistry Question Bank(Updated) XII Chemistry Question Bank(Updated) ... Could have become great if the answers were also included. Choose from 7 study modes and games to study Science. Finals begin Monday, December 14 and end Saturday, December 19, 2020. Leaves from outdoors can provide pigments. 12th Chemistry Volume 1 Study Material (Evaluate yourself - Questions with Answers) | Mr. S. Shanmugam - English Medium - Preview & Download (MAT.NO. 11th Question Paper Answer Key TN 11th Paper Answer Key 2019: Tamilnadu State Board of Higher Education is conducting 11th Standard examinations from March 6th to March 22nd, 2019. They will find all type of questions like short answer, long answer type question, assertion and reasoning questions. sir i want maths portion for common quarterly exam 2019-2020, Sir I want English portion for quarterly exam 2019-2020, sir please include physics question papers also, Sir u want physics important collect questions, sir i want quarterly biology question paper 2019, Pls send the 11 quarterly exam botany answer key, Chemistry answer key send pannunga sir plz, Bio-zoology answer key in Tamil medium send me plsssszzz, Sir, pls send 11 quarterly exam Accountancy answer key in English medium, Bio-zoology answer key in Tamil medium sent as soon as. The quick and accurate answer keys guide the students to know the exact answers and helps to prepare for the next day exams well. The maximum marks that can be scored are 720 marks and consists of 180 questions. Answer: λ = $$\frac{h}{mv}$$ Potential difference of an electron = V = 1 keV. Class 11th Chemistry exam will be conducted on 26th March 2020. Samacheer Kalvi 11th Chemistry Guide Book Back Answers. The Tamilnadu board examination department likely […] Mathematics, 11.12.2020 21:37 Gilbert wants to build a triangular flower garden by the front porch of his house. 
NCERT Exemplar Class 11 Chemistry is very important resource for students preparing for XI Board Examination.Here we have provided NCERT Exemplar Problems Solutions along with NCERT Exemplar Problems Class 11. Candidates stay connected to our portal here we will inform you when the answer key is officially released. Along with the syllabus, the marking scheme is also changed. Replies. Find correct step-by-step solutions for ALL your homework for FREE! Replies. Samacheer Kalvi 12th Chemistry Guide Book Back Answers. The NCERT solutions for Class 11 Physics Chapter 1 physical world will help students to get familiar with the exam pattern and the syllabus of the exam. So that the students and teachers can analyse the best one for their evaluation. The NCERT Solutions to the questions after every unit of NCERT textbooks aimed at helping students solving difficult questions. All those students who attempted the exam are eagerly waiting for the answer key to check their estimate scores. The lakhs of candidates appeared every year. To save you time we’ve developed material which helps you meet the new … Class 11 is a very crucial point and students are required to be thorough with all the topics in Chemistry as the concepts are required in Class 12. They will find all type of questions like short answer, long answer type question, assertion and reasoning questions. Here on AglaSem Schools, you can access to NCERT Book Solutions in free pdf for Chemistry for Class 11 so that you can refer them as and when required. Contact Us. Students can also read Tamil Nadu 11th Physics Model Question Papers 2019-2020 English & Tamil Medium. Features of NCERT Solutions for Class 11 Chemistry. Sir.. pls sent botony tamil quartaly exam answer key.. Bio zoology ans key in tamil medium upload pannuga plzzzzz. NCERT Solutions Class 11 Chemistry Chapter 1 Some Basic Concepts Of Chemistry. An Exercise Evaluation Guide (EEG) is used to: A. Grab the opportunity to find free assignment answers related to all subjects in your Academic. Price and stock details listed on this site are as accurate as possible, and subject to change. Find correct step-by-step solutions for ALL your homework for FREE! Question 1. NCERT Solutions Class 11 Chemistry Chapter 1 Some Basic Concepts Of Chemistry. Students if you face any problem to download the TN HSC Plus One Exam Answer Key 2020 then you can ask your queries by leaving the comment in the comment box below. You must read this article for all details related to the Karnataka PUC answer key 2020. The class 11 NCERT solutions for chemistry provided by BYJU’S feature: In-depth explanations for all logical reasoning questions. In Class 11, Samacheer kalvi books of Chemistry include several important topics like Basic Concepts of Chemistry and Chemical Calculations, Hydrogen, Chemical bonding, etc. Color table with atomic numbers, element symbols, element names, atomic weights, periods, and groups. 1) If 5.6 g of KOH is present in (a) 500 mL and (b) 1 litre of solution, calculate the molarity of each of these solutions. FHSST Authors The Free High School Science Texts: Textbooks for High School Students Studying the Sciences Chemistry Grades 10 - 12 Version 0 November 9, 2008 The Tamilnadu board examination department likely […] In chemistry class, we came to know the difference between a mixture, compound, and an element. Browse and find MILLIONS OF ANSWERS from Every Subject to Improve Your Grade. 
You can visit the official portal www.dge.tn.gov.in and download TN 11th Public Exam Chemistry Answer Key 2020. TN 11th Public Exam Chemistry Answer Key 2020:- Directorate of Examination, Tamil Nadu conducts class 11th exam annually at various exam centers throughout the state. 11th Question Paper Answer Key TN 11th Paper Answer Key 2019: Tamilnadu State Board of Higher Education is conducting 11th Standard examinations from March 6th to March 22nd, 2019. Replies. CBSE Class 11 Marking Scheme: The Central Board of Secondary Education (CBSE) has issued the CBSE Class 11 marking scheme for all subjects.CBSE has reduced the syllabus for Class 9 to 12 for the 2020-21 academic session due to COVID-19. 10th, 11th, 12th - First Revision Test 2020 - Question Papers & Answer Keys Download 11th Public Exam March 2020 - Question Papers, Answer Keys, Time Table Download 11th / +1 / Plus One - Public Exam March 2020 - Question Papers & Answer Keys & Time Table Download Answer: Samacheer Kalvi 11th Chemistry Quantum Mechanical Model of Atom In-Text Questions – Evaluate Yourself. Students preparing for the Class 11 final exam can check the new marking scheme here. Free PDF download of NCERT Solutions for Class 11 Chemistry Chapter 1 - Some Basic Concepts of Chemistry solved by Expert Teachers as per NCERT (CBSE) textbook guidelines. Chemistry is the study of matter and the changes it undergoes. English Medium Answer Key {Previous Year}. NCERT Solutions Class 11 Chemistry Chemistry Lab Manual Chemistry Sample Papers. Suresh -, 11th English - Original Question Paper for Quarterly Exam 2019 -, 11th English - Answer Key for Quarterly Exam 2019 Question Paper | Mr. R. Hendry Earnest Raja -, 11th English - Answer Key for Quarterly Exam 2019 Question Paper | Mr. M. Palanivel -, 11th English - Answer Key for Quarterly Exam 2019 Question Paper | Shri Krishna -, 11th English - Answer Key for Quarterly Exam 2019 Question Paper | Mr. N. Ramesh Kumar, SVB -, 11th English - Answer Key for Quarterly Exam 2019 Question Paper | WTS -, 11th Communicative English - Original Question Paper for Quarterly Exam 2019 | Mr. Samuel -, 11th Communicative English - Answer Key for Quarterly Exam 2019 | Mr. R. Hendry Earnest Raja -, 11th French - Original Question Paper for Quarterly Exam 2019 (Chennai District) | Mrs. Jeena Jabez -, 11th French - Original Question Paper for Quarterly Exam 2019 (Kanchipuram District) | Mrs. Jeena Jabez -, 11th Maths - Original Question Paper for Quarterly Exam 2019 -, 11th Maths - Answer Key for Quarterly Exam 2019 | Mrs. K. Anuradha -, 11th Maths - Answer Key for Quarterly Exam 2019 | Mr. G. Rajesh -, 11th Maths - Answer Key for Quarterly Exam 2019 | Mr. K. Muthukumar -, 11th Maths - Answer Key for Quarterly Exam 2019 | Mr. Prabhu George -, 11th Maths - Answer Key for Quarterly Exam 2019 | SVB -, 11th Maths - Answer Key for Quarterly Exam 2019 | Shri Krishna -, 11th Maths - Answer Key for Quarterly Exam 2019 | Mr. Prabhu George -, 11th Maths - Answer Key for Quarterly Exam 2019 | -, 11th Physics - Original Question Paper for Quarterly Exam 2019 | Mr. B. Balaji -, 11th Physics - Original Question Paper for Quarterly Exam 2019 | Mr. Srinivasan -, 11th Physics - Answer Key for Quarterly Exam 2019 | Mr. E.Devadinesh -, 11th Physics - Answer Key for Quarterly Exam 2019 | Mrs. S. Maheshwari, SVB -, 11th Physics - Answer Key for Quarterly Exam 2019 | Shri Krishna -, 11th Physics - Answer Key for Quarterly Exam 2019 | -, 11th Chemistry - Original Question Paper for Quarterly Exam 2019 | Mr. B. 
Balaji -, 11th Chemistry - Original Question Paper for Quarterly Exam 2019 | Mr. Srinivasan -, 11th Chemistry - Answer Key for Quarterly Exam 2019 | Mr. S. Shanmugam -, 11th Chemistry - Answer Key for Quarterly Exam 2019 | SVB -, 11th Chemistry - Answer Key for Quarterly Exam 2019 | Shri Krishna -, 11th Chemistry - Answer Key for Quarterly Exam 2019 | Sami -, 11th Chemistry - Answer Key for Quarterly Exam 2019 | GHSS, Thirumanjolai -, 11th Chemistry - Answer Key for Quarterly Exam 2019 | SVB -, 11th Chemistry - Answer Key for Quarterly Exam 2019 | Shri Krishna -, 11th Biology - Original Question Paper for Quarterly Exam 2019 | Mr. B. Balaji -, 11th Biology - Original Question Paper for Quarterly Exam 2019 | Mr. Srinivasan -, 11th Bio-Zoology - Answer Key for Quarterly Exam 2019 | Mr. A. Bharathiraja -, 11th Bio-Zoology - Answer Key for Quarterly Exam 2019 | Dr. J.S. Below we will give the direct link to download the class 11th Chemistry Answer key when it is officially released. This video explains how to solve the given evaluate yourself problems in chemistry. Offline Apps based on latest NCERT Solutions for (+1) are available to download along with the answers given at the end of the book. Reply Delete. Samacheer Kalvi 11th Physics Guide Book Back Answers. of moles = 5.6/56 = 0.1 mol. Calculate the molecular mass of the following: (i) H 2 0(ii) C0 2 (iii) CH 4 Answer: (i) Molecular mass of H 2 O = 2(1.008 amu) + 16.00 amu=18.016 amu (ii) Molecular mass of CO 2 = 12.01 amu + 2 x 16.00 amu = 44.01 amu (iii) Molecular mass of CH 4 = … You can download the official question paper solution on the same day and tally your answers. Calculate the molecular mass of the following: (i) H 2 0(ii) C0 2 (iii) CH 4 Answer: (i) Molecular mass of H 2 O = 2(1.008 amu) + 16.00 amu=18.016 amu (ii) Molecular mass of CO 2 = 12.01 amu + 2 x 16.00 amu = 44.01 amu (iii) Molecular mass of CH 4 = … Share this article your friends and other social networking websites. Calculate the de Broglie wavelength of an electron that has been accelerated from rest through a potential difference of 1 k eV. 216648) 12th Chemistry 1 Mark Study Material (Book Interior Questions with Answers) | Mr. D. Printhkumar - Tamil Medium - Preview & Download (MAT.NO. Question 1. The garden will be 6 feet long near the front of the house and 4 feet long next to the walkway. Looking out for your assessment answers online? CBSE Class 11 Marking Scheme: The Central Board of Secondary Education (CBSE) has issued the CBSE Class 11 marking scheme for all subjects.CBSE has reduced the syllabus for Class 9 to 12 for the 2020-21 academic session due to COVID-19. TN 11th Standard State Board School - Guide, Book Back answer and Solution, Important Questions and Answers, Online 1 mark Test - 11th Standard (Plus One) TamilNadu State Board School Textbook, Manual, Answer Key Download, online study, Important Questions and Answers, One mark questions with solution, Book back problems with solution and explanation Download NCERT Books and apps based on latest CBSE Syllabus. Nov 20 Friday. JISHNU March 11, 2020 at 2:59 PM. Tomorrow's answer's today! The 11th grade is an important milestone because it lays the foundation for your final board exams the next year. It is an objective type, pen-paper test comprising 4 sections – Physics, Chemistry, Botany and Zoology. Lakhs of candidates appear for class 11th public exam at the end of the academic year. 
molarity = number of moles of solute / volume of solution (in L) i) Volume of the solution = 500 ml = 0.5 L Reply. Students can also read Tamil Nadu 11th Chemistry Model Question Papers 2019-2020 English & Tamil Medium. Start with these 11 must know Chemistry questions to assess your exam readiness. Latest Update – Below You can Download the TN 11th Public Exam Chemistry Answer Key with Question Paper 2020 PDF Officially. A collection of previous year question papers and model question papers for the DHSE Kerala +1 class examination is available from the links given below. The www.freeresultalert.com is not an official web site or any other web site of the government. A coffee filter works well, though if you don't drink coffee you can substitute a paper towel. TN 11th Public Exam Chemistry Answer Key 2020:- Directorate of Examination, Tamil Nadu conducts class 11th exam annually at various exam centers throughout the state.This year class 11th exam for science, arts, and commerce will be conducted from 4th March – 26th March 2020. The Tamilnadu board examinations are conducted every year in the offline examination mode system. Chapter-wise Important Questions for Class 11 Chemistry: Students can access the chapter wise Important Questions for Class 11 Chemistry by clicking on the link below. Candidates with the help of Tamil-Nadu School Board TN 11th Public Chemistry Answer Key 2020, you can assume their marks So please regularly visit our web page because we will upload TN 11th Chemistry Exam Solution Key 2020 In Pdf format after released by the examination board. I need chemistry 2nd volume evaluate yourself. So that the students and teachers can analyse the best one for their evaluation. Samacheer Kalvi 12th Chemistry Book Back Answers. The lakhs of candidates appeared every year. Chapter-wise Important Questions for Class 11 Chemistry: Students can access the chapter wise Important Questions for Class 11 Chemistry by clicking on the link below. The result will take some time to be released till then you can download the answer key when it is released. Directorate of Higher Secondary Education, Government of Kerala will announce for Plus One Examination March 2020. NCERT based CBSE Syllabus for Class 11 Physics 2020-21 (Revised & Reduced By 30%) is available here for download in PDF format. Reply Delete. 50% of the paper is dedicated to Biology, as the NEET exam primarily aims … Every Year Tamil Nadu state Government conducts board examination for HSC +1 as per the schedule. Download notes, study material, question with answers and practice questions for Physical education class 11 … This syllabus can be downloaded from our website in a free PDF format. ... 11th Chemistry - Answer Key for Quarterly Exam 2019 | Mr. S. Shanmugam - Tamil Medium Download Here; ... 12th Public Exam Official Model Question Papers and Answer Keys 2019-2020 . Color Printable Periodic Table - Pretty much everything you need that can fit on a page and still be readable. The questions cover the four modules of the new Year 11 Chemistry course. Students preparing for the Class 11 final exam can check the new marking scheme here. NCERT Solutions for Class 11 Chemistry are given for the students so that they can get to know the answers to the questions in case they are not able to find it.It is important for all the students who are in Class 11 currently. You can also devise a project comparing the separation you get using different brands of paper towels. Reply. 
Expert Teachers at SamacheerKalvi.Guru has created Tamilnadu State Board Samacheer Kalvi 11th Computer Science Book Solutions Answers Guide Pdf Free Download of Volume 1 and Volume 2 in English Medium and Tamil Medium are part of Samacheer Kalvi 11th Books Solutions.Here we have given TN State Board New Syllabus Samacheer Kalvi 11th Std Computer Science Guide Pdf of Text … Class 11 Maths, Physics, Chemistry, Biology, English, Business Studies and Economics NCERT textbook solutions are given below. NCERT Solutions for Class 11 Physical Education is given below to download in PDF form free updated for academic year 2020-21. All Chapter 3 - Classification of Elements and Periodicity in Properties Exercises Questions with Solutions to help you to revise complete Syllabus and boost your score more in examinations. Michael -, 11th Bio Botany - Answer Key for Quarterly Exam 2019 | SVB -, 11th Bio Botany - Answer Key for Quarterly Exam 2019 | Mr. C. Fransis -, 11th Bio Botany - Answer Key for Quarterly Exam 2019 | SVB -, 11th Bio Zoology - Answer Key for Quarterly Exam 2019 | Shri Krishna -, 11th Bio Zoology - Answer Key for Quarterly Exam 2019 | Shri Krishna -, 11th Botany (PS) - Original Question Paper for Quarterly Exam 2019 | Mr. B. Balaji -, 11th Botany (PS) - Original Question Paper for Quarterly Exam 2019 | Mr. Srinivasan -, 11th Botany (PS) - Answer Key for Quarterly Exam 2019 | SVB -, 11th Botany (PS) - Answer Key for Quarterly Exam 2019 | SVB -, 11th Botany (PS) - Answer Key for Quarterly Exam 2019 | Shri Krishna -, 11th Botany (PS) - Answer Key for Quarterly Exam 2019 | Shri Krishna -, 11th Zoology - Original Question Paper for Quarterly Exam 2019 -, 11th Zoology - Answer Key for Quarterly Exam 2019 | SVB -, 11th Zoology - Answer Key for Quarterly Exam 2019 | Shri Krishna -, 11th Business Maths - Original Question Paper for Quarterly Exam 2019 | Mr. B. Balaji -, 11th Business Maths - Original Question Paper for Quarterly Exam 2019 | Mr. S. Venkatesan -, 11th Business Maths - Answer Key for Quarterly Exam 2019 | Mr. S. Venkatesan -, 11th Business Maths - Answer Key for Quarterly Exam 2019 | Mr. C. Selvam -, 11th Computer Science - Original Question Paper for Quarterly Exam 2019 | Mr. B. Balaji -, 11th Computer Science - Original Question Paper for Quarterly Exam 2019 | Mr. Subanesh -, 11th Computer Science - Answer Key for Quarterly Exam 2019 | Shri Krishna -, 11th Computer Science - Answer Key for Quarterly Exam 2019 | Mr. Marimuthu, SVB -, 11th Computer Science - Answer Key for Quarterly Exam 2019 | Mr. T. Parkunan -, 11th Computer Science - Answer Key for Quarterly Exam 2019 | -, 11th Computer Applications - Original Question Paper for Quarterly Exam 2019 | Mr. B. Balaji -, 11th Computer Applications - Original Question Paper for Quarterly Exam 2019 | Vidhya Kendra -, 11th Computer Applications - Answer Key for Quarterly Exam 2019 | Mr. G. Sudharsan -, 11th Computer Applications - Answer Key for Quarterly Exam 2019 | -, 11th Computer Technology - Original Question Paper for Quarterly Exam 2019 | Mr. P. Manogar -, 11th Computer Technology - Original Question Paper for Quarterly Exam 2019 | Mr. Srinivasan -, 11th Computer Technology - Answer Key for Quarterly Exam 2019 -, 11th Computer Technology - Answer Key for Quarterly Exam 2019 | Mrs. A. Baseera Nasrin -, 11th Commerce - Original Question Paper for Quarterly Exam 2019 | Mrs. A. Vennila -, 11th Commerce - Original Question Paper for Quarterly Exam 2019 -, 11th Commerce - Answer Key for Quarterly Exam 2019 | Mrs. V. 
Megala, SVB -, 11th Commerce - Answer Key for Quarterly Exam 2019 | Mrs. A. Vennila -, 11th Commerce - Answer Key for Quarterly Exam 2019 | Shri Krishna -, 11th Commerce - Answer Key for Quarterly Exam 2019 | Mr. M. Khader -, 11th Commerce - Answer Key for Quarterly Exam 2019 | Mr. B. Balaji -, 11th Commerce - Answer Key for Quarterly Exam 2019 | Mr. M. Muthuselvam -, 11th Commerce - Answer Key for Quarterly Exam 2019 | Mr. A. Basker -, 11th Accountancy - Original Question Paper for Quarterly Exam 2019 | Mr. B. Balaji -, 11th Accountancy - Original Question Paper for Quarterly Exam 2019 | Mrs. A. Vennila -, 11th Accountancy - Answer Key for Quarterly Exam 2019 | Mrs. A. Vennila -, 11th Accountancy - Answer Key for Quarterly Exam 2019 | SVB -, 11th Accountancy - Answer Key for Quarterly Exam 2019 | Mrs. S.V. Deadline to submit Pass/ No Pass grading option is Friday, November 20, 2020. NCERT Solutions for Class 11 Physics in PDF format are available to download updated for new academic session 2020-2021, solutions of Exercises, Additional Exercises, Supplementary material and NCERT books 2020-2021. This year class 11th exam for science, arts, and commerce will be conducted from 4th March – 26th March 2020. CBSE Class 11 Chemistry practical syllabus is available here. Reply Delete. Tamilnadu State Board Samacheer Kalvi 12th Chemistry Book Volume 1 Solutions. Tomorrow's answer's today! Authority will take some to evaluate and release the results. Fully worked solutions and suggested answers to the activity book can be found in the Teacher Support on Pearson Places. Critical tasks C. Performance ratings D. Core capabilities Solution: mass of KOH = 5.6 g. no. Thank you very much for seeing good information. [2013 Edition] [2012 Edition]Black/white Printable Periodic Table - Black/white table with atomic numbers, element symbols, element names, atomic weights, periods. Stuck on a puzzling chemistry problem? NCERT is specially made to clear most of the topics of the syllabus of chemistry. The Karnataka 2nd PUC answer key 2020 Physics, Maths, History, Chemistry, Biology, English, Computer, Sanskrit and other subjects will be uploaded to this page on time. Privacy Policy We keep the library up-to-date, so … Those students who are going to appear for TN Plus state board examination next year can download these answer keys so that you can have an idea about questions asked in the exam and you have to answer them. 
Check the … | 2021-04-15 02:37:22 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.3252769708633423, "perplexity": 8907.80073291479}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-17/segments/1618038082988.39/warc/CC-MAIN-20210415005811-20210415035811-00480.warc.gz"} |
https://teshenglin.github.io/ | ## Te-Sheng Lin (林得勝)
### National Yang Ming Chiao Tung University, TW
I am an associate professor at the Department of Applied Mathematics at National Yang Ming Chiao Tung University, Taiwan. My research focuses on developing analytical and computational tools for problems arising in fluid dynamics and further communicating with scientists from other disciplines to solve practical engineering problems.
I received my Ph.D. degree in Applied Mathematics from the Department of Mathematical Sciences at New Jersey Institute of Technology, where I worked with Lou Kondic and Linda J. Cummings.
### Interests
• Modeling - Thin liquid films
• Scientific computation
• Machine learning
### Education
• PhD in Applied Mathematics, 2012
New Jersey Institute of Technology, USA
• M.S. in Applied Mathematics, 2004
National Chung Cheng University, Taiwan
• B.S. in Mathematics, 2002
National Chung Cheng University, Taiwan
# Appointments
#### National Yang Ming Chiao Tung University
Feb 2021 – Present Hsinchu, TW
#### National Chiao Tung University
Aug 2020 – Jan 2021 Hsinchu, TW
#### National Chiao Tung University
Aug 2014 – Jul 2020 Hsinchu, TW
#### Loughborough University
Dec 2012 – Jul 2014 Loughborough, UK
#### Loughborough University
Jun 2012 – Dec 2012 Loughborough, UK
# Recent Publications
### Spontaneous locomotion of phoretic particles in three dimensions
The motion of an autophoretic spherical particle in a simple fluid is analyzed. This motion is powered by a chemical species which is …
### A shallow physics-informed neural network for solving partial differential equations on surfaces
In this paper, we introduce a mesh-free physics-informed neural network for solving partial differential equations on surfaces. Based …
### Thin liquid films in a funnel
We explore flow of a completely wetting fluid in a funnel, with particular focus on contact line instabilities at the fluid front. …
### A Shallow Ritz Method for elliptic problems with Singular Sources
In this paper, a shallow Ritz-type neural network for solving elliptic problems with delta function singular sources on an interface is …
### A Discontinuity Capturing Shallow Neural Network for Elliptic Interface Problems
In this paper, a new Discontinuity Capturing Shallow Neural Network (DCSNN) for approximating $d$-dimensional piecewise continuous …
# CV
Find my CV in PDF here.
# Recent Posts
### Multidimensional scaling
Multidimensional scaling, abbreviated MDS, is a tool for data analysis and for reducing the dimension of data. Here we discuss the principle and procedure of MDS from a mathematical point of view; more precisely, what we describe here is classical MDS. Suppose we have $n$ data points, each of dimension $p$,
### Sec.10.3 - Families of Polar Curves
Laboratory Project in Sec.10.3, Calculus by Stewart English version: Families of Polar Curves 在這個研究中,你將發現極座標曲線家族有趣又漂亮的形狀。同時,當常數改變時,你也會觀 … | 2022-05-17 04:05:51 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.1946597695350647, "perplexity": 8576.596379132816}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-21/segments/1652662515501.4/warc/CC-MAIN-20220517031843-20220517061843-00355.warc.gz"} |
https://www.techwhiff.com/issue/you-want-to-create-a-triangle-with-sides-of-a-b-and--384820 | # You want to create a triangle with sides of a, b, and c. Which of the following inequalities should be true? a+b c a-b>c a-b
###### Question:
You want to create a triangle with sides of a, b, and c. Which of the following inequalities should be true?
a+b c
a-b>c
a-b
### Gas costs $3.05 a gallon, and your car travels at 27 miles for each gallon of gas. How far can you travel in your car with$95 in your pocket? A: 7800 B: 11 miles C: 870 miles D: 840
### The total investment (GPDI) is for both replacement of destroyed capital (CFC) and new additions which are called ____________..
### HELP MEEEEEE!!!!!!!!!!!!!!!!!!!!
### Select all the objective case pronouns. I me my you your he her we us our they them their
### Which of the following products is both a major import and export of the U.S
### Which of the following represents the impacts of mining on human health? I. Toxic chemicals can leak into drinking water. II. Air pollution can lead to irritation of eyes, nose, and throat. III. Heavy metal exposure can cause birth defects. (A) III only (B) I and II (C) I and III (D) I, II, and III
### Emphasize most nearly means
### Tina placed a 12 meter rope along one side of the bicycle path. She hung a ribbon on each end of the rope and every 3 meters in-between. How many ribbons did she hang? A)4 B)5 C)6 D)7
### A historian claims massive drought and famine in China caused the Boxer Rebellion. Which of the following historians is making a counterclaim . A historian who claims the Boxer Rebellion followed directly from the actions of external colonial powers B. A historian who shares a diary entry from a Chinese woman who lost her crops to drought C. A historian who offers evidence that Christians were targeted during the Boxer Rebellion D. A historian who shows that harvest yields declined in the years
### Evaluate 5x-2y;x=2,y=-1
### 81 people passed and 27 failed their functional skills level 2 exams write this as in its simplest form
### A prism has an area of 4374 ft3. What is its volume in yd3?
### Was the smallest ocean?
### Which step can be used when solving x^2-6x-25=0?
### What is the product written in scientific notation? (5.9*10^-3) * (8.7*10^10) a. 14.61 * 10^-30 b. 5.1417 * 10^8 c. 14.61 * 10^7 d. 5.1417 * 10^7
### Mountain region is called the area of fruits, explain
mountain region is called the area of fruits expalin... | 2023-02-06 00:04:38 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.3892003893852234, "perplexity": 4846.555726891784}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-06/segments/1674764500294.64/warc/CC-MAIN-20230205224620-20230206014620-00839.warc.gz"} |
http://httpcode.com/2014/04/10/april-fool.html | Last week was April Fools’ Day, which means that for the days before and after you tend to be especially skeptical about the things you read on the web. So when I read that Redis had added one of my favorite algorithms, the HyperLogLog, I was in two minds as to whether this was real or not. The detailed analysis that Salvatore put together was very convincing, but the date of the post was really off-putting.
If you haven’t heard of HyperLogLog before, it’s a method for estimating the cardinality of a set; in non-set-based terms, it lets you efficiently count the number of unique items. One of the best implementations I’ve seen is the HyperLogLog PostgreSQL extension. This lets you define a new native data type. To use it you must use the various hash methods when inserting data; this is a required part of the HyperLogLog algorithm. The examples used in the readme.md page of the GitHub repo show the real power of the extension.
Consider a data warehouse that stores analytic data:
-- Create the destination table
CREATE TABLE daily_uniques (
date date UNIQUE,
users hll
);
-- Fill it with the aggregated unique statistics
-- (assuming the fact table has date and user_id columns, and hashing ids as the extension requires)
INSERT INTO daily_uniques(date, users)
SELECT date, hll_add_agg(hll_hash_integer(user_id))
FROM facts
GROUP BY 1;
Imagine that we have inserted a number of items into this table from some fact table. We can then query the table:
SELECT date, hll_cardinality(users) FROM daily_uniques;
If we instead want the number of uniques across a range of days, we can union the per-day values, and the queries will look like:
SELECT hll_cardinality(hll_union_agg(users)) FROM daily_uniques WHERE date >= '2012-01-02'::date AND date <= '2012-01-08'::date;
or
SELECT date, #hll_union_agg(users) OVER seven_days
FROM daily_uniques
WINDOW seven_days AS (ORDER BY date ASC ROWS 6 PRECEDING);
Remember as well that the amount of storage space used here is absolutely minimal, often in the order of bytes.
So as you can see, HyperLogLog is really cool, and it’s a great credit to the team behind Redis for bringing this into the product. Even if they do publish the announcement on April Fools’ Day.
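To get a feel for the Redis side of this, here is a minimal sketch using the redis-py client. The key names and user ids below are made up for illustration; the underlying commands are PFADD, PFCOUNT and PFMERGE, and the counts they return are approximate (Redis quotes a standard error of about 0.81%).

# Minimal sketch of the Redis HyperLogLog commands via redis-py.
# Key names and ids are illustrative only.
import redis

r = redis.Redis(host="localhost", port=6379)

# PFADD feeds items into the sketch; re-adding a duplicate does not change the count.
r.pfadd("uniques:2014-04-10", "user:1", "user:2", "user:3", "user:2")
print(r.pfcount("uniques:2014-04-10"))  # approximately 3

# PFMERGE unions sketches, much like hll_union_agg does above.
r.pfadd("uniques:2014-04-11", "user:3", "user:4")
r.pfmerge("uniques:week", "uniques:2014-04-10", "uniques:2014-04-11")
print(r.pfcount("uniques:week"))  # approximately 4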
Redis is a fantastic tool, it is so much more than just a memcache replacement, I like to think of it as Comp Sci in a box. It has all of the goodies: Lists with set based operations, sliding caches, queues, single threaded event based goodness. It’s a very exciting tool and I’m very glad to see it evolve. | 2017-04-25 20:12:27 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.22738927602767944, "perplexity": 1768.5735327689338}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-17/segments/1492917120878.96/warc/CC-MAIN-20170423031200-00320-ip-10-145-167-34.ec2.internal.warc.gz"} |
http://www.ipm.ac.ir/ViewPaperInfo.jsp?PTID=6929&school=Physics | ## “School of Physics”
Back to Papers Home
Back to Papers of School of Physics
Paper IPM / P / 6929
School of Physics
Title: Casimir Energy for Spherical Shell in Schwarzschild Black Hole Background
Author(s):
1 M.R. Setare 2 M.B. Altaie
Status: Published
Journal: Gen. Relat. Gravit.
No.: 2
Vol.: 36
Year: 2004
Pages: 331-341
Supported by: IPM
Abstract:
In this paper, we consider the Casimir energy of massless scalar field which satisfy Dirichlet boundary condition on a spherical shell. Outside the shell, the spacetime is assumed to be described by the Schwarzschild metric, while inside the shell it is taken to be the flat Minkowski space. Using zeta function regularization and heat kernel coefficients we isolate the divergent contributions of the Casimir energy inside and outside the shell, then using the renormalization procedure of the bag model the divergent parts are cancelled, finally obtaining a renormalized expression for the total Casimir energy. | 2022-05-20 10:37:17 | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.871445894241333, "perplexity": 1785.4461128288629}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-21/segments/1652662531779.10/warc/CC-MAIN-20220520093441-20220520123441-00055.warc.gz"} |
http://unapologetic.wordpress.com/category/fundamentals/numbers/page/2/ | # The Unapologetic Mathematician
## The Topological Field of Real Numbers
We’ve defined the topological space we call the real number line $\mathbb{R}$ as the completion of the rational numbers $\mathbb{Q}$ as a uniform space. But we want to be able to do things like arithmetic on it. That is, we want to put the structure of a field on this set. And because we’ve also got the structure of a topological space, we want the field operations to be continuous maps. Then we’ll have a topological field, or a “field object” (analogous to a group object) in the category $\mathbf{Top}$ of topological spaces.
Not only do we want the field operations to be continuous, we want them to agree with those on the rational numbers. And since $\mathbb{Q}$ is dense in $\mathbb{R}$ (and similarly $\mathbb{Q}\times\mathbb{Q}$ is dense in $\mathbb{R}\times\mathbb{R}$), we will get unique continuous maps to extend our field operations. In fact the uniqueness is the easy part, due to the following general property of dense subsets.
Consider a topological space $X$ with a dense subset $D\subseteq X$. Then every point $x\in X$ has a sequence $x_n\in D$ with $\lim x_n=x$. Now if $f:X\rightarrow Y$ and $g:X\rightarrow Y$ are two continuous functions which agree for every point in $D$, then they agree for all points in $X$. Indeed, picking a sequence in $D$ converging to $x$ we have
$f(x)=f(\lim x_n)=\lim f(x_n)=\lim g(x_n)=g(\lim x_n)=g(x)$.
So if we can show the existence of a continuous extension of, say, addition of rational numbers to all real numbers, then the extension is unique. In fact, the continuity will be enough to tell us what the extension should look like. Let’s take real numbers $x$ and $y$, and sequences of rational numbers $x_n$ and $y_n$ converging to $x$ and $y$, respectively. We should have
$s(x,y)=s(\lim x_n,\lim y_n)=s(\lim(x_n,y_n))=\lim x_n+y_n$
but how do we know that the limit on the right exists? Well if we can show that the sequence $x_n+y_n$ is a Cauchy sequence of rational numbers, then it must converge because $\mathbb{R}$ is complete.
Given a rational number $r$ we must show that there exists a natural number $N$ so that $\left|(x_m+y_m)-(x_n+y_n)\right|<r$ for all $m,n\geq N$. But we know that there’s a number $N_x$ so that $\left|x_m-x_n\right|<\frac{r}{2}$ for $m,n\geq N_x$, and a number $N_y$ so that $\left|y_m-y_n\right|<\frac{r}{2}$ for $m,n\geq N_y$. Then we can choose $N$ to be the larger of $N_x$ and $N_y$ and find
$\left|(x_m-x_n)+(y_m-y_n)\right|\leq\left|x_m-x_n\right|+\left|y_m-y_n\right|<\frac{r}{2}+\frac{r}{2}=r$
So the sequence of sums is Cauchy, and thus converges.
What if we chose different sequences $x'_n$ and $y'_n$ converging to $x$ and $y$? Then we get another Cauchy sequence $x'_n+y'_n$ of rational numbers. To show that addition of real numbers is well-defined, we need to show that it’s equivalent to the sequence $x_n+y_n$. So given a rational number $r$ does there exist an $N$ so that $\left|(x_n+y_n)-(x'_n+y'_n)\right|<r$ for all $n\geq N$? This is almost exactly the same as the above argument that each sequence is Cauchy! As such, I’ll leave it to you.
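One way to fill in that step: since $x_n$ and $x'_n$ both converge to $x$, and $y_n$ and $y'_n$ both converge to $y$, for any rational $r$ we can pick $N$ large enough that $\left|x_n-x'_n\right|<\frac{r}{2}$ and $\left|y_n-y'_n\right|<\frac{r}{2}$ whenever $n\geq N$, and then

$\left|(x_n+y_n)-(x'_n+y'_n)\right|\leq\left|x_n-x'_n\right|+\left|y_n-y'_n\right|<\frac{r}{2}+\frac{r}{2}=r$

so the two sum sequences are equivalent.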
So we’ve got a continuous function taking two real numbers and giving back another one, and which agrees with addition of rational numbers. Does it define an Abelian group? The uniqueness property for functions defined on dense subspaces will come to our rescue! We can write down two functions from $\mathbb{R}\times\mathbb{R}\times\mathbb{R}$ to $\mathbb{R}$ defined by $s(s(x,y),z)$ and $s(x,s(y,z))$. Since $s$ agrees with addition on rational numbers (which is associative), and since triples of rational numbers are dense in the set of triples of real numbers, these two functions agree on a dense subset of their domains, and so must be equal. If we take the ${0}$ from $\mathbb{Q}$ as the additive identity we can also verify that it acts as an identity for real number addition. We can also find the negative of a real number $x$ by negating each term of a Cauchy sequence converging to $x$, and verify that this behaves as an additive inverse, and we can show this addition to be commutative, all using the same techniques as above. From here we’ll just write $x+y$ for the sum of real numbers $x$ and $y$.
What about the multiplication? Again, we’ll want to choose rational sequences $x_n$ and $y_n$ converging to $x$ and $y$, and define our function by
$m(x,y)=m(\lim x_n,\lim y_n)=m(\lim(x_n,y_n))=\lim x_ny_n$
so it will be continuous and agree with rational number multiplication. Now we must show that for every rational number $r$ there is an $N$ so that $\left|x_my_m-x_ny_n\right|<r$ for all $m,n\geq N$. This will be a bit clearer if we start by noting that for each rational $r_x$ there is an $N_x$ so that $\left|x_m-x_n\right|<r_x$ for all $m,n\geq N_x$. In particular, for sufficiently large $n$ we have $\left|x_n\right|<\left|x_N\right|+r_x$, so the sequence $x_n$ is bounded above by some $b_x$. Similarly, given $r_y$ we can pick $N_y$ so that $\left|y_m-y_n\right|<r_y$ for $m,n\geq N_y$ and get an upper bound $b_y\geq y_n$ for all $n$. Then choosing $N$ to be the larger of $N_x$ and $N_y$ we will have
$\left|x_my_m-x_ny_n\right|=\left|(x_m-x_n)y_m+x_n(y_m-y_n)\right|\leq r_xb_y+b_xr_y$
for $m,n\geq N$. Now given a rational $r$ we can (with a little work) find $r_x$ and $r_y$ so that the expression on the right will be less than $r$, and so the sequence is Cauchy, as desired.
Then, as for addition, it turns out that a similar proof will show that this definition doesn’t depend on the choice of sequences converging to $x$ and $y$, so we get a multiplication. Again, we can use the density of the rational numbers to show that it’s associative and commutative, that $1\in\mathbb{Q}$ serves as its unit, and that multiplication distributes over addition. We’ll just write $xy$ for the product of real numbers $x$ and $y$ from here on.
To show that $\mathbb{R}$ is a field we need a multiplicative inverse for each nonzero real number. That is, for each Cauchy sequence of rational numbers $x_n$ that doesn’t converge to ${0}$, we would like to consider the sequence $\frac{1}{x_n}$, but some of the $x_n$ might equal zero and thus throw us off. However, there can only be a finite number of zeroes in the sequence or else ${0}$ would be an accumulation point of the sequence and it would either converge to ${0}$ or fail to be Cauchy. So we can just change each of those to some nonzero rational number without breaking the Cauchy property or changing the real number it converges to. Then another argument similar to that for multiplication shows that this defines a function from the nonzero reals to themselves which acts as a multiplicative inverse.
December 3, 2007
## Ordinal numbers
We use cardinal numbers to count how many elements are in a set. Another thing we think of numbers for is listing elements. That is, we put things in order: first, second, third, and so on.
We identified a cardinal number as an isomorphism class of sets. Ordinal numbers work much the same way, but we use sets equipped with well-orders. Now we don’t allow all the functions between two sets. We just consider the order-preserving functions. If $(X,\leq)$ and $(Y,\preceq)$ are two well-ordered sets, a function $f:X\rightarrow Y$ preserves the order if whenever $x_1\leq x_2$ then $f(x_1)\preceq f(x_2)$. We consider two well-ordered sets to be equivalent if there is an order-preserving bijection between them, and define an ordinal number to be an equivalence class of well-ordered sets under this relation.
If two well-ordered sets are equivalent, they must have the same cardinality. Indeed, we can just forget the order structure and we have a bijection between the two sets. This means that two sets representing the same ordinal number also represent the same cardinal number.
Now let’s just look at finite sets for a moment. If two finite well-ordered sets have the same number of elements, then it turns out they are order-equivalent too. It can be a little tricky to do this straight through, so let’s sort of come at it from the side. We’ll use finite ordinal numbers to give a model of the natural numbers. Since the finite cardinals also give such a model there must be an isomorphism (as models of $\mathbb{N}$) between finite ordinals and finite cardinals. We’ll see that the isomorphism required by the universal property sends each ordinal to its cardinality. If two distinct ordinals had the same cardinality, then this couldn’t be an isomorphism, so distinct finite ordinals have distinct cardinalities. We’ll also blur the distinction between a well-ordered set and the ordinal number it represents.
So here’s the construction. We start with the empty set, which has exactly one order. It can seem a little weird, but if you just follow the definitions it makes sense: any relation from $\{\}$ to itself is a subset of $\{\}\times\{\}=\{\}$, and there’s only one of them. Reading the definitions carefully, it uses a lot of “for every”, but no “there exists”. Each time we say “for every” it’s trivially true, since there’s nothing that can make it false. Since we never require the existence of an element having a certain property, that’s not a problem. Anyhow, we call the empty set with this (trivial) well-ordering the ordinal ${}0$. Notice that it has (cardinal number) zero elements.
Now given an ordinal number $O$ we define $S(O)=O\cup\{O\}$. That is, each new number has the set of all the ordinals that came before it as elements. We need to put a well-ordering on this set, which is just the order in which the ordinals showed up. In fact, we can say this a bit more concisely: $O_1\leq O_2$ if $O_1\in O_2$. More explicitly, each ordinal number is an element of every one that comes after it. Also notice that each time we make a new ordinal out of the ones that came before it, we add one new element. The successor function here adds one to the cardinality, meaning it corresponds to the successor in the cardinal number model of $\mathbb{N}$. This gives a function from the finite ordinals onto the finite cardinals.
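Writing out the first few steps of this construction makes the pattern clear:

$0=\{\},\qquad 1=S(0)=\{0\},\qquad 2=S(1)=\{0,1\},\qquad 3=S(2)=\{0,1,2\}$

and so on, with each ordinal ordered by membership.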
What’s left to check is the universal property. Here we can leverage the cardinal number model and this surjection of finite ordinals onto finite cardinals. I’ll leave the details to you, but if you draw out the natural numbers diagram it should be pretty clear how to show that the universal property is satisfied.
The upshot of all of this is that finite ordinals, like finite cardinals, give another model of the natural numbers, which is why natural numbers seem to show up when we list things.
April 26, 2007 Posted by | Fundamentals, Numbers, Orders | 2 Comments
## Cardinal numbers
I’ve said a bunch about natural numbers, but I seem to have ignored what we’re most used to doing with them: counting things! The reason is that we actually don’t use natural numbers to count, we use something called cardinal numbers.
So let’s go back and think about sets and functions. In fact, for the moment let’s just think about finite sets. It seems pretty straightforward to say there are three elements in the set $\{a,b,c\}$, and that there are also three elements in the set $\{x,y,z\}$. Step back for a moment, though, and consider why there are the same number of elements in these two sets. Try to do it without counting the numbers first. I’ll wait.
The essential thing that says there’s something the same about these two sets is that there is a bijection between them. For example, I could define a function $f$ by $f(a)=x$, $f(b)=z$, and $f(c)=y$. Every element of $\{x,y,z\}$ is hit by exactly one element of $\{a,b,c\}$, so this is a bijection. Of course, it’s not the only one, but we’ll leave that alone for now.
So now let’s move back to all (possibly infinte) sets and define a relation. Say that sets $X$ and $Y$ are “in bijection” — and write $X\leftrightarrow Y$ — if there is some bijection $f:X\rightarrow Y$. This is an equivalence relation! Any set is in bijection with itself, using the identity function. If $X$ is in bijection with $Y$ then we can use the inverse function to see that $Y\leftrightarrow X$. Finally, if $f:X\rightarrow Y$ and $g:Y\rightarrow Z$ are bijections, then $g\circ f:X\rightarrow Z$ is a bijection.
Any time we have an equivalence relation we can split things up into equivalence classes. Now I define a cardinal number to be a bijection class of sets — every set in the class is in bijection with every other, and with none outside the class.
So what does this have to do with natural numbers? Well, let’s focus in on finite sets again. There’s only one empty set $\{\}$, so let’s call its cardinal number ${}0$. Now given any finite set $X$ with cardinal number — bijection class — $c$, there’s something not in $X$. Pick any such something, call it $x$, and look at the set $X\cup\{x\}$. If I took any other set $Y$ in bijection with $X$ and anything $y$ not in $Y$ then there is a bijection between $X\cup\{x\}$ and $Y\cup\{y\}$. Just apply the bijection from $X$ to $Y$ on those elements from $X$, and send $x$ to $y$. This shows that the bijection class — the cardinal number — doesn’t depend on what choices we made along the way. Since it’s well-defined we can call it the successor $S(c)$.
We look at the set of all bijection classes of finite sets. We’ve got an identified element ${}0$, and a successor function. In fact, this satisfies the universal property for natural numbers. The set of cardinal numbers of finite sets is (isomorphic to) the set of natural numbers!
And that’s how we count things.
April 13, 2007 Posted by | Fundamentals, Numbers | 3 Comments
## The uniqueness of the integers
It’s actually not too difficult to see that the integers are the only ordered integral domain with unit whose non-negative elements are well-ordered. So let’s go ahead and do it.
In fact, let’s try to build from the ground up. We can start with the additive identity ${}0$ and the unit $1$. Since we’ve got an ordered ring we have to have $0\leq 1$, otherwise the multiplication can’t preserve the order.
Now we can also tell that $1$ is the smallest element larger than ${}0$. Let’s say there were some element in between: $0<r<1$. Then $0<r^2<r$, and $0<r^3<r^2$, and so on. The collection of all powers of $r$ has no lowest element, so the positive elements can’t be well-ordered in this case.
We can add up an arbitrary number of copies of $1$ to get $n$, and we know there’s nothing between $n$ and $n+1$, or else there would have to be something between ${}0$ and $1$. We also get all the negative numbers since we have to have them. Multiplication also comes for free since it has to be defined by the distributive property, and every element around is the sum of a bunch of copies of $1$.
Finally, the fact that we’re looking for an integral domain means we can’t introduce any relations saying two different elements like these are really the same in our ring without making a zero-divisor or collapsing the whole structure. I’ll let you play with that one.
## The characterization of the integers
Okay, so we’ve seen that the integers form an ordered ring with unit, and that the non-negative elements are well-ordered. It turns out that the integers are an integral domain (thus the name).
Let’s assume we have two integers (still using the definition by pairs of natural numbers) whose product is zero: $(a,b)(c,d)=(ac+bd,ad+bc)=(0,0)$. Since each of $a$, $b$, $c$, and $d$ is a natural number, the order structure of $\mathbb{N}$ says that for $ac+bd=0$ we must have either $a$ or $c$ be zero and either $b$ or $d$ as well. Similarly, either $a$ or $d$ and either $b$ or $c$ must be zero. If $a$ is not zero then this means both $c$ and $d$ are zero, making $(c,d)=0$. If $b$ is not zero then again both $c$ and $d$ are zero. If both $a$ and $b$ are zero, then $(a,b)=0$. That is, if the product of two integers is zero, one or the other must be zero.
So the integers are an ordered integral domain with unit whose non-negative elements are well-ordered. It turns out that $\mathbb{Z}$ is the only such ring. Any two rings satisfying all these conditions are isomorphic, justifying our use of “the” integers. In fact, now we can turn around and define the integers to be any of the isomorphic rings satisfying these properties. What we’ve really been showing in all these posts is that if we have any model of the axioms of the natural numbers, we can use it to build a model of the axioms of the integers. Once we know (or assume) that some model of the natural numbers exists we know that a model of the integers exists.
Of course, just like we don’t care which model of the natural numbers we use, we don’t really care which model of the integers we use. All we care about is the axioms: those of an ordered integral domain with unit whose non-negative elements are well-ordered. Everything else we say about the integers will follow from those axioms and not from the incidentals of the pairs-of-natural-numbers construction, just like everything we say about the natural numbers follows from the Peano axioms and not from incidental properties of the Von Neumann or Zermelo or Church numeral models.
April 3, 2007 Posted by | Fundamentals, Numbers | 2 Comments
## The ring of integers
As I mentioned before, the primal example of a ring is the integers $\mathbb{Z}$. So far we’ve got an ordered abelian group structure on the set of (equivalence classes of) pairs of natural numbers. Now we need to add a multiplication that distributes over the addition.
First we’ll figure out how to multiply natural numbers. This is pretty much as we expect. Remember that a natural number is either ${}0$ or $S(b)$ for some number $b$. We define
$a\cdot0=0$
$a\cdot S(b)=(a\cdot b)+a$
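To see how the recursion unwinds on a small example, write $1=S(0)$ and $2=S(1)$; then

$2\cdot2=2\cdot S(1)=(2\cdot1)+2=(2\cdot S(0))+2=((2\cdot0)+2)+2=(0+2)+2=4$

as expected.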
Firstly, this is commutative. This takes a few inductions. First show by induction that ${}0$ commutes with everything, then show by another induction that if $a$ commutes with everything then so does $S(a)$. Then by induction, every number commutes with every other. I’ll leave the details to you.
Similarly, we can use a number of inductions to show that this multiplication is associative — $(a\cdot b)\cdot c=a\cdot(b\cdot c)$ — and distributes over addition of natural numbers — $a\cdot(b+c)=a\cdot b+a\cdot c$. This is extremely tedious and would vastly increase the length of this post without really adding anything to the exposition, so I’ll again leave you the details. I’m reminded of something Jeff Adams said (honest, I’m not trying to throw these references in gratuitously) in his class on the classical groups. He told us to verify that the commutator in an associative algebra satisfies the Jacobi identity because, “It’s long and tedious and doesn’t add much, but I had to do it when I was a grad student, so now you’re grad students and it’s your turn.”
So now these operations — addition and multiplication — of natural numbers make $\mathbb{N}$ into what some call a “semiring”. I prefer (following John Baez) to call it a “rig”, though: a “ring without negatives”. We use this to build up the ring structure on the integers.
Recall that the integers are (for us) pairs of natural numbers considered as “differences”. We thus define the product
$(a,b)\cdot(c,d)=(a\cdot c+b\cdot d,a\cdot d+b\cdot c)$
Our life now is vastly easier than it was above: since we know addition and multiplication of natural numbers is commutative, the above expression is manifestly commutative. No work needs to be done! Associativity is also easy: just set up both triple products and expand out, checking that each term is the same by the rig structure of the natural numbers. Similarly, we can check distributivity, that $(1,0)$ acts as an identity, and that the product of two integers is independent of the representing pair of natural numbers.
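As a quick sanity check with small numbers, take $(2,0)$ representing $2$ and $(0,3)$ representing $-3$; the definition gives

$(2,0)\cdot(0,3)=(2\cdot0+0\cdot3,\,2\cdot3+0\cdot0)=(0,6)$

which represents $-6$, just as it should.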
Lastly, multiplication by a positive integer preserves order. If $a<b$ and $0<c$ then $ac<bc$. Together all these properties make the integers as we’ve defined them into a commutative ordered ring with unit. The proofs of all these things have been incredibly dull (I actually did them all today just to be sure how they worked), but it’s going to get a lot easier soon.
March 29, 2007
## Integers
I’m back from Ohio at the College Perk again. The place looks a lot different in daylight. Anyhow, since the last few days have been a little short on the exposition, I thought I’d cover integers.
Okay, we’ve covered that the natural numbers are a commutative ordered monoid. We can add numbers, but we’re used to subtracting numbers too. The problem is that we can’t subtract with just the natural numbers — they aren’t a group. What could we do with $2-3$?
Well, let’s just throw it in. In fact, let’s just throw in a new element for every possible subtraction of natural numbers. And since we can get back any natural number by subtracting zero from it, let’s just throw out all the original numbers and just work with these differences. We’re looking at the set of all pairs $(a,b)$ of natural numbers.
Oops, now we’ve overdone it. Clearly some of these differences should be the same. In particular, $(S(a),S(b))$ should be the same as $(a,b)$. If we repeat this relation we can see that $(a+c,b+c)$ should be the same as $(a,b)$ where we’re using the definition of addition of natural numbers from last time. We can clean this up and write all of these in one fell swoop by defining the equivalence relation: $(a,b)\sim(a',b')\Leftrightarrow a+b'=b+a'$. After checking that this is indeed an equivalence relation, we can pass to the set of equivalence classes and call these the integers $\mathbb{Z}$.
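For instance, under this relation

$(5,3)\sim(2,0)\text{ because }5+0=3+2,\qquad(1,4)\sim(0,3)\text{ because }1+3=4+0$

with both pairs in the first case standing for the difference $2$, and both in the second for $-3$.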
Now we have to add structure to this set. We define an order on the integers by $(a,b)\leq(c,d)\Leftrightarrow a+d\leq b+c$. The caveat here is that we have to check that if we replace a pair with an equivalent pair we get the same answer. Let’s say $(a,b)\sim(a',b')$, $(c,d)\sim(c',d')$, and $(a,b)\leq(c,d)$. Then
$a'+b+c+d'=a+b'+c'+d\leq b+b'+c'+c$
so $a'+d'\leq b'+c'$. The first equality uses the equivalences we assumed and the second uses the inequality. Throughout we’re using the associativity and commutativity. That the first inequality implies the second follows because addition of natural numbers preserves order.
We get an addition as well. We define $(a,b)+(c,d)=(a+c,b+d)$. It’s important to note here that the addition on the left is how we’re defining the sum of two pairs, and those on the right are additions of natural numbers we already know how to do. Now if $(a,b)\sim(a',b')$ and $(c,d)\sim(c',d')$ we see
$(a+c)+(b'+d')=(a+b')+(c+d')=(b+a')+(d+c')=(a'+c')+(b+d)$
so $(a+c,b+d)\sim(a'+c',b'+d')$. Addition of integers doesn’t depend on which representative pairs we use. It’s easy now to check that this addition is associative and commutative, that $(0,0)$ is an additive identity, that $(b,a)+(a,b)\sim(0,0)$ (giving additive inverses), and that addition preserves the order structure. All this together makes $\mathbb{Z}$ into an ordered abelian group.
Now we can relate the integers back to the natural numbers. Since the integers are a group, they’re also a monoid. We can give a monoid homomorphism embedding $\mathbb{N}\rightarrow\mathbb{Z}$. Send the natural number $a$ to the integer represented by $(a,0)$. We call the nonzero integers of this form “positive”, and their inverses of the form $(0,a)$ “negative”. We can verify that $(a,0)\geq(0,0)$ and $(0,a)\leq(0,0)$. Check that every integer has a unique representative pair with ${}0$ on one side or the other, so each is either positive, negative, or zero. From now on we’ll just write $a$ for the integer represented by $(a,0)$ and $-a$ for the one represented by $(0,a)$, as we’re used to.
## More structure of the Natural Numbers
Now we know what the natural numbers are, but there seems to be a lot less to them than we’re used to. We don’t just take successors of natural numbers — we add them and we put them in order. Today I’ll show that if you have a model of the natural numbers it immediately has the structure of a commutative ordered monoid.
The major tool for working with the natural numbers is “induction”. This uses the property that every natural number is either ${}0$ or the successor of some other natural number, as can be verified from the universal property. Think of it like a ladder: proving a statement to be true for ${}0$ lets you get on the bottom of the ladder. Proving that the truth of a statement is preserved when we take a successor lets you climb up a rung. If you can get on the ladder and you can always climb up a rung, you can climb as far as you want.
First let’s define the order. We say the natural number $a$ is less than or equal to the natural number $b$ (and write $a\leq b$) if either $a$ and $b$ are the same number, or if $b$ is the successor of some number $c$ and $a\leq c$. This seems circular, but it’s really not. As we step down from $b$ to $c$ (maybe many times), eventually either $c$ will be equal to $a$ and we stop, or $c$ becomes ${}0$ and we can’t step down any more. A more colloquial way of putting this relation is that we can build a chain of successors from $a$ to $b$.
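Unwinding the definition on a small case, with $3=S(2)$ and $2=S(1)$:

$1\leq3\;\Leftarrow\;1\leq2\;\Leftarrow\;1\leq1$

where the last statement holds because the two numbers are the same.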
The relation $\leq$ is reflexive right away. It’s also transitive, since if we have three numbers $a$, $b$, and $c$ with $a\leq b$ and $b\leq c$ then we have a chain of successors from $a$ to $b$ and one from $b$ to $c$, and we can put them together to get one from $a$ to $c$. Finally, the relation is antisymmetric, since if we have two different numbers $a$ and $b$ with both $a\leq b$ and $b\leq a$, then we can build a chain of successors from $a$ back to itself. That would make the successor function fail to be injective which can’t happen. This makes $\leq$ into a partial order. I’ll leave it to you to show that it’s total.
The monoid structure of the natural numbers is a bit easier. Remember that a number $b$ is either ${}0$ or $S(c)$ for some number $c$. We define the sum of $a$ and $b$ using this fact: $a+0$ is $a$, and $a+S(c)$ is $S(a+c)$.
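Unwinding the recursion on a small example (writing $1=S(0)$ and $2=S(1)$):

$2+2=2+S(1)=S(2+1)=S(2+S(0))=S(S(2+0))=S(S(2))=S(3)=4$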
That ${}0$ behaves as an additive identity is clear. We need to show that the sum is associative: given three numbers $a$, $b$, and $c$, we have $(a+b)+c=a+(b+c)$. If $c=0$, then $a+(b+0)=a+b=(a+b)+0$. If $c=S(d)$, then $a+(b+S(d))=a+S(b+d)=S(a+(b+d))$ and $(a+b)+S(d)=S((a+b)+d)$. So if we have associativity when the third number is $d$ we’ll get it for $S(d)$, and we have it for ${}0$. By induction it’s true no matter what the third number is.
There are two more properties here that you should be able to verify using these same techniques. Addition is commutative — $a+b=b+a$ — and addition preserves order — if $a\leq b$ then $a+c\leq b+c$.
Notice in particular that I’m not using any properties of how we model the natural numbers. The von Neumann numerals preferred by Bourbaki have the nice property that $a\leq b$ if $a\subseteq b$ as sets. But the Church numerals don’t. The specifics of the order structure really come from the Peano axioms. They shouldn’t depend at all on such accidents as what sort of model you pick, any more than they should depend on whether or not $3$ is Julius Cæsar. No matter what model you start with that satisfies the Peano axioms you get the commutative ordered monoid structure for free.
March 12, 2007 Posted by | Fundamentals, Numbers | 4 Comments
## Natural Numbers
UPDATE: added paragraph explaining the meaning of the commutative diagram more thoroughly.
I think I’ll start in on some more fundamentals. Today: natural numbers.
The natural numbers are such a common thing that everyone has an intuitive idea what they are. Still, we need to write down specific rules in order to work with them. Back at the end of the 19th century Giuseppe Peano did just that. For our purposes I’ll streamline them a bit.
1. There is a natural number ${}0$.
2. There is a function $S$ from the natural numbers to themselves, called the “successor”.
3. If $a$ and $b$ are natural numbers, then $S(a)=S(b)$ implies $a=b$.
4. If $a$ is a natural number, then $S(a)\neq0$.
5. For every set $K$, if ${}0$ is in $K$ and the successor of each natural number in $K$ is also in $K$, then every natural number is in $K$.
This is the most common list to be found in most texts. It gives a list of basic properties for manipulating logical statements about the natural numbers. However, I find that this list tends to obscure the real meaning and structure of the natural number system. Here’s what the axioms really mean.
The natural numbers form a set $\mathbb N$. The first axiom picks out a special element of $\mathbb N$, called ${}0$. Now, think of a set containing exactly one element: $\{*\}$. A function from this set to any other set $S$ picks out an element of that set: the image of $*$. So the first axiom really says that there is a function $0:\{*\}\rightarrow\mathbb N$.
The second axiom plainly states that there is a function $S:\mathbb N\rightarrow\mathbb N$. The third axiom says that this function is injective: any two distinct natural numbers have distinct successors. The fourth says that the image of the successor function doesn’t contain the image of the zero function.
The fifth axiom is where things get really interesting. So far we have a diagram $\{*\}\rightarrow\mathbb N\rightarrow\mathbb N$. What the fifth axiom is really saying is that this is the universal such diagram of sets! That is, we have the following diagram:
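(Here is the diagram rendered as a plain array; the slanted arrow is $z$ and the vertical arrows are the function $f$ described next.)

$\begin{array}{ccccc}\{*\}&\xrightarrow{\;0\;}&\mathbb{N}&\xrightarrow{\;S\;}&\mathbb{N}\\&\searrow{\scriptstyle z}&\downarrow{\scriptstyle f}&&\downarrow{\scriptstyle f}\\&&K&\xrightarrow{\;s\;}&K\end{array}$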
with the property that if $K$ is any set and $z$ and $s$ are any functions as in the diagram, then there exists a unique function $f:\mathbb N\rightarrow K$ making the whole diagram commute. In fact, at this point the third and fourth Peano axioms are extraneous, since they follow from the universal property!
Remember, all a commutative diagram means is that if you have any two paths between vertices of the diagram, they give the same function. The triangle on the left here says that $f(0(*))=z(*)$. That is, since $K$ has a special element, $f$ has to send ${}0$ to that element. The square on the right says that $f(S(n))=s(f(n))$. If I know where $f$ sends one natural number $n$ and I know the function $s$, then I know where $f$ sends the successor of $n$. The universal property means just that $\mathbb N$ has nothing in it but what we need: ${}0$ and all its successors, and ${}0$ is not the successor of any of them.
Of course, by the exact same sort of argument I gave when discussing direct products of groups, once we have a universal property any two things satisfying that property are isomorphic. This is what justifies talking about “the” natural number system, since any two models of the system are essentially the same.
This is a point that bears stressing: there is no one correct version of the natural numbers. Anything satisfying the axioms will do, and they all behave the same way.
The Bourbaki school like to say that the natural numbers are the following system: The empty set $\emptyset$ is zero, and the successor function is $S(n)=n\cup\{n\}$. But this just provides one model of the system. We could just as well replace the successor function by $S(n)=\{n\}$, and get another perfectly valid model of the natural numbers.
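If it helps to see the two models side by side, here is a small Python sketch; the function names are mine, and frozensets just stand in for the sets in question:

# Two toy models of the successor function, starting from the empty set as zero.
# The function names here are illustrative, not standard.

def succ_union(n: frozenset) -> frozenset:
    """The S(n) = n ∪ {n} model."""
    return n | frozenset({n})

def succ_singleton(n: frozenset) -> frozenset:
    """The S(n) = {n} model."""
    return frozenset({n})

zero = frozenset()
union_model = [zero]
singleton_model = [zero]
for _ in range(3):
    union_model.append(succ_union(union_model[-1]))
    singleton_model.append(succ_singleton(singleton_model[-1]))

# The "3" of the first model has three elements, the "3" of the second has one,
# yet both systems satisfy the Peano axioms equally well.
print(len(union_model[3]), len(singleton_model[3]))  # 3 1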
In the video of Serre that I linked to, he asks at one point “What is the cardinality of 3?” This betrays his membership in Bourbaki, since he clearly is thinking of 3 as some particular set or another, when it’s really just a slot in the system of natural numbers. The Peano axioms don’t talk about “cardinality”, and we can’t build a definition of such a purely set-theoretical concept out of what properties it does discuss. The answer to the question is “!” (“mu”). The Bourbaki definition doesn’t define the natural numbers, but merely shows that within the confines of set theory one can construct a model satisfying the given abstract axioms.
This is how mathematics works at its core. We define a system, including basic pieces and relations between them. We can use those pieces to build more complicated relations, but we can only make sense of those properties inside the system itself. We can build models of systems inside of other systems, but we should never confuse the model with the structure — the map is not the territory.
This point of view seems to fetishize abstraction at first, but it’s really very freeing. I don’t need to know — or even care — what particular set and functions define a given model of the natural numbers. Anything I can say about one model works for any other model. As long as I use the properties as I’ve defined them everything will work out fine, and $1+2=3$ whether I use Bourbaki’s model or not.
March 5, 2007 Posted by | Fundamentals, Numbers | 11 Comments | 2014-12-22 17:07:09 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 362, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9119312167167664, "perplexity": 179.99706944743863}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-52/segments/1418802775517.52/warc/CC-MAIN-20141217075255-00046-ip-10-231-17-201.ec2.internal.warc.gz"} |
http://xlanc.info/percentage-formula-in-excel/ | # Percentage Formula In Excel
percentage formula in excel the percentage adds to the story the percentage of those who prefer a specific project and are adults the formula in is b2b3 copied to and how to apply percentage formula i.
percentage formula in excel excel percentage function percentage formula column excel excel percent formula image titled calculate cost savings percentage excel percentage function percentage formula.
percentage formula in excel percent ranking second worksheet excel percentage formula in excel 2010 pdf percentage formula excel 2003. | 2018-07-20 18:18:12 | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8456313610076904, "perplexity": 4192.596340656629}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-30/segments/1531676591719.4/warc/CC-MAIN-20180720174340-20180720194340-00394.warc.gz"} |
https://academic.oup.com/ofid/article/4/1/ofw236/2410938/Do-Positive-Anaerobic-Culture-Results-Affect | ## Abstract
Background.
Aerobic and anaerobic cultures from body fluids, abscesses, and wounds are ordered routinely. Prior studies have shown that the results of anaerobic blood cultures do not frequently lead to changes in patient management.
Methods.
We performed a retrospective chart review to determine whether positive results of anaerobic tissue and fluid cultures (excluding blood) affect physicians’ treatment approaches. Of 3234 anaerobic cultures, 174 unique patient admissions had positive cultures and met inclusion criteria.
Results.
Only 18% (n = 31) of patient charts with positive cultures had documented physician acknowledgment (90.3% of acknowledgments by infectious diseases physicians), with 9% (n = 15) leading to change in antibiotic regimens based on results. Seventy percent of all patients received initial empiric antibiotics active against anaerobes. Of the remaining 30% (inappropriate, unknown, or no empiric coverage), 1 regimen change was documented after culture results were known.
Conclusions.
Given the lack of management change based on results of anaerobic wound cultures, the value of routine anaerobic culturing is of questionable utility.
Clinicians routinely order aerobic and anaerobic cultures when obtaining specimens from body fluids, abscesses, and deep wounds. Once specimens are collected, clinicians choose their initial, empiric antimicrobial regimen based on the likely pathogens at the site of infection, often selecting a broad-spectrum drug, or combination of drugs, to cover both aerobic and anaerobic organisms. Once culture results are available, the initial regimen may be refined to target the recovered pathogens. However, it is not clear that results translate into modifications. We investigated whether positive results of anaerobic cultures from body fluids/wounds/abscesses/bones affected clinician decisions about treatment, and hence whether the practice of routinely processing these specimens for anaerobic culture is a cost-effective use of microbiology laboratory resources.
There have been multiple retrospective cohort studies in academic and community hospitals, both in the United States and abroad, which reviewed the impact of routine aerobic and anaerobic blood cultures on choice of definitive regimens. Many of these studies show that results of anaerobic blood cultures infrequently lead to changes in patient management, because recognition of the presence of clinical risk factors for anaerobic bacteremia has already led to inclusion of anaerobic coverage in the initial, empiric antimicrobial regimen [1–3]. Salonen et al [4] evaluated the impact of positive anaerobic blood culture results and found that 57% of the patients with positive cultures were already on appropriate treatment, 33% of patients had their treatment modified based on the culture results, but approximately 20% of patients with positive anaerobic blood cultures still did not have their treatment appropriately altered when results became available. The authors concluded that a selective approach in obtaining anaerobic cultures only from patients with a high pretest probability may result in cost-effective care and appropriate management [4]. Other studies have come to similar conclusions regarding lack of cost effectiveness of routine anaerobic blood cultures and recommend more selective testing [5, 6].
In contrast, some investigators have concluded that there is benefit in obtaining routine anaerobic blood cultures, because some of their study patients who were not considered at high risk of anaerobic infection grew clinically significant organisms only in the anaerobic cultures [7]. In addition, they argue that routine anaerobic blood cultures may actually be cost effective, to ensure coverage of anaerobes, if present, and to narrow spectrum if results are negative [8]. After extensive literature review, we found no prior studies addressing anaerobic cultures from specimens other than blood.
## METHODS
This study, approved by the Rutgers Health Sciences Institutional Review Board, was a retrospective chart review of all adult in-patients (age ≥18 years) who had positive anaerobic cultures from specimens other than blood in 2012. The Robert Wood Johnson University Hospital (RWJUH) Microbiology Laboratory provided a list of all the nonblood anaerobic cultures from January 1, 2012 to December 31, 2012. These included cultures from tissue, wounds, drainage from abscesses, bone, pleural fluid, ascitic fluid, synovial fluid, tympanic fluid, and cerebrospinal fluid (CSF). Anaerobic cultures are routinely performed on all wound and body fluid specimens (with the exception of joint fluid and peritoneal dialysis fluid) received in a sterile container and on others if an order is received by the microbiology laboratory at the discretion of the physician. Swab specimens for anaerobic culture must be transported in an anaerobic transport device, whereas tissues or body fluids are sent to the laboratory in sterile containers at room air in a biohazard bag. All anaerobic specimens are inoculated to prereduced (ie, anaerobic) culture media and cultured using Anoxomat Anaerobic Culture system (Advanced Instruments Inc., Norwood, MA). The RWJUH laboratory routinely incubates anaerobic culture specimens for 48 hours unless longer incubation is specifically requested by the ordering clinician based on a high index of suspicion for slow-growing anaerobic bacteria such as Propionibacterium species. This laboratory protocol is based on an internal quality assurance review that was later validated by an internal RWJUH study, which showed that a longer duration of incubation did not significantly increase the yield of anaerobes [9].
Culture data collected included specimen source, organism identification, and time to final report of anaerobic culture. For analysis, if a patient had multiple positive anaerobic cultures, they were grouped together if they were from the same culture site within 2 days and from the same admission. However, if cultures were collected from the same patient on multiple admissions, each admission was counted as a separate patient for data analysis.
Laboratory data and physician orders were obtained from our electronic medical record (EMR). During the period of this study, physicians at RWJUH had not yet begun to enter daily notes into the EMR, so the On-Base EMR system, which gives access to the scanned paper chart, was used for all other elements of data collection, including patient demographics (Table 1), assessment of clinician acknowledgment of positive culture results in daily progress notes, and changes in clinical management based on the results. Data collected included demographics (age, sex), comorbidities, the medical service the patient was admitted to during their inpatient stay, initial antibiotic regimen, acknowledgment in the chart of positive anaerobic culture results, changes in regimen based on results, and whether or not Infectious Diseases consultation was obtained. Anaerobic cultures were considered acknowledged by physicians if culture results were documented in the “Laboratory/Microbiology” or “Assessment/Plan” sections of the physician progress note.
Table 1. Patient Demographics

| Characteristics | No. (%) |
| --- | --- |
| Age (mean) | 55 |
| Male | 74 (48) |
| Comorbid Conditions | |
| Diabetes mellitus | 34 (22) |
| Cancer | 44 (29) |
| Abdominal pathology | 38 (25) |
| Ob/Gyn | 8 (5) |
| Microbiology | |
| Monomicrobial | 118 (77) |
| Polymicrobial | 36 (23) |
| Concomitant-positive aerobic culture | 5 (3) |
## RESULTS
### Culture Data
During the study period, a total of 3234 body fluid/tissue cultures were collected from 1997 patients, and only 205 (6.3%) cultures were positive from 172 patients during a total of 180 hospital admissions. Twenty-six charts were excluded—21 charts of patients less than 18 years of age and 5 charts of patients who had their culture specimens collected as outpatients. A total of 174 cultures from 154 patient charts were included in the analysis.
Of the positive cultures, the highest yield was from abscesses (Table 2). The majority of anaerobic-culture positive abscesses were from intra-abdominal (76%, n = 44) or pelvic (12%, n = 7) sites. Pleural fluid specimens had the lowest yield of anaerobes (3 positive of 755). The average number of days it took to report final anaerobic culture results in the EMR was 4.5 days (range, 1–8 days), with the majority being reported in 3–5 days. Only 1 specimen, which grew Peptoniphilus asaccharolyticus (formerly Peptostreptococcus), took 12 days to final report. The majority of cultured anaerobes were Bacteroides species and Prevotella species (Table 3).
Table 2. Anaerobic Culture Positivity by Source

| Specimen Type | No. of Anaerobic Cultures | No. of Positive Anaerobic Cultures | Percent Positive |
| --- | --- | --- | --- |
| Abdominal cavity fluid | 336 | 20 | 6% |
| Wound/tissue/biopsy | 1236 | 84 | 7% |
| Pleural fluid | 755 | 3 | 0.4% |
| Abscess | 308 | 58 | 18.8% |
| Miscellaneous* | 599 | 40 | 7% |
*Drainage fluid, tympanocentesis fluid, synovial/joint fluid, bone, bile, pelvic fluid, vitreous fluid, lymph node, gallbladder, appendix, gastric fluid, pericardial fluid, placenta, amniotic fluid, aspirate, other fluid not otherwise specified.
Table 3. Anaerobes Recovered

| Bacteria | No. of Isolates |
| --- | --- |
| Bacteroides species | 85 |
| Clostridium species | 14 |
| Fusobacterium species | |
| Mixed anaerobic flora | |
| Peptoniphilus asaccharolyticus | |
| Peptostreptococcus | |
| Porphyromonas gingivalis | |
| Prevotella species | 43 |
| Propionibacterium acnes | |
| Streptococcus constellatus | |
| Tissierella praeacuta | |
| Veillonella | |
### Patient Chart Data
The vast majority (73%) of patients who had positive anaerobic nonblood cultures were on a surgical service (general surgery, surgical oncology, orthopedics, or gynecology). Of the 154 patients with positive cultures included in the analysis, 39 (25%) were discharged before reporting of culture results in the EMR, so it was not possible for physicians to acknowledge the positive cultures in progress notes during the hospitalization or to act on the results. However, of the 115 patients whose culture results were reported before discharge, only 31 had their cultures acknowledged by physicians (27%), mostly by infectious disease consultants (28 of 31 cases) (Chart 1), and only 15 of these cases had antibiotics changed based on the results. For 14 of 15 cases where antibiotics were changed, antimicrobial spectrum was narrowed. In the remaining case, antibiotic coverage was broadened. There were no antibiotic regimen changes in cases without physician acknowledgement.
Chart 1.
Acknowledgment of positive anaerobic culture results stratified by those patients with and without ID consultation.
Most patients were started on empiric antimicrobials that included anaerobic coverage (n = 115, 75%) before culture results. For the remaining 25% of patients, the culture result led to regimen change in only 1 patient.
## DISCUSSION
This study demonstrates that positive anaerobic body fluid/tissue culture results infrequently affected physicians’ treatment decisions. The majority of patients whose wound/fluid cultures grew anaerobes were already receiving empiric treatment with a regimen that was active against the anaerobes that ultimately grew, and very few antibiotic regimens were changed when definitive results became available. In addition, as a consequence of the time required for growth and identification of anaerobic species, final anaerobic culture results often only became available after patients had already been discharged, thereby not allowing physicians to acknowledge or tailor antibiotic regimens in the inpatient setting.
Our study suggests that there is little utility of routine anaerobic tissue/body cultures, because clinical management does not change based on the results. This finding may have major implications for laboratory-resource use and cost-saving practices. The RWJUH Microbiology Laboratory uses the Anoxomat standard system to grow anaerobic cultures. Patients are charged approximately $58.00 per anaerobic culture using CPT code 87076. This suggests a significant opportunity for cost reduction. For example, in this study, pleural fluid specimens had low yield for anaerobes; and because the 3 patients who had positive anaerobic pleural fluid cultures were already on empiric antibiotics with anaerobic activity before the results, antibiotics were not altered. Eliminating 755 anaerobic cultures of pleural fluid alone could have potentially saved $43,790, in addition to the microbiology laboratory technicians' time.
Our results suggest that eliminating routine anaerobic cultures and having a more selective approach may be cost saving. This selective approach should be based on (1) known yield of anaerobic cultures from different body sites and (2) clinical situations in which culture results would be likely to change clinical management and potential patient outcomes. One example would be performing anaerobic culture of CSF in a patient with a ventriculoperitoneal shunt, given the recognition of Propionibacterium as a significant pathogen in that setting. The key to such an approach is open communication between the requesting clinician and the microbiology laboratory.
This study has several limitations. We reviewed written progress notes for documentation that physicians acknowledged culture results. This approach may have underestimated awareness of results. With paper charts, all components of the progress note, including laboratory values, must be entered manually. Therefore, it is possible that physicians would not bother to document a finding that did not affect patient management (eg, positive anaerobic culture on a patient already receiving an antimicrobial that has anaerobic coverage). On the other hand, with the EMR, results of laboratory tests are often inserted into a progress note via keyboard shortcuts, rather than by deliberate review of each individual result. If we were to repeat this study today, it would be difficult to assume that every laboratory value that was pasted was actually reviewed by the physician.
Another potential limitation is the premise that a lack of documentation or lack of change of antibiotics is equivalent to lack of utility of results, and therefore that cultures are not useful. One example would be that the positive anaerobic culture influenced the physician not to narrow the initial empiric antibiotic regimen to cover only the aerobes.
An additional limitation is that many patients were discharged before results were reported. It is unknown whether physicians may have changed antibiotics after discharge when the results became available at an outpatient encounter.
Finally, the low percentage of positive cultures may be a reflection of our institution’s overzealous physicians, who may order cultures routinely when fluid is sampled for any reason (ie, therapeutic thoracentesis), regardless of whether there is any suspicion of infection. In addition, we were not able to differentiate the specific type of specimen that was sent to the laboratory—swab (anaerobic transport) versus tissue or fluid (exposed to air), which may affect the culture yield.
An additional caveat in assessing the relevance of any study report to current practice relates to the rapid changes in technology within the hospital and laboratory settings. Since this study was performed, in addition to the switch to EMR documentation of progress notes, our laboratory has implemented new methodologies for organism identification (matrix-assisted laser desorption ionization-time of flight), which should allow for more rapid reporting of results.
## CONCLUSIONS
More studies are needed to evaluate whether results of anaerobic cultures from specimens other than blood impact patient outcomes (ie, hospital length stay, readmission rates, and morbidity/mortality). If the findings are consistent with our results, we would question the utility of routinely performing anaerobic cultures on most nonblood specimens.
## Acknowledgments
Potential conflicts of interest. All authors: No reported conflicts.
All authors have submitted the ICMJE Form for Potential Conflicts of Interest. Conflicts that the editors consider relevant to the content of the manuscript have been disclosed.
## References
1. Chandler MT, Morton ES, Byrd RP Jr, et al. Reevaluation of anaerobic blood cultures in a Veteran population. South Med J 2000; 93:986–8.
2. Iwata K, Takahashi M. Is anaerobic blood culture necessary? If so, who needs it? Am J Med Sci 2008; 336:58–63.
3. Morris AJ, Wilson ML, Mirrett S, Reller LB. Rationale for selective use of anaerobic blood cultures. J Clin Microbiol 1993; 31:2110–3.
4. Salonen JH, Eerola E, Meurman O. Clinical significance and outcome of anaerobic bacteremia. Clin Infect Dis 1998; 26:1413–7.
5. Ortiz E, Sande MA. Routine use of anaerobic blood cultures: are they still indicated? Am J Med 2000; 108:445–7.
6. Saito T, Senda K, Takakura S, et al. Anaerobic bacteremia: the yield of positive anaerobic blood cultures: patient characteristics and potential risk factors. Clin Chem Lab Med 2003; 41:293–7.
7. Grohs P, Mainardi JL, Podglajen I, et al. Relevance of routine use of the anaerobic blood culture bottle. J Clin Microbiol 2007; 45:2711–5.
8. Rosenblatt JE. Can we afford to do anaerobic cultures and identification? A positive point of view. Clin Infect Dis 1997; 25(Suppl 2):S127–31.
9. Hameed N, Kirn TJ, Weinstein MP. Clinical utility of incubating anaerobic cultures 2 versus 5 days. In: IDweek 2013
; | 2017-02-20 22:31:00 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.3393884301185608, "perplexity": 8280.115454435649}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501170613.8/warc/CC-MAIN-20170219104610-00122-ip-10-171-10-108.ec2.internal.warc.gz"} |
http://openwetware.org/index.php?title=Biomod/2011/Caltech/DeoxyriboNucleicAwesome/Simulation&oldid=521957 | # Biomod/2011/Caltech/DeoxyriboNucleicAwesome/Simulation
# Simulations
## Overview
Our proposed sorting mechanism depends very heavily on a particular random-walking mechanism that has not been demonstrated in literature before. The verification of this mechanism is thus a vital step in our research. Verification of the random walk in one dimension is fairly straightforward: as discussed in the experimental design section, a one-dimensional track is easy to construct, and will behave like a standard 1D random walk, showing an average translation on the order of $n^{\frac{1}{2}}$ after n steps. Thus, we should expect the time it takes to get to some specific level of fluorescence to be proportional to the square of the number of steps we start the walker from the irreversible substrate. If we can, in an experiment, record the fluorescence over time when the walker is planted at different starting points and show that the fluorescence varies by this relationship, we'll have fairly certainly verified one-dimensional random walking.
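As a rough illustration of that scaling (this sketch is not part of the original simulation code, and is written in Python rather than MATLAB for brevity), a walker started at the far end of a 1D track of length d, with the irreversible substrate at position 0, reaches the substrate after roughly d^2 steps on average:

import random

def hitting_time(d):
    """Steps for a walker started at the far (reflecting) end of a 1D track of
    length d, with the irreversible (absorbing) site at position 0."""
    pos, steps = d, 0
    while pos != 0:
        # at the far end the only available move is inward; elsewhere it is +/-1
        pos = pos - 1 if pos == d else pos + random.choice((-1, 1))
        steps += 1
    return steps

for d in (2, 4, 8):
    mean = sum(hitting_time(d) for _ in range(5000)) / 5000
    print(d, mean)  # grows roughly like d**2 (about 4, 16, 64)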
Our particular case of 2D random walking, however, is not as easily understood, especially considering the mobility restrictions (ability to move to only 4 of 6 surrounding locations at any particular time) of our particular walker. As a control for the verification of 2D random walking, though, we still need to get an idea how long the random walk should take, and how that time will change as we start the walker at different points on the origami. We opt to do this by simulating the system with a set of movement rules derived from our design. We also use the same basic simulation (with a few alterations and extra features) to simulate our entire sorting system in a one-cargo, one-goal scenario, to give us some rudimentary numbers on how long sorting should take, with one vs multiple walkers.
Basic parameters and assumptions:
• The unit of time is the step, which is the time it takes a walker to take a step given four good opposite track locations (good locations to step to) around it.
• The walkable track are given coordinates like a grid (which shifts the even columns up by 0.5). The bottom-left is <1, 1>, the top-left <1, 8>, and the bottom-right <16, 1>.
• Movement rules are based on column:
• In even columns, a walker can move in directions <0, 1>, <0, -1>, <1, 0>, <-1, -1>.
• In odd columns, a walker can move in directions <0, 1>, <0, -1>, <-1, 0>, <1, 1>.
An illustration of the grid and motion rules used in the simulation. The bottom-left is the origin (<1,1> because MATLAB indexes by 1). The 2D platform, including track A (red), track B (blue), the marker (tan), cargo (gold), and goal (green), is shown on the left. The grid on the right -- the grid corresponding to our numbering system and representing viable track for a random walk -- is created by shifting even columns up by 0.5. This arrangement (which is, in essence, a visualization tool) reveals through the vertical symmetry of the arrangement that movement rules are going to vary by column only. The valid moves in even and odd columns shown on the left are mapped onto the grid on the right to derive the moveset listed above.
## MATLAB Code
At the core of the simulation is a function which runs one random walk on an origami of specified size. It can run in both a cargo-bearing (one-cargo, one-goal) and a purely random-walk mode. The former has cargo positions corresponding to our particular origami pre-programmed, starts multiple walkers (a number specified by the user) at random locations on the origami, and terminates when all of the cargos have been "sorted" to the goal location (the x axis). The latter runs one walker starting at a specified location, and terminates when that walker reaches the specified irreversible track location. The function returns a log of all walkers' positions over time, a log reporting when cargos were picked up and dropped off, and a count of the number of steps the simulation took. This function is utilized by separate cargo-bearing and random-walk data collection programs that call the function many times over a range of parameters.
The function code (saved as randomWalkFunction.m):
function [log, cargoLog, steps] = randomWalkFunction(x, y, length, ...
    numWalkers, startPos, endPos, cargoBearing, restricted, error)
%x: Width  y: Height
%length: max # of steps to run simulation
%numWalkers = number of walkers to simulate in cargoBearing state
%startPos = starting position for walker in randomwalk state
%endPos = irreversible track location in randomwalk state
%cargoBearing = running cargoBearing (1) vs randomWalking (0)
%restricted = whether we're paying attention to borders
%error = the chance of the failure of any single track
%Random walking cargo pickup/dropoff simulation
%for origami tile, x (horizontal) by y (vertical) dim.
%Locations index by 1. x+ = right, y+ = up
% Gregory Izatt & Caltech BIOMOD 2011
% 20110615: Initial revision
% 20110615: Continuing development
%           Added simulation for cargo pickup/dropoff
%           Adding support for multiple walkers
% 20110616: Debugging motion rules, making display better
% 20110616: Modified to be a random walk function, to be
%           called in a data accumulator program
% 20110628-30: Modified to take into account omitted positions,
%           new probe layout, and automatic halting when
%           starting on impossible positions.
% 20110706: Fixed walker collision. It detects collisions properly now.
% 20110707: Adding support for errors -- cycles through and
%           omits each track position at an input error rate

%Initialize some things:
%Cargo positions:
cargoPos = [[3, 5]; [9, 5]; [15, 5]; [7, 7]; [11, 7]];
filledGoals = [];
omitPos = [[3, 6]; [7, 8]; [8, 5]; [11, 8]; [15, 6]];
steps = 0;
hasCargo = zeros(numWalkers);
sorted = 0;
trackAPoss = [0, 1; 0, -1; 1, 0; -1, -1]; %Movement rules, even column
trackBPoss = [0, 1; 0, -1; -1, 0; 1, 1];  %'', odd column
log = zeros(length, 2*numWalkers + 1);
cargoLog = [];
collisionLog = [];

%Walkers:
% Set position randomly if we're doing cargo bearing simulation,
% or set to supplied startPos if not.
if cargoBearing
    currentPos = zeros(numWalkers, 2);
    for i=1:numWalkers
        done = 0;
        while done ~= 1
            currentPos(i, :) = [randi(x, 1), randi(y, 1)];
            done = checkPossible(numWalkers, currentPos, omitPos, ...
                cargoPos);
        end
    end
else
    numWalkers = 1; %Want to make sure this is one for this case
    currentPos = startPos;
    if checkPossible(numWalkers, currentPos, omitPos, ...
            cargoPos) ~= 1
        'Invalid start position.';
        cargoLog = [];
        steps = -1;
        return
    end
end

%Error: If there's a valid error rate, go omit some positions:
if error > 0
    for xPos=1:x
        for yPos=1:y
            %Only omit if it's not already blocked by something
            if checkPossible(0, [xPos, yPos], omitPos, cargoPos)
                if rand <= error
                    omitPos = [omitPos; [xPos, yPos]];
                end
            end
        end
    end
end

%Convenience:
numOmitPos = size(omitPos, 1);
numCargoPos = size(cargoPos, 1);

%Main loop:
for i=1:length,
    for walker=1:numWalkers
        %Add current pos to log
        log(steps + 1, 2*walker-1:2*walker) = currentPos(walker, :);
        %Update pos to randomly chosen neighbor -- remember,
        %these are the only valid neighbors:
        %   (0, +1), (0, -1)
        %   IF x%2 = 0:  (+1, 0), (-1, -1)
        %   ELSE:        (-1, 0), (+1, +1)
        temp = randi(4, 1);
        if (mod(currentPos(walker, 1),2) == 0)
            newPos = currentPos(walker, :) + trackAPoss(temp, :);
        else
            newPos = currentPos(walker, :) + trackBPoss(temp, :);
        end
        %If we tried to move onto the bottom two spots (in terms of y)
        %on an even column (i.e. a goal), we drop off cargo if we had it
        %and there wasn't one there already.
        %% Specific: 8th column has no goals! It has track instead.
        if newPos(2) <= 2 && mod(newPos(1),2) == 0 && newPos(1) ~= 8
            if cargoBearing && hasCargo(walker) == 1
                %Drop cargo, increment cargo-dropped-count, but
                %only if there isn't already a cargo here
                temp = size(filledGoals);
                match = 0;
                for k=1:temp(1)
                    if filledGoals(k, :) == newPos
                        match = 1;
                        break
                    end
                end
                if match ~= 1
                    hasCargo(walker) = 0;
                    cargoLog = [cargoLog; steps, walker];
                    sorted = sorted + 1;
                    filledGoals = [filledGoals; newPos];
                end
            end
            %Don't move
            newPos = currentPos(walker, :);
        end
        %General out-of-bounds case without cargo drop:
        if restricted && ((newPos(1) > x || newPos(1) < 1 || ...
                newPos(2) > y || newPos(2) < 1))
            %Don't go anywhere
            newPos = currentPos(walker, :);
        end
        %Hitting cargos case:
        for k=1:numCargoPos
            if cargoPos(k, :) == newPos
                %Remove the cargo from the list of cargos and "pick up"
                % if you don't already have a cargo
                if hasCargo(walker) == 0 && cargoBearing
                    cargoPos(k, :) = [-50, -50];
                    hasCargo(walker) = 1;
                    cargoLog = [cargoLog; steps, walker];
                end
                %Anyway, don't move there
                newPos = currentPos(walker, :);
            end
        end
        % Already on irrev. cargo case:
        if (currentPos(walker, :) == endPos)
            return
        end
        %Hitting other walkers case:
        if numWalkers > 1
            for k = 1:numWalkers
                if all(newPos == currentPos(k, :)) && k ~= walker
                    newPos = currentPos(walker, :);
                    collisionLog = [collisionLog; newPos, walker, k];
                end
            end
        end
        %Hitting the omitted positions case:
        %If we have any position matches with "omitted" list,
        %just don't go there.
        match = 0;
        for k=1:numOmitPos
            if omitPos(k, :) == newPos
                match = 1;
            end
        end
        if match == 1
            newPos = currentPos(walker, :);
        end
        %Finally actually update the position
        currentPos(walker, :) = newPos;
    end
    % Step forward, update log
    steps = steps + 1;
    log(steps, 2*numWalkers + 1) = steps - 1;
    if (sorted == 5)
        log(steps+1:end, :) = [];
        break
    end
end
return

%%Checks if a position is a possible place for a walker to be:
function [possible] = checkPossible(numWalkers, currentPos, ...
    omitPos, cargoPos)
% If we're starting on an omitted position, or a goal, a cargo,
% or another walker, just give up immediately, and return a -1:
numOmitPos = size(omitPos, 1);
numCargoPos = size(cargoPos, 1);
possible = 1;
for walker = 1:numWalkers
    thisWalkerPos = currentPos(walker, :);
    % Only run this for this walker if it's placed somewhere
    % valid (i.e. not waiting to be placed, x,y = 0,0)
    if all(thisWalkerPos)
        %Omitted positions:
        for k=1:numOmitPos
            if omitPos(k, :) == thisWalkerPos
                possible = 0;
                return
            end
        end
        %Cargo positions:
        for k=1:numCargoPos
            if cargoPos(k, :) == thisWalkerPos
                possible = 0;
                return
            end
        end
        %Other walkers:
        for k=1:numWalkers
            if (all(currentPos(k, :) == thisWalkerPos)) && ...
                    (k ~= walker)
                possible = 0;
                return
            end
        end
        %Goal positions:
        if mod(thisWalkerPos(1), 2)==0 && thisWalkerPos(1) ~= 8 ...
                && thisWalkerPos(2) <= 2
            possible = 0;
            return
        end
    end
end
return
## Random-Walk Simulation
The data we need from this simulator is a rough projection of the fluorescence response from our test of 2D random walking, which should change based on the starting location of the walker. Because this fluorescence is changed by a fluorophore-quencher interaction upon a walker reaching its irreversible track, in the case where we plant all of the walkers on the same starting track, the time it takes (fluorescenceinitialfluorescencecurrent) in the sample to reach some standard value should be proportional to the average time it takes the walkers to reach the irreversible substrate. As this 'total steps elapsed' value is one of the outputs of our simulation function, we can generate a map of these average walk durations by running a large number of simulations at each point on the origami and averaging the results:
%%% Random walk bulk simulation that
%% runs a battery of tests and plots the results
%% to see how long random walks take on average to complete
%% based on distance from goal / platform size
% Gregory Izatt & Caltech BIOMOD 2011
% 20110616: Initial revision
% 20110624: Updating some documentation
% 20110701: Updating to use new, updated randomWalkFunction
% 20110707: Updated to use new error-allowing randomWalkFunction
%% Dependency: makes calls to randomWalkFunction.m

iterations = 2500;   %Test each case of random walk this # times
xMax = 16;           %Scale of platform for test
yMax = 8;
stopPos = [15, 7];   %Stop position
averages = zeros(xMax, yMax); %Init'ing this
trash = [];          %Trash storing variable

%Cycle over whole area, starting the walker at each position
%and seeing how long it takes it to get to the stop position
matlabpool(4)
for x=1:xMax
    for y=1:yMax
        temp = zeros(iterations, 1);
        parfor i=1:iterations
            [trash, trash2, temp(i)] = randomWalkFunction(xMax, yMax, ...
                10000, 1, [x, y], stopPos, 0, 1, 0.0);
        end
        stdDev(x, y) = std(temp);
        averages(x, y) = mean(temp)
    end
end
matlabpool close
A plot of the number of steps (on an average over 500 iterations) it takes a walker to random walk from any point on the origami to the irreversible track at <15, 7>. The holes are due to omitted, cargo, or goal strands blocking the walker's starting location.
### Results
Results of the bulk data collection at right show that the average random-walk duration, and thus the time for (fluorescenceinitialfluorescencecurrent) to reach some standard level, increases with distance, though it changes less significantly the farther out one gets. Also important to note is that the "effective distance" (in terms of steps) along the short axis of our platform is a significantly less than the same physical distance along the long axis. This difference is due to our arrangement of track A and B: as can be seen in the left half of the diagram at the end of the #Overview section, alternating tracks A and B create a straight vertical highway for the walker to follow. Horizontal movement, in contrast, cannot be accomplished by purely straight-line movement -- it requires a back-and-forth weave that makes motion in that direction slower. The disparity in "effective distances" between the vertical and horizontal dimensions is something, in particular, that we should test for; however, a simple series of tests running random walks at a variety of points across the surface, and the comparison of the resulting fluorescence data to the control provided by this simulation should be sufficient to prove that our walker can, indeed, perform a 2D random walk. | 2014-09-16 07:41:40 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 1, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6197046637535095, "perplexity": 4583.26231502509}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-41/segments/1410657114105.77/warc/CC-MAIN-20140914011154-00238-ip-10-196-40-205.us-west-1.compute.internal.warc.gz"} |
https://www.physicsforums.com/threads/order-in-renormalization-theory.798299/ | # Order in Renormalization Theory
Tags:
1. Feb 17, 2015
### taishizhiqiu
I am currently studying QFT with 'An Introduction to Quantum Field Theory' by peskin. In part 2 (renormalization) of the book, he introduces counterterms and shows how to compute scattering amplitude with them.
Below are counterterms of $\phi^4$ theory:
Then he calculates a 2-2 scattering process to the second order:
Here I have a few questions:
1. Why is the fifth diagram in the second line of the second picture of the same order as the previous three diagrams?
2. Now that $\delta_m,\delta_Z,\delta_\lambda$ are infinite numbers, how can they use perturbation theory and what is the meaning of order in renormalization perturbation theory?
Edit: should this thread be moved to Homework section? It just cannot be fit into the question structure there.
Last edited: Feb 17, 2015
2. Feb 17, 2015
### vanhees71
It's most simple to look at perturbative renormalization in terms of the finite physical quantities. So you start at tree level. There's no counterterm necessary since you just take the S-matrix elements from your Feynman rules, evaluate cross sections at tree level ("Born approximation") and fit the coupling and masses to some experiment.
Then you go further and do a one-loop calculation. Then you'll find divergent loop integrals, but even if you wouldn't have divergences, you'd have to refit your parameters to the experiment again, i.e., you need counter terms in order to readjust the parameters in terms of their physical quantities. At the order given by the Feynman diagrams, both your (divergent) loop diagrams and the corresponding counter terms are of order $\hbar$ (relative to the tree-level order). Then you go on, and do two-loop diagrams. After taking into account the counter terms of the one-loop result for the subdivergences according to Zimmermann's forest formula you are again left with local overall counterterms which cancel the overall divergences of the two-loop diagrams. They are of the same order $\mathcal{O}(\hbar^2)$ as the (divergent) two-loop diagrams and so on. In this iterative way, using Weinberg's convergence theorem and the BPHZ formalism you can show that $\phi^4$ theory is Dyson renormalizable order by order in perturbation theory, i.e., you only need counter terms of the form already present in the bare Lagrangian, i.e., order by order the renormalization of the wave function norm, mass, and coupling constant is sufficient to get finite results for the S-matrix elements entering observable quantities like cross sections.
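For reference, the counterterm structure being described here — written in the notation of the $\phi^4$ counterterms quoted in posting #1 — is the standard split of the bare Lagrangian into a renormalized piece plus counterterms:
$$\mathcal{L} = \frac{1}{2}(\partial_\mu \phi_r)^2 - \frac{1}{2}m^2\phi_r^2 - \frac{\lambda}{4!}\phi_r^4 + \frac{1}{2}\delta_Z(\partial_\mu \phi_r)^2 - \frac{1}{2}\delta_m\phi_r^2 - \frac{\delta_\lambda}{4!}\phi_r^4,$$
with $\delta_Z = Z-1$, $\delta_m = m_0^2 Z - m^2$, $\delta_\lambda = \lambda_0 Z^2 - \lambda$. The counterterms $\delta_Z, \delta_m, \delta_\lambda$ are then fixed order by order so that $m$, $\lambda$, and the field normalization retain their physical values.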
3. Feb 17, 2015
### taishizhiqiu
Please explain why the one-loop diagram and the counterterm diagram shown in my post are of order $\hbar$ (especially the counterterm diagram). Most textbooks set $\hbar=1$, so I have difficulty seeing this.
Will the result of renormalized perturbation theory become smaller and smaller order by order? If it does, why? If it does not, what's the meaning of perturbation?
4. Feb 17, 2015
### vanhees71
You can as well take the physical coupling constant $\lambda$ as your counting parameter. In $\phi^4$ theory they are strictly correlated. The $\hbar$ order is given by the number of loops, i.e., each quantum loop in the Feynman diagram enhances its $\hbar$ order by one. Have a look at my QFT manuscript. In one chapter, I reintroduce $\hbar$ to make this counting explicit. The loop expansion is nothing else than the stationary-phase expansion of the path integral to evaluate the effective (quantum) action:
http://fias.uni-frankfurt.de/~hees/publ/lect.pdf
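As a compact summary of the loop counting referred to above: for a connected diagram with $I$ internal lines, $V$ vertices, and $L$ loops,
$$L = I - V + 1, \qquad \text{diagram} \sim \hbar^{I-V} = \hbar^{L-1},$$
since each propagator contributes a factor $\hbar$ and each vertex a factor $1/\hbar$. Relative to the tree-level ($L=0$) amplitude, an $L$-loop diagram is therefore suppressed by $\hbar^{L}$.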
5. Feb 18, 2015
### taishizhiqiu
I don't see any loops in the counterterm diagram(nor any $\lambda$ term), can you show me explicitly?
6. Feb 18, 2015
### vanhees71
Take the example in posting #1. The three one-loop diagrams can be written as $\mathrm{i} \mathcal{M}=-\mathrm{i} [A(s)+A(t) + A(u)]$ with the usual Mandelstam variables $s$, $t$, and $u$.
The one-loop piece $A(s)$ is logarithmically divergent. In dim. reg. you have
$A(s)=-\frac{\lambda^2}{32 \pi^2 \epsilon}+\text{finite}.$
In order to make this finite in the limit $\epsilon=\frac{4-d}{2} \rightarrow 0$, you have to subtract at least this pole term (minimal subtraction scheme), and this is of order $\lambda^2$, of course.
At the same time, the counterterm are always local terms as in the original expression. Thus in diagrammatic language it's usually drawn as some blob as in posting #1 (last graph). Technically these are the contractions of divergent diagrams in the forest formula.
If you go to higher order, you have to subtract first all subdivergences (which can be nested but not overlapping, which was the great achievement by BPHZ and is the essence of Zimmermann's forest formula). This is done by drawing all bare diagrams and contractions of divergent subdiagrams. The subdiagrams are of lower order and thus you know the counter terms and thus you know the Feynman rules for the contractions, which now stand for counter terms. These make all the subdivergences finite, and you are left with an overall divergence, which is again a local term as appearing in the original Lagrangian (that's the 2nd essence of the forest formula). In this way you can go inductively from one to the next loop (expansion parameter $\hbar$) or coupling-constant order (expansion parameter $\lambda$), which are equivalent for $\phi^4$ theory.
For details of the calculation, see p. 157 of my lecture notes. I just realize that I have to repair the diagram on p. 157. I hope, I can manage this right away!
7. Mar 20, 2015
### taishizhiqiu
I have read much more about renormalization now and I know how infinities are cured, but I am still confused.
What justifies the renormalized perturbation theory? Now that infinities are involved, I cannot expect a convergent sequence. | 2017-08-17 00:58:07 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 2, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7996218800544739, "perplexity": 682.0021037074744}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-34/segments/1502886102757.45/warc/CC-MAIN-20170816231829-20170817011829-00646.warc.gz"} |
https://www.gradesaver.com/textbooks/math/algebra/college-algebra-11th-edition/chapter-4-review-exercises-page-469/54 | # Chapter 4 - Review Exercises - Page 469: 54
$x=\dfrac{e^{16}}{5}$
#### Work Step by Step
$\bf{\text{Solution Outline:}}$ To solve the given equation, $\ln(5x)=16 ,$ use the definition of natural logarithms and convert to exponential form. Finally, use the properties of equality to isolate the variable. $\bf{\text{Solution Details:}}$ Since $\ln x=\log_e x,$ the equation above is equivalent to \begin{array}{l}\require{cancel} \log_e(5x)=16 .\end{array} Since $\log_by=x$ is equivalent to $y=b^x$, the equation above, in exponential form, is equivalent to \begin{array}{l}\require{cancel} 5x=e^{16} .\end{array} Using the properties of equality to isolate the variable results to \begin{array}{l}\require{cancel} x=\dfrac{e^{16}}{5} .\end{array}
After you claim an answer you’ll have 24 hours to send in a draft. An editor will review the submission and either publish your submission or provide feedback. | 2020-10-21 10:03:01 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9560357928276062, "perplexity": 839.1178273198191}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-45/segments/1603107876307.21/warc/CC-MAIN-20201021093214-20201021123214-00609.warc.gz"} |
http://datatiles.ai/?page_id=347 | DecisionTree
## Decision Trees (DTs) are a non-parametric supervised learning method used for classification and regression. The goal is to create a model that predicts the value of a target variable by learning simple decision rules inferred from the data features (as defined by scikit-learn)¶
### Case Study: We will use a Decision Tree to predict hiring based on past data.¶
In [75]:
import numpy as np
import pandas as pd
from sklearn import tree
input_file = "PastHires.csv"
Past_Hires = pd.read_csv(input_file)  # read PastHires.csv into the DataFrame used below
In [76]:
Past_Hires
Out[76]:
Years Experience Employed? Previous employers Level of Education Top-tier school Interned Hired
0 10 Y 4 BS N N Y
1 0 N 0 BS Y Y Y
2 7 N 6 BS N N N
3 2 Y 1 MS Y N Y
4 20 N 2 PhD Y N N
5 0 N 0 PhD Y Y Y
6 5 Y 2 MS N Y Y
7 3 N 1 BS N Y Y
8 15 Y 5 BS N N Y
9 0 N 0 BS N N N
10 1 N 1 PhD Y N N
11 4 Y 1 BS N Y Y
12 0 N 0 PhD Y N Y
Map Y,N to 1,0 and levels of education to scale of 0 for BS, 1 for MS and 2 for PhD
In [77]:
d = {'Y': 1, 'N': 0}
Past_Hires['Hired'] = Past_Hires['Hired'].map(d) ## using map
Past_Hires['Employed?'] = Past_Hires['Employed?'].map(d)
Past_Hires['Top-tier school'] = Past_Hires['Top-tier school'].map(d)
Past_Hires['Interned'] = Past_Hires['Interned'].map(d)
d = {'BS': 0, 'MS': 1, 'PhD': 2}
Past_Hires['Level of Education'] = Past_Hires['Level of Education'].map(d)
Past_Hires
Out[77]:
Years Experience Employed? Previous employers Level of Education Top-tier school Interned Hired
0 10 1 4 0 0 0 1
1 0 0 0 0 1 1 1
2 7 0 6 0 0 0 0
3 2 1 1 1 1 0 1
4 20 0 2 2 1 0 0
5 0 0 0 2 1 1 1
6 5 1 2 1 0 1 1
7 3 0 1 0 0 1 1
8 15 1 5 0 0 0 1
9 0 0 0 0 0 0 0
10 1 0 1 2 1 0 0
11 4 1 1 0 0 1 1
12 0 0 0 2 1 0 1
### Look at the features:¶
- Years Experience
- Employed?
- Previous employers
- Level of Education
- Top-tier school
- Interned
- Hired
### Hired is the Target Feature¶
Years Experience, Employed?, Previous employers, Level of Education, Top-tier school, and Interned are the features we will use to predict Hired.
In [78]:
features = list(Past_Hires.columns[:6])
features
Out[78]:
['Years Experience',
'Employed?',
'Previous employers',
'Level of Education',
'Top-tier school',
'Interned']
In [79]:
type(features)
Out[79]:
list
### Construct the decision tree using Decision Tree Classifier.¶
In [80]:
y = Past_Hires["Hired"]
X = Past_Hires[features]
In [81]:
clf = tree.DecisionTreeClassifier()
In [82]:
clf = clf.fit(X,y)
In [83]:
from sklearn.externals.six import StringIO
from IPython.display import Image
from sklearn.tree import export_graphviz
import pydotplus
dot_data = StringIO()
export_graphviz(clf, out_file=dot_data,
filled=True, rounded=True,
special_characters=True)
graph = pydotplus.graph_from_dot_data(dot_data.getvalue())
Image(graph.create_png())
Out[83]:
To read this decision tree, each condition branches left for "true" and right for "false". When you end up at a value, the value array represents how many samples exist in each target value.
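A lighter-weight alternative (not part of the original notebook) is scikit-learn's text renderer, which prints the same rules without the graphviz/pydotplus dependency (export_text is available in scikit-learn 0.21+):

from sklearn.tree import export_text

# Print the fitted tree as indented if/else rules, using the feature names defined above.
print(export_text(clf, feature_names=features))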
In [84]:
features
Out[84]:
['Years Experience',
'Employed?',
'Previous employers',
'Level of Education',
'Top-tier school',
'Interned']
In [85]:
print (clf.predict([[10, 1, 4, 0, 0,0 ]]))
[1]
## Ensemble learning: using a random forest. We will build 12 decision Trees¶
We'll use a random forest of 12 decision trees to predict employment of specific candidate profiles:
In [86]:
from sklearn.ensemble import RandomForestClassifier
clf10TREE = RandomForestClassifier(n_estimators=12)
clf10TREE = clf10TREE.fit(X, y)  # fit the random forest on the same features and target
### Time to predict, Candidate # 1: Predict employment of an employed candidate with 9 years of experience¶
In [87]:
print (clf10TREE.predict([[9, 1, 4, 0, 0, 0]]))
[1]
### Time to predict, Candidate # 2: Predict employment of an unemployed candidate with 3 years of experience¶
In [88]:
print (clf10TREE.predict([[3, 0, 0, 0, 0, 0]]))
[0]
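As an optional follow-up (not part of the original notebook), the fitted forest exposes feature_importances_, which gives a rough view of which columns drive the prediction; with only 13 training rows this is illustrative rather than reliable:

# Rank the features by the forest's impurity-based importances.
for name, importance in sorted(zip(features, clf10TREE.feature_importances_),
                               key=lambda pair: pair[1], reverse=True):
    print("{}: {:.2f}".format(name, importance))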
In [89]:
# Thanks , This is a sample code , (C) DataTiles.io , DataTiles.ai | 2018-11-14 14:19:05 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.20997484028339386, "perplexity": 2267.5518318560175}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-47/segments/1542039742020.26/warc/CC-MAIN-20181114125234-20181114151234-00200.warc.gz"} |
http://www.eastgate.com/Tinderbox/forum/YaBB.cgi?action=print;num=1460312897 | Tinderbox User-to-User Forum (for formal tech support please email: [email protected]) http://www.eastgate.com/Tinderbox/forum//YaBB.cgi Tinderbox Users >> Questions and Answers >> How are the Templates used during exporting? http://www.eastgate.com/Tinderbox/forum//YaBB.cgi?num=1460312897 Message started by Desalegn on Apr 10th, 2016, 2:28pm
Title: How are the Templates used during exporting? Post by Desalegn on Apr 10th, 2016, 2:28pm I tried to read in this blog and website. The basics are barely explained. So, I have to send this paper to the professor. I have to clean it up in LaTeX. I have filled in the HTML template as explained on the website. But I don't know how TB uses the template during the exporting. I tried all the available options: "Export selected notes" is what I actually want. What I get at the output is "Unable to find this template". Can somebody tell me what mistake I made? Export & Template are not even mentioned in the "getting started..." document.
Title: Re: How are the Templates used during exporting? Post by Mark Bernstein on Apr 10th, 2016, 4:06pm There's a big chapter on Export in Tinderbox Help.But I sense that you may have a looming deadline, and that the export is likely to be an undemanding and one-time event. In that case, you might be better off using File ▸ Export Text to RTF, and then using whatever LaTeX tool you prefer to convert it to LaTeX.
For your question, you want to change the export attributes (such as HTMLParagraphStart) for the exported notes (or for their prototype), rather than for the template.
Title: Re: How are the Templates used during exporting? Post by Desalegn on Apr 10th, 2016, 4:27pm Aha, that is where I am lost. Now I have assigned the "Template" as a prototype. I get the extension right (.tex), and all the attributes are assigned to the actual notes. But the exported file is still empty: it contains only "Unable to find this template".
Title: Re: How are the Templates used during exporting? Post by Mark Bernstein on Apr 10th, 2016, 4:52pm The $HTMLExportTemplate for the exported notes needs to be set; one typically does this in the HTML Inspector, which has a nice pull-down menu of templates. Title: Re: How are the Templates used during exporting? Post by Desalegn on Apr 10th, 2016, 6:47pm Thank you guys. I submitted the document by converting via RTF as you suggested. Also, I understood my original problem: I was using the Template both as template and as prototype. There are still certain issues with the export. I created a Test Note which contains the following $Text as a sample:
the template contains the following:
Code:
\documentclass{article}
\title{^title^}
\begin{document}
\maketitle
^text^
\end{document}
So, I am expecting the following kind of Latex document:
Code:
\documentclass{article}
\title{Test Note}
\begin{document}
\maketitle
\textbf{\Large The test file contains some bigger note}}
The formatting could be \textit{italics}, or \textbf{bold}. \\
So, this is note is going to be exported too:\\
\begin{itemize}
 \item lists
 \item like
 \item this
\end{itemize}
\begin{enumerate}\\
 \item first
 \item second
 \item third
 \item fourth
\end{enumerate}
What I am actually getting after using the above template is:
Code:
\documentclass{article}
\title{Test Note}
\begin{document}
\maketitle

\textbf{The test file contains some bigger note}

The formatting could be \textit{italics}, or \textbf{bold}.\\
So, this is note is going to be exported too:\\
 - lists
 - like
 - this
And ordered numbers like:\\
 1 first
 2 second
 3 third
 4 fourth
\end{document}
The numbered lists, bulleted lists, and headings are not recognized even though I assigned these attributes as shown in Mark Anderson's file.
Here is the file https://www.dropbox.com/s/hm1kre0sj33ome9/LatexExport.tbx?dl=0 if you want to look at it.
Title: Re: How are the Templates used during exporting?
Post by Mark Anderson on Apr 11th, 2016, 4:34am
As described on the aTbRef page, the LaTeX code for the mark-up (e.g. to denote a bold section) goes in the note itself**. The template tells TB which elements of the current note (and/or other notes) are to go in the exported page to be created. Thus ^text^ in the template says "insert here the processed $Text of the current note". TB does this using the mark-up stored in the note's $HTML... series of system attributes; you can edit these via a UI using either Get Info or the Export Inspector's sub-tabs.
** You've wisely used a prototype here as that means you only need to customise all the HTML-related mark-up attributes once. Using the pLatex prototype, the ordinary note inherits the LaTeX-valued $HTMLBoldStart, etc.

The <h3> is due to the auto-headings feature (explained here). My understanding is you can't customise that process - or disable it - without affecting (HTML) export overall. Bottom line - in this scenario, keep all your text the same size and use inline LaTeX to indicate headings. This is a good example of why I stated that LaTeX isn't a native form of export.

Your bulleted lists don't work for export, because TB doesn't know they are lists. You need to do either of:

• Use RTF lists, set via Format menu > Text > List…. But, as at v6.5.0, I think this only works for RTF bulleted lists. Other list item marker types and numbered lists are not detected as such during formatted (HTML) export.
• Use Tinderbox list mark-up:
  • Starting a paragraph with an asterisk (*) tells TB to render that paragraph (and contiguous ones with the same start) as an unordered list (HTML bulleted list).
  • Using a hash symbol (#) to start paragraphs tells Tinderbox to make an ordered (numbered) list.

Using the first method I can get the LaTeX mark-up for only a bulleted list. Using the second method, I can get LaTeX bulleted and numbered list types, so I'd suggest using the latter and avoid RTF-style lists for now. The notes from your TBX, as amended for my test, look like this:

Code:
So, this is note is going to be exported too:
*lists
*like
*this
And ordered numbers like:
#first
#second
#third
#fourth

It is up to you whether you put any space after the asterisk or bullet (or before/after the first list item) - it doesn't affect the creation of list output, just the output white space. I've also updated the aTbRef page to clarify a few of these points.

Title: Re: How are the Templates used during exporting? Post by Desalegn on Apr 11th, 2016, 4:54am That is interesting. I didn't know that TB treats the stars as lists, and the hash as ordered numbers. That is almost Markdown. ::) ---both worries and possibilities strike me here--- :P And, one more correction: once you are on that page, replace start with begin. The native LaTeX doesn't seem to understand Start. Thank you Mark. That was helpful.

Title: Re: How are the Templates used during exporting? Post by Mark Anderson on Apr 11th, 2016, 6:00am I was using 'begin' in the general sense of the word and not as a LaTeX command. However, I've revised the aTbRef page "Working with LaTeX" even further to hopefully make things more clear. I've also made it plain that this isn't a built-in feature but rather an opportunity arising from the HTML Export customisation features. As such, a number of features, e.g. link and auto-heading mark-up, are beyond the user's control. Thus such features should be avoided (or no LaTeX output expected from them). Additions to the article are welcome, but please note aTbRef is not - by intent** - a 'how-to' tutorial. The aim here is to describe necessary setting changes to achieve LaTeX output. Use of LaTeX seems to have a useful public forum at http://tex.stackexchange.com.

** The resource is already large and takes care to maintain without breaking. Expanding to 'how-to's is best left for a separate resource.

~~~~~~~

As a sidenote, it's worth noting that Tinderbox's use of 'quicklist' markup pre-dates Gruber's MarkDown by a few years.
Markdown is not supported within $Text, though it is a feature that has been requested (for HTML output, though by the means discussed here it might go to other formats/mark-up). A bigger implementation issue is opening up the mark-up for links and images, as without this such support is limited in what it can do. IOW, the harder part of any such implementation is less obvious to the user.
https://math.stackexchange.com/questions/2011270/how-to-prove-this-limit-does-not-exist | # How to prove this limit does not exist
I'm a relative novice to epsilon-delta proofs. My professor assigned this practice problem and I'm having terrible trouble understanding the answer he gave. Moreover, I can't find a good account for a general strategy for how to do these sorts of proofs. I understand the epsilon delta definition, I understand what I'm supposed to do, but I need advice on strategy.
The actual problem.
Prove that the limit
$$\lim_{x \rightarrow 2} \frac{x^3}{x-2}$$ does not exist.
So far I'm pretty stumped; I know I need to show that there is some $\epsilon$ such that $x$ being arbitrarily close to $2$ does not guarantee that $f(x)$ is within $\epsilon$ of $L$, but that's all I've got.
Thanks.
• Do you understand intuitively why this limit doesn't exist? – Henning Makholm Nov 13 '16 at 0:04
• Yes; it's unbounded at 2. – BenL Nov 13 '16 at 0:06
• Couple of hints in no particular order: $f(x)$ changes sign in any neighborhood of $2$. Also, $f(x)$ is unbounded in any neighborhood of $2$. – dxiv Nov 13 '16 at 0:10
• Uniqueness of limits has not been proven in class and thus can't be used in this proof. And I know it's unbounded; I just don't have any idea how to formalize this into a proof. – BenL Nov 13 '16 at 0:13
• Can an unbounded function stay within a (bounded) neighborhood $(L - \epsilon, L + \epsilon)$? – dxiv Nov 13 '16 at 0:15
First, let's simplify the problem. The numerator has a greater degree than the denominator, which means we can use long division: $$\frac{x^3}{x-2} \equiv x^2+2x+4 + \frac{8}{x-2}$$ Standard properties of limits, i.e. $\lim (\mathrm f+\mathrm g) = \lim \mathrm f + \lim \mathrm g$ and $\lim (k\mathrm f) = k \lim \mathrm f$, give $$\lim_{x \to 2} \left(\frac{x^3}{x-2}\right) = 12+\lim_{x \to 2} \left(\frac{8}{x-2}\right) = 12+8\, \lim_{x \to 2} \left(\frac{1}{x-2}\right)$$
Clearly the limit of $\frac{1}{x-2}$ as $x \to 2$ does not exist, since the function is unbounded near $2$. And if the original limit did exist, then rearranging the identity above (using the same limit laws) would force $\lim_{x\to 2}\frac{1}{x-2}$ to exist as well; hence the original limit does not exist.
It's enough to show that $\frac{1}{x-2}$ is unbounded.
For any $L>0$, we can find $x >2$ for which $\frac{1}{x-2} > L$.
If $\frac{1}{x-2} > L$ then $0 < x-2 < \frac{1}{L}$, and so $2< x < 2 + \frac{1}{L}$.
How big do you want $\frac{1}{x-2}$ to be? Let's say $L = 1,000,000$. For any $2 < x< 2 + \frac{1}{L}$, i.e. $2 < x < 2.000001$, you'll have $\frac{1}{x-2} > 1,000,000$.
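Not a proof, but a quick numerical check (my own addition, not part of the answer above) makes the behaviour in the hints visible: the values blow up in magnitude and take opposite signs on either side of $x=2$.

def f(x):
    return x**3 / (x - 2)

for k in range(1, 7):
    h = 10.0 ** (-k)
    print(f"x = 2+{h:g}: f(x) = {f(2 + h):12.4g}    x = 2-{h:g}: f(x) = {f(2 - h):12.4g}")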
A limit exists iff $\exists a$ $\forall \epsilon>0$ $\exists \delta>0$ such that $0<|x-2|<\delta \implies |f(x)-a|<\epsilon$.
To prove a limit does not exist, you need to prove the opposite proposition, i.e.
$\forall a$ $\exists \epsilon>0$ $\forall \delta>0$ there exists $x$ with $0<|x-2|<\delta$ and $|f(x)-a|\ge\epsilon$.
$2+\delta/2$ and $2-\delta/2$ are good candidates: they are close enough to $2$, and their images under the function are very far apart, so at least one of them must be farther than $\epsilon$ from $a$.
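To make that last step concrete, here is one way to carry out the estimate (my own sketch, taking $\epsilon = 1$; for $\delta \ge 2$ one can simply reuse the points chosen for $\delta = 1$, which also lie within $\delta$ of $2$). For $0 < \delta < 2$, the point $x = 2 + \delta/2$ satisfies $0<|x-2|<\delta$ and $x^3 > 8$, so
$$f(2+\delta/2) = \frac{(2+\delta/2)^3}{\delta/2} > \frac{16}{\delta},$$
while $x = 2 - \delta/2$ satisfies $0<|x-2|<\delta$ and $1 < x < 2$, hence $x^3 > 1$ and
$$f(2-\delta/2) = \frac{(2-\delta/2)^3}{-\delta/2} < -\frac{2}{\delta}.$$
The two values differ by more than $18/\delta > 9$, so they cannot both lie within distance $1$ of any single $a$. This is exactly the negated definition with $\epsilon = 1$.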
Recall the definition of limit as follow:
We write $\lim\limits_{x \to 2} f(x) = a$ if for any $\epsilon > 0$, there exists $\delta > 0$, possibly depending on $\epsilon$, such that $|f(x) - a| < \epsilon$ for all $x$ with $0 < |x - 2| < \delta$.
Now look at $\frac{x^3}{x - 2}$: the closer $x$ gets to $2$ (for example, $1.99$ or $2.01$), the larger $|f(x)|$ becomes, and it grows without bound. So on any interval containing $2$, the values of $f$ become arbitrarily large in magnitude; how, then, could they stay arbitrarily close to any particular value?
This contradiction indicates the limit cannot possibly exist!
• You've got the definition of limit backwards ($\delta$ depends on $\epsilon$, there is no such $\delta$ for $\forall \epsilon$ unless the function is constant). – dxiv Nov 13 '16 at 1:17
• Thank you for your comment! That is also my understanding; I'm updating my wording to reflect that. – Andrew Au Nov 13 '16 at 3:39
http://www.chegg.com/homework-help/questions-and-answers/find-distance-point-p-1-4-3-line-l-x-2-t-y-1-t-z-3t-using-following-methods--l-line-2-spac-q1450324 | Find the distance from the point P(1, 4, -3) to the line L: x=2+t, y=-1-t, z=3t by using each of the following methods.
a). If L is a line in 2-space or 3-space that passes through the points A and B, then the distance d from a point P to the line L is equal to the length of the component of the vector $\overrightarrow{AP}$ that is orthogonal to the vector $\overrightarrow{AB}$.
b). Show that in 3-space the distance d from a point P to the line L through points A and B can be expressed as $d = \dfrac{\lVert\overrightarrow{AP} \times \overrightarrow{AB}\rVert}{\lVert\overrightarrow{AB}\rVert},$
then use this result to find the distance between the point P and line L given at the beginning of this problem.
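A quick numerical check of the part (b) formula for the given data (my own sketch; I read the point A = (2, -1, 0) and the direction vector (1, -1, 3) off the parametric equations, and use the direction vector in place of the vector from A to B, which rescales numerator and denominator by the same factor):

import numpy as np

P = np.array([1.0, 4.0, -3.0])
A = np.array([2.0, -1.0, 0.0])   # point on L at t = 0
v = np.array([1.0, -1.0, 3.0])   # direction vector of L

AP = P - A
d = np.linalg.norm(np.cross(AP, v)) / np.linalg.norm(v)
print(d)   # sqrt(160/11), roughly 3.81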
https://www.transtutors.com/questions/create-an-income-statement-64008.htm | # create an income statement
B Corporation had Income from continuing operations of $800,000 (after taxes) in 2010, unadjusted. In addition, the following information, which has not been considered, is as follows.
1. In 2010, the company adopted the double-declining-balance method of amortizing equipment. Prior to 2010, Hauser had used the straight-line method. The change decreases income for 2010 by $50,000 (pre-tax) and the cumulative effect of the change on prior years' income was a $200,000 (pre-tax) decrease.
2. A machine was sold for $140,000 cash during the year at a time when its book value was $100,000. (Amortization has been properly recorded.) The company often sells machinery of this type.
3. B Corp decided to discontinue its stereo division in 2010. During the current year, the loss on the disposal of the segment of the business was $150,000 less applicable taxes.
Instructions
- Present in good form the Income Statement of the Corporation for 2010, starting with "income from continuing operations, unadjusted" ($800,000). Then deal with the 3 items above as required and complete the income statement.
- The tax rate is 20% and 100,000 shares of common stock were outstanding during the year.
Show calculations separate from the Income Statement, as your Income Statement must be in good form, which means it is to look as a formal statement would. Any calculations to arrive at these figures should be shown on a separate sheet, or on the bottom of the page, and referenced to the Income Statement.
http://www.sciforums.com/threads/computer-courses-and.5506/ | # computer courses and \$
Discussion in 'Computer Science & Culture' started by aerosimon, Jan 28, 2002.
1. ### aerosimon (Registered Member)
since some ... let me rephrase, i mean since most of you seem to work in the computer industry could anyone tell me which course is better to take if i was interested in both the computers side and also the money side (im greedy
)
would i be better off doing computer engineering instead of computer science if i wanted to make more money?
also i was wondering that since computer engineering is specialised in the engineering of computers would jobs be harder to come by, and since computer science can get you into most jobs in the computer industry would jobs be easier to find?
### Porfiry
Well, I strongly recommend not doing something for the money alone. Seriously consider what you would enjoy doing. If you really want money, get an MBA and you can earn it by bullshitting yourself through life.
However, if you are seriously considering CS/CE, realize there is a substantive difference between the two. CE is hardware. CE tends to appeal to very hardcore math/physics nerds, because it draws heavily on those two disciplines. CS, on the other hand is more befitting 'lesser' nerds. That is, nerds with some sociopolitical perspective.
I hear that CE is ultra-competitive. So, you'll need to be very skilled just to get an entry-level job. CS, on the other hand, is a bit more forgiving since there's a wider variety of chores you can perform. However, impressive skill is still necessary if you expect to rise above your menial chores and entry-level salary.
5. ### aerosimon (Registered Member)
ok thanks porfiry for the info, i'll see how i go in my first year of computer science. now that you tell me how competitive CE is i think i like my chances with computer science
. although im not too bad in maths and physics, i dont think i can match some of the smarties doing it aswell. again thanks for the info!
7. ### Rickॐ (Valued Senior Member)
Hey Dave,
hey pretty is the best btw.
bye!
8. ### malisha (Registered Senior Member)
Well i did comp-sci because i like software development and programming a little more then hardware. But it also depends on where you live.
In Aus there is a limited about of jobs on the hardware side of things, alot of my friends that did the extra year in comp-eng are getting into the same jobs that I am and to get a decent eng type of job you have to go overseas.
But it is really important to do what you like, otherwise your not going to be happy with your job no matter how much money it makes.
9. ### aerosimon (Registered Member)
hi malisha, i wanted to ask if you did an honours year for the computer science course. also how many years did it take you to finish your comp sci course, because at my uni it's only a 3 year course and providing you obtain good grades you may enter the honours year (4th) if you want to.
i also like programming but i dont mind studying the hardware side of things in comp engin aswell, so i've decided if i get good enough grades to transfer i will, but if i dont get good enough grades i'll just strive to get into the honours year, after all if i dont do well in comp sci i dont think i'd fair much better in comp engineering
10. ### malisha (Registered Senior Member)
Hi areo,
Na i didnt do honours, my grades where good enough but i instead had a job offer for 47k which i could not pass up. I believed that spending a year of actual real work experiance working and learning the industry first hand is alot better then spending another year of uni, you can always go back to uni but getting a job initially is difficult, expecially one your really happy with.
My degree was 3 years @ university of sydney in Aus and i enjoyed it a hell of a lot.
If you like hardware by all means do comp eng, me on the other hand, i loved the idea of just needing a computer and compiler to create something cool, not need for any bread boards and chips
so i focused all my energy on just software stuff.
And another thing, please dont asume that just because you might do crap in comp-sci that you will do crap in comp eng, i know alot of comp-eng guys who knock comp-sci studnets and then when an assignment comes up they turn around and ask for help from a comp-sci student, either by way of copying or explanantions, some people are just better at hardware and others are just better at programming, you seem like you like programming and also hardware so you would be very suited to comp-eng if you like both it is alot better because you get alot more exposure in comp-eng which is always a good thing.
, but if you only like one thing in particular dont force doing a course that covers both, its a waste of your time and you'll really get annoyed close to exam dates
11. ### aerosimon (Registered Member)
wow a 47k starting salary is very good! hope you get promoted and earn even more! well i really like programming but im not too good at it, maybe its because i bludged all through software design in my high school year... i'll work harder this year! well my marks werent good enough to enter computer engineering in the first year, i missed out by a couple of marks so i decided if i do well enough i'd probably transfer to computer engineering after the first year and start the second year of computer engineering (because the electives i chose matched those of comp eng's first year). if i dont get into it i'll try to get into the honours year of computer science, but i havent even started my first year so im not going to worry about it just yet (i've got other things to worry about
)
im studying at the university of nsw, i think i could have gotten into comp engin there but i dont really like syd university that much not that its bad! just that the building look a bit old to me and its near redfern which is a bit dangerous.
i would like if we were to keep in touch, my email is at [email protected], because if i have questions about the course i'd know who to turn to =) thanks
12. ### malisha (Registered Senior Member)
heheh no sh*t, unsw i got heaps of friends that go there, and they doing comp eng as well.
You know my reason for choosing sydney uni, because it was really open and had alot of nice areas to sit, i imagined myself sitting in the big quad area and eating lunch their but i never got around to doing it one because that area had alot of weird people and reason two is because i stayed in the labs to much or hung out at the cafeteria .
Actually though comp-eng in sydney was alittle higher then newsouth, i think newsouth had uai cut of at 91% whilst sydneys was 93% - 95% uai, well in my year anyway.
I got an 89.suthing not to good not to bad, i cant quite remember, i actually wanted more because i wanted comp-sci advanced but after the first year doing normal comp-sci was ok, i did take some advanced courses though, jsut the onces that i liked, but i tell you something the minute you walk into uni uai's are all out the door, people that got 99% uai are dying and people that got 75% uai and barley got into science are kicking butt
(not all but u get the point) i guess it all depends on how hard you work at uni. And let me tell you first year is the all important year where you pick up the basics or you copy all your assignments from then on so its important you get whats happening in first year.
And about the redfern thing, yeh as long as its in the morining not to much happen, just that theres always the 'ay brother you got a dollar brother' happening all the time which is quite annoying.
yeh i dont mind keeping in contact my email is
[email protected] , good luck with your degree though, and beware of the late late nights programming trying to finish of assignments
13. ### malisha (Registered Senior Member)
Dam sorry i double posted somehow
http://jdh.hamkins.org/tag/jonas-reitz/ | # The set-theoretic universe is not necessarily a class-forcing extension of HOD
• J. D. Hamkins and J. Reitz, “The set-theoretic universe $V$ is not necessarily a class-forcing extension of HOD,” ArXiv e-prints, 2017. (manuscript under review)
@ARTICLE{HamkinsReitz:The-set-theoretic-universe-is-not-necessarily-a-forcing-extension-of-HOD,
author = {Joel David Hamkins and Jonas Reitz},
title = {The set-theoretic universe {$V$} is not necessarily a class-forcing extension of {HOD}},
journal = {ArXiv e-prints},
year = {2017},
volume = {},
number = {},
pages = {},
month = {September},
note = {manuscript under review},
abstract = {},
keywords = {under-review},
source = {},
doi = {},
eprint = {1709.06062},
archivePrefix = {arXiv},
primaryClass = {math.LO},
url = {http://jdh.hamkins.org/the-universe-need-not-be-a-class-forcing-extension-of-hod},
}
Abstract. In light of the celebrated theorem of Vopěnka, proving in ZFC that every set is generic over $\newcommand\HOD{\text{HOD}}\HOD$, it is natural to inquire whether the set-theoretic universe $V$ must be a class-forcing extension of $\HOD$ by some possibly proper-class forcing notion in $\HOD$. We show, negatively, that if ZFC is consistent, then there is a model of ZFC that is not a class-forcing extension of its $\HOD$ for any class forcing notion definable in $\HOD$ and with definable forcing relations there (allowing parameters). Meanwhile, S. Friedman (2012) showed, positively, that if one augments $\HOD$ with a certain ZFC-amenable class $A$, definable in $V$, then the set-theoretic universe $V$ is a class-forcing extension of the expanded structure $\langle\HOD,\in,A\rangle$. Our result shows that this augmentation process can be necessary. The same example shows that $V$ is not necessarily a class-forcing extension of the mantle, and the method provides a counterexample to the intermediate model property, namely, a class-forcing extension $V\subseteq V[G]$ by a certain definable tame forcing and a transitive intermediate inner model $V\subseteq W\subseteq V[G]$ with $W\models\text{ZFC}$, such that $W$ is not a class-forcing extension of $V$ by any class forcing notion with definable forcing relations in $V$. This improves upon a previous example of Friedman (1999) by omitting the need for $0^\sharp$.
In 1972, Vopěnka proved the following celebrated result.
Theorem. (Vopěnka) If $V=L[A]$ where $A$ is a set of ordinals, then $V$ is a forcing extension of the inner model $\HOD$.
The result is now standard, appearing in Jech (Set Theory 2003, p. 249) and elsewhere, and the usual proof establishes a stronger result, stated in ZFC simply as the assertion: every set is generic over $\HOD$. In other words, for every set $a$ there is a forcing notion $\mathbb{B}\in\HOD$ and a $\HOD$-generic filter $G\subseteq\mathbb{B}$ for which $a\in\HOD[G]\subseteq V$. The full set-theoretic universe $V$ is therefore the union of all these various set-forcing generic extensions $\HOD[G]$.
It is natural to wonder whether these various forcing extensions $\HOD[G]$ can be unified or amalgamated to realize $V$ as a single class-forcing extension of $\HOD$ by a possibly proper class forcing notion in $\HOD$. We expect that it must be a very high proportion of set theorists and set-theory graduate students, who upon first learning of Vopěnka’s theorem, immediately ask this question.
Main Question. Must the set-theoretic universe $V$ be a class-forcing extension of $\HOD$?
We intend the question to be asking more specifically whether the universe $V$ arises as a bona-fide class-forcing extension of $\HOD$, in the sense that there is a class forcing notion $\mathbb{P}$, possibly a proper class, which is definable in $\HOD$ and which has definable forcing relation $p\Vdash\varphi(\tau)$ there for any desired first-order formula $\varphi$, such that $V$ arises as a forcing extension $V=\HOD[G]$ for some $\HOD$-generic filter $G\subseteq\mathbb{P}$, not necessarily definable.
In this article, we shall answer the question negatively, by providing a model of ZFC that cannot be realized as such a class-forcing extension of its $\HOD$.
Main Theorem. If ZFC is consistent, then there is a model of ZFC which is not a forcing extension of its $\HOD$ by any class forcing notion definable in that $\HOD$ and having a definable forcing relation there.
Throughout this article, when we say that a class is definable, we mean that it is definable in the first-order language of set theory allowing set parameters.
The main theorem should be placed in contrast to the following result of Sy Friedman.
Theorem. (Friedman 2012) There is a definable class $A$, which is strongly amenable to $\HOD$, such that the set-theoretic universe $V$ is a generic extension of $\langle \HOD,\in,A\rangle$.
This is a positive answer to the main question, if one is willing to augment $\HOD$ with a class $A$ that may not be definable in $\HOD$. Our main theorem shows that in general, this kind of augmentation process is necessary.
It is natural to ask a variant of the main question in the context of set-theoretic geology.
Question. Must the set-theoretic universe $V$ be a class-forcing extension of its mantle?
The mantle is the intersection of all set-forcing grounds, and so the universe is close in a sense to the mantle, perhaps one might hope that it is close enough to be realized as a class-forcing extension of it. Nevertheless, the answer is negative.
Theorem. If ZFC is consistent, then there is a model of ZFC that does not arise as a class-forcing extension of its mantle $M$ by any class forcing notion with definable forcing relations in $M$.
We also use our results to provide some counterexamples to the intermediate-model property for forcing. In the case of set forcing, it is well known that every transitive model $W$ of ZFC set theory that is intermediate $V\subseteq W\subseteq V[G]$ between a ground model $V$ and a forcing extension $V[G]$ arises itself as a forcing extension $W=V[G_0]$.
In the case of class forcing, however, this can fail.
Theorem. If ZFC is consistent, then there are models of ZFC set theory $V\subseteq W\subseteq V[G]$, where $V[G]$ is a class-forcing extension of $V$ and $W$ is a transitive inner model of $V[G]$, but $W$ is not a forcing extension of $V$ by any class forcing notion with definable forcing relations in $V$.
Theorem. If ZFC + Ord is Mahlo is consistent, then one can form such a counterexample to the class-forcing intermediate model property $V\subseteq W\subseteq V[G]$, where $G\subset\mathbb{B}$ is $V$-generic for an Ord-c.c. tame definable complete class Boolean algebra $\mathbb{B}$, but nevertheless $W$ does not arise by class forcing over $V$ by any definable forcing notion with a definable forcing relation.
For more complete details, please go to the paper (click through to the arXiv for a pdf).
# Inner-model reflection principles
• N. Barton, A. E. Caicedo, G. Fuchs, J. D. Hamkins, and J. Reitz, “Inner-model reflection principles,” ArXiv e-prints, 2017. (manuscript under review)
@ARTICLE{BartonCaicedoFuchsHamkinsReitz:Inner-model-reflection-principles,
author = {Neil Barton and Andr\'es Eduardo Caicedo and Gunter Fuchs and Joel David Hamkins and Jonas Reitz},
title = {Inner-model reflection principles},
journal = {ArXiv e-prints},
year = {2017},
volume = {},
number = {},
pages = {},
month = {},
note = {manuscript under review},
abstract = {},
keywords = {under-review},
source = {},
doi = {},
eprint = {1708.06669},
archivePrefix = {arXiv},
primaryClass = {math.LO},
url = {http://jdh.hamkins.org/inner-model-reflection-principles},
}
Abstract. We introduce and consider the inner-model reflection principle, which asserts that whenever a statement $\varphi(a)$ in the first-order language of set theory is true in the set-theoretic universe $V$, then it is also true in a proper inner model $W\subsetneq V$. A stronger principle, the ground-model reflection principle, asserts that any such $\varphi(a)$ true in $V$ is also true in some nontrivial ground model of the universe with respect to set forcing. These principles each express a form of width reflection in contrast to the usual height reflection of the Lévy-Montague reflection theorem. They are each equiconsistent with ZFC and indeed $\Pi_2$-conservative over ZFC, being forceable by class forcing while preserving any desired rank-initial segment of the universe. Furthermore, the inner-model reflection principle is a consequence of the existence of sufficient large cardinals, and lightface formulations of the reflection principles follow from the maximality principle MP and from the inner-model hypothesis IMH.
Every set theorist is familiar with the classical Lévy-Montague reflection principle, which explains how truth in the full set-theoretic universe $V$ reflects down to truth in various rank-initial segments $V_\theta$ of the cumulative hierarchy. Thus, the Lévy-Montague reflection principle is a form of height-reflection, in that truth in $V$ is reflected vertically downwards to truth in some $V_\theta$.
In this brief article, in contrast, we should like to introduce and consider a form of width-reflection, namely, reflection to nontrivial inner models. Specifically, we shall consider the following reflection principles.
Definition.
1. The inner-model reflection principle asserts that if a statement $\varphi(a)$ in the first-order language of set theory is true in the set-theoretic universe $V$, then there is a proper inner model $W$, a transitive class model of ZF containing all ordinals, with $a\in W\subsetneq V$ in which $\varphi(a)$ is true.
2. The ground-model reflection principle asserts that if $\varphi(a)$ is true in $V$, then there is a nontrivial ground model $W\subsetneq V$ with $a\in W$ and $W\models\varphi(a)$.
3. Variations of the principles arise by insisting on inner models of a particular type, such as ground models for a particular type of forcing, or by restricting the class of parameters or formulas that enter into the scheme.
4. The lightface forms of the principles, in particular, make their assertion only for sentences, so that if $\sigma$ is a sentence true in $V$, then $\sigma$ is true in some proper inner model or ground $W$, respectively.
We explain how to force the principles, how to separate them, how they are consequences of various large cardinal assumptions, consequences of the maximality principle and of the inner model hypothesis. Kindly proceed to the article (pdf available at the arxiv) for more.
This article grew out of an exchange held by the authors on math.stackexchange
in response to an inquiry posted by the first author concerning the nature of width-reflection in comparison to height-reflection: What is the consistency strength of width reflection?
# Approximation and cover properties propagate upward
I should like to record here the proof of the following fact, which Jonas Reitz and I first observed years ago, when he was my graduate student, and I recall him making the critical observation.
It concerns the upward propagation of the approximation and cover properties, some technical concepts that lie at the center of my paper, Extensions with the approximation and cover properties have no new large cardinals, and which are also used in my proof of Laver's theorem on the definability of the ground model, and which figure in Jonas's work on the ground axiom.
The fact has a curious and rather embarrassing history, in that Jonas and I have seen an unfortunate cycle, in which we first proved the theorem, and then subsequently lost and forgot our own proof, and then lost confidence in the fact, until we rediscovered the proof again. This cycle has now repeated several times, in absurd mathematical comedy, and each time the proof was lost, various people with whom we discussed the issue sincerely doubted that it could be true. But we are on the upswing now, for in response to some recently expressed doubts about the fact, although I too was beginning to doubt it again, I spent some time thinking about it and rediscovered our old proof! Hurrah! In order to break this absurd cycle, however, I am now recording the proof here in order that we may have a place to point in the future, to give the theorem a home.
Although the fact has not yet been used in any application to my knowledge, it strikes me as inevitable that this fundamental fact about the approximation and cover properties will eventually find an important use.
Definition. Assume $\delta$ is a cardinal in $V$ and $W\subset V$ is a transitive inner model of set theory.
• The extension $W\subset V$ satisfies the $\delta$-approximation property if whenever $A\subset W$ is a set in $V$ and $A\cap a\in W$ for any $a\in W$ of size less than $\delta$ in $W$, then $A\in W$.
• The extension $W\subset V$ satisfies the $\delta$-cover property if whenever $A\subset W$ is a set of size less than $\delta$ in $V$, then there is a covering set $B\in W$ with $A\subset B$ and $|B|^W\lt\delta$.
Theorem. If $W\subset V$ has the $\delta$-approximation and $\delta$-cover properties and $\delta\lt\gamma$ are both infinite cardinals in $V$, then it also has the $\gamma$-approximation and $\gamma$-cover properties.
Proof. First, notice that the $\delta$-approximation property trivially implies the $\gamma$-approximation property for any larger cardinal $\gamma$. So we need only verify the $\gamma$-cover property, and this we do by induction. Note that the limit case is trivial, since if the cover property holds at every cardinal below a limit cardinal, then it trivially holds at that limit cardinal, since there are no additional instances of covering to be treated. Thus, we reduce to the case $\gamma=\delta^+$, meaning $(\delta^+)^V$, but we must allow that $\delta$ may be singular here.
If $\delta$ is singular, then we claim that the $\delta$-cover property alone implies the $\delta^+$-cover property: if $A\subset W$ has size $\delta$ in $V$, then by the singularity of $\delta$ we may write it as $A=\bigcup _{\alpha\in I}A_\alpha$, where each $A_\alpha$ and $I$ have size less than $\delta$. By the $\delta$-cover property, there are covers $A_\alpha\subset B_\alpha\in W$ with $B_\alpha$ of size less than $\delta$ in $W$. Furthermore, the set $\{B_\alpha\mid\alpha\in I\}$ itself is covered by some set $\mathcal{B}\in W$ of size less than $\delta$ in $W$. That is, we cover the small set of small covers. We may assume that every set in $\mathcal{B}$ has size less than $\delta$, by discarding those that aren’t, and so $B=\bigcup\mathcal{B}$ is a set in $W$ that covers $A$ and has size at most $\delta$ there, since it is small union of small sets, thereby verifying this instance of the $\gamma$-cover property.
If $\delta$ is regular, consider a set $A\subset W$ with $A\in V$ of size $\delta$ in $V$, so that $A=\{a_\xi\mid\xi\lt\delta\}$. For each $\alpha\lt\delta$, the initial segment $\{a_\xi\mid\xi\lt\alpha\}$ has size less than $\delta$ and is therefore covered by some $B_\alpha\in W$ of size less than $\delta$ in $W$. By adding each $B_\alpha$ to what we are covering at later stages, we may assume that they form an increasing tower: $\alpha\lt\beta\to B_\alpha\subset B_\beta$. The choices $\alpha\mapsto B_\alpha$ are made in $V$. Let $B=\bigcup_\alpha B_\alpha$, which certainly covers $A$. Observe that for any set $a\in W$ of size less than $\delta$, it follows by the regularity of $\delta$ that $B\cap a=B_\alpha\cap a$ for all sufficiently large $\alpha$. Thus, all $\delta$-approximations to $B$ are in $W$ and so $B$ itself is in $W$ by the $\delta$-approximation property, as desired. Note that $B$ has size less than $\gamma$ in $W$, because it has size $\delta$ in $V$, and so we have verified this instance of the $\gamma$-cover property for $W\subset V$.
Thus, in either case we’ve established the $\gamma$-cover property for $W\subset V$, and the proof is complete. QED
(Thanks to Thomas Johnstone for some comments and for pointing out a simplification in the proof: previously, I had reduced without loss of generality to the case where $A$ is a set of ordinals of order type $\delta$; but Tom pointed out that the general case is not actually any harder. And indeed, Jonas dug up some old notes to find the 2008 version of the argument, which is essentially the same as what now appears here.)
Note that without the $\delta$-approximation property, it is not true that the $\delta$-cover property transfers upward. For example, every extension $W\subset V$ has the $\aleph_0$-cover property, since any finite subset of $W$ is itself an element of $W$ and so covers itself; but an extension collapsing $\omega_1^W$ to $\omega$ fails the $\aleph_1$-cover property, since a cofinal $\omega$-sequence in $\omega_1^W$ cannot be covered by any set that is countable in $W$.
# Jonas Reitz
Jonas Reitz earned his Ph.D under my supervision in June, 2006 at the CUNY Graduate Center. He was truly a pleasure to supervise. From the earliest days of his dissertation research, he had his own plan for the topic of the work: he wanted to “undo” forcing, to somehow force backwards, from the extension to the ground model. At first I was skeptical, but in time, ideas crystalized around the ground axiom (now with its own Wikipedia entry), formulated using a recent-at-the-time result of Richard Laver. Along with Laver’s theorem, Jonas’s dissertation was the beginning of the body of work now known as set-theoretic geology. Jonas holds a tenured position at the New York City College of Technology of CUNY.
Jonas Reitz
web page | math genealogy | MathSciNet | ar$\chi$iv | google scholar | related posts
Jonas Reitz, “The ground axiom,” Ph.D. dissertation, CUNY Graduate Center, June, 2006. ar$\chi$iv
Abstract. A new axiom is proposed, the Ground Axiom, asserting that the universe is not a nontrivial set-forcing extension of any inner model. The Ground Axiom is first-order expressible, and any model of ZFC has a class-forcing extension which satisfies it. The Ground Axiom is independent of many well-known set-theoretic assertions including the Generalized Continuum Hypothesis, the assertion V=HOD that every set is ordinal definable, and the existence of measurable and supercompact cardinals. The related Bedrock Axiom, asserting that the universe is a set-forcing extension of a model satisfying the Ground Axiom, is also first-order expressible, and its negation is consistent. As many of these results rely on forcing with proper classes, an appendix is provided giving an exposition of the underlying theory of proper class forcing.
# Set-theoretic geology
• G. Fuchs, J. D. Hamkins, and J. Reitz, “Set-theoretic geology,” Annals of Pure and Applied Logic, vol. 166, iss. 4, pp. 464-501, 2015.
@article{FuchsHamkinsReitz2015:Set-theoreticGeology,
author = "Gunter Fuchs and Joel David Hamkins and Jonas Reitz",
title = "Set-theoretic geology",
journal = "Annals of Pure and Applied Logic",
volume = "166",
number = "4",
pages = "464--501",
year = "2015",
note = "",
MRCLASS = {03E55 (03E40 03E45 03E47)},
MRNUMBER = {3304634},
issn = "0168-0072",
doi = "10.1016/j.apal.2014.11.004",
eprint = "1107.4776",
archivePrefix = {arXiv},
primaryClass = {math.LO},
url = "http://jdh.hamkins.org/set-theoreticgeology",
}
A ground of the universe $V$ is a transitive proper class $W\subseteq V$, such that $W$ is a model of ZFC and $V$ is obtained by set forcing over $W$, so that $V = W[G]$ for some $W$-generic filter $G\subseteq\mathbb{P}\in W$. The model $V$ satisfies the ground axiom GA if there are no such $W$ properly contained in $V$. The model $W$ is a bedrock of $V$ if $W$ is a ground of $V$ and satisfies the ground axiom. The mantle of $V$ is the intersection of all grounds of $V$. The generic mantle of $V$ is the intersection of all grounds of all set-forcing extensions of $V$. The generic HOD, written gHOD, is the intersection of all HODs of all set-forcing extensions. The generic HOD is always a model of ZFC, and the generic mantle is always a model of ZF. Every model of ZFC is the mantle and generic mantle of another model of ZFC. We prove this theorem while also controlling the HOD of the final model, as well as the generic HOD. Iteratively taking the mantle penetrates down through the inner mantles to what we call the outer core, what remains when all outer layers of forcing have been stripped away. Many fundamental questions remain open.
# Pointwise definable models of set theory
• J. D. Hamkins, D. Linetsky, and J. Reitz, “Pointwise definable models of set theory,” Journal Symbolic Logic, vol. 78, iss. 1, pp. 139-156, 2013.
@article {HamkinsLinetskyReitz2013:PointwiseDefinableModelsOfSetTheory,
AUTHOR = {Hamkins, Joel David and Linetsky, David and Reitz, Jonas},
TITLE = {Pointwise definable models of set theory},
JOURNAL = {Journal Symbolic Logic},
FJOURNAL = {Journal of Symbolic Logic},
VOLUME = {78},
YEAR = {2013},
NUMBER = {1},
PAGES = {139--156},
ISSN = {0022-4812},
MRCLASS = {03E55},
MRNUMBER = {3087066},
MRREVIEWER = {Bernhard A. K{\"o}nig},
DOI = {10.2178/jsl.7801090},
URL = {http://jdh.hamkins.org/pointwisedefinablemodelsofsettheory/},
eprint = "1105.4597",
archivePrefix = {arXiv},
primaryClass = {math.LO},
}
One occasionally hears the argument—let us call it the math-tea argument, for perhaps it is heard at a good math tea—that there must be real numbers that we cannot describe or define, because there are only countably many definitions, but uncountably many reals. Does it withstand scrutiny?
This article provides an answer. The article has a dual nature, with the first part aimed at a more general audience, and the second part providing a proof of the main theorem: every countable model of set theory has an extension in which every set and class is definable without parameters. The existence of these models therefore exhibit the difficulties in formalizing the math tea argument, and show that robust violations of the math tea argument can occur in virtually any set-theoretic context.
A pointwise definable model is one in which every object is definable without parameters. In a model of set theory, this property strengthens V=HOD, but is not first-order expressible. Nevertheless, if ZFC is consistent, then there are continuum many pointwise definable models of ZFC. If there is a transitive model of ZFC, then there are continuum many pointwise definable transitive models of ZFC. What is more, every countable model of ZFC has a class forcing extension that is pointwise definable. Indeed, for the main contribution of this article, every countable model of Godel-Bernays set theory has a pointwise definable extension, in which every set and class is first-order definable without parameters.
# The ground axiom is consistent with $V\ne{\rm HOD}$
• J. D. Hamkins, J. Reitz, and W. Woodin, “The ground axiom is consistent with $V\ne{\rm HOD}$,” Proc.~Amer.~Math.~Soc., vol. 136, iss. 8, pp. 2943-2949, 2008.
@ARTICLE{HamkinsReitzWoodin2008:TheGroundAxiomAndVequalsHOD,
AUTHOR = {Hamkins, Joel David and Reitz, Jonas and Woodin, W.~Hugh},
TITLE = {The ground axiom is consistent with {$V\ne{\rm HOD}$}},
JOURNAL = {Proc.~Amer.~Math.~Soc.},
FJOURNAL = {Proceedings of the American Mathematical Society},
VOLUME = {136},
YEAR = {2008},
NUMBER = {8},
PAGES = {2943--2949},
ISSN = {0002-9939},
CODEN = {PAMYAR},
MRCLASS = {03E35 (03E45 03E55)},
MRNUMBER = {2399062 (2009b:03137)},
MRREVIEWER = {P{\'e}ter Komj{\'a}th},
DOI = {10.1090/S0002-9939-08-09285-X},
URL = {},
file = F,
}
Abstract. The Ground Axiom asserts that the universe is not a nontrivial set-forcing extension of any inner model. Despite the apparent second-order nature of this assertion, it is first-order expressible in set theory. The previously known models of the Ground Axiom all satisfy strong forms of $V=\text{HOD}$. In this article, we show that the Ground Axiom is relatively consistent with $V\neq\text{HOD}$. In fact, every model of ZFC has a class-forcing extension that is a model of $\text{ZFC}+\text{GA}+V\neq\text{HOD}$. The method accommodates large cardinals: every model of ZFC with a supercompact cardinal, for example, has a class-forcing extension with $\text{ZFC}+\text{GA}+V\neq\text{HOD}$ in which this supercompact cardinal is preserved.
# The Ground Axiom
• J. D. Hamkins, “The Ground Axiom,” Mathematisches Forschungsinstitut Oberwolfach Report, vol. 55, pp. 3160-3162, 2005.
@ARTICLE{Hamkins2005:TheGroundAxiom,
AUTHOR = "Joel David Hamkins",
TITLE = "The {Ground Axiom}",
JOURNAL = "Mathematisches Forschungsinstitut Oberwolfach Report",
YEAR = "2005",
volume = "55",
number = "",
pages = "3160--3162",
month = "",
note = "",
abstract = "",
keywords = "",
source = "",
eprint = {1607.00723},
archivePrefix = {arXiv},
primaryClass = {math.LO},
url = {http://jdh.hamkins.org/thegroundaxiom/},
file = F,
}
This is an extended abstract for a talk I gave at the 2005 Workshop in Set Theory at the Mathematisches Forschungsinstitut Oberwolfach.
https://andthentheresphysics.wordpress.com/2015/08/05/its-more-difficult-with-physical-models/ | ## It’s more difficult with physical models
I ended up in a brief discussion on Twitter about models, in which the other party was suggesting that models can be manipulated to produce a desired result. Their background was finance, and they linked to some kind of Financial Times report to support their view. I, quite innocently, pointed out that this was more difficult with physical models, and the discussion went downhill from there. I do think, however, that my point is defensible, and I shall try to defend it here.
What I mean by physical models is those that are meant to represent the physical world, as opposed to – for example – financial, or economic models. The crucial point about physical models (which does not – as far as I’m aware – apply to other types of models) is that they’re typically founded on fundamental conservation laws; the conservation of mass, momentum, and energy. This has two major consequences; you are restricted as to how you can develop your model, and others can model the same physical system without needing to know the details of your model.
One set of fundamental equations that is often used to model physical systems is the Navier-Stokes equations. These are essentially equations that describe the evolution of a gas/fluid in the presence of dissipation. The first of these equations represents mass conservation
$\dfrac{ \partial \rho}{\partial t} + \nabla \cdot (\rho \bold{u}) = 0,$
which we can expand to:
$\dfrac{ \partial \rho}{\partial t} + \dfrac{\partial \rho u_x}{\partial x} + \dfrac{\partial \rho u_y}{\partial y} + \dfrac{\partial \rho u_z}{\partial z} = 0.$
What this equation is essentially saying is that the density, $\rho$, in a particular volume cannot change unless there is a net flux, $\rho \bold{u}$, into that volume.
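As a toy illustration of how this constraint shows up in practice (my own sketch, not something from the original discussion): a one-dimensional finite-volume discretisation of the continuity equation only ever moves mass between neighbouring cells, so with periodic boundaries the total mass is conserved to rounding error no matter how long you run it, or how you choose the (assumed constant) velocity.

import numpy as np

nx = 200
dx = 1.0 / nx
x = (np.arange(nx) + 0.5) * dx
u = 1.0                        # constant advection velocity (an assumption for this sketch)
dt = 0.4 * dx / abs(u)         # CFL-limited time step

rho = 1.0 + 0.5 * np.exp(-((x - 0.5) / 0.1) ** 2)   # initial density bump

def step(rho):
    flux = u * rho                                    # upwind flux through the right face of each cell (u > 0)
    return rho - dt / dx * (flux - np.roll(flux, 1))  # flux out through the right face minus flux in through the left

mass0 = rho.sum() * dx
for _ in range(2000):
    rho = step(rho)
print("relative change in total mass:", abs(rho.sum() * dx - mass0) / mass0)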
We can also write the equivalent equation for momentum
$\dfrac{\partial \rho \bold{u}}{\partial t} + \nabla \cdot (\rho \bold{u} \bold{u}) = - \nabla P + \nu \nabla^2 \bold{u},$
which we can again write out (but which I’ll only do for the $x-$component) as
$\dfrac{\partial \rho u_x}{\partial t} + \dfrac{\partial \rho u_x u_x}{\partial x} + \dfrac{\partial \rho u_x u_y} {\partial y} + \dfrac{\partial \rho u_x u_z}{\partial z} = - \dfrac{\partial P}{\partial x} + \nu \left( \dfrac{\partial^2 u_x}{\partial x^2} + \dfrac{\partial^2 u_x}{\partial y^2} + \dfrac{\partial^2 u_x}{\partial z^2} \right).$
The left-hand-side is similar to that for the equation for mass conservation; the momentum in a volume can change if there is a net flux of momentum into that volume. There are, however, now terms on the right-hand-side. These are forces. The first is simply the pressure force; if there is a pressure gradient across this volume, then it means that there is net force on the volume, and the momentum will change. The final term is the viscous force. Microscopically you can think of this as representing a change of momentum due to the exchange of gas particles between neighbouring volumes. If there is no viscosity, then there will be no net change in momentum. If there is a viscosity, then the momentum gained will be different to the momentum lost, and there will be a net change in momentum. This can then be expressed macroscopically as a force.
You can also write out a similar equation for energy, which can also include the change in energy due to work done by neighbouring volumes, energy changes due to viscous dissipation, and – potentially – heat conduction. I won't write this one out, as it just gets more and more complicated and harder and harder to explain. The point is, though, that these equations describe the evolution of a gas/fluid and are used extensively across all of the physical sciences; from studying star and planet formation, through to atmospheric dynamics. They conserve mass, momentum, and energy. There are certain parameters (viscosity, heat conduction, …) that can be adjusted, but these are typically constrained both by physical arguments and by observations. For example, one could set the ocean heat diffusion to be so small that the surface warmed incredibly fast. It would, however, be fairly obvious that this was wrong given that it would neither match the observed surface warming, nor the warming of the deeper parts of the oceans.
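To illustrate the ocean-heat-diffusion point with numbers, here is a deliberately crude two-box sketch (entirely my own construction, not anything taken from the post or from an actual climate model): an upper and a deep layer exchange heat at a rate set by a tunable coefficient gamma, with illustrative guesses for the forcing, feedback and heat capacities. Turning gamma down does make the surface warm much faster, but whatever value you pick, the change in total heat content must equal the time-integrated top-of-atmosphere imbalance; the energy bookkeeping cannot be tuned away, and the resulting surface and deep-ocean warming can then be compared with observations.

def run(gamma, years=200, dt=0.01, F=3.7, lam=1.2, C1=7.0, C2=100.0):
    # Two-box energy balance; all parameter values are illustrative guesses.
    T1 = T2 = 0.0          # upper- and deep-layer temperature anomalies (K)
    heat_in = 0.0          # time-integrated net downward top-of-atmosphere flux
    for _ in range(int(years / dt)):
        N = F - lam * T1               # net TOA imbalance (W m^-2)
        q = gamma * (T1 - T2)          # heat mixed into the deep layer
        T1 += dt * (N - q) / C1
        T2 += dt * q / C2
        heat_in += N * dt
    total_heat = C1 * T1 + C2 * T2     # change in heat content of both boxes
    return T1, total_heat, heat_in

for gamma in (0.1, 0.7, 2.0):
    T1, H, Q = run(gamma)
    print(f"gamma={gamma}: surface warming {T1:.2f} K, bookkeeping error {abs(H - Q):.2e}")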
Now, I don’t have any specific expertise in modelling outside of the physical sciences, so maybe there are equivalent conservation-type laws that one can apply to other types of modelling. It does seem, however, that when a physicist applies some form of conservation law to economics, they get told that they don’t really understand economics terminology, nor the basics of economic growth. The latter may well be true, but it does still seem consistent with the basic idea that such modelling isn’t constrained by the type of conservation laws that constrain most physical systems.
To be clear, I’m not suggesting that physical models are somehow better than other types of models, or that physiscists are somehow better than other types of researchers; I’m simply pointing out that the existence of fundamental conservation laws makes it quite difficult to produce some kind of desired result using a physical model. I’m also not saying that it isn’t possible, simply that it’s harder when compared to models that don’t have underlying conservation laws. It’s also easier to pick up on such issues, given that you don’t need to know the details of the other model in order to model the same system independently. Essentially, that it may be regarded as easy to engineer a desired result with some models, does not necessarily make it the case for all types of models. Of course, I’m sure there are subtleties that I haven’t considered; this isn’t meant to be a definitive argument. Also, if someone can convince me that I’m wrong, feel free to try and do so.
This entry was posted in Climate change, Science.
### 359 Responses to It’s more difficult with physical models
1. jsam says:
Because economic models have failed all models must fail.
2. Well, yes, that’s appears to have been the argument being made.
3. Yvan Dutil says:
Physical model are hard because they have a large number of constrain to respect. Economic model only deal with a few observable at a time. The observation space is then poorly constrained. In addition to conservation law, physical model must also be consistent with observation outside their own field of application. This is why you cannot accept homeopathy or other pseudoscience by only on a few experiment. In climate science, this make you reject the solar hypothesis even if a statistical signal exist,
4. BBD says:
I ended up in a brief discussion on Twitter about models, in which the other party was suggesting that models can be manipulated to produce a desired result.
So another paranoid (right-wing) conspiracy theorist claiming nefarious intent by climate scientists. Or did I miss some subtlety here?
Their background was finance
As it so often is, to nobody’s great surprise.
5. Yvan,
In climate science, this make you reject the solar hypothesis even if a statistical signal exist,
I’m not quite sure what you mean by this. I guess maybe what you mean is that you can find a correlation between solar activity and climate, but when you consider the actual physics, the energetics isn’t consistent with a high sensitivity to changes in solar forcing and all the other ideas (cosmic rays, magnetice fields, …) are not based on physically plausible models (yet).
6. BBD,
I think the argument was simply “I have experience in some kind of modelling, my experience tells me that it’s possible to engineer all sorts of desired results with the models with which I have experience, therefore this is true for all models”.
7. Harry Twinotter says:
I have problems with the term “physical model” – to me things like climate models, financial models etc are all “conceptual models”.
A working scale model of the earth would be nice, but to my knowledge no one has attempted it. I know they did it on a Blake’s Seven episode to test evolution – it did not end well 🙂
8. Harry,
I’m not quite sure why you regard them all as conceptual models. However, even if physical model isn’t ideal, I did try to define what I meant and I’m happy to use a better descriptor if anyone can think of one.
9. Economics appears to be largely opinion: there are very few generally-agreed laws with which to construct universally-accepted financial models. Every economist you ask provides a different prediction as to how financial situations turn out.
This is self-evidently true because otherwise how could stock markets operate? With a workable model, everyone would be able to predict forthcoming crises and act accordingly. In reality every financial event that comes along is unforeseen—the few gamblers who happen to call it right being the exceptions that prove the rule.
No wonder they don’t understand physical models based on long-established and proven rules and laws.
10. Eli Rabett says:
This is the problem with statistical treatments of observations, they do not include physical constraints. The classic confrontation was Bart’s rooty solution to weight loss
11. Eli,
I had to look up Bart’s rooty solution to weight loss – initially I wasn’t sure if you were referring to Bart Simpson, or Bart Verheggen.
12. Sam taylor says:
I think where things like Murphy v Smith are at loggerheads is that they're talking about fundamentally different things. Murphy is talking about the size of the physical economy, which of course must conform to the various conservation laws and so on, which leads to the obvious implications that Murphy spells out. However Noah comes from a world in which a Gauguin can be valued more highly than an Airbus A320, and frankly there ain't no conservation law that can account for that. They're both right, but are talking at cross purposes.
13. BBD says:
ATTP
I think the argument was simply “I have experience in some kind of modelling, my experience tells me that it’s possible to engineer all sorts of desired results with the models with which I have experience, therefore this is true for all models”.
That’s not an argument; it’s a logical fallacy 😉
14. PaulP says:
Another important difference is that air and water molecules do not read scientific journals, whereas many important players in the economy do base some of their decisions on the outputs of economic models. This means that the models themselves have feedbacks into the reality they are trying to model – when these feedbacks are positive, interesting things happen.
15. Paul P,
Interesting point. I hadn’t thought of that. In economic/financial modelling, there can actually be feedback based on the models themselves.
16. Sam,
I think where things like Murphy v Smith are at loggerheads is that they’re talking about fundamentally different things.
Yes, that was my impression too. For example, Murphy interpreted infinite literally, whereas Smith interpreted it as no limits for the foreseeable future (IIRC).
Gauguin can be valued more highly than an Airbus A320, and frankly there ain’t no conservation law that can account for that. They’re both right, but are talking at cross purposes.
Yes, a good point.
My impression was that both did a poor job of considering the other party’s position. I think both Murphy and Smith’s posts came across as somewhat condescending, based on the idea that the other party was rather clueless when it came to things outside their area of formal expertise. A pity, because I think they were both making interesting points.
17. Magma says:
Using the Navier-Stokes equations as an example of a conservative system? You may be aiming high. To use a recent example: various media in Australia, the UK and North America once again credulously reported on a “NASA warp drive” that isn’t, so even very basic insight into conservation of energy and momentum seems to be lacking in a large percentage of the population, even some educated in science or engineering.
Or to use an example from climate change, this: http://edition.cnn.com/2015/08/03/opinions/sutter-climate-skeptics-woodward-oklahoma/ The reporter is sympathetic and finds a silver lining in the cloud. I mostly find it depressing.
18. Magma,
Yes, I read some of that article, but didn’t make it to the end (it was long and rather depressing). I think I see where he’s coming from. I’m sure there are many “skeptics” who are very nice, decent people, who I would find very interesting and quite enjoy meeting and spending time with. It is just a bit depressing that in today’s educated societies, we still see so much pseudo-science being accepted.
19. Sam taylor says:
There’s quite a good article here which tries to kind of thread the needle of the two competing worldviews, and indicate why they can both be right at the same time:
http://www.bondeconomics.com/2014/08/sustained-growth-on-finite-planet.html
Ultimately it all comes back to the $ not being a physical unit; it’s just a symbol, and is perhaps best thought of as a kind of hybrid between a tool of exchange and an information feedback mechanism (price) which dictates how much effort humans devote towards certain behaviours.
20. Morbeau says:
BBD: That’s not an argument; it’s a logical fallacy.
It’s an important one though, because it leaves the door open to the kind of financial modelling that can “create” wealth by extracting a large number of tiny percentages of dollars and moving them into someone’s pocket. My take on this is that financial modellers engineer their programs to produce a desirable result for them or their employers, and then assume that everyone else does the same. It’s analogous to the political discussion between “conservative” businesspeople and the rest of the world, where the conservative thinks, “I gamed the system to get where I am, because that’s what business values as ‘innovation’, and I assume everyone is playing the same game. How could you succeed otherwise?” (Yes, I’m stereotyping.)
21. @Magma
That’s a great story, beautifully told: “Woodward County, Oklahoma: Why do so many here doubt climate change?”
22. T-rev says:
23. Sam,
Thanks, that’s a good article and is – I think – roughly how I understand the situation. We don’t need to tie GDP/economic growth to some resource. As long as a resource is sufficient, we can continue to increase economic activity and – hence – economic growth. At least, that’s what I think the argument is.
24. jj says:
The work of climateprediction.net provides evidence supporting your view. Through the magic of distributed computing they have run thousands of climate model simulations with a broad range of parameter values. All the runs show warming, and the runs which match the past show warming consistent with IPCC projections. This is strong evidence that you can’t get anything you want from physics-based climate models despite the uncertainty in many of the parameterizations. With increasing CO2, the basic physics inevitably leads to warming. For example, http://www.climateprediction.net/wp-content/publications/NatGeoSci_2012a.pdf
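As a rough illustration of why a perturbed-parameter ensemble of a physically constrained model always warms under a positive forcing, here is a minimal zero-dimensional energy-balance sketch in Python. The parameter names and ranges are purely illustrative and have nothing to do with the actual climateprediction.net setup.

import numpy as np

def energy_balance(forcing_wm2, feedback_param, heat_capacity, years=200):
    """Zero-dimensional energy-balance model: C dT/dt = F - lambda * T."""
    dt = 3.156e7  # seconds per year
    temps = np.zeros(years)
    for i in range(1, years):
        dT_dt = (forcing_wm2 - feedback_param * temps[i - 1]) / heat_capacity
        temps[i] = temps[i - 1] + dT_dt * dt
    return temps

rng = np.random.default_rng(0)
forcing = 3.7          # W/m^2, roughly the forcing from a doubling of CO2
heat_capacity = 4e8    # J/m^2/K, order of magnitude for an ocean mixed layer

# Perturb the (positive) feedback parameter over a broad range, as a crude
# stand-in for a perturbed-physics ensemble.
for feedback in rng.uniform(0.8, 2.5, size=10):   # W/m^2/K
    final_warming = energy_balance(forcing, feedback, heat_capacity)[-1]
    print(f"lambda = {feedback:.2f} W/m^2/K -> warming ~ {final_warming:.2f} K")

Every ensemble member warms, by a different amount; with a positive forcing and a positive feedback parameter the sign of the response is fixed by the energetics, however the parameter is varied within its plausible range.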
25. @T-Rev
That’s a great article by Tom Murphy. Written by a physicist/astronomer too, so aTTP should like it. 🙂
26. T-rev and john,
That’s the one I linked to at the end of my post and which was criticised by Noah Smith for not understanding the standard economic terminology and confusing resource growth with economic growth. Sam (in the comments above) links to some interesting articles that try to reconcile the two positions.
27. afeman says:
Smith doesn’t so much interpret infinite differently as simply claim it’s out of the scope of his field, much like nobody worries about whether Navier-Stokes applies to the heat death of the universe.
The whole eternal growth thing seems to be a hobby horse not of economists but of people who like to trash them. As somebody on Murphy’s or Smith’s post put it, economics is not what commenters at the Oil Drum say it is.
28. BBP says:
A significant point you allude to but don’t really emphasize is that physical models ‘have’ to match observations to be considered good. Back in the mists of time when I was an astronomy grad student, my Master’s thesis was about the effect of using different variations of the mixing length model of convection in stellar models. My supervisor thought it might be able to cause variations in the giant branch temperature.
I tried several variations, but it turned out that once you calibrated the main mixing length parameter so a solar model matched the sun’s temperature, the giant branch didn’t move that much. However, the interior structure of the stellar envelope could change significantly, but at the time, there were no real observational constraints on this.
29. Mike Fayette says:
With respect, I think you all might be missing the point of the initial argument, which I think is valid. I would restate it something like this:
Mathematical models are useful tools to help us understand relationships between causes and effects in many different fields of study. These range from economic models to climate models and many others. Simple models – with few variables, a short time-frame, and well understood relationships between the variables – tend to be exceptionally reliable – mostly because they can be tested and revised until the assumptions built into these models can be “tuned” to match observed outcomes.
The danger in any model, however – whether it is a financial one, an astronomical one, or even a climate model – is when the model gets complex, has many “tunable” variables and even some unknown influences. This danger is greatest when people are trying to use these models to predict the future.
Planes have crashed, bridges have collapsed, and whole economies have been ruined because well-intended professionals mistakenly “trusted” their models to get it right. No model driven exercise is immune from this risk – including the current use of various complex climate models that are sometimes being used as “proof” that drastic action is needed to prevent some future climate catastrophe.
Complex models – like the Ptolemaic model of our solar system – or the Communist model of economics – frequently fail because skilled system modelers will “tune” the variables in such a way as to create a desired outcome – one that makes sense to them – and one that often just reinforces their own initial core beliefs.
This risk of a model failure applies equally to models based on physical processes or social ones.
30. Mike,
With respect, I think you all might be missing the point of the initial argument
With respect, I think you’re missing the point that I’m making. I’m certainly not suggesting that physical models are always right, while other models can be horribly flawed. I’m suggesting that the existence of conservation laws means that it is not as easy to get a desired result with a physical model as it is with models that do not have underlying conservation laws. I don’t think anything you’ve said disputes that point. Just because we use the word “models” to describe theoretical calculations across many fields does not make the “models” somehow equivalent.
31. “What I mean by physical models is those that are meant to represent the physical world, as opposed to – for example – financial, or economic models. The crucial point about physical models (which does not – as far as I’m aware – apply to other types of models) is that they’re typically founded on fundamental conservation laws; the conservation of mass, momentum, and energy. ”
That is the claim.
Let’s break your claims down.
1. Physical models represent the physical world…. so do economic and financial models
there is nothing supernatural about economics, nothing non-physical.
First mistake: false dichotomy
2. Physical models are based on conservation laws. Economic and financial models are not.
a) this is a hard claim to prove, so you pleaded ignorance.
b) showing conservation laws in economics probably undoes your argument.
The field is called
ECONOPHYSICS
https://en.wikipedia.org/wiki/Econophysics
3. you make a claim about physical models being harder to “tune” to a desired outcome.
This is an empirical claim. Settled by empirical means, not blog comments.
It is an open question whether physical models (in general? in certain cases? in all cases?)
are harder to tweak to get a desired outcome than, say, economic models.
Answering that question first is probably best. Speculating that this supposed difference is due to conservation laws is premature.
32. BBD says:
What ‘desired result’ do climate modellers allegedly tune their models to produce?
Anyone making this insinuation should be required to answer this question.
33. BBD says:
I’m sorry, Steven but this is rubbish:
1. Physical models represent the physical world…. so do economic and financial models
there is nothing supernatural about economics, nothing non-physical. First mistake: false dichotomy
A physical system does what physics makes it do. A human system like the market behaves as irrationally as humans do. There is no false dichotomy here.
34. Steven,
Physical models represent the physical world…. so do economic and financial models
I disagree. Economic and financial models certainly represent aspects of the physical world, but not in a way that is independent of our own choices and decisions. If you don’t like the term “physical” then change it to “systems where – given the initial assumptions – the outcome does not depend on societal choices and decisions”.
The field is called
ECONOPHYSICS
Okay, but the fact that there is a field that does use conservation laws doesn’t really change my point. I’ll concede that there are other types of models that do use conservation laws, but this is not universally true. I wasn’t really trying to dismiss economic/financial modelling; I was really trying to distinguish between those models that are constrained by fundamental conservation laws and those that are not.
you make a claim about physical models being harder to “tune” to a desired outcome.
This is an empirical claim. Settled by empirical means. not blog comments
I agree that I haven’t shown this to be true. However, I do think that the underlying conservation laws do make it harder to arbitrarily tune models that aim to represent physical systems. Call it an opinion, if you want, but it’s not entirely un-informed.
35. Call it an opinion, if you want, but it’s not entirely un-informed.
I’ll expand on this a bit. Something that you regularly encounter as an active physicist is other physicists who may have no actual expertise in computational modelling, but who can see an issue with a model purely by doing some kind of basic sanity check. Does it conserve energy, for example? This is one reason why I think it’s harder to arbitrarily tune these types of models; you just need an understanding of basic physics to see if a model result is plausible or not, and you don’t necessarily need to know all that much about the actual workings of the model.
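As a toy illustration of that kind of conservation-based sanity check (a minimal sketch, nothing to do with an actual climate model): integrate a frictionless oscillator with two different schemes and watch the total energy. The scheme that steadily gains energy fails the basic-physics check even though its trajectory looks superficially reasonable.

import numpy as np

def integrate(n_steps=10000, dt=0.01, symplectic=True):
    """Frictionless unit-mass, unit-stiffness oscillator; returns total energy over time."""
    x, v = 1.0, 0.0
    energy = np.empty(n_steps)
    for i in range(n_steps):
        if symplectic:
            v -= x * dt          # semi-implicit (symplectic) Euler: update v first
            x += v * dt
        else:
            x_new = x + v * dt   # plain forward Euler
            v -= x * dt
            x = x_new
        energy[i] = 0.5 * v**2 + 0.5 * x**2
    return energy

for name, flag in [("forward Euler", False), ("semi-implicit Euler", True)]:
    e = integrate(symplectic=flag)
    drift = (e[-1] - e[0]) / e[0]
    print(f"{name}: relative energy drift after 10k steps = {drift:+.3e}")

# Forward Euler gains energy steadily (a red flag from conservation alone);
# the symplectic scheme keeps the energy bounded.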
36. The field is called ECONOPHYSICS
Actually, I’ll expand on this too 🙂 That there is a field that uses physical concepts in their modelling, doesn’t necessarily mean that these underlying concepts are fundamentally true. In modelling physical systems, energy is conserved. We don’t decide to conserve energy; we have to conserve energy. Applying a similar conservation law to economic modelling doesn’t immediately mean that that law is universally true.
37. Bobby says:
Having created both physical models and financial models, I definitely see a difference. I agree with Mike’s general point, which can be summarized by the famous quote: “With four parameters I can fit an elephant, and with five I can make him wiggle his trunk.” However, I disagree with: “This risk of a model failure applies EQUALLY to models based on physical processes or social ones.” Risk of failure? Yes. Equally? No. The reasons are precisely those that ATTP laid out.
In economic and financial models, there are few constraints. Ask anybody who has made an NPV model how they determined their growth rate or their WACC, factors which greatly affect the result. You won’t get a pretty answer. Most of what I learned in creating NPV models came during the act of building them, not from analyzing the results. Macroeconomic models, of which I have less but some experience, have constraints, but nothing close to physical models. On the extreme end of physical models are quantum mechanics models, where you have variables constrained to discrete values – there is nothing comparable in the economic world, at least that I can think of. I remember working on a QM model with 15 equations and 15 constraints, and needing a Monte Carlo simulation to run for quite some time to find just one solution that satisfied the equations (albeit on older, slower computers).
Depending on the remaining degrees of freedom, it may be easy or hard to find many solutions. The historical climate record should be enough to constrain the models, I would think, and thus the future predictions. I’ve never worked on a climate model.
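To make the “four parameters” quote above concrete, here is a minimal, purely illustrative sketch: with as many free parameters as data points, a polynomial passes through any observations exactly, which says nothing about whether the fit means anything.

import numpy as np

# Five arbitrary "observations" (entirely made up for illustration).
x = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
y = np.array([2.3, -0.7, 4.1, 3.3, -1.9])

# A degree-4 polynomial has five coefficients, so it passes through
# every point exactly: a perfect "fit" with zero explanatory power.
coeffs = np.polyfit(x, y, deg=4)
residuals = y - np.polyval(coeffs, x)
print("max residual:", np.abs(residuals).max())   # effectively machine precision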
38. Bobby says:
Econophysics – can I expand on this red herring, too? ATTP nailed it again with “That there is a field that uses physical concepts in their modelling, doesn’t necessarily mean that these underlying concepts are fundamentally true.” I worked on the Black–Scholes equation for option pricing, which is used as an example in the Wiki. It is based on a few assumptions that are not necessarily true (not as true as energy conservation, for example), such as future volatility matching historical. The B-S equation (and similar equations to price derivatives) cannot handle drastic changes in the macro-market, such as a market crash. Nobel-prize-winning Scholes was on the board of Long Term Capital Management – ask him how the crash of 1998 worked out for them. By contrast, great movements in energy caused by natural climate variability must be constrained by energy conservation, among other laws.
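For anyone unfamiliar with it, here is a minimal sketch of the standard Black–Scholes call-price formula in Python. Bobby’s point is visible in the function signature: the volatility sigma is an assumed input, not something pinned down by a conservation law.

from math import log, sqrt, exp, erf

def norm_cdf(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def black_scholes_call(S, K, T, r, sigma):
    """European call price; sigma (future volatility) is an assumption, not an observable."""
    d1 = (log(S / K) + (r + 0.5 * sigma**2) * T) / (sigma * sqrt(T))
    d2 = d1 - sigma * sqrt(T)
    return S * norm_cdf(d1) - K * exp(-r * T) * norm_cdf(d2)

# Same market inputs, different volatility assumptions, very different "fair" prices.
for sigma in (0.15, 0.30, 0.60):
    price = black_scholes_call(S=100.0, K=100.0, T=1.0, r=0.02, sigma=sigma)
    print(f"sigma = {sigma:.2f} -> call price ~ {price:.2f}")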
39. “…and the discussion went downhill from there”. I am not surprised. A friend of mine stated long ago that, more often than not, “the level of a discussion can only descend”. A serious joke 😉
40. Bobby,
41. Hernan,
Well, that may well be universally true 🙂
42. “Actually, I’ll expand on this too 🙂 That there is a field that uses physical concepts in their modelling, doesn’t necessarily mean that these underlying concepts are fundamentally true. In modelling physical systems, energy is conserved. We don’t decide to conserve energy; we have to conserve energy. Applying a similar conservation law to economic modelling doesn’t immediately mean that that law is universally true.”
No one here who works in the sciences has even done the FIRST thing required.
Actually test the proposition.
we have personal anecdotes… “I’ve worked in both”
we have claims made with zero expertise
I’ve worked with both physical models and financial ones.
the physical ones were way easier to tweak because you had constraints.
one more anecdote.
43. BBD says:
Contrarianism is actively tedious sometimes.
44. Steven,
the physical ones were way easier to tweak because you had constraints.
I’m not following you.
one more anecdote.
It is just a blog.
However, I think you’re somewhat missing the point of what I’m getting at. This was really a response to a suggestion that seems quite common. Someone has experience of a form of modelling in which – according to their experience – it is quite easy to engineer a desired result. They then extrapolate that to claim that this is true of models in general. Now, I also have experience of modelling. I don’t think that this is true for the models with which I have experience. I’m, therefore, disputing the suggestion that simply because some models are easily engineered to give a desired result, this is true for all things that we call models. So, at best we’re all wrong and can say nothing about the relative merits of models from different disciplines. Alternatively, if people from one discipline regard their models as easily tunable, and those from another do not regard theirs as easily tunable, maybe this is telling us something.
No one here who works in the sciences has even done the FIRST thing required.
Actually test the proposition.
As good as this would be, I’m not sure how one would do this. I still don’t think that this is required to point out that some models are constrained by universally accepted conservation laws, and others are not.
45. Howard says:
In hydrogeology they are generally called numerical models as opposed to physical models (sand-filled boxes), electric analog models (circuit boards), arithmetic models (back of envelope), etc. A conceptual model is usually the first step, where you compile and digest all of your data to set your constants, parameters, boundary conditions, calibration and verification targets, prediction scenarios, etc. Next, one might run some lower-dimension arithmetic models to verify the conceptual model is reasonable. Once a numerical model is constructed, you take it on a shakedown cruise, because there are always obvious input and assumption problems that need to be fixed. Then, the model is “calibrated” against a period of historical data by adjusting various factors within “reasonable” limits. Once calibrated, a verification step consists of comparing the model output to another period of historical data. Once that is done, it is used to make predictions of potential future scenarios like no action, more pumping, less pumping, big floods, big droughts, mitigation measures, etc.
The bottom line is that numerical models can be tweaked and tuned six ways from Sunday. I have no idea about financial models, but I bet the same is true. The limit to tweaking and tuning is human imagination, not the subject matter.
In my experience with models, they are very useful in spotlighting what you don’t know, thereby driving a new field data collection focus. The problem is that human nature and societal pressures can pull modellers into over-tuning to prove that the model reproduces nature well and present a positive result. Bosses and clients never want to hear that you need to drill more holes and burn more samples.
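A minimal sketch of the calibrate-then-verify step Howard describes, using a made-up one-parameter model and synthetic data (nothing here is specific to hydrogeology):

import numpy as np

def model(forcing, k):
    """Toy one-parameter model: response is k times the forcing."""
    return k * forcing

rng = np.random.default_rng(1)
forcing = np.linspace(0.0, 10.0, 40)
observed = 0.8 * forcing + rng.normal(0.0, 0.3, forcing.size)  # synthetic "history"

# Calibration period: first half of the record; tune k within "reasonable" limits.
cal, ver = slice(0, 20), slice(20, 40)
candidates = np.linspace(0.5, 1.2, 71)
errors = [np.mean((model(forcing[cal], k) - observed[cal])**2) for k in candidates]
k_best = candidates[int(np.argmin(errors))]

# Verification period: second half, untouched during calibration.
ver_rmse = np.sqrt(np.mean((model(forcing[ver], k_best) - observed[ver])**2))
print(f"calibrated k = {k_best:.2f}, verification RMSE = {ver_rmse:.2f}")

The verification score on the held-out period is what tells you whether the calibration generalises or has just been over-tuned to the first half of the record.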
46. Mike Fayette says:
Isn’t it just a matter of the number of unknown variables in a model?
In a simple model – where constrained physical values like weight, gravity, tensile strength, etc are all well known – then no one would argue with the excellent point that ATTP is trying to make….
In an economic or social model, however – the constraints are fuzzier at best, so the results are less predictable – which is why advertisers, pollsters, economists and others struggle so mightily to duplicate Asimov’s “psychohistory” premise…..
But that’s not what we’re talking about here.
Assume an engineering model has five “known” physical constraints. Can you model it accurately? Probably….
Assume an economic model has five “predicted” behavior and financial constraints. That model is more suspect, and I believe that is the point that ATTP is trying to make. If so – I agree.
But assume that a climate model might have five “known” physical constraints and five “unknown” physical constraints. My guess – and this is probably subject to a math analysis that is beyond my ability to do – is that the margin of error in this situation is probably nearly as great as for the purely economic model.
As a businessman, engineer, and sometimes marketing/advertising consultant – I am ACUTELY aware of how easy it is to get 9 out of 10 things right, but be totally hosed by the 10th factor that you did not account for properly ……
This leads me to be “skeptical” of the models from climate scientists who believe they have everything figured out, especially when their predictions over the past 10-15 years don’t seem to be as accurate as they hoped…..
47. Mike,
This leads me to be “skeptical” of the models from climate scientists who believe they have everything figured out, especially when their predictions over the past 10-15 years don’t seem to be as accurate as they hoped…..
They’ve never claimed to believe that they have everything figured out. Also, until recently climate models were known to do poorly on decadal timescales. They may have hoped for better, but I doubt they’re that surprised that it hasn’t quite worked out that way. Also, an apples-to-apples comparison and updated forcings already improve the comparison.
Also, I’m not claiming that physical models are always right, or that they can’t be tuned. I’m suggesting that producing any kind of desired result is more difficult when your model is based on fundamental conservation laws, than when it is not.
48. Howard,
In hydrogeology they are generally called numerical models as opposed to physical models
Well, I was trying to distinguish between purely numerical models and those that are founded on the fundamental physical conservation laws.
The bottom line is that numerical models can be tweaked and tuned six ways from Sunday. I have no idea about financial models, but I bet the same is true. The limit to tweaking and tuning is human imagination, not the subject matter.
My argument is that this isn’t true if you’re modelling a physical system using something like the Navier-Stokes equations. You can certainly do some tuning, but there are limits.
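For concreteness, the constraints I have in mind are of this form, the incompressible Navier-Stokes equations, which express momentum and mass conservation:

$\dfrac{\partial \mathbf{u}}{\partial t} + (\mathbf{u}\cdot\nabla)\mathbf{u} = -\dfrac{1}{\rho}\nabla p + \nu\nabla^{2}\mathbf{u}, \qquad \nabla\cdot\mathbf{u} = 0$

Whatever parameterisations sit on top, a solution that badly violates these is a sign of model error rather than a legitimate tuning choice.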
49. Howard says:
ATTP:
I get what you meant and you asked for another term. Perhaps “mathematical physics model”, because “physical model” in the US (that’s the only experience I can speak to) typically means a petri dish, bucket chemistry, sand box, wind tunnel, etc.
I agree that if you are modelling a sphere in a uniform incompressible flow field with a low Reynolds number, there are definite limits to tweaking these types of “tinker toy” (Mickey Mouse?) simulations. However, since this is a climate-focused blog, I assumed you were commenting about real-world models with highly uncertain/complex boundary conditions, uncertain/complex parameters, multi-phase flow, turbulence, transient conditions of heat, chemistry and biology, long-term and short-term internal variance, planetary to particulate scales, etc. I know that geological fluid mechanical and heuristic models can be very highly tuned, and they are several orders of magnitude less complex and uncertain and several orders of magnitude more constrained than GCMs. Are there limits to modifying GCMs? If so, what are they (besides computing power)?
50. Howard,
I get what you meant and you asked for another term. Perhaps “mathematical physics model”, because “physical model” in the US (that’s the only experience I can speak to) typically means a petri dish, bucket chemistry, sand box, wind tunnel, etc.
Ahh, sorry. That might explain why so many seem to find my use of “physical model” a bit odd. Useful to know.
The latter part of your comment is a fair point. These are complex models and I have rather glossed over these complexities. I would argue, however, that just because there is complexity doesn’t necessarily mean that it’s very tuneable. However, if you start adding chemistry and biology then my analogy would start to break down.
Are there limits to modifying GCMs? If so, what are they (besides computing power)?
I don’t really have a good answer to this. I suspect it depends somewhat on what scales you mean. My suspicion would be that it’s quite hard to modify globally averaged results substantially, but that given the inherent complexity and non-linearities, it may well be that you can substantially influence regional effects in GCMs.
51. izen says:
Whatever quibbles might be raised there is a fundamental difference in the underlying causative agency, the value/quantity of primary interest between climate and financial models.
In climate models energy is conserved, and this determines the value of the temperature changes. Those temperature changes and the associated energy flows can be measured as objective physical values and the conservation laws constrain the results. The number of Joules, or the temperature rise are not arbitary values.
In contrast financial models are primarily concerned with monetary value and flows of wealth. These are arbitrary social values. As recent (and historical) events demonstrate money can be created and destroyed. Unlike energy, money flows are not unidirectional and irreversible. In fact the predominate direction of flow for financial value is in direct contradiction with the comparable constrains on the flows of energy.
It is this root difference in the nature of energy and money that makes climate and financial models different at the epistemological level.
It may be possible to model financial systems with an a priori assumption that financial value is a conserved quantity, and place constraints on the nature of its movement through the system, but those are arbitrary impositions contradicted by observations. A mere mimicry of physical models that HAVE to conform to energy conservation laws to be legitimate. Because the constraints on physical models are ineluctable, they attain a greater degree of epistemic validity.
52. T-rev says:
Maybe Sagan and Asimov were prescient and this is the result ?
“We’ve arranged a global civilization in which most crucial elements profoundly depend on science and technology. We have also arranged things so that almost no one understands science and technology. This is a prescription for disaster. We might get away with it for a while, but sooner or later this combustible mixture of ignorance and power is going to blow up in our faces.”
― Carl Sagan
The strain of anti-intellectualism has been a constant thread winding its way through our political and cultural life, nurtured by the false notion that democracy means that ‘my ignorance is just as good as your knowledge.” ― Isaac Asimov
53. Kevin O'Neill says:
SM writes: “1. Physical models represent the physical world…. so do economic and financial models there is nothing super natural about economics, nothing non physical.”
No, I don’t believe this is true in the sense that ATTP wrote. He gave examples of equations from physics that are used in physical models; there are virtually no similar equations in economics that *can* be used. In fact, one need only look at the current state of the world economy – especially Greece and the plans put forward for them just 5 years ago – to see how poorly tied to reality economic models are.
For instance, is there a universally accepted equation that describes the effects of tax cuts on GDP? Personal income? Budget deficits? Income equality? No. Virtually every economic model will be based on assumptions specific to the model itself.
Moreover, the general assumptions (Real Business Cycles, Rational Actors, the GOP version of dynamic scoring, etc., etc) made by many economic modelers have been shown to be at odds with reality for decades – but that doesn’t stop the modelers from using them as if *nothing* was wrong with them. These models are based on *idealized* behaviors that are never found in the real world. You can call them supernatural, unnatural, or whatever you like – but they don’t model the real physical world.
Paul Romer has recently been discussing economic models and their modelers. Here’s one excerpt: “One of the issues that I raised in a conversation concerned the support (in the mathematical sense) of the distribution of productivity in the Lucas-Moll model. Their assumption 1 states that at time 0, the support for this distribution is an unbounded interval of the form [x,infinity). In response to objections of the general form “this assumption is unacceptable because it means that everything that people will ever know is already known by some person at time 0,” Lucas and Moll present a “bounded” model in which the support for the distribution of productivity at time zero is [x,B]. Using words, they claim that this bounded model leads to essentially the same conclusions as the model based on assumption 1. (I disagree, but let’s stipulate for now that this verbal claim is correct.) I observed that Lucas and Moll do not give the reader any verbal warning that because of other assumptions that they make, the support for the distribution of productivity jumps discontinuously back to [x,infinity) at all dates t>0 so it is a bounded model in only the most tenuous sense.
“… in only the most tenuous sense.” — I.e., not in any real-world, physical sense. This is *not* a parameter that has been ‘tuned’ to make the model better fit observations; it’s simply an assumption used because it gives a result the modeler likes. Romer makes the argument that this type of model is making economics an adversarial endeavor as opposed to a scientific one.
Paul Krugman has touched on the same issue with economic models many times over the years. He wrote in June of this year, “…Lucas’s attack on Romer rested in part on the claim that government spending on a new bridge would lead consumers, anticipating future taxes, to offset it one for one with cuts in their own spending; this is completely wrong if the spending is temporary.
But aside from exposing the intellectual decline and fall of the Chicago School, is this the way we should go about modeling such things? Well, yes, sometimes, because rigorous intertemporal thinking, even if empirically ungrounded, can be useful to focus one’s thoughts. But as a way to think about the reality of spending decisions, no. Ordinary households — and that’s who makes consumption decisions — have no idea what the government is spending, whether it is temporary or permanent…”
Again, models that have no bearing on reality. Useful in an academic sense – but not as a representation of the physical world.
There doesn’t exist an economic model more complicated than basic IS/LM that can accurately reflect reality.
54. Rob Nicholls says:
I’m particularly atrocious at brevity, but even allowing for the fact that others are v good at being succinct, I still don’t understand how anything complex like the subject at hand can be discussed properly on this ‘Twitter’ of which you speak. (sorry to be such a Luddite).
55. Rob Nicholls says:
Kevin O’Neill: “Again, models that have no bearing on reality.” The thought has occurred to me that there is an urgent need for much better computer models of economic systems, and that this is not an easy problem to solve. I realise that there are alternatives to neo-classical models, but I’m not sure how well-developed and useful any of these are yet.
56. ATTP,
You might enjoy this interview with theoretical-physicist-turned-finance-quant, Emanuel Derman: http://www.econtalk.org/archives/2012/03/derman_on_theor.html
He talks a lot about the similarities and differences between physics and economics — well, finance — as well as their use of theories versus models. Here is a segment that ties in with your post. The lead up starts around 9:30 on the audio.
“[S]tock prices or the returns on stock prices behave like smoke diffusing. And there’s something similar about them, but it’s not an accurate description in the way that, say, Newton’s Laws attempt to be an accurate description. It’s really based on an analogy to something you do understand, which is smoke diffusing, and saying maybe stock prices behave a lot like that.”
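The “smoke diffusing” analogy Derman mentions is usually written down as geometric Brownian motion; here is a minimal sketch (the parameter values are made up for illustration):

import numpy as np

def gbm_path(s0=100.0, mu=0.05, sigma=0.2, years=1.0, steps=252, seed=0):
    """One geometric Brownian motion path: dS = mu*S dt + sigma*S dW."""
    rng = np.random.default_rng(seed)
    dt = years / steps
    shocks = rng.normal(0.0, np.sqrt(dt), steps)
    log_returns = (mu - 0.5 * sigma**2) * dt + sigma * shocks
    return s0 * np.exp(np.cumsum(log_returns))

path = gbm_path()
print(f"start: 100.00, end: {path[-1]:.2f}")

# Nothing in this "diffusion" is conserved; mu and sigma are assumptions about
# behaviour, which is the distinction Derman draws between models and theories.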
57. John L says:
But things may not be that easy 😉 Navier-Stokes equations have recently been shown to have some theoretical limitations, namely to be incomplete (relative to the particle description): https://www.quantamagazine.org/20150721-famous-fluid-equations-are-incomplete/
Aside from energy dissipation there is also dispersion from capillarity:
“The evidence suggests that truer equations of fluid dynamics can be found in a little-known, relatively unheralded theory developed by the Dutch mathematician and physicist Diederik Korteweg in the early 1900s. And yet, for some gases, even the Korteweg equations fall short, and there is no fluid picture at all.
“Navier-Stokes makes very good predictions for the air in the room,” said Slemrod, who presented the evidence last month in the journal Mathematical Modelling of Natural Phenomena. But at high altitudes, and in other near-vacuum situations, “the equations become less and less accurate.”
Regarding economics, I think one important and basic principle is that when two entities trade or are doing an economic transaction, both win. And that is why they do it (at least as far as they are homo economicus…), so economics tends to be driven by imbalances, not conservation.
58. Michael Hauber says:
I’m not sure that the physical constraints matter that much. Many important aspects of climate models are parameterised and not based purely on physics. Although perhaps physical constraints are why warming is always larger than the hypothetical zero-feedback situation.
I’d say the fact that modelling attempts to predict the choices of individual humans is one key issue.
And another key issue is how the models are tested and validated. Climate models mostly go through a barrage of tests against the real world, and exist in a reasonably open peer reviewed framework. Experts who know just as much as the modellers are free to criticise, replicate and possibly improve on a particular model. Many financial models are cooked up by an individual analyst in an office, and are tested by the analyst’s boss, and a few key stakeholders who for the most part do not have the same expertise as the modeller. A lot of the validation does not depend on scientific analysis and replicability, but on political power struggles based on how the model output might reflect or impact on a specific stakeholder. And whether the results pass a ‘gut feel’ test for the critical decision maker. This is my experience with financial modelling, within a small number of entities. I couldn’t say for sure whether others in financial modelling have experienced a decision making process with more scientific validation.
Parameterisation exists in climate models. I believe that climate models can be tuned to some extent, and the question is how far? Can the amount of tuning be meaningfully compared to the amount of tuning that can be done in a finance model? If a financial modeller produced a model predicting the total company expenditure next year was going to be in a range between 1.5 billion and 4.5 billion dollars, they’d either be told to do the model again, or shown the door. If company expenditure last year was 2.8 billion then the modeller could perhaps easily tune the model to 1.5 or 4.5, but either result would be unacceptable unless unusual circumstances existed. Would it be fair or even meaningful to then compare this to the canonical climate modelling range of 1.5 to 4.5 degrees C for climate sensitivity?
The key observation in my mind is that if climate models could be tuned to produce any answer, then why haven’t they been? Considering the vested interests involved, surely at least one person would have had the combination of expertise and motivation to produce a climate model with a sensitivity of 0.5 if this were actually possible?
59. Kevin O'Neill says:
Take two scenarios:
1) From 2007 to 2009 US gov’t spending rose 25%, at the same time revenues fell by over 20% and the Federal Reserve was printing money at unprecedented rates. What were the predicted effects on inflation, interest rates, and the price of government bonds?
2) In 1938 Guy Callendar proposed that temperatures over the previous 50 years could be explained by increased concentrations of atmospheric CO2. Extrapolating forward, how has Callendar’s CO2 hypothesis turned out?
Now, the difference between the accuracy of the predictions in these two scenarios is that Callendar was modeling a *physical* system. It has some additional terms that can dampen or accentuate the results over short periods (volcanoes, natural variability), but those are also physical and can be included in the model to make it even more accurate. Whereas the economic scenario depends on lots of other variables, many of which are counter-intuitive, exceptions to general rules, or inherently difficult to predict (human reaction to events – both individually and collectively).
To my knowledge, the only economists that correctly predicted the economic results were those who looked at very simplistic models. Basic Hicksian IS/LM; an economic tool that’s been around longer than Callendar’s 1938 paper and that many economists scoff at as being too simplistic to be of any use. Elegant math divorced from reality is more to their taste.
60. Sou says:
I’d say leave economic models to the economists, financial models to financial experts and climate models to climate modelers. None of them can be tweaked to produce any answer you want without changing the assumptions or “what ifs” or making a nonsense of the model. Sometimes people think the assumptions are sound and sometimes not. (In climate models, some people accept models that show low climate sensitivity and some don’t.)
If economic models were useless, we’d not have had relatively successful monetary policies with inflation constrained within a few percentage points over decades. Bearing in mind that these models rely on assumptions and the feedback is human behaviour and human expectations. So when an economic model predicts something, people have some “faith” in the model and adjust their behaviour accordingly. Whatever the model predicts, people’s behaviour will counter the prediction to some extent. Which is often a good thing. (The human behaviour response adds something of an unknown, but not entirely unpredictable element to economic models.) In general the people managing monetary and fiscal policy rely on people modifying their behaviour to keep the economy in check (with interest rate signals, taxation changes etc used as behaviour modification tools).
Climate model projections are also affecting people’s behaviour, so that maybe, just maybe, we’ll veer off the RCP8.5 pathway (which is in part based on economic models) and shift down a pathway or two.
With the large complex coupled climate models (as opposed to simpler climate models), one of the main things that makes them more difficult to tweak to “get an answer you want” is the number of components they have, the number of people involved and the fact that they are tested on hindcasts.
61. To reply to some of what has been said so far with respect to economic modeling:
I think that it’s important to keep in mind the fact that not all economists are equal and not all economics are equal. That is, if we weed out all conservative economists and conservative economics (the only exception being those very few, narrowly defined, legitimate contributions made by some conservative economists like Friedman, outside of which is nothing but nonsense, and this includes Friedman’s nonsense outside of his few legitimate contributions), what we end up with is a collection of economists (like Paul Krugman and Joseph Stiglitz) and economics that actually has a pretty good track record in terms of projections.
People need to read what they have been saying over the past year, especially, and not take seriously the never-ending barrage of conservative misstatements as to what they actually say and do not say.
Here are some quotes from some of Krugman’s recent and very informative writings (note: cookies must be on to connect to New York Times documents):
http://www.nytimes.com/2015/06/08/opinion/paul-krugman-fighting-the-derp.html?_r=0
“What am I talking about here? “Derp” is a term borrowed from the cartoon “South Park” that has achieved wide currency among people I talk to, because it’s useful shorthand for an all-too-obvious feature of the modern intellectual landscape: people who keep saying the same thing no matter how much evidence accumulates that it’s completely wrong.
I’ve already mentioned one telltale sign of derp: predictions that just keep being repeated no matter how wrong they’ve been in the past. Another sign is the never-changing policy prescription, like the assertion that slashing tax rates on the wealthy, which you advocate all the time, just so happens to also be the perfect response to a financial crisis nobody expected.
Yet another is a call for long-term responses to short-term events – for example, a permanent downsizing of government in response to a recession.”
http://krugman.blogs.nytimes.com/2015/03/28/unreal-keynesians/?_r=0
“And as I have often argued, these past 6 or 7 years have in fact been a triumph for IS-LM. Those of us using IS-LM made predictions about the quiescence of interest rates and inflation that were ridiculed by many on the right, but have been completely borne out in practice. We also predicted much bigger adverse effects from austerity than usual because of the zero lower bound, and that has also come true.”
“It’s true that the Hicksian framework I usually use to explain the liquidity trap is both short-run and quasi-static, and you might worry that its conclusions won’t hold up when you take expectations about the future into account. In fact, I did worry about that way back when. My work on the liquidity trap began as an attempt to show that IS-LM was wrong, that once you thought in terms of forward-looking behavior in a model that dotted all the intertemporal eyes and crossed all the teas it would turn out that expanding the monetary base was always effective.
But what I found was that the liquidity trap was still very real in a stripped-down New Keynesian model. And the reason was that the proposition that an expansion in the monetary base always raises the equilibrium price level in proportion only actually applies to a permanent rise; if the monetary expansion is perceived as temporary, it will have no effect at the zero lower bound. Hence my call for the Bank of Japan to “credibly promise to be irresponsible” – to make the expansion of the base permanent, by committing to a relatively high inflation target. That was the main point of my 1998 paper!
And a few years after I published that paper, the BoJ put it to the test with an 80 percent rise in the monetary base that utterly failed to move inflation expectations. In general, Japanese experience gave us plenty of reason to realize that macroeconomics changes at the zero bound. So it’s still a puzzle that so many macroeconomists tried to apply non-liquidity-trap logic in 2009 – and just embarrassing that they’re still doing it.
In the end, while the post-2008 slump has gone on much longer than even I expected (thanks in part to terrible fiscal policy), and the downward stickiness of wages and prices has been more marked than I imagined, overall the model those of us who paid attention to Japan deployed has done pretty well – and it’s kind of shocking how few of those who got everything wrong are willing to learn from their failure and our success.”
62. General circulation models are not physical models as defined above. Indeed, GCMs have to break at least one fundamental equation – most break the law of conservation.
63. Economics and finance are like physics and chemistry: related but different and separate (despite the interdisciplinary field of financial economics). Modelling traditions are different in both fields, and heterogeneous in either. Economics and finance are many times bigger than climate science, and model diversity is correspondingly larger. The separation between applied and research models is more pronounced in economics and finance than it is in climate science.
64. Richard,
General circulation models are not physical models as defined above. Indeed, GCMs have to break at least one fundamental equation – most break the law of conservation.
Conservation of what? Is this your mass argument again? They add CO2 to the atmosphere without reducing the mass of the planet? If so, I don’t think it is possible – given numerical accuracy – to account for this mass loss. If you mean something else, feel free to elaborate.
I’m not quite sure what you’re getting at with your second comment, or why it’s relevant.
The separation between applied and research models is more pronounced in economics and finance than it is in climate science.
Again, not sure what your point is. One reason that there isn’t necessarily a large separation between applied and research models in the physical sciences is that the equations are well-defined and you don’t use a different formalism for applied models, compared to research models.
I noticed you were projecting again on Blair King’s post in which he makes up an awful lot of things, while claiming not to have done so. Of course, I get the impression – given your many corrections to your paper – that accuracy isn’t all that important to you.
65. Andrew Dodds says:
Michael Hauber –
You seem to be hitting a key difference here.
For a physical model, the ideal is to start with basic, well established physics and come up with a model that reproduces the phenomena of interest. So a really good climate model ‘knows’ nothing about trade winds or ENSO or seasons… these features just appear as you run the model. I’m not sure that climate models are 100% there yet – every time you have a parameterisation you are ‘telling’ the model something that you’d rather have emerge. But they are close.
I don’t see anything like that happening in macroeconomic models – indeed the very fact that there is a split between microeconomics, which does look at the economic effects of individual actions, and macroeconomics which deals with the big stuff – shows this.
66. MikeH says:
“most break the law of conservation” ? Would you like to point at which one?
“Climate models are mathematical representations of the climate system, expressed as computer codes and run on powerful computers. One source of confidence in models comes from the fact that model fundamentals are based on established physical laws, such as conservation of mass, energy and momentum, along with a wealth of observations. ”
67. Mike,
Richard will probably not respond (as that would be out of character) but I think he’s referring to the fact that GHGs are added to the atmosphere without their mass being removed from the planet. It could be something else, probably equally trivial.
68. Sou,
I’d say leave economic models to the economists, financial models to financial experts and climate models to climate modelers.
Well, yes, I wasn’t suggesting otherwise 🙂
Climate model projections are also affecting people’s behaviour, so that maybe, just maybe, we’ll veer off the RCP8.5 pathway (which is in part based on economic models) and shift down a pathway or two.
Well, yes, but – I would argue – the physical model is how our climate responds to a specified emission pathway. Of course, if we don’t follow that pathway, the response will be different, but the physics won’t be.
69. Andrew Dodds says:
aTTP –
I think that if someone’s critique of GCMs is ‘They break the law of conservation’ you can safely ignore their opinions on the subject… I’d be embarrassed to make a howler like that, and I’m not an academic.
KeefeAndAmanda –
Yes, Simon Wren-Lewis ( http://mainlymacro.blogspot.co.uk/ ) at Oxford calls this ‘MediaMacro’ in which a story is told about the economy that is completely at odds with mainstream macroeconomics, but just happens to justify the standard conservative agenda.
70. sorry – conservation of mass, of course
this makes it rather hard to do advanced atmospheric chemistry in a GCM
71. Richard,
So, you really are referring to the addition of CO2 to the atmosphere without removing mass from the planet? A $\sim 10^{-10}$ effect?
this makes it rather hard to do advanced atmospheric chemistry in a GCM
Chemistry isn’t physics 🙂
72. chris says:
Consideration of Richard Tol’s odd comment
“…this makes it rather hard to do advanced atmospheric chemistry in a GCM.”
rather reinforces ATTP’s point.
Chemistry is important in the climate system. Methane is converted to CO by the action of hydroxyl radicals in the troposphere and stratosphere, and this should be taken into account in order to properly account for the radiative effects of methane emissions. CO2 partitions into the oceans by dissolution and then reaction with water to form carbonic acid, bicarbonate and carbonate… (on very long timescales CO2 is removed from the atmosphere by chemical weathering), etc.
But this chemistry isn’t “done” in a GCM – it can’t be done and obviously doesn’t need to be. The rate at which methane is converted to CO2 can be determined and so the methane degradation “chemistry” is parametrized using a rather well-constrained equation (or set of equations) with appropriate rate constant(s). The partitioning of CO2 into oceans follows Henry’s Law in some form and the chemical partitioning of hydrated CO2 amongst its charged dissolved species can be calculated using the Henderson-Hasselbalch equation. All of these parametrizations are very highly constrained and so there is little room for “tuning” these aspects of atmospheric chemistry. Unlike economic models, their parameterizations are not somewhat subjective elements subject to the notions of the modeller about the effects of consumer confidence, interest rate variations, and stock market fluxes.
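For reference, the two relations chris mentions take roughly this form (written schematically for the first dissociation step; the constants $k_{H}$ and $K_{1}$ are measured quantities):

$[\mathrm{CO_2(aq)}] = k_{H}\, p_{\mathrm{CO_2}}, \qquad \mathrm{pH} = \mathrm{p}K_{1} + \log_{10}\dfrac{[\mathrm{HCO_3^-}]}{[\mathrm{CO_2(aq)}]}$

Because the equilibrium constants are empirically well constrained, there is little freedom to tune this part of the model, which is chris’s point.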
73. Sam taylor says:
Sou,
Much of the thinking behind the idea that it is successful monetary policy which has kept inflation low over the last few decades is backed up by DSGE models in which one of the assumptions is that central bank policy largely controls the rate of inflation. I think this might actually be one of the times when it’s correct to say that the assumptions of the models give you the rate that you want! If recent years have taught us anything, it’s that monetary policy is comparatively weak. Interest rates are on the floor, unconventional policies like QE have been tried, and still the growth that is sought remains elusive. I think it’s more likely that fiscal and political forces are behind the lack of inflation. Things like deindexation of payments, declining unionisation, and changes in trade policy could all plausibly be linked to lowered inflation.
Kevin,
IS/LM doesn’t really work, given that we live in a world of endogenous money and loanable funds is not an accurate description of how the monetary system in fact operates. The people who seem to have been most right about the effects of government spending in the US are the endogenous money/MMT crowd. It’s also incorrect to call QE ‘money printing’, as it is in fact an asset swap program designed to help target an interest rate. This has been one of the big issues with a lot of economic models, in that they treat banks completely incorrectly and hence missed the cause of the crisis. This is probably all Milton Friedman’s fault when he wrote that idiotic nonsense about only judging models based on the accuracy of their predictions, not the correctness of their assumptions. In fact it’s probably fair to blame the dire state of economics today mostly on Friedman; however, things look to perhaps be moving in the right direction these days.
And as for people saying that you have to prove economic models don’t conserve quantities, well there are plenty of economic growth models which basically assume that technology is manna from heaven. That’s hardly realistic.
74. I don’t know if this is what Richard is referring to, but early GCMs used flux corrections (discussed here) which is a consequence of coupling different models together in which the surface fluxes may not be the same.
These were essentially empirical corrections that could not be justified on physical principles, and that consisted of arbitrary additions of surface fluxes of heat and salinity in order to prevent the drift of the simulated climate away from a realistic state.
But,
By the time of the TAR, however, the situation had evolved, and about half the coupled GCMs assessed in the TAR did not employ flux adjustments.
The link is from AR4 (2007) and according to this
As a result, there has always been a strong disincentive to use flux corrections, and the vast majority of models used in the current round of the Intergovernmental Panel on Climate Change do not use them.
75. @Chris
There is of course a lot more going on than that.
The equations that describe chemical reactions take mass conservation as their starting point. The equations inside a GCM do not conserve mass. If the chemistry is simple, a fudge factor will make this problem go away. Not so for complex chemistry.
76. Richard,
When did you become a chemist?
I’m also slightly confused about your claim that they don’t conserve mass. If you’re referring to the removal of mass from the planet that provides the gravitational force, this is clearly irrelevant. If you’re referring to the fact that the density doesn’t change when GHGs are added to the atmosphere, again this is about a $10^{-5}$ effect. Precisely in what way do they not conserve mass?
77. O Bothe says:
@attp
The equations are one thing; when you code them and introduce parameterisations etc. you may lose some of the intended properties.
Earlier versions of ECHAM6 do not conserve energy, neither in the whole nor within the physics, and small departures from water conservation are also evident.
Analysis of the CMIP5 runs suggest that these issues persist with ECHAM6.
Stevens, B., et al. (2013), Atmospheric component of the MPI-M Earth System Model: ECHAM6, J. Adv. Model. Earth Syst., 5, 146–172, doi:10.1002/jame.20015.
78. @wotts
I refer to MPI-Met ICON: http://www.mpimet.mpg.de/en/science/models/icon.html
Marco Giorgetta has been driving this for more than a decade. Here’s a relatively recent paper:
http://www.geosci-model-dev.net/6/735/2013/gmd-6-735-2013.html
79. Oliver,
The equations are one thing; when you code them and introduce parameterisations etc. you may lose some of the intended properties.
Yes, I think I said something along those lines in the post. My point was more to do with the claim that one can engineer desired results with models which – I would argue – is not necessarily that easy if the models are founded on fundamental conservation laws. My claim isn’t that the models are correct, or that the parametrizations don’t have any effect, or can’t be tuned at all.
I’ll have a look at that link, thanks. However, a point I would make is that non-conservation still tells you something (i.e., it’s diagnostic).
80. Richard,
So your point is that the models don’t do everything perfectly, or have some problems? Sure, I don’t think I said otherwise. It would be extremely surprising if there weren’t issues and if they couldn’t be improved. That doesn’t mean, however, that they’re not founded on fundamental conservation laws or that they can be tuned to produce any desired result.
I’ll expand a bit on Oliver’s point. If you read the next section of Stevens et al., it says
Since the CMIP5 runs, an attempt has been made to identify the origin of departures from mass and thermal energy conservation within the framework of the ECHAM6 single column model. A variety of model errors relating to the inconsistent use of specific heats, how condensate was passed between the convection and cloud schemes, or how vertical diffusion was represented over inhomogeneous surfaces have been identified and corrected.
This is one of the points. Non-conservation tells you something – there are model errors, which you can then aim to correct. Just to clarify the point I’m trying to make in this post: it’s not that models of physical systems can’t be wrong, or won’t have errors; it’s simply that being based on fundamental conservation laws means that it’s unlikely that you can tune them to produce any desired result (assuming you’re being honest) and that these conservation laws allow you to assess the validity of the model and identify errors.
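To make the diagnostic idea concrete, here is a minimal toy sketch of my own (not ECHAM or any GCM code; the single-layer column, the cp values and the condense() helper are invented for illustration) of how a global energy check exposes exactly the kind of inconsistent-specific-heats error Stevens et al. describe:

```python
# Toy column: condensation converts latent heat into sensible heat.
CP_MODEL = 1004.64   # J/kg/K used by the "rest of the model" and the diagnostic
LV = 2.501e6         # J/kg latent heat of vaporisation

def column_energy(T, q, cp=CP_MODEL):
    # Moist static energy per unit mass (geopotential term ignored).
    return cp * T + LV * q

def condense(T, q, dq, cp_scheme=CP_MODEL):
    # Condense dq kg/kg of vapour, heating the layer using cp_scheme.
    return T + LV * dq / cp_scheme, q - dq

T0, q0 = 280.0, 0.008
e0 = column_energy(T0, q0)
for cp_scheme in (CP_MODEL, 1004.0):   # consistent vs. subtly inconsistent cp
    T1, q1 = condense(T0, q0, dq=0.002, cp_scheme=cp_scheme)
    drift = column_energy(T1, q1) - e0
    print(f"cp_scheme={cp_scheme}: energy drift = {drift:.3f} J/kg")
```

With the consistent cp the drift is exactly zero; with the mismatched cp you get a small but systematic leak, which is precisely the kind of signal a conservation diagnostic flags for correction.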
81. Grant,
Just listened to the podcast you mentioned here. A South African and a physicist 🙂
82. No, Wotts, that’s not my point at all. GCMs have a remote relationship with physics. The models are full of fudge and tuning factors. The fact that they roughly represent observations may, for all we know, reflect skill in model calibration rather than skill in forecasting.
83. Richard,
No, Wotts, that’s not my point at all. GCMs have a remote relationship with physics. The models are full of fudge and tuning factors.
And I think this is nonsense. You really shouldn’t get your information from climate denier blogs. In a sense, you may well be illustrating my point. Just because the models with which you’re familiar are full of fudge and tuning factors, doesn’t mean this is true for all models.
84. @wotts
I got the info from 25 years of conversations with climate modellers.
85. Richard,
The statement “GCMs have a remote relationship with physics” is clearly nonsense.
86. BBD says:
Oh do stop bullshitting Richard. It’s tiresome.
87. Sam taylor writes: “Kevin, IS/LM doesn’t really work…”
Post-2007/8 financial crisis, economists that relied upon analysis using IS/LM have largely been proven correct. Where is the group of MMT economists that had equal predictive success over this period?
Here’s Randall Wray in 2011 – ”I expect resumption of the financial crisis any day…” Now, that was three years *after* the meltdown. In Q1 2011 the empirical data shows the number of ‘problem banks’ was peaking. Is that what influenced Wray? I don’t know, but he’s still waiting.
88. izen says:
@-“I got the info from 25 years of conversations with climate modellers.”
Did they both warn you they are full of gremlins..?
89. The statement “GCMs have a remote relationship with physics” is clearly nonsense.
Hmmm….
This is true wrt sub-gridscale phenomena (clouds, rain, etc.), which are much finer scale than model resolution and thus not represented by physics.
Also, there’s a lot of instability in model results. That’s why model runs make good spaghetti:
That doesn’t come from the physics so much as the numerical methods attempting to solve the physical equations. How much of that is a reflection of natural variability and how much is artificial variance introduced by lines of FORTRAN?
90. BBD says:
… and the best estimate of ECS is still ~3C when derived from multiple lines of evidence, so let’s give the crypto-denialism a body-swerve, eh?
91. anoilman says:
Turbulent Eddie: Nice try. It’s not the FORTRAN. Climate models are the best and most thoroughly reviewed software out there. There is nothing like it.
A spaghetti graph from a site run by a man with a high school diploma. Is that really credible in your mind? Really?
So as you well know, multi-decadal declines as well as monstrous inclines are in all the data. In fact you are comparing ensemble averages of multiple runs to actual temperatures. By definition they should be different. A huge part of the ‘running cool’ mythos is caused by 1998’s weather, namely an El Niño 2C hotter in one year.
For a rigorous comparison you should probably attempt to remove the effects of weather and other short term effects from your comparison.
Or are you really one of those numpties that believes climate scientists predict volcanoes, solar cycles and the weather? I kinda want an answer to that. I mean, I think we need to know whether you think climate scientists can replace NASA, the USGS, and of course NOAA ENSO Watch. I mean… you actually think all that? Truly? What about the Tooth Fairy?
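If you want to see why comparing a single realisation against an ensemble mean is apples and oranges, here is a toy illustration (Lorenz-63 with forward Euler, not a climate model; the 20 members, the 1e-6 perturbation and the step count are arbitrary choices):

```python
import numpy as np

def lorenz_step(state, dt=0.01, s=10.0, r=28.0, b=8.0 / 3.0):
    # One forward-Euler step of the Lorenz-63 system.
    x, y, z = state
    return state + dt * np.array([s * (y - x), x * (r - z) - y, x * y - b * z])

rng = np.random.default_rng(0)
members = np.array([[1.0, 1.0, 20.0]] * 20) + 1e-6 * rng.standard_normal((20, 3))
for _ in range(2000):  # integrate every member forward in time
    members = np.array([lorenz_step(m) for m in members])

# Individual members have diverged wildly, but the ensemble statistics are
# the meaningful quantity to compare against.
print("spread of individual members in x:", np.ptp(members[:, 0]))
print("ensemble-mean x:", members[:, 0].mean())
```

Every single member is a perfectly valid ‘weather’ trajectory, and none of them is expected to match any other member, or the observations, year by year.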
92. BBD says:
AOM
Notice that TE is being typically deceptive with graphs. To obtain a visually impressive spread he shows all the RCPs – from 2.6 to 8.5. Given the very wide spread of forcings this represents, you would expect a wide range of model results. But TE is trying to pretend that this wide spread is model uncertainty. I don’t know about you, but I find this sort of behaviour irritating.
93. anoilman says:
BBD: Oh I noticed, but given that TE is getting advice from a man whose education ends at high school, I’d say he has complicated issues.
94. TE,
This is true wrt sub-gridscale phenomena (clouds, rain, etc.), which are much finer scale than model resolution and thus not represented by physics.
Even this isn’t true. That they’re not self-consistently evolved in the models does not mean that they’re not represented by physics.
95. Gavin says:
The notion that GCMs can’t conserve mass or can’t do chemistry is nonsense.
There are certainly dynamical cores that exist that aren’t mass conservative (and they work fine in some applications – NWP for instance), but there are many others that are. GISS ModelE dynamical schemes are mass conservative and have been used to do chemistry for more than a decade (see Shindell et al, 2013 http://pubs.giss.nasa.gov/abs/sh05600u.html for a recent description). Other climate models (NCAR CESM, HadGEM etc.) also manage this without a problem.
This is an example of an old pattern. GCMs obviously have developed from simpler concepts and in the early days had many constraints. For any topic you can think of, you can generally go back to the first time people implemented it and find it was done imperfectly. Sometimes workarounds were found to deal with the problem (flux corrections, convective adjustment etc.), but after years of work, people found more fundamental ways of dealing with these issues and many of these fixes were discarded. Claims that models ‘don’t include’ water vapour, or can’t do clouds, or chemistry or sea level or whatever are generally bunk.
Yet we still find people (like Tol) confidently asserting as fundamental problems, points that (even when they were issues) were merely temporary fudges or approximations.
What strikes me as more shocking is that they still insist that the models ‘can’t’ do something, even after they have been pointed to examples of models doing exactly what it is claimed is impossible.
(In case anyone wants to claim that I am hereby asserting that models are perfect or that I have no idea how models related to the real world, please read this first: https://dl.dropboxusercontent.com/u/57898304/SchmidtSherwood2014.pdf ).
96. Gavin,
Thanks, a very useful comment.
97. Andy Skuce says:
If anyone is interested in reading an economist indulging in a rant about economic modellers, I would recommend this piece by John Kay
http://www.johnkay.com/2011/10/04/the-map-is-not-the-territory-an-essay-on-the-state-of-economics
98. Richard says:
It is interesting how economists who are obtuse in the extreme regarding the subject they claim to have expertise in (economics), veer towards certainty, unencumbered by doubt (thanks to undocumented 25-year-old chats with alleged climate modellers), when it comes to chemistry, physics, climate models, and who knows what else. Is there any limit to these renaissance intellects? I quake in awe … at their self delusions and arrogance.
99. Even this isn’t true. That they’re not self-consistently evolved in the models does not mean that they’re not represented by physics.
If one has a model with a resolution of 100 km, but a 10 km thunderstorm takes place somewhere within a grid box (1% of the grid-cell area), one is not resolving the physics of the individual thunderstorm.
parameterizations != physics
100. TE,
one is not resolving the physics of the individual thunderstorm.
I didn’t say that you were resolving it, but subgrid processes are not, necessarily, independent of physics. How you implement sub-grid processes in a model is, typically, motivated by the physics of the process that you’re trying to implement at a sub-grid level.
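A bulk-aerodynamic surface flux is a classic example: the form of the expression comes straight from the physics of turbulent transfer, and only the exchange coefficient is an empirical number. Here is a generic textbook-style sketch (not any particular GCM’s scheme; the coefficient value is just an assumed typical one):

```python
RHO = 1.2     # near-surface air density, kg/m^3
CP = 1004.0   # specific heat of air at constant pressure, J/kg/K

def sensible_heat_flux(wind_speed, t_surface, t_air, c_h=1.2e-3):
    # Bulk aerodynamic formula: flux proportional to wind speed and the
    # surface-air temperature difference; c_h is the tuned exchange coefficient.
    return RHO * CP * c_h * wind_speed * (t_surface - t_air)

# e.g. a 5 m/s wind over a surface 2 K warmer than the air above it
print(sensible_heat_flux(5.0, 289.0, 287.0))   # ~14 W/m^2
```

The physics fixes the structure of the expression; observations and tuning only set the coefficient.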
101. mwgrant says:
anOilMan,
“…Climate models are the best and most thoroughly reviewed software out there. There is nothing like it.”
That is a statement that seems pulled out of thin air. Just what is your basis for such a sweeping comment?
102. mwgrant,
Watch this video.
103. Richard says:
ATTP – this is a long discussion thread, but going back to the moniker ‘physical model’, I wonder whether the key differentiator is that models like GCMs are characterized in several ways:
(1) Fundamental … from underlying proven physics like Navier-Stokes (which are in turn dependent on classical physics), with wide applications, as you note
(2) Bottom up … But equally they are used in a ‘bottom up’ way that enfolds the underlying physics in higher-level phenomena
(3) Emergent … This emergence of phenomena (such as the greater warming of the Arctic, and the warming of the lower atmosphere while the upper atmosphere cools) is key: it is ‘not obvious’ from the basic equations but is a very robust result, with no parametrization required at the phenomenological level!
(4) Transparent (parameters) … As you say, where parametrization is unavoidable (like ‘how fast will methane be released from warming tundra?’), ongoing research will provide an ongoing tightening of the bounds – it is informed parametrisation, and even ‘how fast will the economy of China decarbonise?’ can be estimated based on transparent evidence/observations, projections and plans.
(5) Validated (parameters) … All the physical parameters are ultimately linked to real-world phenomena (like the speed of methane release from carbon sinks), and these estimates are continually monitored and improved (so any setting of these parameters is not arbitrary, self-fulfilling guesswork, but transparent and justifiable).
104. mwgrant says:
aTTP
You miss the point. The flaw in the comment is the reference to ALL other software.
105. mwgrant,
Did you watch the video? I haven’t watched it for a year or so, but – IIRC – the claim was that climate models showed far fewer errors than any other type of software that they’d tested.
106. mt says:
Tol’s suggestion that GCMs can be tuned to give any desired result is a testable hypothesis.
The oil companies have enormous technical talent at their disposal. Presumably if there were anything to this hypothesis they would have tried to test it at some time in the last quarter century.
The motivation to create an alternative model which can comparably well replicate observed and paleo climate with very low sensitivity is surely enormous. Where is their result?
107. anoilman says:
mt: Oil companies do not have enormous technical talent at their disposal. I want to make that utterly clear to you. I already have one patent used for pipelines because they’ve been doing it wrong for 30 years. I’m about to get another for drilling… ’cause they’ve been doing it wrong for 45 years.
In fact if you look into what the industry does you’ll find a lot of ‘business’ papers which are long on bragging and short on details.
Sorry… I suppose I shouldn’t rant…
108. jsam says:
Tol believes GCMs can be tuned to give any desired result as that’s what economists do. And their success is splattered about for all to witness. Perhaps people in glass houses should stop modeling. It only exposes their nakedness.
109. anoilman says:
mwgrant: The video does indeed reference what we know of all other software. Give it a watch.
Even if the code for the models looked like old garbage, it’s been viewed, reviewed, reused, tested, and verified over and over. There is no industry that does that. Not even NASA. I think the closest would be open source. But what makes climate models different is that they are being reused competitively. Meaning, the goal is to cooperatively compete for better results. This would be like companies and businesses sharing all their technical know-how directly with their competitors.
Just a second… I have to go check Windows Update.
110. Rob Nicholls says:
Thanks Andy Skuce; the article by John Kay is really interesting.
111. mt says:
oilman, that’s silly. The oil industry probably does more scientific computing than any other segment of the private sector. They show up in force at both computing and geophysics meetings.
But it’s a red herring anyway. Even if they didn’t have the talent in house, they could easily afford to pull together a team, if there were any point to it.
112. mwgrant says:
Yes. I appreciate the link and your memory is correct. But the claim in a TED talk really is just a repetition of anOilman’s comment. I guess it can be a basis for anOilman’s comment, but it really does not get to the basis…even there it is only a slide, an assertion. [I am sure there is some reference, but…not here.] While I have little doubt about the considerable effort expended on checking the climate models, comparative statements like the one above and in the TED talk tell us little that is useful about the level of quality of the code(s). Hence such comments are subject to abuse and–way more important–misunderstanding and misapplication by others.
I noticed that Easterbrook transitioned quickly to the important question of ‘is it good enough?’ That is something that must be determined by the context of the use of the model(s), not by bug-density comparisons with unrelated code.
Clearly the QA of the code is dependent on the circulation and the use in the community of practitioners. Microsoft users are a huge community but their required expertise for the use of the products (in the context of product capabilities) is low. On the other hand, climate models (and other physical models) require more knowledge and expertise on the part of the user, but the user bases are much, much, much smaller. To me this makes comparison more suspect.
Also the reference in TED is to ‘bugs’. That is a different matter from errors in the conceptual models [as mentioned by Howard]. Code is only the implementation of a model; it is not the entire model.
regards,
mwg
113. mt says:
“…Climate models are the best and most thoroughly reviewed software out there. There is nothing like it.”
Unfortunately, that’s also wrong. In particular, they suffer from the problems described here:
114. @gavin
Glad you’re making progress. Pity you did not dispel the notion that GCMs have only physics in — no approximations, no parameterizations, no calibrations.
As I have said many times before, it is the exaggerated and poorly informed claims of environmentalists that make climate science such an easy target.
115. Richard,
Excellent, just the response I was hoping for.
116. mwgrant says:
anOilman, aTTP,
BTW I did not notice the reference at the bottom of the slide earlier. If I can dig it up I want to look at it to see just what was done. Note that the comparison indicated in the slide is not with all other software. The genre that is missing (based on the slide) is physical models such as the USGS MODFLOW family of codes. These are used in both the government and commercial sector and often in contexts where litigation and/or regulatory actions are in play. They are exceedingly well documented and tested by the authors, but even more QA is imposed on their specific applications. (And yes, they no doubt contain errors.)
Bottomline to me: it is one thing to indicate the level of effort put in to the conceptual underpinning and the code and it is another to state that it stands above all others. Just be realistic, that is all I ask.
117. mwgrant,
Sure, I don’t know if they are the most tested. Steve Easterbrook’s presentation seemed to suggest that they’re pretty good. MT’s link suggests they still have issues, which isn’t a great surprise.
118. mwgrant says:
aTTP
Hmmm, my reading list just gets bigger. BTW it is interesting that the cited paper does not take up documentation. The root ‘document’ appears twice in the paper and in a very narrow context. Good extensive QA documentation is/can be a blessing or a burden. But in a dispute it is essential.
Thanks for the post.
119. izen says:
@-“In particular, they suffer from the problems described here:
Climate models are judged on how informative they are, not on the elegance of their code.
I wonder if econometric models are ‘better’ in terms of conforming to a set of software design principles, ie :-
F. Upstreaming, distribution, and community building:
In order to provide attractive alternatives to forking, maintainers must be diligent to create a welcoming environment for upstream contributions. The maintainers should nurture a community that can review contributions, advise about new development approaches, and test new features, with recognition for all forms of contribution.
120. BBD says:
Richard
As I have said many times before, it is the exaggerated and poorly informed claims of environmentalists that make climate science such an easy target.
Um, false equivalence. You used to be better at this.
121. BBD,
Um, false equivalence.
Well, yes, I was going to point out that the only person who implied that it had been suggested that GCMs only have physics in them, was Richard himself, but I couldn’t really be bothered.
You used to be better at this.
Really?
122. BBD says:
Models are a work in progress and always will be. The insinuation that they are so badly borked as to be actively misleading runs into trouble when the focus is widened to include observational and palaeoclimate data.
123. BBD says:
Really?
Perhaps I’m erring on the generous side. All your fault; I learned it here 😉
124. John L says:
Richard Tol wrote:
“As I have said many times before, it is the exaggerated and poorly informed claims of environmentalists that make climate science such an easy target.”
Wait, I think you got it backwards: climate science and GCMs are not dependent on environmentalists’ claims. It’s the other way around: environmentalists tend to have a good basis for their worries. Unfortunately, more uncertain science and models probably mean more reasons to worry.
125. Michael Hauber says:
Turbulent Eddie and Richard Tol both seem to think that parameterization means it’s not physics.
By this argument none of fluid dynamics is physics. All the physical laws governing fluid behaviour are statistical approximations of the resulting interactions of large numbers of atoms interacting a bit like billiard balls. Of course the statistical approximations can be extremely accurate, in part because the numbers of atoms in even a small parcel of air are so enormous that statistical variations can cancel out to a very high degree.
126. Tom Dayton says:
Steve Easterbrook is right, about climate model quality. And I’m speaking from experience of system engineering, designing, project managing, and quality assuring, ground software that was certified for Johnson Space Center Mission Operations crewed vehicle operations.
Michael Hauber – good points, though they tend to corroborate Tol’s comment: “GCMs have a remote relationship with physics”.
Here’s an interesting quote from Wiki:
As model resolution increases, errors associated with moist convective processes are increased as assumptions which are statistically valid for larger grid boxes become questionable once the grid boxes shrink in scale towards the size of the convection itself. At resolutions greater than T639, which has a grid box dimension of about 30 kilometres (19 mi),[11] the Arakawa-Schubert convective scheme produces minimal convective precipitation, making most precipitation unrealistically stratiform in nature.[12]
The statistics are corruptible.
The physics is not.
128. Eli Rabett says:
mw: Steve Easterbrook, Michael Tobis and William Connolley have written a fair amount on issues such as version control, bit reproducibility, and other software engineering issues. There have also been conferences and sessions at the Fall AGU.
The TL;DR version is that this was another bloody flag issue, but some of the discussion was interesting and may have led to some marginal improvements.
129. mwgrant says:
Tom Dayton,
“Steve Easterbrook is right, about climate model quality. And I’m speaking from experience of system engineering, designing, project managing, and quality assuring, ground software that was certified for Johnson Space Center Mission Operations crewed vehicle operations.”
If you want to say that based on and constrained to the experience you indicate Easterbrook appears to make a good argument in regard to the implementations of the codes that is understandable as your opinion/assessment. But of course I commented on anOilman’s comment, “…Climate models are the best and most thoroughly reviewed software out there. There is nothing like it.” That is a very different statement.
Your dots do not connect with regard to anOilman’s comment. Sorry. Now if you can come back and say that you failed to mention it but indeed you have also spent hundreds of hours reviewing the climate codes and attendant QA documentation, all of the codes and QA documentation established for agency environmental transport codes… hmmm, all the performance assessment modeling done for WIPP, all of the performance assessment modeling done for Yucca Mountain, all of the NRC codes, etc.–all of this and much more in depth–well then you might be able to take a stab at the best of the best. Again I do not see that experience in your background. But why fool with such a useless comparative statement when the question is,”are the codes good enough for the intended use?” Exaggeration encourages suspicion.
130. mwgrant says:
[E]li,
“The TL:DR version is that this was another bloody flag issue but some of the discussion was interesting and may have lead to some marginal improvements.”
Again the important question is are they good enough for the intended application? Frankly in my opinion the answer is ‘yes’. The big problem lies with what is the intended use? For me–and the basis for ‘yes’–is that too much emphasis has been put on model results for the policy decision process. That is they are about as useful as a hockey stick, i.e., they paint a picture through a blurry lens and due to time constraints that is what we have to go on.
131. anoilman says:
mt: Issues in software are well known. Usually the call for throwing out old software and replacing it with new ’ware is made by newbs. You can look up a lot about this. It’s an immense schedule impact and more often than not it introduces a lot more bugs. That is what your article recommends. Industry sentiments would concur, but at least we have the earlier Mozilla flavor of Netscape to use;
http://www.joelonsoftware.com/articles/fog0000000027.html
The other thing is that you haven’t substantiated that there is a problem. So, it is illogical to think that you can miraculously replace old code with new code and achieve a better result. Different, yes. Better? Maybe. (How much better was BEST again?… )
It is also a misguided belief to think you can just toss money and hire experts to solve a problem. Life doesn’t work that way. Take it from me, I have to dodge the money that gets tossed. What happened to Shell’s submersible well cap… oh yeah, crushed like a beer can. Sign me up for that!
What you’re saying all sums up to a Silver Bullet for an undefined problem.
https://en.wikipedia.org/wiki/No_Silver_Bullet
132. anoilman says:
mwgrant: I was referring to Easterbrook earlier. Its been a while since I saw that. Its also been a tad longer since I had a subscription to a software engineering management journal.
Defects per unit of code is indeed the metric used to measure quality. There are many grades of defect, from simple implementation to design. Different defects take different amounts of time to identify and fix. Design defects are the most serious and have the greatest long term effect on a project.
Have you ever worked with software? The stuff that’s going out these days is bad. NASA missed Mars, the military missed the target, and Microsoft… (just a second, I gotta get another update.)
Oh the stories I could tell about industrial software. It often has little design, poor requirements analysis, no review, and poor coding. Ship and Patch;
http://www.desmogblog.com/2015/08/04/key-greenhouse-gas-study-may-have-systematically-understated-leaks-new-research-shows
http://www.wired.com/2009/05/minnesota-court-release-source-code-of-breath-testing-machines/
133. TE,
Nothing that you (or Tol) have said supports the claim that “GCMs have a remote relationship with physics”.
134. mwgrant says:
Hi anoilman,
“The stuff that’s going out these days is bad. NASA missed Mars, the military missed the target, and Microsoft… (just a second, I gotta get another update.)”
So it seems. So it seems. And let me go ahead and spit it out….all of the mentioned activities have more than likely ‘instituted formal quality assurance programs’. G-r-r-r-r!
So where am I coming from? A different background and perspective than you. Three-plus decades coding and using models for nuclear waste facility performance in the environment, and impact assessments of contaminated sites in regulatory and commercial contexts. IMO in that arena code defects are an incomplete measure of quality. Often QA involves lots of documentation, e.g., validation and verification packages for codes, internal review, extensive archives including copies of all materials referenced…it could be painful but essential. In addition to the code(s) run, detailed characterization of the underlying site-specific conceptual models is needed in the case of numerical groundwater models, and of exposure scenarios when human and ecological impacts are studied. There are more things, but the takeaway is that, from my perspective on the use of models in regulatory and litigation contexts, a defect-count metric would miss some potentially very serious problems.
If I were to guess I would guess that the climate codes are much larger than the codes I refer to above and defect count might be a more significant metric for them.
It should be clear now that my bias is that climate models and codes have traits in common with large commercial codes (MS) as well as traits in common with codes for the physical models used in environmental regulatory work. This second set of traits includes things like the use of numerical solutions for PDEs, real difficulty in characterizing the physical domain modeled, and difficulty setting initial and boundary conditions. Also there are multiple phases present in the systems and nonlinear processes occurring. And of course the systems have their stochastic aspects. How all of these sorts of non-code factors are handled has to be documented in order for the code to be used with the specified assurance developed in the upfront QA plan, where goals are set. In short, I see climate codes as subject to the same practices used in the regulatory arena – and, given the tenor of the present projections, in the most severe manner.
Hope you are doing well,
mwg
135. BBD says:
mwgrant
For me–and the basis for ‘yes’–is that too much emphasis has been put on model results for the policy decision process.
This is both false and classic denialist rhetoric, aka concern trolling.
This is reality, described by James Hansen:
TH: A lot of these metrics that we develop come from computer models. How should people treat the kind of info that comes from computer climate models?
Hansen: I think you would have to treat it with a great deal of skepticism. Because if computer models were in fact the principal basis for our concern, then you have to admit that there are still substantial uncertainties as to whether we have all the physics in there, and how accurate we have it. But, in fact, that’s not the principal basis for our concern. It’s the Earth’s history-how the Earth responded in the past to changes in boundary conditions, such as atmospheric composition. Climate models are helpful in interpreting that data, but they’re not the primary source of our understanding.
TH: Do you think that gets misinterpreted in the media?
Hansen: Oh, yeah, that’s intentional. The contrarians, the deniers who prefer to continue business as usual, easily recognize that the computer models are our weak point. So they jump all over them and they try to make the people, the public, believe that that’s the source of our knowledge. But, in fact, it’s supplementary. It’s not the basic source of knowledge. We know, for example, from looking at the Earth’s history, that the last time the planet was two degrees Celsius warmer, sea level was 25 meters higher.
And we have a lot of different examples in the Earth’s history of how climate has changed as the atmospheric composition has changed. So it’s misleading to claim that the climate models are the primary basis of understanding.
136. mwgrant,
As BBD points out, many do not regard climate models as nearly as crucial as you seem to think they are. We have a very good idea of the global effect of increasing emissions from paleo work, and from basic physics. The decisions as to whether we should be reducing our emissions, or not, could be made without reference to GCMs. Where GCMs might actually be more important is in determining what sort of adaptation strategies we should be considering. Determining the regional effects of increasing our emissions isn’t all that easy without GCMs. Admittedly, this is an area where there is less confidence in the output from GCMs (we’re more confident about the overall warming and changes to the hydrological cycle – see Hargreaves & Annan – than we are about regional effects). However, this is all we have at the moment. They’ll get better, but that doesn’t mean that one shouldn’t use their output now to inform policy.
137. anoilman says:
mwgrant: I have worked in a variety of software development roles in a variety of industries with varying degrees of quality and processes.
Commercial software is utterly different from climate software. Utterly.
You are neglecting the sheer amount of review and reuse that goes on with the actual climate model code. To be very very clear, it’s cooperatively shared. Different eyes look at it all the time to make sure it works right, and then share their findings.
That never happens with commercial code. Do you have any idea how much a code review costs? And you think a company would do 10 more reviews with the exact same guys just to make sure? No. Companies aren’t that stupid. Installing a money furnace would be cheaper, I think.
I’ve competitively shared code with my employer’s arch enemy. Have you?
Cell phones have large complex protocol stacks. It takes hundreds of programmers to write this stuff, and its also pretty scary to also try and make the phones. So most companies buy this source code from a common source for less than it would cost to write, then concentrate making the phones.
In the early days, we found ourselves in the curious position of being ahead of our competition. We were flying all over the world testing our phones, and kinda resented spending all that money and sharing the results. We were supposed to share our findings with the protocol stack developer, but we also knew who’d get the update… So we didn’t share everything that we had found. 6 months later, our dear competitor still had the bugs we already found earlier and consequently they were unable to certify their phones.
Too bad they didn’t have cooperative software sharing.
I’m not cognizant of the details for the requirements for climate models. That is unrelated to whether they work and should not be discussed/mentioned in the same context as software quality.
138. dhogaza says:
mwgrant:
“If I were to guess I would guess that the climate codes are much larger than the codes I refer to above and defect count might be a more significant metric for them.”
Given that the source to NASA GISS Model E is online, you could check for yourself, for that particular model.
“However, they are still (aggregate) physical models and so QA also should document and qualify the conceptual models that shape components of the larger system model. Defect counts simply do not do that.”
Since NASA GISS’s Model E project has a nice web site attached to it, including a list of papers in the academic press which describe specifications and results, and other references to academic papers whose results are incorporated in the model, and links to other documentation, it seems you could spare us at least some of the questions you’re posting if you just did a little research.
I haven’t bothered looking for similar information for other models recently, though in the past I’ve found a bunch of documentation on the Hadley Centre’s GCM.
139. mwgrant says:
BBD
“For me–and the basis for ‘yes’–is that too much emphasis has been put on model results for the policy decision process.”
This is both false and classic denialist rhetoric, aka concern trolling.
—————————————–
It is simply a statement of my opinion based on my experience working with environmental models. In your response you choose to ignore the sentence that follows:
“That is they are about as useful as a hockey stick, i.e., they paint a picture through a blurry lens and due to time constraints that is what we have to go on.”
and in particular the last half of that sentence: due to time constraints that is what we have to go on.
‘false and classic denialist rhetoric’? Now who is really the troll here?
aTTP
So you see, my opinion is the opposite of your reading, “many do not regard climate models as nearly as crucial as you seem to think they are.” My statement is that the value of models in the present context is over-stated. We are under time constraint in a decision process and have to objectively move that process forward given what imperfect inputs we have in hand. That is the essence of decisions.
As for my concern with model QA, my last lengthy response to anoilman points out my perspective on model and code QA in the environmental regulatory arena–a different background than his. In both cases QA is a big deal. However, how QA is approached is different. I found this interesting.
A final note to BBD — you are quick to use handles for people you neither know nor understand. Hackneyed political terms usually are not productive.
140. TE, Nothing that you (or Tol) have said supports the claim that “GCMs have a remote relationship with physics”.
Statistics is the key word.
There’s an apt line:
“Models are like sausages: people like them a lot more before they know what goes into them.”
But you kind of know that the GCMs are not good and pure, right?
Because why else would the GCM results be so different?
They all use the same physics, right? ( If they don’t there would be a big to-do about which physics are correct, but the physics are mostly “settled”, right? )
But statistics, and simply the instability of numerical methods, give us the variance that we observe between models which are all attempting to represent the same physics:
141. TE,
Statistics is the key word.
That does not illustrate the GCMs have a remote relationship with physics. There’s a reason they call it sub-grid physics.
142. BTW, I tend to agree with the main theme of your post, that it’s more difficult with… physical models to fudge the results. Or at least, I don’t think fudging is what’s happening.
However, humans do gravitate toward conclusions, and more than fudging, there is probably a tendency to pick the result that best fits our ideas (selection bias).
143. BBD says:
mwgrant
A final note to BBD — you are quick to use handles for people you neither know nor understand. Hackneyed political terms usually are not productive.
Read the Hansen quote and compare with what you are doing here.
144. mwgrant says:
dhogaza wrote:
“Given that the source to NASA GISS Model E is online, you could check for yourself, for that particular model.”
Why bother? The comment is clearly indicated as a guess—and all the standing that entails or doesn’t. I am content. It is based on my experience with other environmental systems codes and the comparative level of complexity. I guessed and told you that I did. What’s your beef?
dhogaza wrote:
“Since NASA GISS’s Model E project has a nice web site attached to it, including a list of papers in the academic press which describe specifications and results, and other references to academic papers whose results are incorporated in the model, and links to other documentation, it seems you could spare us at least some of the questions your posting if you just did a little research.”
I have been there and my comments reflect what I found. Again, compared with what is done for other custom and commercial environmental codes used in a regulatory and litigative environment, posted journal articles are generally inadequate documentation—good summaries, yes, but documentation, no. [Journals have constraints on article size and besides, QA of codes is not their primary function.] In particular, validation and verification (V&V), or some functional equivalent, is a sufficient concern to merit its own documentation.
145. mwgrant says:
BBD wrote:
mwgrant
A final note to BBD — you are quick to use handles for people you neither know nor understand. Hackneyed political terms usually are not productive.
Read the Hansen quote and compare with what you are doing here.
1.) What does Hansen’s quote have to do with the charged language you so often choose to use?
2.) Yes, I read the quote. You are on your own tangent. Note however that Hansen’s “I think you would have to treat it [model output] with a great deal of skepticism, “ is what I always do. This is consistent with my earlier statement:
“For me–and the basis for ‘yes’–is that too much emphasis has been put on model results for the policy decision process. That is they are about as useful as a hockey stick, i.e., they paint a picture through a blurry lens and due to time constraints that is what we have to go on
Perhaps ‘imperfect’ would have been better than ‘blurry’.
146. mwgrant says:
—anoilman
1.) “Commercial software is utterly different from climate software. Utterly.”
I have worked* with commercial, national-lab and house-custom code on projects with the USDOE, USNRC, USEPA, and USACE. This includes writing code from scratch, extending others’ codes, using commercial codes and using government codes. Utterly, indeed.
—-
*To avoid confusion: both as on-site worker and as off-site consultant working for beltway bandits.
2.) “You are neglecting the sheer amount of review and reuse that goes on with the actual climate model code. To be very very clear, its cooperatively shared. Different eyes look at it all the time to make sure it works right, and then share their findings.”
No. I am familiar with those processes in other disciplines. I just do not share the same faith in them that you do — not when the stakes are much higher than those I have seen in smaller environmental projects where a much more demanding/detailed QA effort is reasonably expected.
“I’ve competitively shared code with my employer’s arch enemy. Have you?”
I’ve shared my code with the opposition’s hired guns in a highly contentious government-public environment. I think that qualifies, but like your comment it is irrelevant here.
“I’m not cognizant of the details for the requirements for climate models. That is unrelated to whether they work and should not be discussed/mentioned in the same context as software quality.”
Here ”whether they work” is an ambiguous term. It is tempting to dismiss the comment out-of-hand, but I will respond:
If, when working on a model for a physical system, you put in the wrong model for a particular process, i.e., you implement a poor conceptual model or perhaps an inaccurate approximation for that process, then the code can execute till the cows come home and quality will still be an issue. So I will assume that by ‘work’ you mean something more than ‘execute’, and that something includes selecting the correct conceptual model or correct level of approximation for the process. But that is very much a part of the requirements or specifications for the model, i.e., it is still very much a matter of quality assurance.
and back to
”Cell phones have large complex protocol stacks. It takes hundreds of programmers to write this stuff, and its also pretty scary to also try and make the phones. So most companies buy this source code from a common source for less than it would cost to write, then concentrate making the phones.
I think that to an extent there is a similar pattern in the government agencies. Our projects handled the QA of in-house and out-of-house code differently. Separate procedures were in place within the quality assurance scheme.
————
It is just opinion but I think that climate folks should really see how the geohydrologists have approached model QA in a contentious regulatory environment. It may be enlightening and sobering at the same time.
147. BBD says:
mwgrant
too much emphasis has been put on model results for the policy decision process.
As I said – correctly – this is both false and classic denialist rhetoric (concern tr0lling). If you don’t like being challenged then avoid making false statements and parroting denialist rhetoric – especially if you aren’t of that camp.
It is simply a statement of my opinion based on my experience working with environmental models.
See ATTP’s response to your earlier comment above. Productive conversation arises when the party in error admits the error, which you chose not to do.
148. BBD says:
1.) What does Hansen’s quote have to do with the charged language you so often choose to use?
Tone tr0lling. See previous comment.
2.) Yes, I read the quote. You are on your own tangent.
Hansen:
Oh, yeah, that’s intentional. The contrarians, the deniers who prefer to continue business as usual, easily recognize that the computer models are our weak point. So they jump all over them and they try to make the people, the public, believe that that’s the source of our knowledge. But, in fact, it’s supplementary. It’s not the basic source of knowledge.
You (incorrectly):
too much emphasis has been put on model results for the policy decision process.
QED
* * *
It is just opinion but I think that climate folks should really see how the geohydrologists have approached model QA in a contentious regulatory environment. It may be enlightening and sobering at the same time.
And I think that the constant insinuation that there are ‘problems’ with climate science which needs to learn from other fields is generally rather risible.
149. And I think that the constant insinuation that there are ‘problems’ with climate science which needs to learn from other fields is generally rather risible.
I tend to agree, but I’ll make a further comment. Most forms of modelling in the physical sciences are simply ways of trying to gain understanding of a physical system. How does it respond to changes? What happens if…? Etc. The goal isn’t to produce some kind of result for a client. It’s true, however, that climate science is in a bit of a grey area in which climate models are used to try and answer fundamental questions, but are also used to provide information for policy. So there is a case to be made for maybe dealing with climate modelling in a way that is slightly different to how we deal with other forms of modelling in the physical sciences. However, we already do use the same models for weather prediction, which is providing a service, so it’s not clear that this isn’t already the case, to a certain extent. Additionally, we’re trying to provide projections for the coming decades, so how do we actually validate these models? As I think Annan & Hargreaves (or Hargreaves & Annan) said, we could wait for 100 years to find out, but that would probably be too late if what they suggest now is broadly correct.
150. mwgrant says:
BBD and I then wrote:
2.) Yes, I read the quote. You are on your own tangent. Note however that Hansen’s “I think you would have to treat it [model output] with a great deal of skepticism, “ is what I always do. This is consistent with my earlier statement:
“For me–and the basis for ‘yes’–is that too much emphasis has been put on model results for the policy decision process. That is they are about as useful as a hockey stick, i.e., they paint a picture through a blurry lens and due to time constraints that is what we have to go on
Perhaps ‘imperfect’ would have been better than ‘blurry’.
DUE TO TIME CONSTRAINTS THAT IS WHAT WE HAVE TO GO ON
Can you read? I am not going to return to the topic or respond to your feigned injury, BBD.
151. BBD says:
I can read, mwgrant. I see no acknowledgement that you said something objectionable.
152. mwgrant says:
aTTP raises good points and these as it happens allows me to press the point that there are possibly some useful things to be learned.
Most forms of modelling in the physical sciences are simply ways of trying to gain understanding of a physical system. How does it respond to changes? What happens if…? Etc. The goal isn’t to produce some kind of result for a client. It’s true, however, that climate science is in a bit of a grey area in which climate models are used to try and answer fundamental questions, but are also used to provide information for policy.
Yes. The same holds for many other environmental models, e.g., groundwater contaminant models. The models share other traits with the climate models: numerical implementations, under-characterized model domains, fluid flow, chemical interactions, difficulty defining the best initial and boundary conditions, and stochastic elements, to name a few.
So there is a case to be made for maybe dealing with climate modeling in a way that is slightly different to how we deal with other forms of modeling in the physical sciences. … Additionally, we’re trying to provide projections for the coming decades, so how do we actually validate these models?
There is a lot in common as I already mentioned above in the first part of this response. In addition to those consider this: a big concern with groundwater models is validation, particularly in light of the fact that impacts well into the future have to be predicted—years to thousands of years. So you see, these people have some very similar problems to take on.
“We could wait for 100 years to find out, but that would probably be too late if what they suggest now is broadly correct.”
That just emphasizes the similarities again.
aTTP, my comments are not in any way directed at throwing the models out, just at keeping them in perspective within a necessary decision process, consistent with Hansen’s ‘I think you would have to treat it with a great deal of skepticism’ comment, and at suggesting another view on QA in what is notably a contentious arena.
153. anoilman says:
BBD: mwgrant is insinuating that there are problems with climate models while providing zero evidence that this is the case. I tend to label that behavior trolling.
Given mwgrant’s preference for being extremely precise in his arguments, it’s safe to say that he, like us, understands that there are no concerns over climate models. It’s an easy conclusion since he has no evidence that there is a problem in the first place.
154. anoilman says:
mwgrant: You do know that the models go back and forth to the real world don’t you? They’ve taken weird results analyzed them further, sent them back to hunt for more real world information, which was later found. You do know that right?
155. mwgrant says:
anoilman,
mwgrant: You do know that the models go back and forth to the real world don’t you? They’ve taken weird results analyzed them further, sent them back to hunt for more real world information, which was later found. You do know that right?
You respond with a snarky, devoid-of-content comment evoking the real world with which you are so familiar, given that I’ve mentioned V&V a couple of times? You can do better. Or can you? You want to BS your creds but keep opening your mouth and getting in your own way. Let’s face it, I think jack of your creds in the climate model context and vice versa. I’ll sleep at night. I tire of your irrelevant bluster. (That makes me think you did more ‘managing’ than ‘doing’, but that is probably an availability bias on my part.)
156. Okay, this thread seems to be getting to the confrontational stage, so maybe it’s best to call it quits.
157. mwgrant says:
Good idea.
158. Eli Rabett says:
perhaps mw should go find out about how the community earth system model has been put together.
159. mwgrant says:
Let the cheap shots keep rolling, aTTP? If it is done it is done.
160. Eli Rabett says:
Not a cheap shot. CESM is an open source earth system model, and as such an interesting and important scientific software effort that is now in its fourth or fifth generation.
161. mwgrant says:
Eli, OK on second thought that seems like an interesting idea. It will take some time, but what the heck, let’s do it? Are you available for questions? Shall we in due time take it up on your blog so as not to burden aTTP? (I have one but it is not up and running.) Retired analyst with background in fate and transport modeling in the environmental arena takes a look at CESM. No agenda from either of us and no snark. And interaction as I go thru it. Hell, I live in Montgomery County I’ll even meet you for lunch. Interested? I need a hobby.
162. anoilman says:
mwgrant: Why don’t you politely provide proof that there is something wrong? That I would look at. Let’s see the mystery data. Show us.
Pony up buddy.
163. mwgrant says:
anoilman – where have I stated “that there is something wrong”? Quote it, ol’ buddy.
164. mwgrant says:
anoilman — I’ve just taken stock of how far afield from my original comment this thread has gotten–the audacity of your nonexpert assessment of the best of all software. … really a foolish statement because the burden of proof is on you. Not going to waste time on you any more.
165. mwgrant says:
Eli Rabett wrote:
perhaps mw should go find out about how the community earth system model has been put together.
CESM 1.2 User Manual writes:
CESM Validation
Although CESM can be run out-of-the-box for a variety of resolutions, component combinations, and machines, MOST combinations of component sets, resolutions, and machines have not undergone rigorous scientific climate validation. … Users should carry out their own validations on any platform prior to doing scientific runs or scientific analysis and documentation.
http://www.cesm.ucar.edu/models/cesm1.2/cesm/doc/usersguide/x32.html#software_system_prerequisites
Well, that was short and sweet. Looking at CESM will not be that useful from a QA perspective. That is not why it has been made available anyway. (Eli doesn’t seem tuned to this thread. Guess that was the case.) The Model E documentation is better. Really not a surprise.
166. mt says:
Wow, a lot of interesting ideas on this thread. I hope ATTP doesn’t shut it down despite an expressed inclination to do so.
Let’s start with the nuts and bolts issue.
On the quality of GCMs I am interested in engaging with mwgrant and frankly less so with anoilman who appears to me to be talking out of his hat. Although I am often cast as a zealot by some in the naysayer camp, I will not appear as a defender of the state of the art in climate modeling. I think some wrong turns have been taken in the last 15 years, or at the very least, that some promising avenues have been unreasonably ignored.
I am interested in the idea of undertaking a serious code review of a GCM. If you choose CCSM, I am already not an enthusiast. I’d be more interested in GISS or the Hadley model, myself.
The bit you quote about validation is interesting, though I’m not sure what conclusions you take from it. I find it alarming that there is a “validation” phase in porting CCSM to any given platform. The idea that anybody outside NCAR knows how to do this appears to be a polite fiction. Their budget having been cut, their support for outside users is remarkably thin and getting the thing to even run at scale, never mind validate it, on a non-NCAR platform can be immensely frustrating as I can personally attest.
That said, I remain reasonably confident that NCAR internally do have a verification/validation protocol, and that the results of the model on validated platforms are tested to reasonable standards, in the sense that would satisfy any software engineer or follower of Feynman’s dictum “the easiest person to fool is yourself”.
That said in their defense, there is little that can be done about this if we accept the underlying thrust of ever-increasing complexity of GCMs. Supercomputing is a sort of perpetual 1965, where every single machine has its peculiarities; code that runs under putatively the “same” operating system and the “same” compiler on the “same” message passing infrastructure will fail on a new machine simply because the ways in which processors are allocated to a job vary from one machine to another. Most programmers have long since forgotten the overhead of JCL (“job control language”); supercomputer end users (never mind programmers) spend a lot of time mucking with shell scripts that essentially perform the functions of JCL at a much higher level of complexity and with much weaker documentation.
Finally, these models are not for practical purposes Open Source, even if they do literally satisfy the criteria, of which I’m uncertain. http://opensource.org/definition Publication of the source is not sufficient. In practice, though, Open Source implies a welcoming of input and exchange with any interested and motivated party. Under budgetary stresses, NCAR in particular doesn’t act remotely in an open-sourcey sort of way. And in practice, every run on non-NCAR platforms constitutes a fork. University researcher modifications are to my knowledge hardly ever vetted and folded back into the source tree. Mostly they are done by immensely patient grad students who are adept in some aspect of science but have little or no software engineering training.
Again I would point to this article which I wish would get MUCH more traction in the climate software community: http://arxiv.org/pdf/1407.2905v1.pdf
To critics of climate science I would add first of all that none of this is unique to climatology – many other sciences are struggling along in this way, in a way that seems antediluvian to someone who has some sense of modern commercial software development.
Secondly, I would add that uncertainty is not your friend. To state that the models are lousy and therefore we should act as if the sensitivity is zero is about as unreasonable an argument as we see in these unreasonable times. It’s like saying you haven’t looked at your bank statements and so therefore you can write as many checks as you want.
Also, I would like to be intellectually honest and point to a key weakness in my complaints, which is that, somewhat to my surprise, the different models DO seem to be converging on some specific regional predictions, e.g., the drying of the American southwest. So all the effort that has been put in for fifteen years has NOT been entirely fruitless. That said, I think it’s long past time for a simple, readable AGCM implemented in a modern programming language (ideally, a language whose name is homonymous with a large snake) over a couple of well-tested libraries for the fluid dynamics and radiation code. I’ve been arguing for this for a decade, and I think it’s more true than ever.
I’m no longer waiting for my phone to ring nor submitting vain proposals to funding agencies. But maybe it’s time for an unfunded effort.
167. mwgrant says:
mt
Paragraph by paragraph I find your comment on target…well, Python needs to be thought about a bit (speed? and IMO there is nothing wrong with modern Fortran).
I hope to respond later in more detail. However for now I’ll make some quick comments:
1.) I too have an expectation that internal documentation in some form/condition can be found at UCAR. I think that because the point of the site is to make the code available to a wider community of users, there was a pragmatic decision made on the amount of material made available–do not overwhelm the user. Most people probably want to open the box and run the QA. UCAR is quite clear and correct in its note: the user is responsible for installs on his/her machine(s). This is both a reasonable and practical burden to put on the user and would be a part of their project’s overarching QA plan.
2.) I agree considering GISS and Hadley is much more meaningful.
3.) How to proceed is an important topic. That is a whole big topic by itself. Making it a productive effort in the current environment could prove tricky.
4.) While a modernized, updated code (open source paradigm?) could be an attractive tool and contributor, one should not lose track of the important goal, which is to assure adequate confidence in the QA of the existing codes. But we definitely need a gnuClimate :O)
168. mt says:
I believe that the idea that there is much need for better climate modeling to inform the mitigation process is wrongheaded.
It would be good if we could inform the adaptation process; so far progress has been limited, and I think it will remain so until a new software approach is taken. And if anybody asked me, I honestly think I can connect the right communities to make a good stab at it.
But on the mitigation side I will continue to maintain that this is a red herring. We have more than enough information to know that our recent and foreseeable emissions trajectory is insane.
169. anoilman says:
mwgrant: So nothing wrong. Got it! Nothing to worry about!
It has been a textbook experience as always. Thanks;
http://www.easterbrook.ca/steve/2010/11/do-climate-models-need-independent-verification-and-validation/
170. mwgrant says:
anOilman:
So you can’t find a quote. Got it. It’s been a coloring book experience. Don’t worry, you will get better at staying in the lines in time.
171. I’m having a pleasant evening with the family (even though South Africa lost to Argentina for the first time in a rugby test) so don’t have time to write a lengthy comment. MT has made some very interesting points. Maybe we can aim to use those as a motivation for being constructive, rather than confrontational.
172. mwgrant says:
I will not continue here on this topic. Too much chaff. aTTP, you should’ve snipped the troll cr*p–plain and simple.
173. > The flaw in the comment is the reference to ALL other software.
I thought this was common knowledge that everything was broken:
Once upon a time, a friend of mine accidentally took over thousands of computers. He had found a vulnerability in a piece of software and started playing with it. In the process, he figured out how to get total administration access over a network. He put it in a script, and ran it to see what would happen, then went to bed for about four hours. Next morning on the way to work he checked on it, and discovered he was now lord and master of about 50,000 computers. After nearly vomiting in fear he killed the whole thing and deleted all the files associated with it. In the end he said he threw the hard drive into a bonfire. I can’t tell you who he is because he doesn’t want to go to Federal prison, which is what could have happened if he’d told anyone that could do anything about the bug he’d found. Did that bug get fixed? Probably eventually, but not by my friend. This story isn’t extraordinary at all. Spend much time in the hacker and security scene, you’ll hear stories like this and worse.
It’s hard to explain to regular people how much technology barely works, how much the infrastructure of our lives is held together by the IT equivalent of baling wire.
Computers, and computing, are broken.
Even formal specification can’t provide much safety against exploits, unless it goes from programs to language all the way up to hardware.
***
> Quote it ol’ buddy.
There’s this:
Again the important question is are they good enough for the intended application? Frankly in my opinion the answer is ‘yes’. The big problem lies with what is the intended use? For me–and the basis for ‘yes’–is that too much emphasis has been put on model results for the policy decision process.
The emphasized bit presumes that models have been used for the policy decision process, a claim which, as MT suspects, might very well be a red herring.
In other words, dear mwgrant, unless you can clarify where you’re going with this “important question,” your huffing and puffing against Oily One is turning into playing the ref, and playing the ref is against the site policy.
Thank you for your concerns,
W
174. mwgrant,
Although I do have some sympathy for those who get flack on climate blogs, I would argue that if you think there’s too much chaff here, there probably isn’t a climate blog that would be suitable.
MT,
What you said here seems similar to what I was trying to get at here. I agree that using inadequacies in climate models as an argument against mitigation is a red herring.
175. mwgrant says:
aTTP
“Although I do have some sympathy for those who get flack on climate blogs, I would argue that if you think there’s too much chaff here,”
Different topic, so …. Ha! I reached that conclusion quite some time ago. That’s why one has to break it off somewhere.
176. Tom Curtis says:
Anders, South Africa lost to Argentina AND New Zealand lost to Australia. The world has turned upside down ….
177. Steven Mosher says:
” Most programmers have long since forgotten the overhead of JCL (“job control language”);”
we had one guy who wrote JCL. nobody ever pissed him off.
178. mwgrant says:
we had one guy who wrote JCL. nobody ever pissed him off.
One? Mosh, you are indeed a youngster aren’t you?
179. mwgrant says:
Willard wrote:
“I thought this was common knowledge that everything was broken” etc., etc.
Everything being broken is never a point of contention. The point of contention is “…Climate models are the best and most thoroughly reviewed software out there. There is nothing like it”—in particular its broad and deep reach. The basis for this assertion has been given as Easterbrook (TED), which cites Pipitone and Easterbrook 2012. The discussion is only a small part of the twenty-one minute TED presentation, but it clearly presents the quality claim in terms of defect density.
As Easterbrook notes, this defect density is low. Fine, I had not disputed this anyway. I have simply questioned the superlative comparison in anoilman’s evaluation. Initially this was based on common sense—there are just too many codes out there to have been considered.
My intuition seems to have been corroborated in the article. At the beginning of the section on Future Work, the Pipitone and Easterbrook reference cited on the debug slide in the latter’s TED talk states [my bold]:
Many of the limitations to the present study could be overcome with more detailed and controlled replications. Mostly significantly, a larger sample size both of climate models and comparator projects would lend to the credibility of our defect density and fault analysis results.
A little later in the same section the authors quote ‘Hatton (1995)’:
“There is no shortage of things to measure, but there is a dire shortage of case histories which provide useful correlations. What is reasonably well established, however, is that there is no single metric which is continuously and monotonically related to various useful measures of software quality…”
Enough said. BTW the article is interesting and informative on the question of ascertaining climate software quality. It is a good initial effort, but people here have perhaps forgotten the limitations mentioned by the authors.
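For readers unfamiliar with the metric being debated, defect density is simply reported defects normalized by code size (usually per thousand source lines, KSLOC). A minimal sketch, with entirely hypothetical project names and counts (not data from Pipitone and Easterbrook), shows how the number is computed and why a handful of projects on a single metric is a thin basis for a superlative comparison:

# Hypothetical illustration of the defect-density metric (defects per KSLOC).
# None of these names or counts come from the paper under discussion.
projects = {
    "climate_model_A": {"defects": 24,  "sloc": 400_000},
    "open_source_B":   {"defects": 180, "sloc": 250_000},
    "commercial_C":    {"defects": 90,  "sloc": 120_000},
}

for name, p in projects.items():
    density = p["defects"] / (p["sloc"] / 1000.0)  # defects per thousand lines
    print(f"{name}: {density:.2f} defects/KSLOC")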
***
In response to my statement
anoilman – where have I stated ” that there is something wrong.” Quote it ol’ buddy.
W offers:
There’s this:
”Again the important question is are they good enough for the intended application? Frankly in my opinion the answer is ‘yes’. The big problem lies with what is the intended use? For me–and the basis for ‘yes’–is that too much emphasis has been put on model results for the policy decision process.”
W. misread the quoted block. There is nothing wrong here. There is only a problem to be solved—specifying the intended use. Existence of a problem does not necessarily mean that there is something wrong.
I read mt’s comments, ‘red herring’ and the rest, differently than you. The two broad alternatives beyond a base-case no-action are mitigation and adaptation. I saw mt’s comments as further observations on the current state of things and on how modeling might inform each alternative. I agree with what he wrote and indicated this to him—outside of this blog.
Playing the ref? If the ref intentionally or unintentionally gets in the game, then the ref is game. The timing of his caution flags suggests to me that the ref entered the game. So he got bumped a little. Ref may think otherwise, but there probably isn’t a climate blog where that doesn’t happen.
Huffing and puffing? — nah, it is exasperation.
For the record Oily One is a W reference not used by mwg.
180. mt says:
The idea that mitigation and adaptation should be considered as alternatives is fundamentally wrong in my opinion. Such a dichotomy certainly should not be read into my comments.
There is no meaningful bound to adaptation cost in the absence of mitigation. This should be obvious to any one who is paying reasonably close attention.
Climate models are only one among several streams of evidence pointing to a high enough sensitivity that a large proportion of our current fossil fuel reserves cannot be used (at least in the absence of an enormous sequestration effort) and that further discoveries are counterproductive. We simply cannot adapt to the amount of CO2 that individuals could profitably emit in the absence of regulation. I don’t think this is plausibly in doubt.
My point was that if there is a policy use to continuing efforts in climate modeling, it is to inform such adaptation as will inevitably become necessary, even in the most rigorous mitigation scenario.
Mr. Grant’s comments make it very clear how very badly we have been doing in communicating this fact to the public and the policy sector. He’s obviously not stupid and does have some relevant skills. He apparently just doesn’t get it.
Without significant mitigation, adaptation will fail. With vigorous mitigation, expensive adaptation will be necessary. They are not alternatives.
181. Tom Curtis says:
mt, I think it is fair to say that without models, we cannot place either an upper or a lower bound on the cost of adaption. That is, without climate models we cannot be sure that the costs of climate change will require significant adaption to maximize standards of living. However, that is irrelevant because without climate models we also cannot be sure that climate change impacts will exceed the bounds of all reasonable adaption measures such that standards of living will be reduced to levels not seen globally in 100, or even a thousand years; and without models we can be sure the low cost possibilities are also low probability possibilities. Not mitigating (ignoring the “evidence” of models) is like playing Russian roulette with five rounds loaded when we are uncertain whether it is a five or a six chamber cylinder. We might come out ahead, but no sane person bets on it. With the “evidence” of models, we at least know the number of chambers.
182. mwgrant says:
mt
“The idea that mitigation and adaptation should be considered as alternatives is fundamentally wrong in my opinion. Such a dichotomy certainly should not be read into my comments. ”
Thanks for that clarification on that detail. It was my mis-read. Your point of view is duly noted. Please understand that my perspective is to view a set of plausibly executable alternatives and the manifold of possible scenario-dependent outcomes. You should note that there is nothing that precludes your variant of adaptation being an alternative in the risk approach. It is simple: develop a manageable number of alternatives, assess the risk (good and bad), and crank it out. (Yes, it is complicated and bumpy.)
A couple points (I’m trying to be brief)
1.) I expect that if such a structured, comprehensive process is undertaken objectively it has the capability to order the alternatives relatively or semi-quantitatively. If either entrenched side in this ‘debate’ has faith in their position they should also have no fear of laying it on the line in a fair process. So I have no ‘sympathy’ for mt, who has in some manner worked it out in his head or elsewhere. However, the same applies to all others that have gone thru similar evaluations. Until given good reason I would not deviate from that approach.
2.) The estimation of risk obviously involves multiple points of contention, maybe to the point that some may be deemed incalculable. Even if that is the case, it is important that it be taken up by all participating parties in a transparent manner. Even some sharper delineation of areas of disagreement may help.
3.) mt, if you have some concrete suggestions on what you think I could read to facilitate my better understanding of your ‘we ain’t got no options’ perspective, point them out and I’ll be happy to spend some time with it—or am I safe for now just going with your third paragraph as a readers’ digest version?
“Mr. Grant’s comments make it very clear how very badly we have been doing in communicating this fact to the public and the policy sector. He’s obviously not stupid and does have some relevant skills. He apparently just doesn’t get it.”
This is a good comment. Whether I do not get it, or the efforts at communication have been bad, or both, we are all better off if this is addressed. (Perhaps the nature of blogs, as suggested by aTTP above, means that they are not very helpful in the process and we need to modify them or move to other venues.)
I appreciate your well written response. Thanks.
183. Willard says:
> Existence of a problem does not necessarily mean that there is something wrong.
Something tells me this semantic argument won’t lead you very far, mwg.
It’s a problem, but there’s nothing wrong with what you call a “problem.” Is there really a problem because you use the word “problem” anyway? You were just asking questions, after all.
This semantic argument seems to lead to the Just Asking Question defense. Here’s where the Just Asking Question leads:
If you agree about what MT said, then you might need to revise your false dichotomy about mitigation or adaptation.
It would be easy to disprove Oily One’s hyperbole: V&V code, functional code, formally specified code, etc. It’s easy to understand it too: the world is full of crappy code.
Do you know the concept of good enough parenting, BTW?
***
> If the ref intentionally or unintentionally gets in the game, then the ref is game.
If this were true, whining about moderation would always be justified. Any kind of ad hom would be justified too.
The ref ain’t game since it’s AT’s blog and AT’s rules.
184. mwgrant says:
Willard
Thanks for the quick response. Pretty much as I expected. I wonder if you have ever made a real contribution to a blog? Oh, well not my problem. Goodnight. Catch you another time.
185. Willard says:
Thank you for your concerns, mwg.
186. anoilman says:
mwgrant Actually Willard has made a real contribution to a blog. Although I kinda expect you to argue that in absence of an actual definition for all words being conveyed…
If you must know he’ll have a go at either side in an argument if they are speaking nonsense. Its all about the semantics in that case. You should read what he’s written on the subject.
Usually he steps in sooner when there is as many shirt ripping appeals to authority as both you and I did. He’ll happily point out that it is a nonsense argument.
Your comments bear a striking resemblance to page one of the Contrarian Matrix;
https://contrarianmatrix.wordpress.com/no-best-practices/
187. mwgrant,
It’s worth trying to work out what Willard is trying to get at. Normally you benefit if you do.
Perhaps the nature of blogs as suggested by aTTP above means that they are not very helpful in the process and we need to modify them or move to other venues.
Blogs are broadly useless, IMO. What I think people should do is do their best to be as informed as they can be. Normally that would involve interacting with real experts.
188. mt says:
I disagree on any generalization of the utility of “blogs”; blogs are a form of conversation. The utility of a conversation is determined by its content, not its medium.
189. mwgrant says:
anoilman,
My view in regard to the contrarian matrix. (Thanks for your observation and the link. My distaste for classification and labels in regard to people is pretty strong, but the temptation of self-assessment was stronger.)
“There is no such thing as a global average temperature.” — It is a metric. That entails uses and limitations. I think that the former outweigh the latter.
“Global temperature is, statistically speaking, a random walk.” — No. It is a stochastic ‘variable’.
“That most of the W since 1950 is from A means little.” — Disagree. Thought never even occurred to me.
“The GW has been less than predicted or overestimated by the models.” — Sure, if one looks at the graphs, but the importance of that has also been overestimated in the context of decision-making. Regardless of what the eventual outcome will be, time is paramount in decision-making.
“Model projections are unfalsifiable.” — That concern is an irrelevant waste of time. Popper was a philosopher, not a god.
“Paucity of data prevails and its climate signal is almost indistinguishable from noise.” — Disagree, although I think that the statement is too open to interpretation.
“We don’t know what adjustments were made to these records.” — No we do not. But it is what we have.
“We need …” — It would be nice, but we must work with what we have, improving what we can. Again time is a constraint. To me the most important item here is probably the V&V. Independent? Yes that would be nice, but more important is transparency in a highly contentious environment. Find a way to get all of the internal documentation out in to the public. That is a relatively easy first step and may go a long way.
General comment: I view global warming as a risk problem in the context of policy. Jumping to mitigation or entrenching in ’n-action’ are too restrictive. A reasonable set of alternatives and outcomes need to be characterized to inform the decision. Time is paramount not because of an urgency about gloom and doom, but because many of the aspects of the problem are conditioned by time and depend on time-ordering. Bottomline anoilman, here less than perfect decision making trumps less than perfect science.
BTW, I think that such a matrix is limited in utility without counterparts for camps with opposing view—all in neutral language.
It was a good workout. Do take care of yourself.
190. mwgrant says:
mt, my view is that the medium influences the content, e.g., anonymity, no visual feedback, etc. can impact expression which is bound into content. Even on the other end expression influences perception. We can not escape our wiring.
191. Willard says:
> My view in regard to the contrarian matrix.
Don’t miss the other steps:
I don’t view “exaggeration encourages suspicion” and “gloom and doom” as neutral. They both belong to the “Do not panic” level, i.e. the CAGW meme. One does not simply conflate calmness of tone and neutrality to whine about flaming in Mordor.
There are ways to understate one’s claim that can be heard loud and clear by the Internet dogs, e.g:
I think that such a matrix is limited in utility without counterparts for camps with opposing view—all in neutral language.
In that sentence, the word “limited” is quite splendid, and the concept of “opposing view” creates another dichotomy that would deserve due diligence.
The idea that we can not escape our wiring may be related to the one according to which we cannot escape our writing.
192. anoilman says:
mwgrant: The total lack of actual experts who produce and consume the data would strike me as a critical concern for a global warming blog. Especially in this case, the notion of producing and consuming different data has been brought up by people who don’t do it. Hence… I think Anders is right.
Willard’s site is simply the usual drivel we see all the time. “Stop the press! Something might be wrong!”, is a textbook meme as far as everyone here is concerned.
You could have just said what you said in that last post up front. You were all over the place. “To me the most important item here is probably the V&V. Independent? Yes that would be nice, but more important is transparency in a highly contentious environment. Find a way to get all of the internal documentation out in to the public. That is a relatively easy first step and may go a long way. ”
I’m not sure I agree with your ideas of public transparency. That’s not a no… but I just don’t see what you’re going to get. This isn’t stuff you can just pick up and read. You’d probably need a graduate level course in climate science before you could begin. Various aspects of it will likely require a PhD in a narrow subject. Joe public can’t understand the difference between passing the pointer to an automatic, and a crappy comment. (I was once asked why we can’t just use GPS at the bit for drilling.)
On the other hand actual qualified and competitive groups are reading and verifying the code/results, so I don’t believe they are biased. Quite the opposite, we have groups on different continents, different organizations all competing cooperatively. The final stage of review is peer reviewed papers on the subject. (It has been a very long time since I saw a paper on the competition results.)
If you think you are really on to something, why don’t you start up a public, and open group to start reviewing and testing the code that we all know you have access to? If you do well, and gain traction, then maybe you would have earned the street cred with the other organizations to look at their code.
193. anoilman says:
mwgrant: I wouldn’t advise going after Willard… he’ll win.
194. mwgrant says:
anoilman…I’ll get back…I’ve run out food. Hope moderation will edit this stub out later. Ditto Willard. Thanks
195. mt says:
OK, I exaggerate. I’ve been a fan of Marshall McLuhan since the bronze age. The medium is of course the message. But I’d say that’s true only in part. Also, in compensation for some of the drawbacks, though, blog conversation is scalable and referenceable (presuming the URL and web service stays intact anyway).
It depends very much on who is running it and who the audience is and what the moderation policy is. But I think there are some real advantages along with the disadvantages.
The key problem is identifying who knows what they are talking about and who is posing or hopelessly overconfident. That may vary by subject matter as well as person.
Addressing that problem is very important, arguably crucial.
196. Willard says:
My own prognosis is that peer-reviewed lichurchur as we know it will collapse and blogs as we don’t know them yet will win.
For instance:
http://putnamphil.blogspot.com/
197. MT,
I think you make some very good points about the value of blogs. I, for example, don’t see this as some site where I get to broadcast my brilliance to the world, it’s simply a site where I can express my views – some of which are more informed than others – and learn from those who comment (and regularly do). As frustrating, and difficult, as moderation can be, I wouldn’t be able to run a blog without comments. It would just seem pointless and the lack of any feedback would seem bizarre.
You’re right that identifying who knows what they’re talking about and who doesn’t is crucial, but maybe not all that hard. What I find most disappointing are those who clearly do have valuable contributions to make, but choose to do so in a way that just makes it not really worth interacting with them.
198. Blogs are broadly useless, IMO. What I think people should do is do their best to be as informed as they can be. Normally that would involve interacting with real experts.
On your blog, I’ve read posts from: Lacis, Dessler, Way, Pielke, and probably some others I’ve missed, who all have published climate papers. So on your blog, there has been interaction with ‘real experts’.
Similarly, Curry is an expert who runs a blog and Lacis, Pielke, and others have posted there.
And of course, Isaac Held’s blog is full of gems and he goes out of his way to respond to posts.
Now, verifiable data and ideas are the important part, not expert authority (Pascal overturned the ‘method of authority’ by taking the barometer up the mountain, though it’s ironic to cite Pascal). But if you want experts, you seem to have them.
199. TE,
Sure, I was being a little too harsh. As MT indicates, there is value. It’s sometimes hard to find, though 🙂
200. TE, Sure, I was being a little too harsh. As MT indicates, there is value. It’s sometimes hard to find, though
We’ll try to do better with the signal to noise.
201. Willard says:
> though it’s ironic to cite Pascal
I blame the Universal Humanistic Individualism effect.
202. BBD says:
mwgrant
General comment: I view global warming as a risk problem in the context of policy. Jumping to mitigation or entrenching in ’n-action’ are too restrictive .
I read this to say: do not commence mitigation at this point. Which I understand to mean: do not commence decarbonisation of the energy supply at this point.
Is this a correct reading?
203. mwgrant says:
BBD. No. The other reasons to look at how we use fossil fuels.
204. mwgrant says:
There are other reasons to look at how we use fossil fuels. Sorry—edit interrupted by a call.
205. mwgrant says:
anoilman. The fact is I have no interest in going after Willard or anyone else. I am sure he will be relieved.
206. BBD says:
mwgrant
Thanks
So what does ‘jumping to mitigation’ mean, exactly? I really don’t follow what you are trying to say here.
207. Joshua says:
M-dub –
==> “A reasonable set of alternatives and outcomes need to be characterized to inform the decision. ”
How do you envision that taking place?
208. mt says:
As for “jumping” to mitigation, we have been standing nervously at the end of the diving board holding up the line for, what, twenty years now?
209. As for “jumping” to mitigation, we have been standing nervously at the end of the diving board holding up the line for, what, twenty years now?
Technically a diving board that’s been getting higher.
210. WebHubTelescope says:
…and Then There’s Physics says:
Blogs are broadly useless, IMO. What I think people should do is do their best to be as informed as they can be. Normally that would involve interacting with real experts.
I would agree. What readers curious about science should gravitate toward is reading a site that contains independent research such as at http://ContextEarth.com or forums that feature collaborative work. Azimuth, moyhu are examples.
211. WHT,
It’s not a competition.
212. BBD says:
It’s a contribution.
And we should all be grateful for the gift.
🙂
213. mt says:
WHT points to ContextEarth, a new entry in the climate blogs, which is pushing this theory.
The paper is downright peculiar.
The fact that it owes its existence to blogworld would put my priors squarely in the category of Stadium Handwave in my estimation. (I’ve talked climate in person with John Baez. I like the guy, but he doesn’t really understand climate. Physicists tend to expect more simplicity than the world affords.)
In the end, though, it is a very different sort of beast than the Stadium thing. Its trouble is not that it applies tests that are far too broad to inform us like the Stadium paper does. Quite to the contrary – it performs tests that it passes far better than would reasonably be expected. It looks more than a bit too good. And it is frank about it:
The ground of ENSO modeling with simple systems is well-trodden. This is very different than what the literature says.
Does the author use the claimed nearly invincible method to predict the future, or just the past? A couple of seasons of on the money predictions would certainly cause people to sit up and take notice. (Of course, there’s the “difficulty of predicting significant regime changes that can violate the stationarity requirement of the model” dodge to fall back upon.)
What’s more the physics doesn’t sit right with me.
You’re a SciPy guy if you’re the real WHT. I presume you can point us at Pukite’s code?
214. mt says:
Oh, sorry, the missing quote:
“Apart from the difficulty of predicting significant regime changes that can violate the stationarity requirement of the model, both hindcasting for evaluating paleoclimate ENSO data [14] and forecasting may have potential applications for this model. The model is simple enough in its formulation that others can readily improve in its fidelity without resorting to the complexity of a full blown global circulation model (GCM), and in contrast to those depending on erratic [19] or stochastic inputs which have less predictive power.”
In other words, the atmosphere doesn’t enter into it (except for the stratospheric QBO).
215. mt says:
Further investigation suggests that WHT *is* Paul Pukite, the author of the Arxiv paper.
216. Willard says:
> Azimuth, moyhu are examples.
Azimuth’s a wiki (a small Rails gift from 37 Signals, if memory serves me well) that contains a blog:
http://www.azimuthproject.org/azimuth/show/Azimuth+Blog
Moyhu is a blog:
http://moyhu.blogspot.com/
What were you saying regarding blogs again, Web?
217. dhogaza says:
“Moyhu is a blog”
Nick Stokes’ blog; he has been contributing to the science outside his blog, and reports so on his blog. I don’t know if GISS has adopted his rewrite/replication of their FORTRAN code for GISTemp with his Python version, but serious conversations in that direction were happening a couple of years ago.
Nick Stokes is serious and makes serious contributions outside the blogosphere.
Willard does not.
218. dhogaza says:
WebHubTelescope mentioned “independent research”. The format of the medium isn’t really relevant; Moyhu (Nick Stokes) pursues independent research just as WHT says.
You had a point, Willard?
219. anoilman says:
mt says:
“August 11, 2015 at 3:21 am
WHT points to ContextEarth, a new entry in the climate blogs, which is pushing this theory.
The paper is downright peculiar.”
However it’s a particularly useful paper if it’s right. Perhaps someone will pick up that ball and try to figure out more. That is of course the purpose of writing and disseminating papers.
Right now we simply have ENSO watch, and they aren’t exactly that precise these days.
220. WebHubTelescope says:
mt says:
Further investigation suggests that WHT *is* Paul Pukite, the author of the Arxiv paper.
Like, duh.
Azimuth’s a wiki (a small Rails gift from 37 Signals, if memory serves me well) that contains a blog:
It contains a threaded forum for long-term discussion where you can add charts and equation mark-up to your heart’s content. That’s where all the El Nino discussion takes place.
Moyhu is a blog:
Barely; it is really an engineering notebook which contains interactive charts and code for analyzing data. See also WoodForTrees and KlimateExplorer; SkS also has something similar. Just as ContextEarth connects to a complementary interactive semantic web server http://entroplet.com
What were you saying regarding blogs again, Web?
Something obviously does not sit well
Probably got a bee up your butt.
Only the scientists with the keys to the castle are allowed to play in this field, eh?
221. mwgrant says:
Not perfect but time moves on…
BBD: “So what does ‘jumping to mitigation’ mean, exactly? I really don’t follow what you are trying to say here.”
mt: “As for “jumping” to mitigation, we have been standing nervously at the end of the diving board holding up the line for, what, twenty years now?”
aTTP: “Technically a diving board that’s been getting higher.”
Here is the full quote with the phrase. ([o] is a typo correction.)
General comment: I view global warming as a risk problem in the context of policy. Jumping to mitigation or entrenching in ’n[o]-action’ are too restrictive. A reasonable set of alternatives and outcomes need to be characterized to inform the decision. Time is paramount not because of an urgency about gloom and doom, but because many of the aspects of the problem are conditioned by time and depend on time-ordering. Bottomline anoilman, here less than perfect decision making trumps less than perfect science.
Mitigation and no-action are decision alternatives. The meaning of ‘jumping to mitigation’ is implementing the mitigation without adequate characterization of the decision (including the goal of the policy, the metric(s) defining quality of outcome, alternative selection criteria, clarity, etc.) or consideration of other reasonable alternatives. The meaning of ‘entrenching in n[o] action’ is opting to continue in the present mode, taking no additional actions, again without adequate characterization of the decision or consideration of other reasonable alternatives. That is, these two alternatives are representative of the sort of default policy that we may well inherit as a result of lack of due diligence in the decision making phase.
I certainly agree that we have been in that phase for two decades, and as time passes conditions change. This is an incredibly stupid, preventable circumstance that has evolved. At this stage there are no innocents and IMO assignment of blame is a useless, nay, hindering exercise. To me, uncertainty in outcome under different alternative approaches can be manipulated to provide both major camps in the ‘debate’ with pro and con arguments. This is all the more reason to immediately move toward adopting an open, inclusive, structured approach to the decision making. Handling the science is not the stumbling block—making decisions is, i.e., less than perfect decision making trumps less than perfect science. But, hey, if you insist on wanting to be perfect… just wait and you might be.
[No, Joshua. I have not forgotten…mwg]
222. Willard says:
> Nick Stokes’ blog; he has been contributing to the science outside his blog, and reports so on his blog.
And yet Web agreed with AT that blogs were useless. Fancy that.
***
> You had a point, Willard?
Yes, that the format of the medium isn’t really relevant, which kinda undermines the whole blog bashing thing.
Wait. Have you just said the same thing?
***
> Willard does not.
https://yourlogicalfallacyis.com/tu-quoque
Shouldn’t noticing a tu quoque be considered as a serious contribution?
Don’t touch the furniture on your way out and thanks for playing.
223. Willard says:
> Only the scientists with the keys to the castle are allowed to play in this field, eh?
When a ClimateBall player such as Web makes things personal, it shows weakness. Therefore, one only has to read what has not been addressed. Here’s what has not been addressed:
Does the author use the claimed nearly invincible method to predict the future, or just the past? A couple of seasons of on the money predictions would certainly cause people to sit up and take notice. (Of course, there’s the “difficulty of predicting significant regime changes that can violate the stationarity requirement of the model” dodge to fall back upon.)
There’s also the quote, which I believe Web can find back if he’s more interested than to use AT’s for his usual drive-bys [Mod : redacted the last part of this comment.]
224. BBD says:
mwgrant
Mitigation and no-action are decision alternatives. The meaning of ‘jumping to mitigation’ is implementing the mitigation without adequate characterization of the decision (including the goal of the policy, the metric(s) defining quality of outcome, alternative selection criteria, clarity, etc.) or consideration of other reasonable alternatives.
The goal of the policy is to avoid dangerous warming by reducing CO2 emissions. This also defines the quality of the outcome. What ‘other reasonable alternatives’ are there to decarbonisation of the energy supply?
What is the policy downside from simply getting on with this?
I don’t really understand what you are arguing for. My impression is that it distils down to more talking about uncertainty and decision making under uncertainty and further delays in getting on with the necessary infrastructural changes.
225. WebHubTelescope says:
It’s not like I am doing anything different than what the experts on El Nino and ENSO are attempting. I am just solving the second-order differential equation that Allan Clarke at FSU has documented. This is a simple formulation of a wave equation that can obviously model the standing wave pattern that has been known to occur across the equatorial Pacific thermocline for thousands of years.
The question is whether the periodic forcing inputs that determine the sloshing standing wave can be deduced. I think my paper, and a very recent paper by Astudillo et al [1], are showing that this may well be deterministic behavior (read: not chaotic or stochastic).
[1] H. Astudillo, R. Abarca-del-Rio, and F. Borotto, “Long-term non-linear predictability of ENSO events over the 20th century,” arXiv preprint arXiv:1506.04066, 2015.
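For readers who want a concrete picture of what solving a second-order "sloshing" equation looks like in practice, here is a minimal SciPy sketch of a damped oscillator with a periodic forcing term. The periods, damping, and forcing amplitude below are illustrative assumptions only; this is not the formulation from Clarke or from the arXiv paper.

# Minimal sketch: damped, periodically forced second-order ODE.
# All coefficients are placeholder assumptions for illustration.
import numpy as np
from scipy.integrate import solve_ivp

omega0 = 2 * np.pi / 4.25   # natural period of roughly 4.25 years (assumed)
gamma = 0.05                # weak damping (assumed)
omega_f = 2 * np.pi / 2.33  # a QBO-like forcing period (assumed)

def rhs(t, y):
    x, v = y
    forcing = 0.2 * np.cos(omega_f * t)  # placeholder periodic forcing
    return [v, -2 * gamma * v - omega0**2 * x + forcing]

sol = solve_ivp(rhs, (0.0, 100.0), [0.1, 0.0], dense_output=True)
t = np.linspace(0.0, 100.0, 1000)
x = sol.sol(t)[0]          # simulated "index" time series
print(x[:5])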
226. BBD says:
At this stage there are no innocents and IMO assignment of blame is a useless, nay, hindering exercise.
[…]
But, hey, if you insist on wanting to be perfect… just wait and you might be.
What does this mean? It sounds as though you are in fact apportioning more blame to one ‘camp’ – the one that has acknowledged the urgent need for decarbonisation.
This is all the more reason to immediately move toward adopting an open, inclusive, structured approach to the decision making.
Given that the other ‘camp’ is very strongly attached to doing nothing at all, I cannot see how what you appear to suggest will result in anything remotely productive. But again, perhaps I have missed something here.
227. anoilman says:
BBD: It’s terribly important to argue ad nauseam over whether we have Super Really Very High Confidence or merely High Confidence as we attempt to contemplate looking at any data whatsoever, let alone even consider making a decision.
Perhaps we should wait for a sign?
“Dr. Egon Spengler: Vinz, you said before you were waiting for a sign. What sign are you waiting for?
Louis: Gozer the Traveler. He will come in one of the pre-chosen forms. During the rectification of the Vuldrini, the traveler came as a large and moving Torg! Then, during the third reconciliation of the last of the McKetrick supplicants, they chose a new form for him: that of a giant Slor! Many Shuvs and Zuuls knew what it was to be roasted in the depths of the Slor that day, I can tell you!”
228. mwgrant says:
BBD
No, for me you are much too vague to bet the farm. Here are some questions I had when I quickly assessed your characterizations in your first paragraph. Please do not be put off by them. They reflect the essential process of trying to achieve clarity in a decision characterization. These questions are as much an exercise for me as for you.
The goal of the policy is to avoid dangerous warming by reducing CO2 emissions.
Warming where?
Reducing CO2 emissions where?
What level of warming is dangerous?
What does being ‘dangerous’ entail?
CO2 emissions where?
Do you have a metric for warming, e.g., average global temperature? What is it?
This also defines the quality of the outcome.
If you stay with the qualitative representation of quality are they nominal categories of quality and how are they defined?
Wouldn’t some specified quantitative level of reduction in emissions be a more appropriate measure of quality?
Should there be more than a qualitative relationship between the quality of the emissions reduction and the amount of warming avoided?
What ‘other reasonable alternatives’ are there to decarbonisation of the energy supply?
Where is the list of alternative actions that have been examined, quantified, and documented?
Who is the targeted or client decision maker for that documentation?
Personally I would have no objection to starting actions for other reasons and in consideration of a risk from global warming. However, I expect along the lines of its effectiveness being continually monitored and that there is/are quantitative measure(s) for that. I expect costs to be an integral component of effectiveness.
I don’t really understand what you are arguing for.
I can see that and consider it to be a legitimate comment on your part. If I cannot resolve that, then you of course can/will dismiss it or turn it over every once in a while. Certainly it has been productive for me and I appreciate it.
My impression is that it distils down to more talking about uncertainty and decision making under uncertainty and further delays in getting on with the necessary infrastructural changes.
It would be easy to dismiss that as an impression, i.e., subjective. However, there is some very legitimate concern here. Such a process does not have to go off the rails and become an impediment, but we both know that such processes sometimes do. I can not fix that and the risk is not insignificant. However, the risk from politically muddling our way through the problem is also not to be neglected. As mt and aTTP point out, a 20 year delay has complicated matters.
229. mt says:
WHT/Paul P:
You showed up saying “What readers curious about science should gravitate toward is reading a site that contains independent research such as at http://ContextEarth.com“, but did not mention that you are its author. So you can forgive my surprise that you are merely expressing enthusiasm for your own work.
Your paper is insufficiently clear for replication. You leave the impression that your impressive match in figure 4 is a tuning of four harmonics, hence only eight degrees of freedom. That match would be impressive. But in fact, as we can see in http://contextearth.com/2014/05/27/the-soim-differential-equation/, it isn’t.
You have left yourself complete freedom to determine a forcing function.
So you conclude that with **a given forcing** an arbitrary model with the right time constants can be driven close to an observed record. As far as I can tell you obtained that forcing from trying to match the observations with your presumed model.
Effectively you have a very large number of degrees of freedom. So you’ve told us nothing that can’t be gleaned from first principles.
A car whose wheels are badly out of alignment can be driven up a twisty road if you steer carefully enough.
Further, your concluding claim that your model is “in contrast to those depending on erratic or stochastic inputs which have less predictive power” is unjustified, as your forcing function is exactly that.
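The degrees-of-freedom objection above can be made concrete with a short sketch: if the forcing is left free, the forcing required to make a linear oscillator reproduce any target record exactly can simply be read off that record, one free value per time step. The target series and constants below are arbitrary assumptions for illustration, not anyone's actual data or model.

# Sketch: recover the forcing that makes x'' + 2*gamma*x' + omega0^2 * x = F(t)
# hold exactly for an arbitrary target series (synthetic placeholder data).
import numpy as np

rng = np.random.default_rng(0)
t = np.linspace(0.0, 100.0, 1201)
dt = t[1] - t[0]
target = 0.05 * np.cumsum(rng.normal(size=t.size))  # arbitrary stand-in "observations"

omega0, gamma = 2 * np.pi / 4.25, 0.05  # illustrative constants (assumed)

dxdt = np.gradient(target, dt)      # finite-difference first derivative
d2xdt2 = np.gradient(dxdt, dt)      # finite-difference second derivative

F = d2xdt2 + 2 * gamma * dxdt + omega0**2 * target  # as many knobs as data points
print(F[:5])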
The physical basis for your model is not justified, as far as I have seen. Further, your claim (http://contextearth.com/2014/05/27/the-soim-differential-equation/)
“Note that I really don’t care that the math has only been previously applied to tanks built on a human scale. Just because we are dealing with a “tank” the size of the Pacific Ocean doesn’t change the underlying math and physics”
is not true, because of the Coriolis effect.
Perturbations in the thermocline can propagate westward far more easily than eastward – for a given stratification there is only one eastbound Kelvin mode. So the idea that “sloshing” is the correct model without accounting for the actual propagation of trapped balanced equatorial modes is incorrect.
230. mt says:
MWG re “CO2 emissions where?” can you possibly be serious? The answer, obviously, is “on earth”.
CO2’s lifetime is much longer than the mixing time constants of the atmosphere. CO2 is well mixed; tropospheric CO2 concentrations are globally near enough uniform that the Mauna Loa time series is good enough for policy purposes.
If you are really interested in this problem and you didn’t understand this, you clearly should hang around where the people who are talking know what they are talking about.
231. mwgrant says:
Yes I am serious. Think of all of my questions, and in particular those relating to who is the decision maker or decision-makers. A big cause of poor decisions is not formulating the decision correctly. My approach is from the perspective of the clarity test. Missed big time on your response, mt—in a number of ways.
232. BBD says:
mwgrant
Like MT above, I’m incredulous. But, for form’s sake:
Warming where?
[on Earth]
Reducing CO2 emissions where?
[at the point of origin]
What level of warming is dangerous?
[probably less than 2C]
What does being ‘dangerous’ entail?
[agricultural disruption and WAIS collapse, etc]
CO2 emissions where?
[repeat question]
Do you have a metric for warming, e.g., average global temperature? What is it?
[global average temperature]
233. BBD says:
A big cause of poor decisions is not formulating the decision correctly.
How can this affect the decision to reduce CO2 emissions to avoid dangerous warming (>2C)?
How is this likely to be a poor decision?
234. BBD says:
Sorry – missed out from above:
Wouldn’t some specified quantitative level of reduction in emissions be a more appropriate measure of quality?
I believe this has already been proposed.
235. mwgrant,
I’m equally incredulous. Here’s something that maybe people don’t realise and is – I think – what MT is getting at. We’re clearly changing our climate, we’re doing it quite rapidly relative to most – if not all – previous epochs of climate change, and the change could be substantial (globally the difference between a glacial and an inter-glacial is about 5°C). Is it going to be bad? That depends on what we actually do, but if we carry on as we are, then I think the consequences will be severe. However, maybe they won’t be. Maybe we’ll be lucky, and the changes will all just end up somehow working in a way that’s a net benefit (I don’t think they will, but let’s – for argument’s sake – say they could be). However, what we’re doing is irreversible on human timescales. If the majority of scientists are right, and the minority of naysayers are not, we can’t go back in time and make a different set of decisions.
236. anoilman says:
Clearly we’re not smoking the right stuff.
237. mwgrant says:
“Clearly we’re not smoking the right stuff.”
That is correct
238. Willard says:
Just Asking Questions is boring, more so when it has been predicted on an earlier thread. Their relevance is begged, and the commitment is shifted onto the interlocutor. Take these:
What level of warming is dangerous?
What does being ‘dangerous’ entail?
The first one presumes that unless we can discover the very exact threshold between dangerous and not-dangerous, we can’t decide to stop dumping CO2 in the atmosphere like there’s no tomorrow. The second one presumes that we need to give a definite description of the word “dangerous,” when such words don’t work that way and when what matters are the risks associated with the impacts of AGW.
I don’t see any reason to believe that we need to identify a specific level of dangerous warming, nor do I see any reason to characterize dangerosity more precisely than some risk of adverse or harmful impacts. RTFR.
***
For memory’s sake, here’s a previous episode of leading questions at Judy’s:
> This implies that the mainstream messages are not alarmist.
Not at all. It only implies that those who make claims have the onus to show their evidence for it.
If the claim is that mainstream climate science is alarmist, then the onus is on the claimant to show that it is indeed alarmist.
The philosophical burden of proof or onus (probandi) is the obligation on a party in an epistemic dispute to provide sufficient warrant for their position.
http://en.wikipedia.org/wiki/Philosophic_burden_of_proof
This ensures productivity, a requirement “just asking questions” evades.
http://judithcurry.com/2015/03/03/ipcc-in-transition/#comment-680781
Leading questions have a tendency to push the limits of justified disingenuousness. Unless mwg can come forward and claim things on his own instead of offering clarity tests, I would not bet the farm on this other ClimateBall episode.
239. mt says:
MWG: “As mt and aTTP point out a 20 year delay has complicated matters.”
Actually a 20 year delay has made matters **more difficult** but it hasn’t made them more complex. It has **greatly simplified matters**. We must reduce net emissions to near or below zero as quickly as is practicable without major social disruption.
We’ve long since missed the point where there is much to put in the other side of the balance.
Also I don’t understand where I missed a “clarity test” above. I may have been a bit impatient but I don’t see how I was unclear.
240. mwgrant says:
Now that W. has weighed in…
friendly folks at aTTP
“Please do not be put off by them. It reflects the essential process of trying to achieve clarity in a decision characterization. ”
Clarity is a device used in decision analysis to assist in defining the elements of a decision. Look up the clarity test.
The twist here is that what has followed my comment only reinforces the lack of clarity [in the decision sense] in your own characterizations. Perhaps this has contributed to the difficulty in your communication efforts over the years.
Your responses–unfortunately not a surprise–are duly noted… both in content and tone.
241. mwgrant,
You’ve rather lost me, I’m afraid. I’m not really sure quite what you’re getting at.
242. mwgrant says:
aTTP you can google.
243. BBD says:
Nor I.
244. mwgrant,
I can, but I don’t really know what to google in this instance. The internet is a big place.
245. mwgrant says:
My apologies for being short there. The point is I have presented a perspective. It is clearly material with which you [all] are apparently not very familiar. I am not trying to convince anyone to adopt that perspective. I could really care less. If that has not been clear before I hope it is now. So people can go poke around on the internet or they can forget it. You guys are incredulous :O)
246. BBD says:
Your responses–unfortunately not a surprise–are duly noted… both in content and tone.
This stuff is getting tiresome.
247. mwgrant says:
aTTP OK, in time I’ll put together some stuff and email it… is your email on the site somewhere? While it has been useful, it is a burden from my vantage point.
248. BBD says:
The point is I have presented a perspective. It is clearly material with which you [all] are apparently not very familiar.
Actually, you have said very little but used a great many words.
249. mwgrant,
Yes, I think we realise that you’ve been presenting a perspective. The responses have been based on the perspective you’ve presented. You may have a perspective, but the scientific evidence suggests that continuing to increase our emissions will have severe consequences and that these will likely be irreversible on human timescales. That, in itself, doesn’t tell us what to do, but does suggest that continuing to wait before we make concrete steps to reduce our emissions and – as MT points out – eventually getting them to zero, may be a strategy that carries a great deal of risk.
250. mwgrant.
The contact link above sends me an email.
251. mwgrant says:
“This stuff is getting tiresome.” You should be over here! :O) Bail. Easier for you than me. Take a couple with you.
252. BBD says:
“This stuff is getting tiresome.” You should be over here! :O) Bail. Easier for you than me. Take a couple with you.
Why is it easier for me to cease commenting here than for you to be more specific?
And why should I ‘take a couple with me’? I read this as an invitation for other commenters to defer to your ineffable brilliance and vision. A bit cheeky, if you ask me.
253. Willard says:
> The point is I have presented a perspective.
Presenting entails something more than Just Asking Questions and handwaving to the Google.
254. anoilman says:
mwgrant says:
August 11, 2015 at 7:31 pm
“My apologies for being short there. The point is I have presented a perspective. It is clearly material with which you [all] are apparently not very familiar.”
Actually we are all incredibly familiar with your material, as I’ve pointed out to you.
http://www.easterbrook.ca/steve/2010/11/do-climate-models-need-independent-verification-and-validation/
https://contrarianmatrix.wordpress.com/
Your behavior is standard troll. You delight in JAQing (Just Asking Questions) off all over the place. You BS around, avoid anything vaguely meaningful, argue every point till you’re blue in the face. Really… you have yet to say anything meaningful after two days. We’re all quite used to that too.
255. mwgrant says:
It does and it did. Sorry you didn’t get more out of it.
256. mwgrant says:
anoilman, as far as Easterbrook goes and your assertion based on Easterbrook,
“…Climate models are the best and most thoroughly reviewed software out there. There is nothing like it.”
is undercut, in terms of the number of codes considered and in terms of the metric, by the following quotes from Easterbrook:
Many of the limitations to the present study could be overcome with more detailed and controlled replications. Mostly significantly, a larger sample size both of climate models and comparator projects would lend to the credibility of our defect density and fault analysis results.
A little later in the same section the authors quote ‘Hatton (1995)’:
“There is no shortage of things to measure, but there is a dire shortage of case histories which provide useful correlations. What is reasonably well established, however, is that there is no single metric which is continuously and monotonically related to various useful measures of software quality…”
As far as the subthread evolving from around the 8/11/15 4:19pm post, neither of the two links that you provide has any bearing on the topic. The link, Willard’s climbitbull contrarian matrix, has nothing to do with anything here—either the GCMs or the decision making aspects touched on. If my comments are those of a troll then I guess you folks over here could use more trolls or maybe get a different operating manual… something slipped thru on the QA.
If my comments are those of a troll then I guess you folks over here could use more trolls or maybe get a different operating manual… something slipped thru on the QA.
You and AoM have been going at each other a bit, so I decided to leave AoM’s troll remark. Maybe I shouldn’t have, but I did. I don’t particularly like people being called trolls, as I don’t hugely like it when it happens to me. Maybe we can all just tone this down.
258. BBD says:
mwgrant; ATTP
It may have been me that introduced the T-word to the thread – so some / much blame must attach to me.
259. Willard says:
> What is reasonably well established, however, is that there is no single metric which is continuously and monotonically related to various useful measures of software quality…”
Cue to more clarity testing from our guest wizard regarding which single metric we should have to evaluate code quality. Because that Howard dude, cited by the relevant Wiki entry, is just the formal guy we need:
http://www.jstor.org/stable/2632123
Look at Fig. 1 and 2.
An alternative perspective on clarity testing in ClimateBall exchanges is this:
http://neverendingaudit.tumblr.com/tagged/parsomatics
***
> The link, Willard’s climbitbull contrarian matrix, has nothing to do with anything here—either the GCMs or the decision making aspects touched on.
It actually had something to do with it when you tried to make it about me, mwg. Do you want me to trace back how the ClimateBall exchange evolved, or are you able to recall the concerns you’re raising from one comment to the next?
It would be suboptimal to conflate ClimateBall and the Contrarian Matrix in the middle of clarity testing, BTW.
260. mwgrant says:
BBD, thank you for that comment.
261. Willard says:
Next time, BBD, talk about pulling statements out of thin air instead — mwg may appreciate to hear his own words.
262. mwgrant says:
And you made it about me. Willard–the contrarian matrix pertains to people and not to the other content–which you are free to dispute. Let it rest.
Clarity test. Why the single metric reference? Read the thread, W. It had been asserted earlier in the thread that defect density is the metric used for code quality. Here the authors comment that there is no single satisfactory metric. I shot the beast twice–out in the open. Pretty good, huh?
263. Willard says:
> And you made it about me.
Now you’re pulling tu quoques out of thin air.
A quote might be nice.
***
> the contrarian matrix pertains to people and not to the other content
Can you read? The Contrarian Matrix is about lines of argument. There’s a citation that backs up every single line.
I’m not sure who should let what rest. My comment was not about the content of the Matrix, but why Oily One cited it. Should I document how you yet again shift topics?
***
> I shot the beast twice–out in the open.
Which beast, dear Wizard? As far as I’m concerned, you burned down your own strawman twice. Poor strawman.
***
Oh, and did you find where SteveE was holding that V&V for GCMs was a bit silly? Look it up. Teh Google is your friend.
264. mwgrant says:
contrarian matrix/about me — my bad. Indeed AoM brought it into the ring.
the beast shot — how about the first quote: “…Climate models are the best and most thoroughly reviewed software out there. There is nothing like it.” Don’t worry about the Strawman, Liebchen, it was only a bad dream you were having. Nothing here has been damaged! Woo-hoo!
I find his views on V&V a bit silly. But then we have different standards coming from different communities.
Cheers
265. mwgrant says:
“Next time, BBD, talk about pulling statements out of thin air instead — mwg may appreciate to hear his own words.”
Owned up to. However, pickings have been thin for you. Huh, W?
266. Steven Mosher says:
Looks like I should weigh in.
The situation with GCMs lends itself to a solution that we used in Operations research.
Why is that relevant? I’ll give you an example. Let’s suppose you are buying an airplane and the
government says: The plane shall be 98% survivable to a 50 MM round.
The government will demand that you use COVART
http://www.dsiac.org/resources/models_and_tools/covart
“The Computation of Vulnerable Area Tool (COVART) model predicts the ballistic vulnerability of vehicles (fixed-wing, rotary-wing, and ground targets), given ballistic penetrator impact. Each penetrator is evaluated along each shotline (line-of-sight path through the target). Whenever a critical component is struck by the penetrator, the probability that the component is defeated is computed using user-defined conditional probability of component dysfunction given a hit (Pcd/h) data. COVART evaluates the vulnerable areas of components, sets of components, systems, and the total vehicle. In its simplest form, vulnerable area is the product of the presented area of the component and the Pcd/h data. The total target vulnerable area is determined from the combined component vulnerable areas based upon various target damage definitions.
COVART is capable of modeling damage from a wide range of kinetic energy (KE) and high-explosive (HE) threats. These threats include missile fragments, projectiles (ball, armor-piercing, armor piercing incendiary and high-explosive incendiary), man-portable air defense systems (MANPADS), rocket-propelled grenades (RPG), and proximity-fuzed surface-to-air missile (SAM) and air-to-air missile (AAM) warheads.
This model is really cool. One problem ( back in the day ) is that we had NO DATA to calibrate it.
One time we took a $50 million plane into the desert to shoot rounds at it and then tried to
see if that data was "consistent with" the model. Validation was hard, but you just did the best you could and documented it.
Engineers, program managers, government buyers all knew that it was just a tool for evaluating two planes that had not been built. It was the best tool we had. we all submitted to the process.
There was no crying about bad models. If you wanted to, you worked on improving the standard and then submitted your changes.
You can look at these as well
https://www.dsiac.org/resources/models_and_tools
here’s another one we used
“ALARM is a generic digital computer simulation designed to evaluate the performance of a ground-based radar system attempting to detect low-altitude aircraft. The purpose of ALARM is to provide a radar analyst with a software simulation tool to evaluate the detection performance of a ground-based radar system against the target of interest in a realistic environment. The model can simulate pulsed/Moving Target Indicator (MTI), and Pulse Doppler (PD) type radar systems and has a limited capability to model Continuous Wave (CW) radar. Radar detection calculations are based on the Signal-to-Noise (S/N) radar range equations commonly used in radar analysis. ALARM has four simulation modes: Flight Path Analysis (FPA) mode, Horizontal Detection Contour (HDC) mode, Vertical Coverage Envelope (VCE) mode, and Vertical Detection Contour (VDC) mode. – See more at: https://www.dsiac.org/resources/models_and_tools#sthash.pm6IDGIA.dpuf
Tactical Air Combat Simulation BRAWLER simulates air-to-air combat between multiple flights of aircraft in both the visual and beyond-visual-range (BVR) arenas. This simulation of flight-vs.-flight air combat is considered to render realistic behaviors by Air Force pilots. BRAWLER incorporates value-driven and information-oriented principles in its structure to provide a Monte Carlo, event-driven simulation of air combat between multiple flights of aircraft with real-world stochastic features. – See more at: https://www.dsiac.org/resources/models_and_tools#sthash.pm6IDGIA.dpuf
TAC BRAWLER wasnt very good. So there was a process of improving it.
we could submit improvements. if they passed muster then they got included in the standard model.
of course BRAWLER was never “validated” because to do that you’d have to fight a war and shoot down real planes and people. The closest we came was comparing brawler to man in the loop flight simulation wars. All very messy, all very notional.
But there was a process
And there was agreement to use the tool.
Research guys could go off and do anything they wanted to come up with better models
But if you were going to tell a “decider” or buyer some results those results had to come
from the standard model.
You should probably look at how MODTRAN is maintained and used. That would be another example.
The point is that you had a very messy and very uncertain problem space. That was addressed
by having a standard model. Everyone knew the limitations of the beast. And people would try to get their improvements included in the standard model. Or people tried to get their models accepted
https://www.dsiac.org/resources/models_and_tools/submitting-survivability-vulnerability-models-dsiac-management-and-distribution
contrarians will hate this because it would accept A “flawed” model as THE STANDARD for decision making.
And it's pretty easy to do V&V, so there is no reason to oppose it.
267. Willard says:
> how about the first quote: “…Climate models are the best and most thoroughly reviewed software out there.
That there ain’t no ultimate metric does not substantiate that point. There’s no need to review SteveE’s work to [see] that Oily One relied on an hyperbole: look for “hyperbole” on this page. Recalling upfront your own experience with code standards of practice within your own community may have been more expedient than sealioning Oily One.
***
> But then we have different standards coming from different communities.
Exactly. Next time Gavin will have climate models that would run the risk to cause a nuclear winter, it is to be hoped that he’ll at least validate for safety.
Since GCMs are mostly used for climate projections, I’m not sure exactly which formal properties need to be specified. Models that approximate physical laws may be quite complex. If I read you right, the question of the GCMs’ intended use is more important, and seems more like a verification requirement anyway. Seen under that light, SteveE’s work might be invoked to substantiate the claim that they’re more than good enough for the job. It’s possible to agree to disagree on such matter. The argument oscillates between pragmatic and formal considerations, which makes the whole discussion hard to arbitrate.
In any case, V&V for GCMs implies we decide to invest even more money than we already do in that field, an investment decision which may require its own clarity testing.
268. Willard says:
> pickings have been thin for you.
269. anoilman says:
mwgrant: Have you read/reviewed the code that is publicly available?
270. Eli Rabett says:
The rewrite of GISSTemp was the Clear Climate Code Project run by Nick BARNES not Nick STOKES. NS does a bunch of other very useful and interesting stuff
271. mwgrant says:
No. There was and is no need for the task at hand. I didn’t criticize the code. At one point I even indicated that
“…the important question is are they good enough for the intended application? Frankly in my opinion the answer is ‘yes’. The big problem lies with what is the intended use? …
I did read enough in your referenced work alone to evaluate your 'hyperbole', as Willard generously calls it. (I do not.)
I also commented on how QA is handled elsewhere and suggested that there might be benefits to that. This is not a criticism–frankly it is a reasonable idea looking at other approaches in contentious environments. I am continuously amazed at the insecurity some of you folks have about the codes used.
All of that and not reading the code. Can you hear me now?
272. mwgrant says:
Thanks Eli.
273. Howard says:
MWGrant: You are trying to have a discussion with people (a limited few, not all posting on aTTP) who have no capacity to listen and consider. The psychological defect has the same taproot of teabaggers who hate Mexicans and want to arrest/deport/fence in all brown people. Anyone presenting any hint of “the other” is to be attacked. When commentators spew the “T” word, they are looking in the mirror. In one sense, you are playing the fool for their idle amusement, so you have earned this abortion of a thread. Welcome to climateball.
274. Willard says:
> I didn’t criticize the code.
The question if teh modulz are good enough for the intended application can only be settled by looking into the code. The same obviously applies to SQA issues. ISO standards ain’t cheap.
Speaking of “good enough”:
In Winnicott's writing, the "False Self" is a defence, a kind of mask of behaviour that complies with others' expectations. Winnicott thought that in health, a False Self was what allowed one to present a "polite and mannered attitude" in public.
But he saw more serious emotional problems in patients who seemed unable to feel spontaneous, alive or real to themselves anywhere, in any part of their lives, yet managed to put on a successful “show of being real.” Such patients suffered inwardly from a sense of being empty, dead or “phoney.”
Winnicott thought that this more extreme kind of False Self began to develop in infancy, as a defence against an environment that felt unsafe or overwhelming because of a lack of reasonably attuned caregiving. He thought that parents did not need to be perfectly attuned, but just “ordinarily devoted” or “good enough” to protect the baby from often experiencing overwhelming extremes of discomfort and distress, emotional or physical. But babies who lack this kind of external protection, Winnicott thought, had to do their best with their own crude defences.
https://en.wikipedia.org/wiki/Donald_Winnicott
In fairness, we must admit that Donald has not presented a concept of good enough that would satisfy a clairvoyant who’d know the future and be able to tell if an artifact or an agent is good enough in an operational manner.
275. Willard says:
> Anyone presenting any hint of “the other” is to be attacked.
In other news:
I wonder if you have ever made a real contribution to a blog?
276. Willard says:
Even better:
The psychological defect has the same taproot of teabaggers who hate Mexicans and want to arrest/deport/fence in all brown people.
Anyone presenting any hint of “the other” is to be attacked.
277. Joshua says:
On a thread of 365 comments, Howard contributed one:
Howard (Comment #137916)
July 30th, 2015 at 5:28 pm
Joshua is the best at getting people to pick nits out of scabs. I saw it on the SkS top secret blog that they send out these type of disruption bloggers to lukewarmist blogs to tie up the time and effort of skeptics thereby preventing real investigation of the commie plot to tax carbon in ice cream… childrens ice cream.
278. mwgrant says:
Howard,
Let’s just say this was a test of myself in a different environment. I went in with eyes open with hip-waders on and am satisfied. The was some good pushing in there. To me there is no harm to be their fool because climatebawl is their game. It is not mine. I can stay or I can walk away. As a friend once told me they can’t chop me up and eat me. :O)
But hey, it is an away game … I guess.
BTW I knew where I likely was after your comment on conceptual models.
279. mwgrant says:
Willard – zzzz. I’ve been inoculated
280. Willard says:
> zzzz
The question if teh modulz are good enough for the intended application can only be settled by looking into the code, mwg.
Can you hear me now?
281. Willard says:
> climatebawl is their game
I point at this:
I did read enough in your referenced work alone to evaluate your 'hyperbole', as Willard generously calls it. (I do not.)
and I point at this:
I suspect many people use hyperbole here from time to time to telegraph a thought or concept efficiently. That is how I took ‘kill the economy’ and do not take Steven to task for that.
http://judithcurry.com/2015/08/03/president-obamas-clean-power-plan/#comment-722601
That is all.
282. mwgrant says:
“The question if teh modulz are good enough for the intended application can only be settled by looking into the code, mwg.”
Loving every moment, you are failing miserably.
283. You have left yourself complete freedom to determine a forcing function.
Obviously you have not read much of what I have written. There is no flexibility in applying the QBO as a forcing function. This has a long-term mean period of 2.33 years and it can not be changed.
Everyone knows that QBO and ENSO are closely tied together.
So I applied that as a forcing function in Allan Clarke’s differential equation [1].
I hope that you can see that I am not making things up.
All I am doing is applying the components that climate scientists have shown to be as factors.
Why they do not follow through on their own suggestions, I haven’t a clue.
Why you assert that what I am doing is wrong, I can easily guess. Probably because you have set yourself up as a gatekeeper. “You ignored Coriolis!” “You ignored east-west asymmetry!” You are convincing yourself that it can’t be right and using “just so” stories to seed doubt in other people’s minds that what I am doing just has to be wrong.
[1]A. J. Clarke, S. Van Gorder, and G. Colantuono, “Wind stress curl and ENSO discharge/recharge in the equatorial Pacific,” Journal of physical oceanography, vol. 37, no. 4, pp. 1077–1091, 2007.
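A minimal sketch of the kind of model being described here, assuming a damped-oscillator ("sloshing") reading of Clarke's equation driven by a fixed 2.33-year QBO forcing. This is not WHT's actual Mathematica code, and every parameter value below is an invented placeholder.

```python
import numpy as np

# Toy forced, damped oscillator standing in for the thermocline "sloshing" equation.
# T_qbo is the fixed 2.33-year forcing period mentioned above; omega0, gamma and A
# are assumed illustration values, not fitted to anything.
dt = 0.01                      # time step (years)
t = np.arange(0.0, 100.0, dt)  # a century of model time
T_qbo = 2.33                   # QBO forcing period (years), taken as fixed
omega0 = 2 * np.pi / 4.25      # assumed natural "slosh" frequency
gamma = 0.05                   # assumed weak damping
A = 1.0                        # assumed forcing amplitude

h = np.zeros_like(t)           # thermocline displacement anomaly
v = np.zeros_like(t)           # its time derivative

for i in range(len(t) - 1):
    forcing = A * np.sin(2 * np.pi * t[i] / T_qbo)
    accel = forcing - omega0**2 * h[i] - gamma * v[i]
    v[i + 1] = v[i] + accel * dt           # semi-implicit Euler step
    h[i + 1] = h[i] + v[i + 1] * dt

# h(t) would then be compared against an ENSO index such as SOI or NINO3.4.
```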
284. anoilman says:
mwgrant: Have you contacted the people who have reviewed or used the code that is available?
285. mwgrant says:
Willard,
“I suspect many people use hyperbole here from time to time to telegraph a thought or concept efficiently. That is how I took ‘kill the economy’ and do not take Steven to task for that.”
1.) Context and usage, Willard, also impact how any of us interpret writing or speech.
One case uses a very common idiom, e.g., "It's not going to kill you to eat your broccoli, Willard."
The other case is not hyperbole: "…Climate models are the best and most thoroughly reviewed software out there. There is nothing like it." Walk it back to TE's question, Willard. Hyperbole in AoM's answer does not make sense.
Something else you missed too. aTTP came by, put oil on the water, and AoM and I moved on. Yeah, he visited, but why not? You stirred the pot. It's your problem. AoM makes comments, does some good pushing, same for BBD, and we go on to other things. Not you. You perpetually chase blog dustbunnies.
Keep looking, though, and I am sure you can find someplace where I am inconsistent or hypocritical, but it does not matter. You're the Vulcan wannabe I've seen here. The rest of us have realistic expectations of others and ourselves.
So “That is all?”
I pretty much doubt it, but still loving it.
286. mwgrant says:
anoilman: Again, no need to, but why do you ask? I went and looked at CESM, but that site material does not seem applicable for a QA look. I'll definitely keep looking, and I did sign up. Here is the reply I wrote to Eli regarding CESM:
Eli Rabett wrote:
perhaps mw should go find out about how the community earth system model has been put together.
CESM 1.2 User Manual writes:
CESM Validation
Although CESM can be run out-of-the-box for a variety of resolutions, component combinations, and machines, MOST combinations of component sets, resolutions, and machines have not undergone rigorous scientific climate validation. … Users should carry out their own validations on any platform prior to doing scientific runs or scientific analysis and documentation. [CESM ucase]
http://www.cesm.ucar.edu/models/cesm1.2/cesm/doc/usersguide/x32.html#software_system_prerequisites
Well, that was short and sweet. Looking at CESM will not be that useful from a QA perspective. That is not why it has been made available anyway. (Eli doesn’t seem tuned to this thread. Guess that was the case.) Better Model E documentation. Really not a surprise.
287. Steven Mosher says:
“The question if teh modulz are good enough for the intended application can only be settled by looking into the code. The same obviously applies to SQA issues. ISO standards ain’t cheap.”
Not true.
the classic case would be RCS code. To look at RCS code you needed a level 4 TS/SAR
security clearance. but the people who decided it was GOOD ENOUGH never had to look at the code. What they compared was the code output and real live range test data.
You were not allowed to look at the code.
288. Steven Mosher says:
Willard
“Since GCMs are mostly used for climate projections, I’m not sure exactly which formal properties need to be specified.”
It's not that hard. In fact it can be somewhat arbitrary.
Validation means the model meets SPEC.
It doesn't have to match reality; it has to match the spec.
You could say the following.
Spec
1. The climate model shall calculate the surface air temperature of the globe to within 1 degree C.
That’s it. A model that met that spec would be valid.
You can add as many parameters as you like (SST etc ) and you can set the allowable error
at whatever you like.
the difficulty lies in a different place
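To make the shape of such a spec concrete, here is a toy pass/fail check of the kind described above. The series and the 1 degree C tolerance are placeholders; a real validation exercise would name the observational product, the averaging convention and the acceptance window in the spec itself.

```python
import numpy as np

def meets_spec(model_tas, obs_tas, tolerance_degC=1.0):
    """Pass/fail: global-mean surface air temperature must stay within
    tolerance_degC of the reference series at every point."""
    error = np.abs(np.asarray(model_tas) - np.asarray(obs_tas))
    return bool(np.all(error <= tolerance_degC))

# Hypothetical annual global-mean series (degrees C), for illustration only:
obs   = np.array([14.0, 14.1, 14.3, 14.2, 14.4])
model = np.array([14.2, 14.3, 14.1, 14.5, 14.6])
print(meets_spec(model, obs))  # True: every year is within 1 degree C of the reference
```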
289. Steven Mosher says:
“mwgrant: Have you read/reviewed the code that is publicly available?”
I have. I read ModelE, read the MIT GCM, and I've done a bunch of work with the regional outputs
for some specialized studies (the models are crap, but they are the best we've got).
IV&V can be as simple as you want to make it or as tough and rigorous as you want to.
290. Eli Rabett says:
It appears that Steve Mosher and Willard have undergone a mind meld.
291. Willard says:
> Hyperbole in AoM’s answer does not make sense.
Of course it does. Nobody in his right mind would presume that Oily One looked at every single piece of code on the planet, evaluated them all on a standardized benchmark, and then concluded that climate code came on top. To interpret Oily One as saying that teh modulz are quite good enough at what they do is way more charitable.
Here’s Turbulent One’s question, BTW:
How much of that is a reflection of natural variability and how much is artificial variance introduced by lines of FORTRAN?
Pray tell more about how a conceptual model would help answer Turbulent’s question without overseeing the code, mgw. Share the love.
***
> What they compared was the code output and real live range test data.
Fair enough, but that’s just one of the V, and it’s not the one that would answer Turbulent One’s question.
One does not simply validate formal properties by running the code in Mordor.
292. Steven Mosher says:
“Fair enough, but that’s just one of the V, and it’s not the one that would answer Turbulent One’s question.”
Yes, that's Validation.
VERIFICATION… the other V… what's that, Willard, and how is verification done?
Again: you verify against a SPEC.
Give us an example of how that's done, and why it is necessarily expensive to do with a GCM.
293. mwgrant says:
AoM was not looking at any code. He was doing something reasonable: he was recalling Easterbrook's video/paper.
As for TE's question: the only relevant aspect here is that it was a specific question about the code. Details of that content are secondary. It was a legitimate, serious question, and in the response–good answer or bad answer–hyperbole in that context would not make sense.
I let it go. You, sir, may now have the field. Throw daisies and run under them to your heart’s content.
294. Willard says:
> how is verification done?
Start here:
In the context of hardware and software systems, formal verification is the act of proving or disproving the correctness of intended algorithms underlying a system with respect to a certain formal specification or property, using formal methods of mathematics.
https://en.wikipedia.org/wiki/Formal_verification
While this may be expensive, since we’re asking for correct code, I was talking about SQA, which ain’t cheap:
Many estimates say that analytical SQA constitutes about 50% of the total development costs.
Documentation alone costs about 11%:
A considerable share of software projects’ costs are spent on documentation, e.g., a ratio of 11% was reported in (Sanchez-Rosado et al., 2009).
http://www.sciencedirect.com/science/article/pii/S0164121214002131
Perhaps mgw’s experience in managing projects could help here in bringing more anecdata.
If these numbers are correct, and assuming that modulz are created without spending any money either on docs or SQA, we’re looking at a substantial investment, an investment we might need to “clarity test” beforehand.
295. Willard says:
> AoM was not looking at any code.
Of course he wasn’t, mgw. However, that’s what he does for a living, and my guess is that he wants to know if you’re ready to scratch your own itch. That question might also help him recognize if the concern you made comes from a coder or from a project manager.
***
> hyperbole in that context would not make sense.
Let’s quote Oily One’s remark again:
Turbulent Eddie: Nice try. Its not the FORTRAN. Climate models are the best and most thoroughly reviewed software out there. There is nothing like it.
There are lots of expressions with the word “best” that work as an hyperbole: you’re simply the best, that’s the best damn wine I have ever tasted, Judy’s the best ClimateBall blog there is these days, etc. The idiom “there is nothing like it” appears mostly in hyperbolic stances, for the obvious reason that one does not simply compare everything together before inserting a meliorative in one sentence from a blog. The same applies to the “most thoroughly reviewed” bit, since the word “most” cannot be realistically interpreted as applying to a universal and objective claim in the context of comparing code quality. Moreover, the expression “most thoroughly” can be read as another way to say “quite thoroughly.”
Language is a social art.
296. mt says:
I agree somewhat with Mosher this time.
On the one hand, it’s not obvious to me that it’s possible to write a spec for a GCM the way one would do for an online store or a bank.
On the other, in principle it should be feasible to write tests for each of the smaller modules. Indeed, going back to ATTP’s original point way back when, there are lots of physical constraints at play and these can be treated very much like software predicates. (This is greatly complicated by the imprecision of floating point numbers. One would really need a higher level of coding to do this effectively. It’s something I’ve thought about.) To my knowledge this sort of thing is not systematically done, and there is no formal test suite of the sort common in transaction processing, web services, etc.
I should add the caveat that I'm not entirely privy to the process at CCSM and am merely speculating about all the others. I could be wrong. It's not without precedent.
I think the validation phase is sufficiently robust for the fluid dynamics core and the radiation core that they can be excused from this sort of test. But all the other details? There are probably still small bugs lurking there that a more formal development process could expose.
However, there is a certain open-sourciness to it in that researchers do go over the bits of code closest to their own work and so the many-eyeballs approach does operate.
I think it’s amazing that the models work as well as they do. On the other hand, one could argue that, as a sort of curve-fitting argument would have it, the more complicated they get, the less amazing it is. I continue to maintain that there’s a space of dynamic models of intermediate complexity that ought to be more fully explored.
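A minimal sketch of the "physical constraint as software predicate" idea mt raises above, with the floating-point caveat handled by an explicit tolerance rather than exact equality. The quantity names are hypothetical, not taken from any real GCM test suite.

```python
import numpy as np

def test_moisture_budget(column_water_before, column_water_after,
                         net_source_term, dt, rel_tol=1e-6):
    """Predicate: the change in column water over one step should equal the net
    source term times the step length, to within a tolerance chosen for round-off."""
    lhs = column_water_after - column_water_before
    rhs = net_source_term * dt
    assert np.allclose(lhs, rhs, rtol=rel_tol, atol=1e-12), "moisture budget not closed"

# In a test suite this would run against output from a short, fixed-seed model step,
# with one such predicate per conservation law the module is supposed to respect.
```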
297. Willard says:
> I let it go.
I thought you were enjoying yourself.
Have at it. I’m gone for two weeks. Meanwhile, try not to whine too much, now that your “pickings have been thin” doesn’t look plausible anymore.
298. anoilman says:
mwgrant: So…
You want a external/public review of climate models. (For reasons unknown other that you think it will generate some sort of results you consider relevant.)
Said external/public audit will generate new metrics. (… of your as yet undisclosed choosing)
You haven’t been able to find what you consider to be design documents. I’m not disagreeing, but you may have to ask. (I’ll wager you’ll be disappointed. My bet is that the gritty bits will still require familiarity with a branch science and or the relevant scientific papers.)
You haven’t contacted anyone who’s reviewed/looked at the code.
On the other hand we have evidence of external review (Easterbrook) with published peer reviewed results.
He harps on defect count, but he’s also documented their coding processes.
You’ll find a lot of peer reviewed documents on design and architecture used by climate models;
which cross references to this;
IMO I think you'll need a PhD in climate science to know where this may go wrong.
Here’s a break down of Easterbrook’s efforts to locate all the source code; (I’ll wager there are a lot of interesting licensing agreements making a mess of full disclosure on a lot of them.)
http://www.easterbrook.ca/steve/2009/06/getting-the-source-code-for-climate-models/
Lastly, the results are competitively reviewed all over the place.
299. anoilman says:
mt: Here’s what they say in AR4;
300. Joshua says:
M-dub…
Now that you’re finished with the other discussion…???
301. mt says:
re anoilman, ar4 ch 8 agrees with my understanding – it’s all validation and no verification.
The model space I am interested in is the minimally parameterized GCM, possibly but not necessarily run at low resolution. This is between the EMIC and the cutting-edge GCM discussed in AR4. We could profitably do something like what WHT has pointlessly done with ENSO, because unlike his, our models have actual physics in them and actual predictive success. But the more free parameters there are, the less meaningfully we can tune them, and simultaneously the more likely that we fall into the trap of overtuning.
But I didn’t run my career well enough to get to attempt this. Somebody else ought to try it.
302. anoilman says:
mt says:
August 12, 2015 at 4:58 pm
“re anoilman, ar4 ch 8 agrees with my understanding – it’s all validation and no verification.”
I don’t agree. The code is also verified with many eyes on it.
As per your previous comment, “However, there is a certain open-sourciness to it in that researchers do go over the bits of code closest to their own work and so the many-eyeballs approach does operate.”
303. WebHubTelescope says:
We could profitably do something like what WHT has pointlessly done with ENSO, because unlike his, our models have actual physics in them and actual predictive success.
More physics in my simplified model than anything else.
ENSO is more geophysics than it is climate science. What is happening with the sloshing model is a first-order approximation of a body of fluid’s response to angular momentum changes in the forcing.
As far as predictive success, predictions are not the defining aspect of physics. Physics is about understanding first. There are any number of ways that one can use the historical data to test this understanding.
As an example, take the case of tides. Is it that important that we need predictive success to establish that the basic theory of tides is correct? Or can we go back in time and look at historical data to establish what is happening?
Eventually, the cycles of ENSO will be understood to the extent that they are as well accepted as tidal cycles.
BTW, ENSO predictions currently are very short-term. If I were to actually work on predictions, all that I would need to show was that it was better than the others. So if the current predictions are only good to 6 months out, if I had something that could predict 2 years out, my model would win. The reason that I am concentrating on understanding over the whole range of ENSO data available is that I am doing scientist first, and not attempting to start a climate forecasting business 🙂 In other science disciplines, such as materials sciences and condensed matter physics, we refer to this process as characterization.
304. Steven Mosher says:
Oilman
Figure 9
That is the kind of metric that would be required in a spec.
read the text if you dont understand the issue ( its related to leapfrog )
basically the kind of thing that mwgrant and I are asking for would be this.
A) a STANDARD set of diagnostics that have to be performed. ( figure 9 shows you one )
B) a criteria pass/no pass
C) publication of diagnostics.
Let me give you an example from Ar4.
if you read Ar4 on attribution ( I think it was in the supplementary material so you will have to look )
you will find that NOT ALL models were used in attribution.
Model results were submitted and then a subset of models were selected. Those with a low drift
in control runs.
Well, what you actually want to do is specify an acceptable drift BEFORE model results can be submitted. In short, if a model had a 5% drift it could be used in any of the other charts, but for attribution they realized that too much drift could look like a "natural" trend.
The point being the Standard of low drift should be applied across the board. otherwise
you end up mixing shitty models with good models. if you are doing forecasts you dont want drift in a control run.
So some basics. You specify allowable drift in control runs. you preclude submissions that violate that. people are driven to improve that aspect. you get better science. You publish the spec
you publish a score card. that builds discipline and trust.
The spec doesnt have to be hard to meet ! and over time you add elements.
basically you cant improve what you dont measure.
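As a rough illustration of the drift screen described above (not any agency's actual submission rule): fit a linear trend to the control run's global-mean temperature and reject the submission if the drift exceeds a published threshold. The 0.05 C/century figure below is an arbitrary placeholder.

```python
import numpy as np

def control_run_drift(years, global_mean_tas):
    """Linear drift of a control run, in degrees C per century."""
    slope_per_year = np.polyfit(years, global_mean_tas, 1)[0]
    return slope_per_year * 100.0

def passes_drift_spec(years, global_mean_tas, max_drift_per_century=0.05):
    return abs(control_run_drift(years, global_mean_tas)) <= max_drift_per_century

# Synthetic 500-year control run drifting ~0.02 C/century, plus noise:
years = np.arange(500)
tas = 13.9 + 0.0002 * years + 0.1 * np.random.default_rng(0).standard_normal(500)
print(passes_drift_spec(years, tas))  # True under the illustrative 0.05 C/century cap
```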
305. Steven Mosher says:
oilman
“I don’t agree. The code is also verified with many eyes on it.”
It’s not that simple.
http://softwaretestingfundamentals.com/verification-vs-validation/
harder is this
http://embedded.eecs.berkeley.edu/research/vis/doc/VisUser/vis_user/node4.html
306. anoilman says:
Steven Mosher: Thanks. That was clear and concise.
Have you posed such notions to any experts?
307. Steven Mosher says:
“Here’s a break down of Easterbrook’s efforts to locate all the source code; (I’ll wager there are a lot of interesting licensing agreements making a mess of full disclosure on a lot of them.)
http://www.easterbrook.ca/steve/2009/06/getting-the-source-code-for-climate-models/
1. There is no evidence of difficulties with licences.
2. If there is it is easily handled by journals and the IPCC. If you want to submit
results you have to have a GNU licence ( or pick creative commons )
3. It is common even for companies to be forced to open up their code in order
to do business with competitors. Your code is basically given to everyone else.
If stupid business people can figure this out it cant be rocket science. Stop underestimating
scientists.
“Lastly, the results are competitively reviewed all over the place.”
1. Actually not.
2. A standard set of benchmarks is lacking. The best I've seen is Taylor diagrams
on a couple of parameters.
3. Competition, real competition, should reduce the spread. Not seeing evidence of that.
308. MT,
If I understand what you’re suggesting, then I largely agree. I, personally, am always much more interested in models with physics that requires little tuning and in which one can understand the results in terms of the physics that you included. When it starts to become extremely complex it becomes much more difficult to associate the results with the physics that you know you’ve included and over-tuning does indeed become an issue.
309. Steven Mosher says:
mt and I agree 100% on this.
I’ll add this. There are a bunch of people ( Gavin included ) who are skeptical of the
‘democracy’ of models approach. That is for IPCC submission if you submit you are
pretty much accepted.
So some sort of standard, some sort of scoring mechanism, should be in place for submission.
310. Steven,
I don’t know enough about the details to have a strong view, but what you suggest seems reasonable. It’s certainly pretty standard to have at least passed some standard tests before using a model to solve a certain class of problems.
311. Eli Rabett says:
Steve Easterbrook is no babe in the woods in the IV&V game.
FWIW, ESMs and GCMs are not production software, but constantly evolving experimental testbeds.
312. FWIW, ESMs and GCMs are not production software, but constantly evolving experimental testbeds.
Yes, a good point. These aren't pieces of production software that we validate in some way and then fix. They really are – as you say – experimental testbeds that are used to try and probe how our climate responds under different circumstances.
313. Kevin O'Neill says:
“FWIW, ESMs and GCMs are not production software, but constantly evolving experimental testbeds.”
Underline this, then highlight it.
I don’t think much has changed in the twenty years since Naomi Oreskes et al wrote Verification, Validation, and Confirmation of Numerical Models in the Earth Sciences
314. mt says:
GCMs are ALSO community service platforms with a variety of use cases. They don’t have to be as informal as they are on the developer side or as painful as they are on the end user side.
I think where our friends the lukewarmers and naysayers of various stripes come in is to say that this is applied science, not pure science, so everything has to be multiply documented and so on like medical research. This wrong idea of the role of climate models is not entirely their fault – press offices and to some extent program managers are guilty of promoting it, without really understanding how much more difficult it would make life if that were real.
If at some point geoengineering becomes a real prospect, the pressures on climate modeling will be immense, though. Let’s hope it does[n]’t come to that for more important reasons, but really, would it hurt to do a ground up design using as many principles from the commercial software world as possible?
The culture gaps are immense – scientists don’t know how vast the skill set engineers have to offer is, and vice versa. Both accordingly tend to arrogance, which we see constantly on the blogs.
What’s more, I don’t know how it is overseas, but interdisciplinary collaborations in America tend to fall woefully flat even within the sciences. I can’t imagine how it would work with [engineering researchers as well as science researchers].
A climate model is very different even from most other scientific software (since it is seeking to characterize the phase space of the system and not its trajectory). Its distance from what commercial software houses normally do is even more vast. So there’s no cookbook approach. Teams will need to combine deep understanding of what the software principles are and what the models do. That’s not trivial by any means.
But engineering software is progressing in spectacular ways, and climate modeling is still plodding. If there’s really benefit to be had from significant progress in GCMs, it may be time to start with a nearly blank slate (aside from the well-tested low-level math libraries, BLAS etc. and maybe the radiative transfer code) insofar as the code base goes.
[Added the omitted text. -W]
315. mt says:
I suggest that the FLASH effort in astrophysics refutes Oreskes et al 1994's claim that "In its application to models of natural systems, the term verification is highly misleading.
It suggests a demonstration of proof that is simply not accessible”
316. MT,
I'm not sure I quite agree. I think they're verifying the numerical scheme by doing some standard tests. I don't think they're verifying it in terms of its application to a specific physical problem. FLASH is one of the standard schemes in astrophysics, but is used for a wide range of different applications. I assume that GCMs have at least tested their numerical schemes using standard test problems. I think that's different to verifying that it correctly represents the complex system that it's trying to model. It is late (I've been trying to watch the Perseids – largely unsuccessfully) so maybe I'm confused.
317. Steven Mosher says:
mt
‘I think where our friends the lukewarmers and naysayers of various stripes come in is to say that this is applied science, not pure science, so everything has to be multiply documented and so on like medical research. This wrong idea of the role of climate models is not entirely their fault – press offices and to some extent program managers are guilty of promoting it, without really understanding how much more difficult it would make life if that were real.”
Again agreement.
my modest proposal is this.
For policy I would suggest that the US pick one model. Actually issue an RFP for people to submit their model along with a proposed budget for documentation, tests, etc., and for bringing it under control much as programs like MODTRAN are under control.
That model would be standard.
Research folks can continue their research and pure science.
If they find or develop something cool, they can submit proposals to improve the standard.
318. Steven Mosher says:
“FWIW, ESMs and GCMs are not production software, but constantly evolving experimental testbeds.”
Underline this, then highlight it.”
#####################
This is no barrier to developing a standard model that is put under production processes.
It's called a branch, Einstein.
319. mt says:
Mosher, okay now we disagree again, so the world is back on its axis.
We don’t really need a model “for policy” except insofar as regional models are needed for adaptation policy, which in the US would not occur at the federal level.
The mitigation horse has long since left the barn. “Zero net emissions as fast as feasible” is the only sane course; all we need to be discussing is how fast is feasible. We don’t need GCMs to tell us anything about mitigation.
320. anoilman says:
Steven Mosher: So you didn’t ask an expert. Hmm..
I've often found it valuable to ask someone who knows the right stuff. They often have insight that I may not. Just because you think it might be a concern doesn't mean it actually is a concern.
While many companies may have resolved the sharing issues, I’d say many many more haven’t. Many companies I’ve been at have walked from IP sharing agreements. Part of the problem for climate models may be incomplete ownership. Furthermore open source now supports closed binary blobs (no source share) to satisfy businesses that don’t want to share.
I have to say that I think it should be sharable, but then you may not know what was put in way back. Some of this stuff is really, really old, and practices may have been different back then. Retrofitting software licenses can be an expensive proposition. It's potentially a full rewrite for open source. (Luxrender is undergoing that now; it's cost them years. But the new license will finally allow the render engine to be included in 3rd party applications.)
321. Having a blast not doing GCMs while modeling ENSO. All the Mathematica code fits on a single page, about 20 lines.
322. BBD says:
mt says
The mitigation horse has long since left the barn. “Zero net emissions as fast as feasible” is the only sane course; all we need to be discussing is how fast is feasible. We don’t need GCMs to tell us anything about mitigation.
This has been repeated by mt several times and was part of ATTP’s original response to mwg. It was also my original point. This is not about the bloody models.
So the endless back-and-forth about ‘issues’ with the models serves only one purpose – to generate a miasma of doubt and uncertainty tied to the incorrect (and never withdrawn) claim by mwg that ‘the models’ have an undue influence on policy.
To paraphrase the estate agent’s (realtor’s) mantra: insinuation, insinuation, insinuation.
323. Eli Rabett says:
This is no barrier to developing a standard model
*********************’
Money and time and slots. These sort of things can only be done as line items in a national science agency budget. There simply is no way to get grants to do this as the response will be “nothing new here, don’t fund” and even at the agencies/labs it is not clear whose mission fits.
Moreover, the update cycle will kill any such effort as new methods/information comes in.
324. KR says:
I’ve dealt with (and created) validated models and software that has on occasion been subjected to qualification tests by examinations and against standards (FDA and pharmaceutical review, for example).
But there’s a problem with those software platforms – validation and verification are expensive, and once that expense has been incurred changes are not welcome. Meaning that as the science develops, the ‘standard model’ won’t include it at anything like the rate of scientific development, simply because of the cost of V&V.
I cannot count how many times I’ve had the following conversation:
“We need a bug fix for problem XXXX – can you patch it for us?”
“XXXX was fixed two releases ago, the patch is to update to the latest version.”
“NOOOoooooo – we would have to recertify the new version! Whine whine whine… I guess we’ll live with it.”
Quite frankly, insisting on a 'standard model' means insisting on an outdated model, several iterations of knowledge behind the state of the art. Not a good idea at all.
325. Steven Mosher says:
“Steven Mosher: Thanks. That was clear and concise.
Have you posed such notions to any experts?”
Experts in what?
1. Production software?
2. Bringing Research Codes into a Production Mode?
3. experts in GCMs?
4. Experts in bring GCMs under production software rules?
Experts in WHAT?
I consider this to be a problem that would require input from #2: those with experience
bringing research codes into production mode. "Fork it" is not a deep concept. The question isn't how you do it. The first newsworthy contract we won from the Air Force was to bring research simulation code into a production mode. And yes, we keep a research fork alive
and thriving.
As ATTP notes there is nothing strange unique or unreasonable about what I propose.
The only cogent objection is “mt’s” objection. That GCMs are not needed for policy
and so creating a standard model would just be busy work.
Ask yourself why mt is able to come up with the best counter-arguments.
A) he knows more about code than you do
B) he knows more about GCMs than you do.
C) he is past personalizing our disagreements.
I’ll suggest you pick up mt’s argument and spend brain cycles on that.
326. Steven Mosher says:
“Quite frankly, insisting on a ‘standard model’ means insisting on a outdated model, several iterations of knowledge behind the state of the art. Not a good idea at all.”
Gosh this is so simple to solve.
The same argument is made repeatedly about standardizing models.
The procedure is simple.
You have a standard set of tests and metrics. Your standard model has a documented performance on these metrics. If someone's research code outperforms that, any decision maker can take that into consideration. And of course you threaten the company that maintains the standard that they can be replaced.
As a developer of research codes your dream is to get your stuff into the standard.
But lets take your argument and flip it on its head! lets do a little judo on your butt
Standard models are outdated!! Agreed
yup— throw out MODTRAN!!
Good argument dude you just scored one for the skeptics.
327. Steven Mosher says:
“This has been repeated by mt several times and was part of ATTP’s original response to mwg. It was also my original point. This is not about the bloody models.”
As mt and you argue it is about the policy.
Anything that could POSSIBLY cause a bump in the road to policy has to be objected to.
No other principles need apply. ever. the planet is at stake.
1. So we defend data hiding
2. So we defend opaque processes.
3. so we defend inferior models
4. so we refuse to compromise on anything
because policy comes first and foremost.
and oh ya we dont have the money to do it right. thanks Eli. agreed.
328. WebHubTelescope says:
Lots of effort wasted working on huge GCMs to figure out behavior of ENSO. And then they end up punting anyways and deem it stochastic, or chaotic and too dependent on initial conditions.
Yet the recent finding is that ENSO is clearly deterministic and the cyclic period will be as easy to figure out as the tides.
Here are some sour grapes for making some whine:
“But I didn’t run my career well enough to get to attempt this. Somebody else ought to try it.”
The guy wants some "intermediate complexity" dynamic models, yet dismisses what I have to offer. Jeez.
329. anoilman says:
Steven Mosher: Thanks for those questions, but it sounds like JAQing to me.
What experts?
I'd start with the people working on the models and work around from there. Does that sound reasonable? It might help you figure out if you need to find someone else to talk to, for instance. You're the one with an opinion; who do you think would find it interesting? You pointed to drift in one model as a potential concern. Oh ma gosh! I bet the model builders know a lot more about that than you. Just saying. I mean, they measured it for a reason. Right?
It's strange, unreasonable and unusual to present your claims/concerns to a group of people wholly unrelated to the subject at hand, while simultaneously you've never contacted the one group of people who might have something to say about it. Seem reasonable?
Just so you know, I've contacted a key scientist about material/concerns that I was interested in. I was enlightened, as was the scientist in question.
I've long since been warming to what mt has said, but I haven't seen anything I consider a concern.
330. KR says:
Re: MODTRAN:
“With over a 30 year heritage, MODTRAN® has been extensively validated, and it serves as the community standard atmospheric band model.”
Yes, this has been validated – by use. Not by qualifying against a panel of pre-defined standards on each iteration (although I expect that’s part of their internal release process), mind you, but by demonstration of validity over time leading to broad acceptance. Community tested, not committee tested.
I’ll say it again – I’ve had the validation conversation over and over, and groups without bottomless funding _will_ be resistant to change due to the cost and time of re-validating against a standards array every time you update the physics. And all of your military examples are of models used to replicate conditions, avoiding much of the cost and risk of physical tests – they are not models used to investigate phenomena or physics what-if questions.
GCMs are constantly being validated in the same fashion as MODTRAN – tested against observations and by evaluating the fidelity of the replication of various physical processes. However, what you _appear_ to be suggesting – an external panel of V&V tests to qualify as meeting a ‘standard’ – seems IMO to be both limited (which parts of the physics need to be in the panel of tests? All of them?) and limiting (as our observations gain detail and our physical understanding changes, the standards themselves may turn out to be in error). Not to mention that different models will have different strengths, for example in the replication of ENSO dynamics – is your ‘standard’ model going to require a super-set of capabilities?
“You have a standard set of tests and metrics. your standard model has a documented performance on these metrics.” – This only works if you’ve pre-defined the results possible from the GCM – or if you’re not testing the portions that can reveal new relationships. In which case you’re not going to learn much new from standardized models. The kind of testing you’re proposing would certainly prevent ‘doh’ errors, but it wouldn’t and couldn’t test how well the models generate new information.
Far better, IMO, to put the GCM results out in as much detail as possible, as in the CMIP runs, and evaluate model veracity accordingly.
331. mt says:
Mosher, so many straw men in such a short piece.
First of all, “As mt and you argue it is about the policy.”
No. Climate modeling is about science, science which is not relevant to mitigation policy. It is potentially relevant to adaptation policy and to remediation (geoengineering) policy, but it remains speculative whether such an advance is possible.
“Anything that could POSSIBLY cause a bump in the road to policy has to be objected to.”
That at least is a good question, especially given how successfully bumps have been wilfully constructed over the past two decades. I think, though, it is fair to say that a given question is not relevant to mitigation policy. Attacks on climate modeling are a red herring. We should strictly separate the very interesting and scientifically important questions which are of interest to a scientifically curious audience from the crucial policy questions which affect everyone and which any citizen should be aware of. The relevance of models to the global mitigation question is arguably negligible within the policy time scale window. I believe that, and do so argue. It’s a constructed speed bump, not a real issue.
“No other principles need apply. ever. the planet is at stake.”
I make no such claim. I frequently remind people that there are multiple existential threats and we have to respond effectively to all of them. Sacrificing principle to solve one problem could easily prevent progress on another. So putting these words in my mouth is unacceptable to me.
“1. So we defend data hiding”
As far as I know, the only instance of “data hiding” that arguably comes from real scientists was an informal communication, the cover of a WMO report, in which an inconvenient part of Briffa’s data was willfully truncated. I have never defended this. But I object to any characterization of habitual data hiding, or myself for one defending it.
“2. So we defend opaque processes.”
Not me.
I am an enthusiast for restructuring science, open science, reproducibility, and finding some way to filter nonsense other than credentialism. I'm absolutely and adamantly opposed to the current for-profit journal-as-gatekeeper system.
Climatology is no exception among the sciences in having this problem. (Computer science has long been taking the lead in overcoming it.) Attacks on climate science in particular under this rubric make enemies for the open science movement.
“3. so we defend inferior models”
In my defense I offer pretty much everything I’ve written on this thread, so no.
“4. so we refuse to compromise on anything”
Well, I’m not sure what that’s about. Regarding the topics of this thread, where we aren’t discussing policy but merely the science that informs policy, I’ll admit that I won’t compromise what I see to be true with half-truth.
” oh ya we dont have the money to do it right”
Well, after decades of overstated and sometimes baseless attacks, largely driven by policy and not science, climate science is not especially popular in congress.
So, in short, no, no, no, no, no, huh, and yeah right to your “yeah right”.
332. mt i know you are the exception.
333. mt says:
“mt i know you are the exception” How so? You mentioned me explicitly.
334. Steven,
I must admit you did seem to imply that people here who had made arguments about the policy relevance of GCMs were also doing the various things that you mention in your comment. I don’t think that that is true. I’ll add a comment about defending data hiding, opaque processes, etc. I’m with MT in this regard; everything should be as open as is possible (there are reasonable exceptions). However, that doesn’t mean that we should all be condemning and demonising all those who may not have lived up to these ideals in the past.
335. BBD says:
Steven
because policy comes first and foremost.
Physics comes first and foremost. Tactical rhetoric about teh models only arises because policy comes first and foremost.
Mt.. not everyone engages in every one of the "look the other way" types of behaviors.
But I think in general, if one cares about the policy, there will come a time and place
where you bend other principles.
some people make excuses on transparency some people make excuses on model testing..
maybe some people abandon their normal forgiving natures.
some people are just silent whereas they normally would object.
Its those points of friction that I find really interesting . I find it interesting when people talk about giving ammunition to skeptics..
For me the friction is with my libertarian politics.
I cant imagine anyone who doesnt or hasnt experienced some ethical friction
337. BBD says:
I cant imagine anyone who doesnt or hasnt experienced some ethical friction
Insinutate! Insinuate! (ClimateBall Dalek)
338. mt says:
BBD says I shouldn’t take the bait. But I’m obviously foolish that way.
Yes. Of course the tension between precision and solidarity is a real factor in the life of anyone competently attempting to link complex evidence to complex policy. Stipulated. I agreed with Gaius Publius on that on Twitter recently. I agree with your 11:13 pm. I think it’s a good point.
(Steve Schneider’s reputation endured all sorts of hits for saying that rather hamhandedly, you’ll recall. http://climatesight.org/2009/04/12/the-schneider-quote/ Once you know your stuff, you’re constantly trading off being pedantically correct against being close to correct but more effective. It’s just a fact of life.)
But Mosher, that doesn’t justify your outburst of 4:28 pm ATTP time today and your peculiar attempt to suggest you never meant it as a provocation. Nor that the post wasn’t in part about me despite you naming me specifically. I can’t even tell if you’re serious about that.
“As mt and you argue it is about the policy.
Anything that could POSSIBLY cause a bump in the road to policy has to be objected to.
No other principles need apply. ever. the planet is at stake.”
but
“mt i know you are the exception”
Please. What’s the punchline? Because you must be joking.
Really, I’d like you a lot better if you found it in your heart once in a while to apologize rather than sneakily try to walk your literal claims back while leaving the dog whistles in place, like a backbench politician. Own up to saying things, and own up to changing your mind when you do, please.
One may have to bite one’s tongue on occasion but there’s no need to waffle and prevaricate.
339. dhogaza says:
Mosher/MT:
““Anything that could POSSIBLY cause a bump in the road to policy has to be objected to.”
That at least is a good question, especially given how successfully bumps have been wilfully constructed over the past two decades. I think, though, it is fair to say that a given question is not relevant to mitigation policy. Attacks on climate modeling are a red herring.”
Attacks on climate modeling would continue even if the models were delivered by God on stone tablets on a mountain top marked with a burning bush.
So, yes, red herring. And Mosher would be one of those continuing to spread FUD.
340. Steven Mosher says:
No that’s an invitation not an insinuation.
I pointed at my ethical friction, as a libertarian I am willing to comprimise certain principles to keep us more safe than we would be if we did nothing.
I can’t imagine that a issue this complex would not present others with similar friction points.
whether it be remaining silent over something like Gleick, or any number of other points.
It’s no great flaw to announce that your principles have priorities.
then again That may mean I have a limited imagination.
341. Joshua says:
“I pointed at my ethical friction: as a libertarian I am willing to compromise certain principles to keep us more safe than we would be if we did nothing.
I can't imagine that an issue this complex would not present others with similar friction points.”
As a non-libertarian, I am willing to compromise certain principles to keep us more safe than we would be if we did nothing.
I can’t imagine that an issue this complex would not present more self-identified libertarians (and self identified non-libertarians) with similar friction oints.
342. Joshua says:
==> “then again That may mean I have a limited imagination.”
I don’t think that it’s a matter of your lack of imagination. I think that basically everyone, depending on context, recognizes those friction points of which you speak. Certainly, we can point to many situations where libertarians and non-libertarians alike respond to risk in similar fashion.
IMO, what we see with issues like climate change is that the polarized context stimulates cultural cognition, which alters risk analysis relative to what we see in non-polarized contexts. The problem is that people view their response as a kind of ID tag (“See. I’m a libertarian and my refusal to go along with a statist, authoritarian cabal who wants to impose climate change mitigation so they can tax me and destroy capitalism is proof of my identity. So I wear this ‘skeptic’ badge.”)
Additionally, of course, there are also characteristics of risk assessment even in non-polarized contexts where the risk is dramatic but perhaps improbable, and where the risk plays out on long time horizons, where more typical “friction point” responses become affected.
343. anoilman says:
Steven Mosher: Actually I’ve been attempting to elicit a reasonable business case. Change for change’s sake is not a successful policy for anything. So far I haven’t heard a reasonable business case.
Someone running around saying something might be wrong isn’t unusual, even in business. The gauntlet that you have run is to make a business case;
“What is the concern?” (Not entirely sure…)
“What do others think about it?” (Never asked anyone who would be affected… )
“How will you solve the concern?” (You hope for public reviews? Install more metrics?)
“What will be the result of solution?” (Not sure…)
To top it off, you have another guy on the other side of the table saying…
“I’ve run a public peer reviewed review of the models showing no concerns.” The bar is set pretty high for you.
I’m not saying you’re wrong, but to my eyes… meh. You got nothing but hand waving.
344. Reduce the Navier-Stokes equations to the wave equation, apply the QBO angular momentum changes as a near-periodic forcing on the equatorial Pacific thermocline, and voila, you have a very decent model of ENSO behavior over multiple decades and centuries.
This essentially explains a significant fraction of the world’s temperature anomaly.
Who needs a GCM?
345. WebHubTelescope says:
Recall all the discussion of the need for GCMs to model the complexity of climate variability. Imagine what kind of ghosts and goblins inhabit the scientific code base.
Then consider how simple a model of the seemingly intractable behavior of ENSO one can construct:
http://contextearth.com/2015/08/17/enso-redux/
Mainstream earth sciences has forgotten the utility of applying first-order physics, while every other scientific discipline applies it routinely.
346. mt says:
WHuT’s finest performance to date – he has pretty well matched a dozen wobbles with only 50 or so magic numbers. “First order physics” isn’t curve fitting.
There’s an easy test, of course. See if the “model” has any predictive skill. So given your excellent match through 1913 in the above, what happens after that?
347. > Teh models
You mean teh modulz.
348. Hank Roberts says:
“… continuing the integration beyond 1913 on to 1980, the fit shown in Figure 2 …”
349. russellseitz says:
From the great sucking sounds economic bubbles produce, it is clear that the Navier-Stokes analogy is sound on one point:
Markets are prone to cavitation when things get spun too fast.
350. Hank Roberts says:
> cavitation
I suspect that’s a brilliant observation the economists will utterly ignore
————-
On that model:
> Only when the integration reaches 1981-1982 does a real discrepancy
> occur. See Figure 3 ….
So I’m metamodeling the slosh model as a simple pendulum, a swing occupied by a kid who is sitting without wiggling too much.
Until 1981-1982, when the kid learns how to pump a swing by timing when to extend and when to pull in …. that is, the point at which the warming signal emerges.
Too simple, I trust.
351. Neutron-Powered, High Side, Sideways Racer says:
What this equation is essentially saying is that the density, ρ, in a particular volume cannot change unless there is a net flux into that volume.
The density can change, even in the case of zero net flux, if either of the two thermodynamically independent variables in the Equation of State (EoS) for the material change. Two thermodynamically independent variables that are frequently used in an EoS are pressure and temperature. Energy addition through the walls of a closed container, for example, will cause density change.
It also follows that in the case of zero net flux the density can change if the energy content of the fluxing material is different from the initial energy content of the material to which the equation is applied.
The statement quoted above actually doesn’t have consistent units; the density and the flux have different units.
What the equation is actually saying is that the time-rate-of-change of the density is determined by the divergence of the flux field. This does not refer to the density itself, but instead the time-rate-of-change of the density. Consider the case of an incompressible fluid, for which the general formulation holds, and the density is constant. The equation then reduces to state that the velocity field is divergence free.
Mass is conserved, not density. An equation for mass conservation is obtained by integrating the presented equation over a volume that contains the material of interest. The volume integral of the divergence of the flux is transformed by application of the Gauss Theorem to the surface integral of the flux over the surface bounding the volume. The time-rate-of-change of the mass, dM/dt [=] kg/sec, can change only if there is a net non-zero mass flux, rho*u*A = W [=] kg/sec, through the surface bounding the volume. Where [=] means “has units of”.
352. Neutron,
I think you’re thinking Lagrangian, not Eulerian. In the equations I used the assumption is that the volume remains unchanged. Hence the density can only change if there is an addition of mass. In a Lagrangian framework you follow the fluid, rather than work on a fixed grid. In a Lagrangian framework the local density can change, but that’s because you’re not working in a framework in which you’re assuming that the simulation volume remains unchanged.
The statement quoted above actually doesn’t have consistent units; the density and the flux have different units.
Yes, it does have consistent units. A change in density is density over time. The gradient of the flux is density times velocity divided by a length, which also has units of density over time.
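The dimensional bookkeeping behind that reply can be spelled out for a continuity-type equation (a sketch in SI units, added for reference; it assumes the equation under discussion is the usual continuity equation for mass density ρ and velocity u):

```latex
\frac{\partial \rho}{\partial t} = -\nabla\!\cdot(\rho\mathbf{u}),
\qquad
\Big[\frac{\partial \rho}{\partial t}\Big] = \mathrm{kg\,m^{-3}\,s^{-1}},
\qquad
\big[\nabla\!\cdot(\rho\mathbf{u})\big]
  = \frac{(\mathrm{kg\,m^{-3}})(\mathrm{m\,s^{-1}})}{\mathrm{m}}
  = \mathrm{kg\,m^{-3}\,s^{-1}}.
```

Both sides carry units of density per time, which is the consistency claim being made.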
353. Actually this is completely wrong
Energy addition through the walls of a closed container, for example, will cause density change.
Density is simply mass divided by volume. If neither the mass nor the volume change, then the density cannot change. Adding energy will change the pressure and – consequently – the temperature, but can’t change the density if the mass and volume are fixed.
This site uses Akismet to reduce spam. Learn how your comment data is processed. | 2021-10-25 00:06:28 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 9, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.4489390254020691, "perplexity": 1667.751454500669}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-43/segments/1634323587608.86/warc/CC-MAIN-20211024235512-20211025025512-00087.warc.gz"} |
https://gateoverflow.in/user/MiNiPanda/questions | # Questions by MiNiPanda
1
How many strings of three decimal digits do not contain the same digit three times? have exactly two digits that are 4s? I know the question is easy but the answer is not matching with the one given over here. Please someone verify. Does the word “string” mean that we can take 0 as the first digit as well?
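A quick brute-force check (my own sketch, not part of the original question; it assumes a "string" may start with 0) reproduces the usual counts of 990 and 27:

```python
# Enumerate all 3-digit strings "000".."999" and count the two cases.
strings = [f"{i:03d}" for i in range(1000)]

not_all_same = [s for s in strings if not (s[0] == s[1] == s[2])]
exactly_two_fours = [s for s in strings if s.count("4") == 2]

print(len(not_all_same))       # 990  (= 10^3 - 10)
print(len(exactly_two_fours))  # 27   (= C(3,2) * 9 choices for the non-4 digit)
```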
2
In the given network system, station A needs to send a payload of 1600 B from its network layer to station B. If fragmentation is done, then the actual data size to be transmitted is ______________
3
Consider the following ER diagram: How many number of relations are required for the above ER diagram? 2 3 5 1 Solution: My doubt is: Since $E_2$ isn't involved in total participation with $E_1$ so on merging we might get 2NF violation. Eg: Let $E_1$ be: <p1,q1> ... $p \rightarrow q$ 2-NF violation. So by default which case is to be considered when normalization form is not mentioned?? 1-NF?
4
Consider the following language over ∑={0,1} $L_{1} = \left \{ a^{\left \lfloor \frac{m}{n} \right \rfloor}| m,n \geq 1; n<m \right \}$ $L_{2} = \left \{ a^{m^{n}}| m,n \geq 1; n<m \right \}$ Which of them are regular? Both L1 and L2 Only L2 Only L1 None Ans. A. Both Please explain.
5
2’s complement representation of the number $(-89)_{10}$ is 7 5 4 3 I don’t understand their solution. Please help.
6
Consider a TCP connection using the multiplicative additive congestion control algorithm where the window size is 1 MSS and the threshold is 32 MSS. At the $8^{th}$ transmission timeout occurs and enters in the congestion detection phase. The value of the window size (in MSS) at the ... end of $12^{th}$ transmission. So we have to take the window size after the 12 RTTs right and not at 12th RTT?
7
Let L be the language of all strings on [0,1] ending with 1. Let X be the language generated by the grammar G. $S \rightarrow 0S/1A/ \epsilon$ $A \rightarrow 1S/0A$ Then $L \cup X=$ ∅ ∑* L X Ans given : B. ∑* They said that X is a language which contains all strings that do not end with 1. But is it so? Can’t we generate 11 from the grammar? Please verify.
8
Consider the hashing table with m' slots and n' keys. If the expected number of probes in an unsuccessful search is 3, the expected number of probes in successful search is _____(Up to 2 decimals) Ans. 1.647 Here by default which hashing should be considered? ... the formula given here in the table http://cs360.cs.ua.edu/notes/hashing_formulas.pdf With linear hashing, I am getting around 1.61
9
Consider the following function foo() void foo(int n) { if(n<=0) printf("Bye"); else { printf("Hi"); foo(n-3); printf("Hi"); foo(n-1); } } Let P(n) represent the recurrence relation indicating the number of times the print statement (both Hi and Bye included) is executed ... 2 = 1+P(-2)+2=3+P(-2) But nothing is mentioned about P value when n<0. How to solve for P(-2) and other negative values?
10
A packet of 20 batteries is known to include 4 batteries that are defective. If 8 batteries are randomly chosen and tested, the probability that finding among them not more than 1 defective is Ans: 0.5033 Solution provided: How can we apply Binomial distribution here? The batteries are chosen without replacement ...
1 vote
11
Consider the following 2 functions P and Q which share 2 common variables A and B: P() Q() { { A=A+5; A=B+6; B=A-3; B=A-2; } } If P and Q execute concurrently, the initial value of A=2 and B=3 then the sum of all different values that B can take ____ (do not count B=3) Why have they taken intermediate values of B like 4..? :/
12
In a 4-bit binary ripple counter, for every input clock pulse All the flip flops get clocked simultaneously Only one flip flop get clocked at a time Two flip flops get clocked at a time All the above statements are false Ans. D Why not B? The i/p clock is given to the LSB flip flop isn’t it?
13
Triangles ABC and CDE have a common vertex C with the side AB of triangle ABC being parallel to side DE of triangle CDE. If the length of side AB=4 cm and length of side DE=10 cm and perpendicular distance between sides AB and DE is 9.8 cm, then the sum of areas of ... CDE is ________ $cm^2$ . Now here I don't understand why they have taken BCD and ACE on the same line! Isn't this possible?
14
Given M = (Q,Σ,δ,q0,F) a DFA with n states. Prove: The language L(M) is infinite iff it contains a string with length t, where n ≤ t < 2n. Please provide a proof. I am not getting it from the resources available on the net. Why isn't the criterion like ... if we get a loop in the DFA we can accept an infinite number of strings, isn't it? Then why doesn't this condition suffice? Please point out where I am going wrong.
1 vote
15
How does option B ensure Bounded Waiting? Process A can keep on entering the CS while process B tries to enter the CS but keeps on spinning over Wait(P). I mean if there is no context switch from A to B then A can keep on visiting the CS as many times as possible(no bound on the no. of times it enters) making B to wait.
16
Doubt 1 : The answer is 3. When it is asked to find min. no. of temporary variables then we get 3. But here temporary is not mentioned still we have to assume that temporary is implicit? Doubt 2: The answer given is 4. But my process is giving 2. What is ... According to the rules it shouldn't be in a separate block right? Doubt 4: Is this statement True or False. Please give reason to support.
17
The CPU of a system having an execution rate of 1 million instructions per second needs 4 machine cycles on an average for executing an instruction. On an average, 50% of the cycles use memory bus. For execution of the programs, the system utilizes 90% of the ... : in-status, check-status, branch and read/write in memory, each requiring one machine cycle. Please explain the solution with details. | 2020-09-30 06:41:11 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.602755606174469, "perplexity": 1092.7240409013418}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-40/segments/1600402118004.92/warc/CC-MAIN-20200930044533-20200930074533-00694.warc.gz"} |
https://www.alignmentforum.org/users/ramana-kumar | Ramana Kumar
Time in Cartesian Frames
I have something suggestive of a negative result in this direction:
Let be the prime-detector situation from Section 2.1 of the coarse worlds post, and let be the (non-surjective) function that "heats" the outcome (changes any "C" to an "H"). The frame is clearly in some sense equivalent to the one from the example (which deletes the temperature from the outcome) -- I am using my version just to stay within the same category when comparing frames. As a reminder, primality is not observable in but is observable in .
Claim: No frame of the form is biextensionally equivalent to
Proof Idea
The kind of additional observability we get from coarsening the world seems in this case to be very different from the kind that comes from externalising part of the agent's decision.
Eight Definitions of Observability
With the other problem resolved, I can confirm that adding an escape clause to the multiplicative definitions works out.
Eight Definitions of Observability
Using the idea we talked about offline, I was able to fix the proof - thanks Rohin!
Summary of the fix:
When and are defined, additionally assume they are biextensional (take their biextensional collapse), which is fine since we are trying to prove a biextensional equivalence. (By the way this is why we can't take , since we might have after biextensional collapse.) Then to prove , observe that for all which means , hence since a biextensional frame has no duplicate columns.
Eight Definitions of Observability
I presume the fix here will be to add an explicit escape clause to the multiplicative definitions. I haven't been able to confirm this works out yet (trying to work around this), but it at least removes the counterexample.
Eight Definitions of Observability
How is this supposed to work (focusing on the claim specifically)?
and so
Thus, .
Earlier, was defined as follows:
given by and
but there is no reason to suppose above.
Time in Cartesian Frames
It suffices to establish that
I think the and here are supposed to be and
Eight Definitions of Observability
Indeed I think the case may be the basis of a counterexample to the claim in 4.2. I can prove for any (finite) with that there is a finite partition of such that 's agent observes according to the assuming definition but does not observe according to the constructive multiplicative definition, if I take
Eight Definitions of Observability
Let
nit: should be here
and let be an element of .
and the second should be . I think for these and to exist you might need to deal with the case separately (as in Section 5). (Also couldn't you just use the same twice?)
Eight Definitions of Observability
UPDATE: I was able to prove in general whenever and are disjoint and both in , with help from Rohin Shah, following the "restrict attention to world " approach I hinted at earlier.
Eight Definitions of Observability
this is clearly isomorphic to , where , where . Thus, 's agent can observe according to the nonconstructive additive definition of observables.
I think this is only true if partitions , or, equivalently, if is surjective. This isn't shown in the proof. Is it supposed to be obvious?
EDIT: may be able to fix this by assigning any that is not in to the frame so it is harmless in the product of s -- I will try this. | 2021-03-05 01:43:18 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 1, "x-ck12": 0, "texerror": 0, "math_score": 0.8864768147468567, "perplexity": 1342.5818267258262}, "config": {"markdown_headings": false, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-10/segments/1614178369553.75/warc/CC-MAIN-20210304235759-20210305025759-00568.warc.gz"} |
http://tex.stackexchange.com/questions?page=1773&sort=newest&pagesize=15 | # All Questions
2k views
### How can I an place a figure next to an equation?
I'm new to LaTeX. This is my first script, an exercise. I'm try to place a 4 figures (SCfigure) I already charted with KmPlot next to its correspondent equations, but they always appear on the next ...
669 views
### How can I number the references in a bibliography irrespective of their order of appearance in a document?
I'm putting together a linguistics bibliography using lsalike.bib and lsalike.sty. I'm generating it by using \nocite{*} and would like the references to be numbered (but kept in the usual order). ...
161 views
### Too many erros after upgrading TeX
I am getting too many errors since I upgraded my computer with the latest version of TeX. For instance, the following piece of code doesn't work any more. Could you help me do a clean install and \ ...
1k views
### Enumerate over two columns in tabular environment
I can't seem to figure out the minute details needed to use enumerate inside a tabular environment. If I just sort of brute force the code I can get something similar to what I'm trying to achieve, ...
278 views
### Without using intersection method, can we find the following vector component?
Given that two vectors, i.e., green and red ones. The objective is to find the blue one such that the resultant of the green and blue ones is parallel to the red one and with the same direction as the ...
249 views
### What's the best way to align binary operators to the right of a relation symbol?
I know that you're generally told to align binary operators to the right of a relation symbol like this (using {align*}): \begin{align*} x={}&a+b+c\\ &+d+e+f \end{align*} But would it ...
166 views
### Paragraph titles disappearing
So, I'm trying to use the listings package for some code samples in my document. When I converted my verbatim sections to lstlisting my paragraph titles disappeared. I've attached some example code ...
2k views
### framed or mdframed? (Pros/Cons)
I'm currently thinking about using shaded and framed sections in my document. But for now, I couldn't clarify the differences between framed and mdframed (or other packages). Although the ...
1k views
### Globally set a word in Small Caps (or italics)
I am using Lyx with Preamble. I am working on a project where the abbreviation "hon." needs to be set to small caps globally. (/sc). Tried to use \renewcommand, no luck there.
3k views
### How can I repeat the header but not the caption with longtable?
I am trying to set up a longtable where the header is repeated on every new page but the caption only appears once at the top of the page. The table is defined as follows: \begin{longtable}{|l|l|l|} ...
1k views
### Tikz: joining points on a circle
I have the following figure I would like to draw portions of circles between some of the red points. More explicitly, I would like to go from ac1 to ab1 and then to ac2 following circle A, then go ...
2k views
### How can I add Section numbering and a Contents entry to the PDF bookmarks?
I'm having a bit of trouble with the PDF bookmarks. I am trying to Add "Contents" as an entry to the PDF bookmarking. Have numbering for my sections and subsections. I am looking to achieve ...
318 views
### passing text to alltt embeded in command (extra newline needed)
I wanted to close alltt inside a command (to add some ornaments around the text), but the spaces that were used for putting the monospaced text into columns was not preserved. I found the solution ...
3k views
### Longtable caption below table in LyX
In LyX I've created a longtable. In LyX the caption is below the table but in the PDF it is shown above the table. Is there a way to force the caption to be below the table in the PDF without ...
7k views
### Which document class for writing resume? [duplicate]
Possible Duplicate: LaTeX template for resume/curriculum vitae I am trying to create a new resume in LaTeX previously I used Scribus but not happy with the formatting and pdf size. To ...
530 views
### Romanian fonts in bibtex
I need romanian characters ş and ţ in citations. For this I updated chicago style (chicago.bst file) and for et. al. i replaced: first " ş.a. " but i got ?.a. instead second " \cb{s}.a." but i get ...
1k views
### Repeat the same reference in footnote on different pages
I'm using BibLaTeX to handle my bibliography and I want a reference to be placed in a footnote on every page where I cite the same work and have the same number. Also I want to be able to cite the ...
226 views
### Include pdf-graphic without including the whole pdf
I have drawn a lattice in latex with tikz, successfully compiled it to a pdf and been able to include it in my document. However, the whole PDF (i.e. A4 page) is included and I only want to include ...
2k views
### How to highlight (colorize) the syntax of configuration files (like .ini or .conf)?
I need to highlight the syntax of files containing lines with parameters and their values in LaTeX. Let's say I have a following file: # some comment parameter1 = value of parameter1 parameter2 = ...
189 views
### Name “Chapter” is not showed in the table of the content
In my LaTeX code I add the line \tableofcontents And begging of each chapter i have these lines. \renewcommand{\vspace}[2]{}\chapter{ } {\huge\bf Introduction} ...
223 views
### width multicolum not sum of combined cells
I have the following table, but the multicolumn cell width is not the same as the sum of the cells it is combined from. I assume, this is caused by some margins in each cell, i.e. that the cel width ...
1k views
### Cannot customize the footer of the classicthesis package
I am using the classicthesis package as a template for my document. It adds a footer to the document when it is in draft mode. The footer is configured in classicthesis.sty by the following lines. % ...
3k views
### Do I need fontenc and inputenc
My question feels like a duplicate of: fontenc vs inputenc, but in that question the OP uses German and non-ascii characters. I speak and write in English exclusively. My keyboard is the standard ...
3k views
### Extending the width of caption for subfigure
The caption of the subfigure follows the size of the figure/table. Is it possible to stretch the width so that it does not follow the size. \documentclass[11pt,a4paper]{report} \usepackage{ ...
295 views
### Objective: To separate text column from figures column in every page
I don't know if this has been asked before. But so far nothing has come up with my search. In some books, the figures are placed at the outer side of the page like in the picture below. Is this ...
1k views
### itemize without pagebreak
I have the following structure of my lists (commonly): ..... Here is the list: * item1 * item2 .... I would like to make my list inseverable (the list with its 'caption' won't be divided by ...
768 views
### Accessing randomly selected problems via the probsoln package
I have two datasets, each containing problems dealing with a specific topic. These problems are stored in external files (again, one file per topic) and loaded into the datasets in a random order. I ...
505 views
### natbib, hyperref and citation inside a float figure
I use Lyx and MikTeX 2.9 to write my thesis based on book KOMA-script. Right at the finish I have an error at pdflatex run: Extra }, or forgotten \ endgroup. <argument> ... \IeC {\c s}i ...
135 views
### Defect of measuring into the count of alphabet width
Question: Why in the example below I get (with CM at 10pt) \alphabet=342.93138pt and \myalphabetwidth=342.6536pt why I have this difference between the two measures? What is the more correct? ...
2k views
### using [] in the caption of the subfigure
The subfigure caption uses []. Because of that, I am unable to use [] in its caption. \subfigure[ This [] is important] How do I use the brackets in the caption of the subfigure?
7k views
### pgfplot: plotting a large dataset
I try to plot a dataset which is large using pgfplots. Since I'm aware of problems with large files, I used the external mode. I additionally increased main_memory from 3000000 to 6000000. It crashes ...
1k views
### creating a fifo symbol with pgfdeclareshape
I'm compiling with: pdfTeX, Version 3.1415926-2.3-1.40.12 (MiKTeX 2.9 64-bit) (preloaded format=pdflatex 2011.7.8) pgf 2008/01/15 v2.10 (rcs-revision 1.12) tikz 2010/10/13 v2.10 (rcs-revision 1.76) ...
864 views
### Create landscape image page in pdf, without breaking text flow
Here is an MWE. If you compile this to PDF, you get a half-empty page on the first page, as the last part of the text doesn't come up, but is forced onto a new page. Compare this with the same thing, ...
9k views
### How to center one node exactly between two others with TikZ?
How can I center a TikZ node exactly between two others? Hypothetically, \node (a) {a} \node (c) [right of=c] {c} \node (b) [between={a,c}] {b}
370 views
### Regular Expression for logical implication?
I am massively unfamiliar with the usage and syntax of regular expressions so the following problem is making my brain bleed. Background: I am taking logical expressions, parsing them, making some ...
5k views
I am using a MacOS and I want to copy the IEEEtran.cls file into the latex directory such that it is read automatically. I have a file in the directory ...
551 views
### One column figure blocks text in adjacent column
I am facing this strange problem where a figure in one column of a two-column page is blocking the second column. In other words, text is not going there in other column with a figure in the adjacent ...
6k views
### \$TEXMFHOME setting
I use texlive 2012 on ubuntu 12.04. I wanted my ~/texmf folder invisible, so I edited my texmf.cnf file which is in /usr/local/texlive/2012 directory: TEXMFHOME = ~/.texmf I rebooted my computer ...
483 views
### Align bottom of table cell when table cell has height
I currently have at able whose row's height is set, so I can control it. This is what it looks like: \begin{tabular}{ @{} p{1.60in} l @{} } \hspace{-0.8in}\textbf{Fundacion Tecnologica ...
273 views
### PDF Pages Included, but it does't work
I am writing my thesis, and I need to add a single pdf page in the appendix, so I found a way to use the pdfpage package. Normally I added the package, when I added the package and I continued writing ...
316 views
### ignoring chapter easily
When working on a book I'd like to "ignore"(not compile) all chapters but the current one. I know there are methods to use like include and such BUT I do not want to use external files or some complex ...
565 views
### Vertical node spacing
Ok. I'm trying to make the left look like the right. I'm kinda lost here. Suggestions? \documentclass{article} \usepackage{tikz} \usepackage[margin=0.5in]{geometry} \pagestyle{empty} ...
1k views
### Set Table height to fixed height
I found the \resizebox command but I don't want to scale my table. I want to be able to set it to X height and have it stay there. Is there a way to set the height of a table and leave it there no ...
83 views
### How to temporarily suppress image insertion when compiling?
There are some big images in my document and compiling is taking substantial amount of time. When compiling a tex file to a dvi (and subsequently ps and pdf), is there any way I can temporarily ...
423 views
### Format date without year
Using \formatdate from datetime, I can get a date properly formatted in multiple languages: \formatdate{3}{4}{2012} However, I would like to only use the day and month and leave the year behind. ...
3k views
### tikz: plotting a one column file [closed]
I use tikz/pgf to plot plots with data from a file. This works perfectly fine if I have at least to columns in the file that I can choose from. However, I have a rather huge file with just one column. ...
91 views
### Subtract height from column
The following latex example is kicking the third table down to the second page. But I don't need it on the second page, I need it all on the first and currently it's set at 3.5in but I really need it ...
128 views
### Node base error
Encountering a strange issue here: I have a node style that I have been using for many flow-charts. It's base has always been in the exact center of the shape, but now i'ts been translated up about ... | 2016-05-24 09:57:22 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9500699043273926, "perplexity": 2512.3366329524397}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-22/segments/1464049270527.3/warc/CC-MAIN-20160524002110-00234-ip-10-185-217-139.ec2.internal.warc.gz"} |
https://en.m.wikipedia.org/wiki/Bernoulli_number | # Bernoulli number
Bernoulli numbers B±n
n fraction decimal
0 1 +1.000000000
1 ±1/2 ±0.500000000
2 1/6 +0.166666666
3 0 +0.000000000
4 −1/30 −0.033333333
5 0 +0.000000000
6 1/42 +0.023809523
7 0 +0.000000000
8 −1/30 −0.033333333
9 0 +0.000000000
10 5/66 +0.075757575
11 0 +0.000000000
12 −691/2730 −0.253113553
13 0 +0.000000000
14 7/6 +1.166666666
15 0 +0.000000000
16 −3617/510 −7.092156862
17 0 +0.000000000
18 43867/798 +54.97117794
19 0 +0.000000000
20 −174611/330 −529.1242424
Graphs of modern Bernoulli numbers, B (circles) compared with Bernoulli numbers in older literature, B* (crosses) (Weisstein 2016).
In mathematics, the Bernoulli numbers Bn are a sequence of rational numbers which occur frequently in number theory. The values of the first 20 Bernoulli numbers are given in the adjacent table. For every even n other than 0, Bn is negative if n is divisible by 4 and positive otherwise. For every odd n other than 1, Bn = 0.
The superscript ± used in this article designates the two sign conventions for Bernoulli numbers. Only the n = 1 term is affected:
• B−n with B−1 = −1/2 is the sign convention prescribed by NIST and most modern textbooks (Arfken 1970, p. 278).
• B+n with B+1 = +1/2 is sometimes used in the older literature (Weisstein 2016).
In the formulas below, one can switch from one sign convention to the other with the relation ${\displaystyle B_{n}^{+{}}=(-1)^{n}B_{n}^{-{}}}$.
The Bernoulli numbers are special values of the Bernoulli polynomials ${\displaystyle B_{n}(x)}$, with ${\displaystyle B_{n}^{-{}}=B_{n}(0)}$ and ${\displaystyle B_{n}^{+}=B_{n}(1)}$ (Weisstein 2016).
Since Bn = 0 for all odd n > 1, and many formulas only involve even-index Bernoulli numbers, some authors write "Bn" to mean B2n. This article does not follow this notation.
The Bernoulli numbers appear in the Taylor series expansions of the tangent and hyperbolic tangent functions, in Faulhaber's formula for the sum of powers of the first positive integers, in the Euler–Maclaurin formula, and in expressions for certain values of the Riemann zeta function.
The Bernoulli numbers were discovered around the same time by the Swiss mathematician Jacob Bernoulli, after whom they are named, and independently by Japanese mathematician Seki Kōwa. Seki's discovery was posthumously published in 1712 (Selin 1997, p. 891; Smith & Mikami 1914, p. 108) in his work Katsuyo Sampo; Bernoulli's, also posthumously, in his Ars Conjectandi of 1713. Ada Lovelace's note G on the Analytical Engine from 1842 describes an algorithm for generating Bernoulli numbers with Babbage's machine (Menabrea 1842, Note G). As a result, the Bernoulli numbers have the distinction of being the subject of the first published complex computer program.
## History
### Early history
The Bernoulli numbers are rooted in the early history of the computation of sums of integer powers, which have been of interest to mathematicians since antiquity.
A page from Seki Takakazu's Katsuyo Sampo (1712), tabulating binomial coefficients and Bernoulli numbers
Methods to calculate the sum of the first n positive integers, the sum of the squares and of the cubes of the first n positive integers were known, but there were no real 'formulas', only descriptions given entirely in words. Among the great mathematicians of antiquity to consider this problem were Pythagoras (c. 572–497 BCE, Greece), Archimedes (287–212 BCE, Italy), Aryabhata (b. 476, India), Abu Bakr al-Karaji (d. 1019, Persia) and Abu Ali al-Hasan ibn al-Hasan ibn al-Haytham (965–1039, Iraq).
During the late sixteenth and early seventeenth centuries mathematicians made significant progress. In the West Thomas Harriot (1560–1621) of England, Johann Faulhaber (1580–1635) of Germany, Pierre de Fermat (1601–1665) and fellow French mathematician Blaise Pascal (1623–1662) all played important roles.
Thomas Harriot seems to have been the first to derive and write formulas for sums of powers using symbolic notation, but even he calculated only up to the sum of the fourth powers. Johann Faulhaber gave formulas for sums of powers up to the 17th power in his 1631 Academia Algebrae, far higher than anyone before him, but he did not give a general formula.
Blaise Pascal in 1654 proved Pascal's identity relating the sums of the pth powers of the first n positive integers for p = 0, 1, 2, …, k.
The Swiss mathematician Jakob Bernoulli (1654–1705) was the first to realize the existence of a single sequence of constants B0, B1, B2,… which provide a uniform formula for all sums of powers (Knuth 1993).
The joy Bernoulli experienced when he hit upon the pattern needed to compute quickly and easily the coefficients of his formula for the sum of the cth powers for any positive integer c can be seen from his comment. He wrote:
"With the help of this table, it took me less than half of a quarter of an hour to find that the tenth powers of the first 1000 numbers being added together will yield the sum 91,409,924,241,424,243,424,241,924,242,500."
Bernoulli's result was published posthumously in Ars Conjectandi in 1713. Seki Kōwa independently discovered the Bernoulli numbers and his result was published a year earlier, also posthumously, in 1712 (Selin 1997, p. 891). However, Seki did not present his method as a formula based on a sequence of constants.
Bernoulli's formula for sums of powers is the most useful and generalizable formulation to date. The coefficients in Bernoulli's formula are now called Bernoulli numbers, following a suggestion of Abraham de Moivre.
Bernoulli's formula is sometimes called Faulhaber's formula after Johann Faulhaber who found remarkable ways to calculate sum of powers but never stated Bernoulli's formula. To call Bernoulli's formula Faulhaber's formula does injustice to Bernoulli and simultaneously hides the genius of Faulhaber as Faulhaber's formula is in fact more efficient than Bernoulli's formula. According to Knuth (Knuth 1993) a rigorous proof of Faulhaber's formula was first published by Carl Jacobi in 1834 (Jacobi 1834). Knuth's in-depth study of Faulhaber's formula concludes (the nonstandard notation on the LHS is explained further on):
"Faulhaber never discovered the Bernoulli numbers; i.e., he never realized that a single sequence of constants B0, B1, B2, … would provide a uniform
${\displaystyle \quad \sum n^{m}={\frac {1}{m+1}}\left(B_{0}n^{m+1}+{\binom {m+1}{1}}B_{1}^{+}n^{m}+{\binom {m+1}{2}}B_{2}n^{m-1}+\cdots +{\binom {m+1}{m}}B_{m}n\right)}$
or
${\displaystyle \quad \sum n^{m}={\frac {1}{m+1}}\left(B_{0}n^{m+1}-{\binom {m+1}{1}}B_{1}^{-{}}n^{m}+{\binom {m+1}{2}}B_{2}n^{m-1}-\cdots +(-1)^{m}{\binom {m+1}{m}}B_{m}n\right)}$
for all sums of powers. He never mentioned, for example, the fact that almost half of the coefficients turned out to be zero after he had converted his formulas for nm from polynomials in N to polynomials in n." (Knuth 1993, p. 14)
### Reconstruction of "Summae Potestatum"
Jakob Bernoulli's "Summae Potestatum", 1713
The Bernoulli numbers (n)/(n) were introduced by Jakob Bernoulli in the book Ars Conjectandi published posthumously in 1713 page 97. The main formula can be seen in the second half of the corresponding facsimile. The constant coefficients denoted A, B, C and D by Bernoulli are mapped to the notation which is now prevalent as A = B2, B = B4, C = B6, D = B8. The expression c·c−1·c−2·c−3 means c·(c−1)·(c−2)·(c−3) – the small dots are used as grouping symbols. Using today's terminology these expressions are falling factorial powers ck. The factorial notation k! as a shortcut for 1 × 2 × … × k was not introduced until 100 years later. The integral symbol on the left hand side goes back to Gottfried Wilhelm Leibniz in 1675 who used it as a long letter S for "summa" (sum).[a] The letter n on the left hand side is not an index of summation but gives the upper limit of the range of summation which is to be understood as 1, 2, …, n. Putting things together, for positive c, today a mathematician is likely to write Bernoulli's formula as:
${\displaystyle \sum _{k=1}^{n}k^{c}={\frac {n^{c+1}}{c+1}}+{\frac {1}{2}}n^{c}+\sum _{k=2}^{c}{\frac {B_{k}}{k!}}c^{\underline {k-1}}n^{c-k+1}.}$
This formula suggests setting B1 = 1/2 when switching from the so-called 'archaic' enumeration which uses only the even indices 2, 4, 6… to the modern form (more on different conventions in the next paragraph). Most striking in this context is the fact that the falling factorial c^(k−1) has for k = 0 the value 1/(c + 1) (Graham, Knuth & Patashnik 1989, Section 2.51). Thus Bernoulli's formula can be written
${\displaystyle \sum _{k=1}^{n}k^{c}=\sum _{k=0}^{c}{\frac {B_{k}}{k!}}c^{\underline {k-1}}n^{c-k+1}}$
if B1 = 1/2, recapturing the value Bernoulli gave to the coefficient at that position.
The formula for ${\displaystyle \textstyle \sum _{k=1}^{n}k^{9}}$ in the first half contains an error at the last term; it should be ${\displaystyle -{\tfrac {3}{20}}n^{2}}$ instead of ${\displaystyle -{\tfrac {1}{12}}n^{2}}$ .
## Definitions
Many characterizations of the Bernoulli numbers have been found in the last 300 years, and each could be used to introduce these numbers. Here only three of the most useful ones are mentioned:
• a recursive equation,
• an explicit formula,
• a generating function.
For the proof of the equivalence of the three approaches see (Ireland & Rosen 1990) or (Conway & Guy 1996).
### Recursive definition
The Bernoulli numbers obey the sum formulas (Weisstein 2016)
{\displaystyle {\begin{aligned}\sum _{k=0}^{m}{\binom {m+1}{k}}B_{k}^{-{}}&=\delta _{m,0}\\\sum _{k=0}^{m}{\binom {m+1}{k}}B_{k}^{+{}}&=m+1\end{aligned}}}
where ${\displaystyle m=0,1,2...}$ and δ denotes the Kronecker delta. Solving for ${\displaystyle B_{m}^{\mp {}}}$ gives the recursive formulas
{\displaystyle {\begin{aligned}B_{m}^{-{}}&=\delta _{m,0}-\sum _{k=0}^{m-1}{\binom {m}{k}}{\frac {B_{k}^{-{}}}{m-k+1}}\\B_{m}^{+}&=1-\sum _{k=0}^{m-1}{\binom {m}{k}}{\frac {B_{k}^{+}}{m-k+1}}.\end{aligned}}}
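These recurrences translate directly into exact rational arithmetic; a minimal Python sketch (the function names are illustrative, not standard library API):

```python
from fractions import Fraction
from math import comb

def bernoulli_minus(m: int) -> Fraction:
    """B_m with the convention B_1 = -1/2, via the recurrence above."""
    B = [Fraction(0)] * (m + 1)
    for j in range(m + 1):
        # Kronecker delta term minus the lower-order terms of the recurrence
        B[j] = (Fraction(1) if j == 0 else Fraction(0)) \
               - sum(comb(j, k) * B[k] / (j - k + 1) for k in range(j))
    return B[m]

def bernoulli_plus(m: int) -> Fraction:
    """B_m with the convention B_1 = +1/2 (only the m = 1 value differs)."""
    return -bernoulli_minus(m) if m == 1 else bernoulli_minus(m)

print([str(bernoulli_minus(n)) for n in range(9)])
# ['1', '-1/2', '1/6', '0', '-1/30', '0', '1/42', '0', '-1/30']
```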
### Explicit definition
In 1893 Louis Saalschütz listed a total of 38 explicit formulas for the Bernoulli numbers (Saalschütz 1893), usually giving some reference in the older literature. One of them is:
{\displaystyle {\begin{aligned}B_{m}^{-{}}&=\sum _{k=0}^{m}\sum _{v=0}^{k}(-1)^{v}{\binom {k}{v}}{\frac {v^{m}}{k+1}}\\B_{m}^{+}&=\sum _{k=0}^{m}\sum _{v=0}^{k}(-1)^{v}{\binom {k}{v}}{\frac {(v+1)^{m}}{k+1}}.\end{aligned}}}
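The double sum for B+m can be checked against the recurrence above with a few lines of exact arithmetic (again a sketch):

```python
from fractions import Fraction
from math import comb

def bernoulli_plus_explicit(m: int) -> Fraction:
    """B_m^+ via Saalschuetz's double sum over (v+1)^m."""
    return sum(Fraction((-1) ** v * comb(k, v) * (v + 1) ** m, k + 1)
               for k in range(m + 1) for v in range(k + 1))

print([str(bernoulli_plus_explicit(n)) for n in range(9)])
# ['1', '1/2', '1/6', '0', '-1/30', '0', '1/42', '0', '-1/30']
```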
### Generating function
The exponential generating functions are
{\displaystyle {\begin{aligned}{\frac {t}{e^{t}-1}}&={\frac {t}{2}}\left(\operatorname {coth} {\frac {t}{2}}-1\right)&=\sum _{m=0}^{\infty }{\frac {B_{m}^{-{}}t^{m}}{m!}}\\{\frac {t}{1-e^{-t}}}&={\frac {t}{2}}\left(\operatorname {coth} {\frac {t}{2}}+1\right)&=\sum _{m=0}^{\infty }{\frac {B_{m}^{+}t^{m}}{m!}}.\end{aligned}}}
The (ordinary) generating function
${\displaystyle z^{-1}\psi _{1}(z^{-1})=\sum _{m=0}^{\infty }B_{m}^{+}z^{m}}$
is an asymptotic series. It contains the trigamma function ψ1.
## Bernoulli numbers and the Riemann zeta function
The Bernoulli numbers as given by the Riemann zeta function.
The Bernoulli numbers can be expressed in terms of the Riemann zeta function:
B+n = −nζ(1 − n) for n ≥ 1.
Here the argument of the zeta function is 0 or negative.
By means of the zeta functional equation and the gamma reflection formula the following relation can be obtained (Arfken 1970, p. 279):
${\displaystyle B_{2n}={\frac {(-1)^{n+1}2(2n)!}{(2\pi )^{2n}}}\zeta (2n)}$ for n ≥ 1.
Now the argument of the zeta function is positive.
It then follows from ζ → 1 (n → ∞) and Stirling's formula that
${\displaystyle |B_{2n}|\sim 4{\sqrt {\pi n}}\left({\frac {n}{\pi e}}\right)^{2n}}$ for n → ∞.
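As a numerical sanity check (my own sketch), the even-index values from the table at the top of the article reproduce ζ(2n) through the relation above:

```python
import math
from fractions import Fraction

# Even-index Bernoulli numbers taken from the table at the top of the article
B = {2: Fraction(1, 6), 4: Fraction(-1, 30), 6: Fraction(1, 42)}

for n in (1, 2, 3):
    two_n = 2 * n
    # zeta(2n) = (-1)^(n+1) * B_{2n} * (2*pi)^(2n) / (2 * (2n)!)
    zeta_from_B = (-1) ** (n + 1) * float(B[two_n]) * (2 * math.pi) ** two_n \
                  / (2 * math.factorial(two_n))
    zeta_direct = sum(1.0 / k ** two_n for k in range(1, 200001))
    print(two_n, round(zeta_from_B, 6), round(zeta_direct, 6))
# 2 1.644934 1.644929   (pi^2/6)
# 4 1.082323 1.082323
# 6 1.017343 1.017343
```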
## Efficient computation of Bernoulli numbers
In some applications it is useful to be able to compute the Bernoulli numbers B0 through Bp − 3 modulo p, where p is a prime; for example to test whether Vandiver's conjecture holds for p, or even just to determine whether p is an irregular prime. It is not feasible to carry out such a computation using the above recursive formulae, since at least (a constant multiple of) p^2 arithmetic operations would be required. Fortunately, faster methods have been developed (Buhler et al. 2001) which require only O(p (log p)^2) operations (see big O notation).
David Harvey (Harvey 2010) describes an algorithm for computing Bernoulli numbers by computing Bn modulo p for many small primes p, and then reconstructing Bn via the Chinese remainder theorem. Harvey writes that the asymptotic time complexity of this algorithm is O(n^2 log(n)^(2+ε)) and claims that this implementation is significantly faster than implementations based on other methods. Using this implementation Harvey computed Bn for n = 10^8. Harvey's implementation has been included in SageMath since version 3.1. Prior to that, Bernd Kellner (Kellner 2002) computed Bn to full precision for n = 10^6 in December 2002 and Oleksandr Pavlyk (Pavlyk 2008) for n = 10^7 with Mathematica in April 2008.
Computer Year n Digits*
J. Bernoulli ~1689 10 1
L. Euler 1748 30 8
J. C. Adams 1878 62 36
D. E. Knuth, T. J. Buckholtz 1967 1672 3330
G. Fee, S. Plouffe 1996 10000 27677
G. Fee, S. Plouffe 1996 100000 376755
B. C. Kellner 2002 1000000 4767529
O. Pavlyk 2008 10000000 57675260
D. Harvey 2008 100000000 676752569
* Digits is to be understood as the exponent of 10 when Bn is written as a real number in normalized scientific notation.
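For everyday-sized indices, SymPy (if available) exposes a bernoulli function that returns exact rationals; a quick illustration, with the caveat that the sign convention returned for bernoulli(1) has differed between SymPy releases, so it is worth checking on your installed version:

```python
import sympy

print(sympy.bernoulli(20))   # -174611/330, matching the table at the top of the article
print(sympy.bernoulli(12))   # -691/2730
```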
## Applications of the Bernoulli numbers
### Asymptotic analysis
Arguably the most important application of the Bernoulli numbers in mathematics is their use in the Euler–Maclaurin formula. Assuming that f is a sufficiently often differentiable function the Euler–Maclaurin formula can be written as (Graham, Knuth & Patashnik 1989, 9.67)
${\displaystyle \sum _{k=a}^{b-1}f(k)=\int _{a}^{b}f(x)\,dx+\sum _{k=1}^{m}{\frac {B_{k}^{-}}{k!}}(f^{(k-1)}(b)-f^{(k-1)}(a))+R_{-}(f,m).}$
This formulation assumes the convention B−1 = −1/2. Using the convention B+1 = +1/2 the formula becomes
${\displaystyle \sum _{k=a+1}^{b}f(k)=\int _{a}^{b}f(x)\,dx+\sum _{k=1}^{m}{\frac {B_{k}^{+}}{k!}}(f^{(k-1)}(b)-f^{(k-1)}(a))+R_{+}(f,m).}$
Here f^(0) = f (i.e. the zeroth-order derivative of f is just f). Moreover, let f^(−1) denote an antiderivative of f. By the fundamental theorem of calculus,
${\displaystyle \int _{a}^{b}f(x)\,dx=f^{(-1)}(b)-f^{(-1)}(a).}$
Thus the last formula can be further simplified to the following succinct form of the Euler–Maclaurin formula
${\displaystyle \sum _{k=a}^{b}f(k)=\sum _{k=0}^{m}{\frac {B_{k}}{k!}}(f^{(k-1)}(b)-f^{(k-1)}(a))+R(f,m).}$
This form is for example the source for the important Euler–Maclaurin expansion of the zeta function
{\displaystyle {\begin{aligned}\zeta (s)&=\sum _{k=0}^{m}{\frac {B_{k}^{+}}{k!}}s^{\overline {k-1}}+R(s,m)\\&={\frac {B_{0}}{0!}}s^{\overline {-1}}+{\frac {B_{1}^{+}}{1!}}s^{\overline {0}}+{\frac {B_{2}}{2!}}s^{\overline {1}}+\cdots +R(s,m)\\&={\frac {1}{s-1}}+{\frac {1}{2}}+{\frac {1}{12}}s+\cdots +R(s,m).\end{aligned}}}
Here sk denotes the rising factorial power (Graham, Knuth & Patashnik 1989, 2.44 and 2.52).
Bernoulli numbers are also frequently used in other kinds of asymptotic expansions. The following example is the classical Poincaré-type asymptotic expansion of the digamma function ψ.
${\displaystyle \psi (z)\sim \ln z-\sum _{k=1}^{\infty }{\frac {B_{k}^{+}}{kz^{k}}}}$
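A small numerical illustration (my own sketch): truncating this series at k = 6 and comparing against ψ(10) obtained from ψ(1) = −γ and the recurrence ψ(z + 1) = ψ(z) + 1/z:

```python
import math
from fractions import Fraction

# B_k^+ for k = 1..6, taken from the table at the top of the article
B_plus = {1: Fraction(1, 2), 2: Fraction(1, 6), 3: Fraction(0),
          4: Fraction(-1, 30), 5: Fraction(0), 6: Fraction(1, 42)}

def psi_asymptotic(z: float, terms: int = 6) -> float:
    """Truncated Poincare-type expansion ln z - sum_k B_k^+ / (k z^k)."""
    return math.log(z) - sum(float(B_plus[k]) / (k * z ** k)
                             for k in range(1, terms + 1))

# Reference value: psi(10) = -gamma + H_9, using psi(z+1) = psi(z) + 1/z
gamma = 0.5772156649015329
psi_10 = -gamma + sum(1.0 / k for k in range(1, 10))
print(psi_asymptotic(10.0), psi_10)   # both ~ 2.25175259
```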
### Sum of powers
Bernoulli numbers feature prominently in the closed form expression of the sum of the mth powers of the first n positive integers. For m, n ≥ 0 define
${\displaystyle S_{m}(n)=\sum _{k=1}^{n}k^{m}=1^{m}+2^{m}+\cdots +n^{m}.}$
This expression can always be rewritten as a polynomial in n of degree m + 1. The coefficients of these polynomials are related to the Bernoulli numbers by Bernoulli's formula:
${\displaystyle S_{m}(n)={\frac {1}{m+1}}\sum _{k=0}^{m}{\binom {m+1}{k}}B_{k}^{+}n^{m+1-k}=m!\sum _{k=0}^{m}{\frac {B_{k}^{+}n^{m+1-k}}{k!(m+1-k)!}},}$
where ${\displaystyle {\tbinom {m+1}{k}}}$ denotes the binomial coefficient.
For example, taking m to be 1 gives the triangular numbers 0, 1, 3, 6, … .
${\displaystyle 1+2+\cdots +n={\frac {1}{2}}(B_{0}n^{2}+2B_{1}^{+}n^{1})={\tfrac {1}{2}}(n^{2}+n).}$
Taking m to be 2 gives the square pyramidal numbers 0, 1, 5, 14, … .
${\displaystyle 1^{2}+2^{2}+\cdots +n^{2}={\frac {1}{3}}(B_{0}n^{3}+3B_{1}^{+}n^{2}+3B_{2}n^{1})={\tfrac {1}{3}}\left(n^{3}+{\tfrac {3}{2}}n^{2}+{\tfrac {1}{2}}n\right).}$
Some authors use the alternate convention for Bernoulli numbers and state Bernoulli's formula in this way:
${\displaystyle S_{m}(n)={\frac {1}{m+1}}\sum _{k=0}^{m}(-1)^{k}{\binom {m+1}{k}}B_{k}^{-{}}n^{m+1-k}.}$
Bernoulli's formula is sometimes called Faulhaber's formula after Johann Faulhaber who also found remarkable ways to calculate sums of powers.
Faulhaber's formula was generalized by V. Guo and J. Zeng to a q-analog (Guo & Zeng 2005).
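The closed form is easy to test against a brute-force sum; the sketch below (self-contained, with an inline helper for B+k) also reproduces Bernoulli's famous tenth-power example quoted in the History section:

```python
from fractions import Fraction
from math import comb

def bernoulli_plus_list(m):
    """B_0^+ .. B_m^+ via the recurrence from the 'Recursive definition' section."""
    B = []
    for j in range(m + 1):
        B.append(Fraction(1) - sum(comb(j, k) * B[k] / (j - k + 1) for k in range(j)))
    return B

def S(m, n):
    """Sum of the m-th powers of 1..n via Bernoulli's formula."""
    B = bernoulli_plus_list(m)
    return sum(comb(m + 1, k) * B[k] * Fraction(n) ** (m + 1 - k)
               for k in range(m + 1)) / (m + 1)

for m in range(6):                       # spot-check small cases against a direct sum
    assert S(m, 20) == sum(k ** m for k in range(1, 21))

print(S(10, 1000))   # 91409924241424243424241924242500, Bernoulli's own example
```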
### Taylor series
The Bernoulli numbers appear in the Taylor series expansion of many trigonometric functions and hyperbolic functions.
Tangent
{\displaystyle {\begin{aligned}\tan x&=\sum _{n=1}^{\infty }{\frac {(-1)^{n-1}2^{2n}(2^{2n}-1)B_{2n}}{(2n)!}}\;x^{2n-1},&\left|x\right|&<{\frac {\pi }{2}}\\\end{aligned}}}
Cotangent
{\displaystyle {\begin{aligned}\cot x&{}={\frac {1}{x}}\sum _{n=0}^{\infty }{\frac {(-1)^{n}B_{2n}(2x)^{2n}}{(2n)!}},&\qquad 0<|x|<\pi .\end{aligned}}}
Hyperbolic tangent
{\displaystyle {\begin{aligned}\tanh x&=\sum _{n=1}^{\infty }{\frac {2^{2n}(2^{2n}-1)B_{2n}}{(2n)!}}\;x^{2n-1},&|x|&<{\frac {\pi }{2}}.\end{aligned}}}
Hyperbolic cotangent
{\displaystyle {\begin{aligned}\coth x&{}={\frac {1}{x}}\sum _{n=0}^{\infty }{\frac {B_{2n}(2x)^{2n}}{(2n)!}},&\qquad \qquad 0<|x|<\pi .\end{aligned}}}
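To see the tangent expansion in action (a sketch), the first few coefficients computed from B2n reproduce x + x³/3 + 2x⁵/15 + 17x⁷/315 + …, and the truncated series tracks math.tan for small x:

```python
import math
from fractions import Fraction

# Even-index Bernoulli numbers from the table at the top of the article
B = {2: Fraction(1, 6), 4: Fraction(-1, 30), 6: Fraction(1, 42), 8: Fraction(-1, 30)}

def tan_coeff(n):
    """Coefficient of x^(2n-1) in the Taylor series of tan x."""
    return Fraction((-1) ** (n - 1) * 2 ** (2 * n) * (2 ** (2 * n) - 1),
                    math.factorial(2 * n)) * B[2 * n]

print([str(tan_coeff(n)) for n in (1, 2, 3, 4)])   # ['1', '1/3', '2/15', '17/315']

x = 0.3
approx = sum(float(tan_coeff(n)) * x ** (2 * n - 1) for n in (1, 2, 3, 4))
print(approx, math.tan(x))   # both ~ 0.3093362
```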
### Laurent series
The Bernoulli numbers appear in the following Laurent series (Arfken 1970, p. 463):
Digamma function: ${\displaystyle \psi (z)=\ln z-\sum _{k=1}^{\infty }{\frac {B_{k}^{+{}}}{kz^{k}}}}$
### Use in topology
The Kervaire–Milnor's formula for the order of the cyclic group of diffeomorphism classes of exotic (4n − 1)-spheres which bound parallelizable manifolds involves Bernoulli numbers. Let ESn be the number of such exotic spheres for n ≥ 2, then
${\displaystyle {\textit {ES}}_{n}=(2^{2n-2}-2^{4n-3})\operatorname {Numerator} \left({\frac {B_{4n}}{4n}}\right).}$
The Hirzebruch signature theorem for the L genus of a smooth oriented closed manifold of dimension 4n also involves Bernoulli numbers.
## Connections with combinatorial numbers
The connection of the Bernoulli number to various kinds of combinatorial numbers is based on the classical theory of finite differences and on the combinatorial interpretation of the Bernoulli numbers as an instance of a fundamental combinatorial principle, the inclusion–exclusion principle.
### Connection with Worpitzky numbers
The definition to proceed with was developed by Julius Worpitzky in 1883. Besides elementary arithmetic only the factorial function n! and the power function km is employed. The signless Worpitzky numbers are defined as
${\displaystyle W_{n,k}=\sum _{v=0}^{k}(-1)^{v+k}(v+1)^{n}{\frac {k!}{v!(k-v)!}}.}$
They can also be expressed through the Stirling numbers of the second kind
${\displaystyle W_{n,k}=k!\left\{{n+1 \atop k+1}\right\}.}$
A Bernoulli number is then introduced as an inclusion–exclusion sum of Worpitzky numbers weighted by the harmonic sequence 1, 1/2, 1/3, …
${\displaystyle B_{n}=\sum _{k=0}^{n}(-1)^{k}{\frac {W_{n,k}}{k+1}}\ =\ \sum _{k=0}^{n}{\frac {1}{k+1}}\sum _{v=0}^{k}(-1)^{v}(v+1)^{n}{k \choose v}\ .}$
B0 = 1
B1 = 1 − 1/2
B2 = 1 − 3/2 + 2/3
B3 = 1 − 7/2 + 12/3 − 6/4
B4 = 1 − 15/2 + 50/3 − 60/4 + 24/5
B5 = 1 − 31/2 + 180/3 − 390/4 + 360/5 − 120/6
B6 = 1 − 63/2 + 602/3 − 2100/4 + 3360/5 − 2520/6 + 720/7
This representation has B+1 = +1/2.
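A sketch of the Worpitzky route, using the definition of W(n, k) given above:

```python
from fractions import Fraction
from math import comb

def worpitzky(n, k):
    """Signless Worpitzky numbers W_{n,k}."""
    return sum((-1) ** (v + k) * (v + 1) ** n * comb(k, v) for v in range(k + 1))

def bernoulli_plus_worpitzky(n):
    """B_n with B_1 = +1/2, as the weighted inclusion-exclusion sum above."""
    return sum(Fraction((-1) ** k * worpitzky(n, k), k + 1) for k in range(n + 1))

print([str(bernoulli_plus_worpitzky(n)) for n in range(9)])
# ['1', '1/2', '1/6', '0', '-1/30', '0', '1/42', '0', '-1/30']
```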
Consider the sequence sn, n ≥ 0. Applying Worpitzky's numbers to s0; s0, s1; s0, s1, s2; s0, s1, s2, s3, … is identical to applying the Akiyama–Tanigawa transform to sn (see Connection with Stirling numbers of the first kind). This can be seen via the table:
1 | 0 1 | 0 0 1 | 0 0 0 1 | 0 0 0 0 1
1 −1 | 0 2 −2 | 0 0 3 −3 | 0 0 0 4 −4
1 −3 2 | 0 4 −10 6 | 0 0 9 −21 12
1 −7 12 −6 | 0 8 −38 54 −24
1 −15 50 −60 24
The first row represents s0, s1, s2, s3, s4.
Hence for the second fractional Euler numbers En:
E0 = 1
E1 = 1 − 1/2
E2 = 1 − 3/2 + 2/4
E3 = 1 − 7/2 + 12/4 − 6/8
E4 = 1 − 15/2 + 50/4 − 60/8 + 24/16
E5 = 1 − 31/2 + 180/4 − 390/8 + 360/16 − 120/32
E6 = 1 − 63/2 + 602/4 − 2100/8 + 3360/16 − 2520/32 + 720/64
A second formula representing the Bernoulli numbers by the Worpitzky numbers is for n ≥ 1
${\displaystyle B_{n}={\frac {n}{2^{n+1}-2}}\sum _{k=0}^{n-1}(-2)^{-k}\,W_{n-1,k}.}$
The simplified second Worpitzky's representation of the second Bernoulli numbers is:
B+n+1 = (n + 1)/(2^(n+2) − 2) × En
which links the second Bernoulli numbers to the second fractional Euler numbers. The beginning is:
1/2, 1/6, 0, −1/30, 0, 1/42, … = (1/2, 1/3, 3/14, 2/15, 5/62, 1/21, …) × (1, 1/2, 0, −1/4, 0, 1/2, …)
For the numerators of the first parentheses, see Connection with Stirling numbers of the first kind.
### Connection with Stirling numbers of the second kind
If S(k,m) denotes Stirling numbers of the second kind (Comtet 1974) then one has:
${\displaystyle j^{k}=\sum _{m=0}^{k}{j^{\underline {m}}}S(k,m)}$
where jm denotes the falling factorial.
If one defines the Bernoulli polynomials Bk(j) as (Rademacher 1973):
${\displaystyle B_{k}(j)=k\sum _{m=0}^{k-1}{\binom {j}{m+1}}S(k-1,m)m!+B_{k}}$
where Bk for k = 0, 1, 2,… are the Bernoulli numbers.
Then after the following property of binomial coefficient:
${\displaystyle {\binom {j}{m}}={\binom {j+1}{m+1}}-{\binom {j}{m+1}}}$
one has,
${\displaystyle j^{k}={\frac {B_{k+1}(j+1)-B_{k+1}(j)}{k+1}}.}$
One also has following for Bernoulli polynomials (Rademacher 1973),
${\displaystyle B_{k}(j)=\sum _{n=0}^{k}{\binom {k}{n}}B_{n}j^{k-n}.}$
The coefficient of j in ${\displaystyle {\tbinom {j}{m+1}}}$ is ${\displaystyle {\tfrac {(-1)^{m}}{m+1}}}$.
Comparing the coefficient of j in the two expressions of Bernoulli polynomials, one has:
${\displaystyle B_{k}=\sum _{m=0}^{k}(-1)^{m}{\frac {m!}{m+1}}S(k,m)}$
(resulting in B1 = −1/2, the minus-sign convention) which is an explicit formula for Bernoulli numbers and can be used to prove the von Staudt–Clausen theorem (Boole 1880; Gould 1972; Apostol, p. 197).
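As a check (sketch), computing S(k, m) by the standard recurrence and plugging into the sum above reproduces the sequence with B1 = −1/2:

```python
from fractions import Fraction
from functools import lru_cache
from math import factorial

@lru_cache(maxsize=None)
def stirling2(n, k):
    """Stirling numbers of the second kind S(n, k)."""
    if n == k:
        return 1
    if k == 0 or k > n:
        return 0
    return k * stirling2(n - 1, k) + stirling2(n - 1, k - 1)

def bernoulli_stirling(k):
    """The explicit formula above: sum over m of (-1)^m m!/(m+1) S(k, m)."""
    return sum(Fraction((-1) ** m * factorial(m), m + 1) * stirling2(k, m)
               for m in range(k + 1))

print([str(bernoulli_stirling(k)) for k in range(9)])
# ['1', '-1/2', '1/6', '0', '-1/30', '0', '1/42', '0', '-1/30']
```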
### Connection with Stirling numbers of the first kind
The two main formulas relating the unsigned Stirling numbers of the first kind ${\displaystyle \left[{n \atop m}\right]}$ to the Bernoulli numbers (with B1 = +1/2) are
${\displaystyle {\frac {1}{m!}}\sum _{k=0}^{m}(-1)^{k}\left[{m+1 \atop k+1}\right]B_{k}={\frac {1}{m+1}},}$
and the inversion of this sum (for n ≥ 0, m ≥ 0)
${\displaystyle {\frac {1}{m!}}\sum _{k=0}^{m}(-1)^{k}\left[{m+1 \atop k+1}\right]B_{n+k}=A_{n,m}.}$
Here the number An,m are the rational Akiyama–Tanigawa numbers, the first few of which are displayed in the following table.
Akiyama–Tanigawa numbers A(n, m)
n \ m   0       1       2       3       4
0       1       1/2     1/3     1/4     1/5
1       1/2     1/3     1/4     1/5
2       1/6     1/6     3/20
3       0       1/30
4       −1/30
The Akiyama–Tanigawa numbers satisfy the simple recurrence A(n, m) = (m + 1)(A(n − 1, m) − A(n − 1, m + 1)), starting from A(0, m) = 1/(m + 1), which can be exploited to iteratively compute the Bernoulli numbers; a sketch of such an implementation follows.
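A minimal sketch of the Akiyama–Tanigawa procedure (with the starting row 1/(m + 1) it returns the convention B1 = +1/2):

```python
from fractions import Fraction

def bernoulli_akiyama_tanigawa(n):
    """B_n (with B_1 = +1/2) as A(n, 0) of the Akiyama-Tanigawa table."""
    a = [Fraction(1, m + 1) for m in range(n + 1)]   # row 0 of the table
    for j in range(1, n + 1):                        # build rows 1..n in place
        for m in range(n - j + 1):                   # ascending m: only old-row entries are read
            a[m] = (m + 1) * (a[m] - a[m + 1])
    return a[0]

print([str(bernoulli_akiyama_tanigawa(n)) for n in range(9)])
# ['1', '1/2', '1/6', '0', '-1/30', '0', '1/42', '0', '-1/30']
```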
An autosequence is a sequence whose inverse binomial transform equals the signed sequence. If the main diagonal is all zeroes, the autosequence is of the first kind; an example is the Fibonacci numbers. If the main diagonal equals the first upper diagonal multiplied by 2, it is of the second kind; an example is the second Bernoulli numbers (see above). The Akiyama–Tanigawa transform applied to 2^(−n) leads to the second (fractional) Euler numbers. Hence:
Akiyama–Tanigawa transform for the second Euler numbers
n \ m   0       1       2       3       4
0       1       1/2     1/4     1/8     1/16
1       1/2     1/2     3/8     1/4
2       0       1/4     3/8
3       −1/4    −1/4
4       0
The second (fractional) Euler numbers En are an autosequence of the second kind.
(Bn+2 = 1/6, 0, −1/30, 0, 1/42, …) × ((2^(n+3) − 2)/(n + 2) = 3, 14/3, 15/2, 62/5, 21, …) = (En+1 = 1/2, 0, −1/4, 0, 1/2, …).
This link is also useful for computing the second fractional Euler numbers (see Connection with Worpitzky numbers).
### Connection with Pascal’s triangle
There are formulas connecting Pascal's triangle to Bernoulli numbers[b]
${\displaystyle B_{n}^{+}={\frac {|A_{n}|}{(n+1)!}}~~~}$
where ${\displaystyle |A_{n}|}$ is the determinant of a n-by-n square matrix part of Pascal’s triangle whose elements are: ${\displaystyle a_{i,k}={\begin{cases}0&{\text{if }}k>1+i\\{i+1 \choose k-1}&{\text{otherwise}}\end{cases}}}$
Example:
${\displaystyle B_{6}^{+}={\frac {\det {\begin{pmatrix}1&2&0&0&0&0\\1&3&3&0&0&0\\1&4&6&4&0&0\\1&5&10&10&5&0\\1&6&15&20&15&6\\1&7&21&35&35&21\end{pmatrix}}}{7!}}={\frac {120}{5040}}={\frac {1}{42}}}$
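To make the determinant formula concrete, here is a short Python check (an illustration added here, not part of the source text); it builds the matrix with SymPy, and bernoulli_plus_pascal is a name invented for this sketch.

from sympy import Matrix, binomial, factorial

def bernoulli_plus_pascal(n):
    # a_{i,k} = C(i+1, k-1) for k <= i+1 and 0 otherwise (1-based i, k);
    # the lambda uses 0-based indices, hence the shifted condition
    A = Matrix(n, n, lambda i, k: binomial(i + 2, k) if k <= i + 1 else 0)
    return A.det() / factorial(n + 1)

print(bernoulli_plus_pascal(6))   # 1/42, i.e. det = 120 divided by 7! = 5040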
### Connection with Eulerian numbers
There are formulas connecting Eulerian numbers ${\displaystyle \left\langle {n \atop m}\right\rangle }$ to Bernoulli numbers:
{\displaystyle {\begin{aligned}\sum _{m=0}^{n}(-1)^{m}\left\langle {n \atop m}\right\rangle &=2^{n+1}(2^{n+1}-1){\frac {B_{n+1}}{n+1}},\\\sum _{m=0}^{n}(-1)^{m}\left\langle {n \atop m}\right\rangle {\binom {n}{m}}^{-1}&=(n+1)B_{n}.\end{aligned}}}
Both formulae are valid for n ≥ 0 if B1 is set to 1/2. If B1 is set to −1/2 they are valid only for n ≥ 1 and n ≥ 2 respectively.
## A binary tree representation
The Stirling polynomials σn(x) are related to the Bernoulli numbers by Bn = n!σn(1). S. C. Woon (Woon 1997) described an algorithm to compute σn(1) as a binary tree:
Woon's recursive algorithm (for n ≥ 1) starts by assigning to the root node N = [1,2]. Given a node N = [a1, a2, …, ak] of the tree, the left child of the node is L(N) = [−a1, a2 + 1, a3, …, ak] and the right child R(N) = [a1, 2, a2, …, ak]. A node N = [a1, a2, …, ak] is written as ±[a2, …, ak] in the initial part of the tree represented above with ± denoting the sign of a1.
Given a node N the factorial of N is defined as
${\displaystyle N!=a_{1}\prod _{k=2}^{\operatorname {length} (N)}a_{k}!.}$
Restricted to the nodes N of a fixed tree-level n the sum of 1/N! is σn(1), thus
${\displaystyle B_{n}=\sum _{\stackrel {N{\text{ node of}}}{{\text{ tree-level }}n}}{\frac {n!}{N!}}.}$
For example:
B1 = 1!(1/2!)
B2 = 2!(−1/3! + 1/2!2!)
B3 = 3!(1/4! − 1/2!3! − 1/3!2! + 1/2!2!2!)
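A small Python sketch of Woon's tree (added here for illustration; the function names and layout are this sketch's own) generates the level-n nodes and sums n!/N!:

from fractions import Fraction
from math import factorial

def bernoulli_woon(n):
    # B_n (with B_1 = +1/2) as the sum of n!/N! over the nodes N at tree level n
    level = [[1, 2]]                                  # root node, level 1
    for _ in range(n - 1):
        nxt = []
        for a in level:
            nxt.append([-a[0], a[1] + 1] + a[2:])     # left child L(N)
            nxt.append([a[0], 2] + a[1:])             # right child R(N)
        level = nxt
    def node_factorial(a):
        prod = a[0]
        for x in a[1:]:
            prod *= factorial(x)
        return prod
    return sum(Fraction(factorial(n), node_factorial(a)) for a in level)

print([str(bernoulli_woon(k)) for k in range(1, 7)])  # 1/2, 1/6, 0, -1/30, 0, 1/42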
## Integral representation and continuation
The integral
${\displaystyle b(s)=2e^{si\pi /2}\int _{0}^{\infty }{\frac {st^{s}}{1-e^{2\pi t}}}{\frac {dt}{t}}}$
has as special values b(2n) = B2n for n > 0.
For example, b(3) = (3/2)ζ(3)π^{−3}i and b(5) = −(15/2)ζ(5)π^{−5}i. Here, ζ is the Riemann zeta function, and i is the imaginary unit. Leonhard Euler (Opera Omnia, Ser. 1, Vol. 10, p. 351) considered these numbers and calculated
{\displaystyle {\begin{aligned}p&={\frac {3}{2\pi ^{3}}}\left(1+{\frac {1}{2^{3}}}+{\frac {1}{3^{3}}}+\cdots \right)=0.0581522\ldots \\q&={\frac {15}{2\pi ^{5}}}\left(1+{\frac {1}{2^{5}}}+{\frac {1}{3^{5}}}+\cdots \right)=0.0254132\ldots \end{aligned}}}
## The relation to the Euler numbers and π
The Euler numbers are a sequence of integers intimately connected with the Bernoulli numbers. Comparing the asymptotic expansions of the Bernoulli and the Euler numbers shows that the Euler numbers E_{2n} are in magnitude approximately (2/π)(4^{2n} − 2^{2n}) times larger than the Bernoulli numbers B_{2n}. In consequence:
${\displaystyle \pi \sim 2(2^{2n}-4^{2n}){\frac {B_{2n}}{E_{2n}}}.}$
This asymptotic equation reveals that π lies in the common root of both the Bernoulli and the Euler numbers. In fact π could be computed from these rational approximations.
Bernoulli numbers can be expressed through the Euler numbers and vice versa. Since, for odd n, Bn = En = 0 (with the exception B1), it suffices to consider the case when n is even.
{\displaystyle {\begin{aligned}B_{n}&=\sum _{k=0}^{n-1}{\binom {n-1}{k}}{\frac {n}{4^{n}-2^{n}}}E_{k}&n&=2,4,6,\ldots \\E_{n}&=\sum _{k=1}^{n}{\binom {n}{k-1}}{\frac {2^{k}-4^{k}}{k}}B_{k}&n&=2,4,6,\ldots \end{aligned}}}
These conversion formulas express an inverse relation between the Bernoulli and the Euler numbers. But more important, there is a deep arithmetic root common to both kinds of numbers, which can be expressed through a more fundamental sequence of numbers, also closely tied to π. These numbers are defined for n > 1 as
${\displaystyle S_{n}=2\left({\frac {2}{\pi }}\right)^{n}\sum _{k=-\infty }^{\infty }(4k+1)^{-n}\qquad k=0,-1,1,-2,2,\ldots }$
and S1 = 1 by convention (Elkies 2003). The magic of these numbers lies in the fact that they turn out to be rational numbers. This was first proved by Leonhard Euler in a landmark paper (Euler 1735) ‘De summis serierum reciprocarum’ (On the sums of series of reciprocals) and has fascinated mathematicians ever since. The first few of these numbers are
${\displaystyle S_{n}=1,1,{\frac {1}{2}},{\frac {1}{3}},{\frac {5}{24}},{\frac {2}{15}},{\frac {61}{720}},{\frac {17}{315}},{\frac {277}{8064}},{\frac {62}{2835}},\ldots }$ ( / )
These are the coefficients in the expansion of sec x + tan x.
The Bernoulli numbers and Euler numbers are best understood as special views of these numbers, selected from the sequence Sn and scaled for use in special applications.
{\displaystyle {\begin{aligned}B_{n}&=(-1)^{\left\lfloor {\frac {n}{2}}\right\rfloor }[n{\text{ even}}]{\frac {n!}{2^{n}-4^{n}}}\,S_{n}\ ,&n&=2,3,\ldots \\E_{n}&=(-1)^{\left\lfloor {\frac {n}{2}}\right\rfloor }[n{\text{ even}}]n!\,S_{n+1}&n&=0,1,\ldots \end{aligned}}}
The expression [n even] has the value 1 if n is even and 0 otherwise (Iverson bracket).
These identities show that the quotient of Bernoulli and Euler numbers at the beginning of this section is just the special case of R_n = 2S_n/S_{n+1} when n is even. The R_n are rational approximations to π and two successive terms always enclose the true value of π. Beginning with n = 1 the sequence starts:
${\displaystyle 2,4,3,{\frac {16}{5}},{\frac {25}{8}},{\frac {192}{61}},{\frac {427}{136}},{\frac {4352}{1385}},{\frac {12465}{3968}},{\frac {158720}{50521}},\ldots \quad \longrightarrow \pi .}$
These rational numbers also appear in the last paragraph of Euler's paper cited above.
Consider the Akiyama–Tanigawa transform for the sequence (n + 2) / (n + 1):
0 1 2 3 4 5 6
1 1/2 0 −1/4 −1/4 −1/8 0
1/2 1 3/4 0 −5/8 −3/4
−1/2 1/2 9/4 5/2 5/8
−1 −7/2 −3/4 15/2
5/2 −11/2 −99/4
8 77/2
−61/2
From the second, the numerators of the first column are the denominators of Euler's formula. The first column is −1/2 × .
## An algorithmic view: the Seidel triangle
The sequence Sn has another unexpected yet important property: The denominators of Sn divide the factorial (n − 1)!. In other words: the numbers Tn = Sn(n − 1)!, sometimes called Euler zigzag numbers, are integers.
${\displaystyle T_{n}=1,\,1,\,1,\,2,\,5,\,16,\,61,\,272,\,1385,\,7936,\,50521,\,353792,\ldots \quad n=0,1,2,3,\ldots }$ (). See ().
Thus the above representations of the Bernoulli and Euler numbers can be rewritten in terms of this sequence as
{\displaystyle {\begin{aligned}B_{n}&=(-1)^{\left\lfloor {\frac {n}{2}}\right\rfloor }[n{\text{ even}}]{\frac {n}{2^{n}-4^{n}}}\,T_{n-1}\ &n&=2,3,\ldots \\E_{n}&=(-1)^{\left\lfloor {\frac {n}{2}}\right\rfloor }[n{\text{ even}}]T_{n+1}&n&=0,1,\ldots \end{aligned}}}
These identities make it easy to compute the Bernoulli and Euler numbers: the Euler numbers En are given immediately by T2n + 1 and the Bernoulli numbers B2n are obtained from T2n by some easy shifting, avoiding rational arithmetic.
What remains is to find a convenient way to compute the numbers Tn. However, already in 1877 Philipp Ludwig von Seidel (Seidel 1877) published an ingenious algorithm, which makes it simple to calculate Tn.
${\displaystyle {\begin{array}{crrrcc}{}&{}&{\color {red}1}&{}&{}&{}\\{}&{\rightarrow }&{\color {blue}1}&{\color {red}1}&{}\\{}&{\color {red}2}&{\color {blue}2}&{\color {blue}1}&{\leftarrow }\\{\rightarrow }&{\color {blue}2}&{\color {blue}4}&{\color {blue}5}&{\color {red}5}\\{\color {red}16}&{\color {blue}16}&{\color {blue}14}&{\color {blue}10}&{\color {blue}5}&{\leftarrow }\end{array}}}$
Seidel's algorithm for Tn
1. Start by putting 1 in row 0 and let k denote the number of the row currently being filled
2. If k is odd, then put the number on the left end of the row k − 1 in the first position of the row k, and fill the row from the left to the right, with every entry being the sum of the number to the left and the number to the upper
3. At the end of the row duplicate the last number.
4. If k is even, proceed similarly in the other direction (a minimal code sketch of these steps follows below).
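A compact Python rendering of this sweep (an illustration added here; euler_zigzag is a name invented for this sketch, and it tracks only the running row rather than the full triangle) produces the numbers 1, 1, 1, 2, 5, 16, 61, 272, … discussed above; its intermediate rows reproduce, up to the leading zero and the sweep direction, the rows of the triangle shown in 'Triangular form' below.

def euler_zigzag(count):
    # Seidel's boustrophedon sweep: each new row accumulates the previous row,
    # traversed in the opposite direction; only integer additions are needed
    result, row = [1], [1]
    for _ in range(count - 1):
        new_row = [0]
        for x in reversed(row):
            new_row.append(new_row[-1] + x)
        row = new_row
        result.append(row[-1])
    return result

print(euler_zigzag(10))   # [1, 1, 1, 2, 5, 16, 61, 272, 1385, 7936]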
Seidel's algorithm is in fact much more general (see the exposition of Dominique Dumont (Dumont 1981)) and was rediscovered several times thereafter.
Similar to Seidel's approach D. E. Knuth and T. J. Buckholtz (Knuth & Buckholtz 1967) gave a recurrence equation for the numbers T2n and recommended this method for computing B2n and E2n ‘on electronic computers using only simple operations on integers’.
V. I. Arnold rediscovered Seidel's algorithm in (Arnold 1991) and later Millar, Sloane and Young popularized Seidel's algorithm under the name boustrophedon transform.
Triangular form:
1
1 1
2 2 1
2 4 5 5
16 16 14 10 5
16 32 46 56 61 61
272 272 256 224 178 122 61
Only , with one 1, and , with two 1s, are in the OEIS.
Distribution with a supplementary 1 and one 0 in the following rows:
1
0 1
−1 −1 0
0 −1 −2 −2
5 5 4 2 0
0 5 10 14 16 16
−61 −61 −56 −46 −32 −16 0
This is , a signed version of . The main antidiagonal is . The main diagonal is . The central column is . Row sums: 1, 1, −2, −5, 16, 61…. See . See the array beginning with 1, 1, 0, −2, 0, 16, 0 below.
The Akiyama–Tanigawa algorithm applied to (n + 1) / (n) yields:
1 1 1/2 0 −1/4 −1/4 −1/8
0 1 3/2 1 0 −3/4
−1 −1 3/2 4 15/4
0 −5 −15/2 1
5 5 −51/2
0 61
−61
1. The first column is . Its binomial transform leads to:
1 1 0 −2 0 16 0
0 −1 −2 2 16 −16
−1 −1 4 14 −32
0 5 10 −46
5 5 −56
0 −61
−61
The first row of this array is . The absolute values of the increasing antidiagonals are . The sum of the antidiagonals is (n + 1).
2. The second column is 1 1 −1 −5 5 61 −61 −1385 1385…. Its binomial transform yields:
1 2 2 −4 −16 32 272
1 0 −6 −12 48 240
−1 −6 −6 60 192
−5 0 66 132
5 66 66
61 0
−61
The first row of this array is 1 2 2 −4 −16 32 272 544 −7936 15872 353792 −707584…. The absolute values of the second bisection are the double of the absolute values of the first bisection.
Consider the Akiyama-Tanigawa algorithm applied to (n) / ( (n + 1) = abs( (n)) + 1 = 1, 2, 2, 3/2, 1, 3/4, 3/4, 7/8, 1, 17/16, 17/16, 33/32.
1 2 2 3/2 1 3/4 3/4
−1 0 3/2 2 5/4 0
−1 −3 −3/2 3 25/4
2 −3 −27/2 −13
5 21 −3/2
−16 45
−61
The first column, whose absolute values are 1, 1, 1, 2, 5, 16, 61, …, could be the numerators of a trigonometric function.
is an autosequence of the first kind (the main diagonal is ). The corresponding array is:
0 −1 −1 2 5 −16 −61
−1 0 3 3 −21 −45
1 3 0 −24 −24
2 −3 −24 0
−5 −21 24
−16 45
−61
The first two upper diagonals are −1 3 −24 402… = (−1)n + 1 × . The sum of the antidiagonals is 0 −2 0 10… = 2 × (n + 1).
is an autosequence of the second kind, like for instance / . Hence the array:
2 1 −1 −2 5 16 −61
−1 −2 −1 7 11 −77
−1 1 8 4 −88
2 7 −4 −92
5 −11 −88
−16 −77
−61
The main diagonal, here 2 −2 8 −92…, is the double of the first upper one, here . The sum of the antidiagonals is 2 0 −4 0… = 2 × (n + 1). − = 2 × .
## A combinatorial view: alternating permutations
Around 1880, three years after the publication of Seidel's algorithm, Désiré André proved a now classic result of combinatorial analysis (André 1879) & (André 1881). Looking at the first terms of the Taylor expansion of the trigonometric functions tan x and sec x André made a startling discovery.
{\displaystyle {\begin{aligned}\tan x&=x+{\frac {2x^{3}}{3!}}+{\frac {16x^{5}}{5!}}+{\frac {272x^{7}}{7!}}+{\frac {7936x^{9}}{9!}}+\cdots \\[6pt]\sec x&=1+{\frac {x^{2}}{2!}}+{\frac {5x^{4}}{4!}}+{\frac {61x^{6}}{6!}}+{\frac {1385x^{8}}{8!}}+{\frac {50521x^{10}}{10!}}+\cdots \end{aligned}}}
The coefficients are the Euler numbers of odd and even index, respectively. In consequence the ordinary expansion of tan x + sec x has as coefficients the rational numbers Sn.
${\displaystyle \tan x+\sec x=1+x+{\tfrac {1}{2}}x^{2}+{\tfrac {1}{3}}x^{3}+{\tfrac {5}{24}}x^{4}+{\tfrac {2}{15}}x^{5}+{\tfrac {61}{720}}x^{6}+\cdots }$
André then succeeded by means of a recurrence argument to show that the alternating permutations of odd size are enumerated by the Euler numbers of odd index (also called tangent numbers) and the alternating permutations of even size by the Euler numbers of even index (also called secant numbers).
## Related sequences
The arithmetic mean of the first and the second Bernoulli numbers gives the associate Bernoulli numbers: B0 = 1, B1 = 0, B2 = 1/6, B3 = 0, B4 = −1/30, …. Via the second row of its inverse Akiyama–Tanigawa transform, they lead to the Balmer series.
The Akiyama–Tanigawa algorithm applied to the sequence (n + 4)/(2n + 4) = 1, 5/6, 3/4, 7/10, 2/3, … leads to the Bernoulli numbers with B1 omitted, named the intrinsic Bernoulli numbers Bi(n):
1 5/6 3/4 7/10 2/3
1/6 1/6 3/20 2/15 5/42
0 1/30 1/20 2/35 5/84
−1/30 −1/30 −3/140 −1/105 0
0 −1/42 −1/28 −4/105 −1/28
Hence another link between the intrinsic Bernoulli numbers and the Balmer series via (n).
(n − 2) = 0, 2, 1, 6,… is a permutation of the non-negative numbers.
The terms of the first row are f(n) = 1/2 + 1/(n + 2). The sequence 2, f(n) is an autosequence of the second kind. The sequence 3/2, f(n) leads by its inverse binomial transform to 3/2, −1/2, 1/3, −1/4, 1/5, …, whose sum is 1/2 + log 2.
Consider g(n) = 1/2 − 1/(n + 2) = 0, 1/6, 1/4, 3/10, 1/3. The Akiyama–Tanigawa transform gives:
0 1/6 1/4 3/10 1/3 5/14 ...
−1/6 −1/6 −3/20 −2/15 −5/42 −3/28 ...
0 −1/30 −1/20 −2/35 −5/84 −5/84 ...
1/30 1/30 3/140 1/105 0 −1/140 ...
0, g(n), is an autosequence of the second kind.
Euler (n) / (n + 1) without the second term (1/2) are the fractional intrinsic Euler numbers Ei(n) = 1, 0, −1/4, 0, 1/2, 0, −17/8, 0, … The corresponding Akiyama transform is:
1 1 7/8 3/4 21/32
0 1/4 3/8 3/8 5/16
−1/4 −1/4 0 1/4 25/64
0 −1/2 −3/4 −9/16 −5/32
1/2 1/2 −9/16 −13/8 −125/64
The first line is Eu(n). Eu(n) preceded by a zero is an autosequence of the first kind. It is linked to the Oresme numbers. The numerators of the second line are preceded by 0. The difference table is:
0 1 1 7/8 3/4 21/32 19/32
1 0 −1/8 −1/8 −3/32 −1/16 −5/128
−1 −1/8 0 1/32 1/32 3/128 1/64
## A companion to the Bernoulli numbers
See . The following fractional numbers are an autosequence of the first kind.
/ = 0, 1/2, 1/2, 1/3, 1/6, 1/15, 1/30, 1/35, 1/70, –1/105, –1/210, 41/1155, 41/2310, –589/5005, −589/10010
Apply T(n + 1, k) = 2T(n, k + 1) − T(n,k) to T(0,k) = (k)/(k):
0 1/2 1/2 1/3 1/6 1/15
1 1/2 1/6 0 −1/30 0
0 −1/6 −1/6 −1/15 1/30 1/21
−1/3 −1/6 1/30 2/15 13/210 −2/21
0 7/30 7/30 −1/105 −53/210 −13/105
7/15 7/30 −53/210 −52/105 1/210 92/105
The rows are alternately autosequences of the first and of the second kind. The second row is /. For the third row, see .
The first column is 0, 1, 0, −1/3, 0, 7/15, 0, −31/21, 0, 127/105, 0, −511/33,… from Mersenne primes, see . For the second column see .
Consider the triangle (n + 1) = Fiba(n) =
0 1 0 1 1 0 1 2 1 0
This is Pascal's triangle bordered by zeroes. The antidiagonals' sums are , the Fibonacci numbers. Two elementary transforms yield the array ASPEC0, a companion to ASPEC in .
0 1 1 1 1
0 1 2 3 4
0 1 3 6 10
0 1 4 10 20
0 1 5 15 35
Multiplying the SBD array in by ASPEC0, we have by row sums /:
0
1/2
1/2 0
1/2 −1/6
1/2 −2/6 0
1/2 −3/6 1/15
1/2 −4/6 3/15 0
1/2 −5/6 6/15 −4/105
This triangle is unreduced.
## Arithmetical properties of the Bernoulli numbers
The Bernoulli numbers can be expressed in terms of the Riemann zeta function as Bn = −nζ(1 − n) for integers n ≥ 0, provided that for n = 0 the expression −nζ(1 − n) is understood as the limiting value and the convention B1 = 1/2 is used. This intimately relates them to the values of the zeta function at negative integers. As such, they could be expected to have and do have deep arithmetical properties. For example, the Agoh–Giuga conjecture postulates that p is a prime number if and only if pBp − 1 is congruent to −1 modulo p. Divisibility properties of the Bernoulli numbers are related to the ideal class groups of cyclotomic fields by a theorem of Kummer and its strengthening in the Herbrand–Ribet theorem, and to class numbers of real quadratic fields by Ankeny–Artin–Chowla.
### The Kummer theorems
The Bernoulli numbers are related to Fermat's Last Theorem (FLT) by Kummer's theorem (Kummer 1850), which says:
If the odd prime p does not divide any of the numerators of the Bernoulli numbers B2, B4, …, Bp − 3 then xp + yp + zp = 0 has no solutions in nonzero integers.
Prime numbers with this property are called regular primes. Another classical result of Kummer (Kummer 1851) are the following congruences.
Let p be an odd prime and b an even number such that p − 1 does not divide b. Then for any non-negative integer k
${\displaystyle {\frac {B_{k(p-1)+b}}{k(p-1)+b}}\equiv {\frac {B_{b}}{b}}{\pmod {p}}.}$
A generalization of these congruences goes by the name of p-adic continuity.
If b, m and n are positive integers such that m and n are not divisible by p − 1 and mn (mod pb − 1 (p − 1)), then
${\displaystyle (1-p^{m-1}){\frac {B_{m}}{m}}\equiv (1-p^{n-1}){\frac {B_{n}}{n}}{\pmod {p^{b}}}.}$
Since Bn = −nζ(1 − n), this can also be written
${\displaystyle \left(1-p^{-u}\right)\zeta (u)\equiv \left(1-p^{-v}\right)\zeta (v){\pmod {p^{b}}},}$
where u = 1 − m and v = 1 − n, so that u and v are nonpositive and not congruent to 1 modulo p − 1. This tells us that the Riemann zeta function, with 1 − p^{−s} taken out of the Euler product formula, is continuous in the p-adic numbers on odd negative integers congruent modulo p − 1 to a particular a ≢ 1 mod (p − 1), and so can be extended to a continuous function ζp(s) on the p-adic integers, the p-adic zeta function.
### Ramanujan's congruences
The following relations, due to Ramanujan, provide a method for calculating Bernoulli numbers that is more efficient than the one given by their original recursive definition:
${\displaystyle {\binom {m+3}{m}}B_{m}={\begin{cases}{\frac {m+3}{3}}-\sum \limits _{j=1}^{\frac {m}{6}}{\binom {m+3}{m-6j}}B_{m-6j},&{\text{if }}m\equiv 0{\pmod {6}};\\{\frac {m+3}{3}}-\sum \limits _{j=1}^{\frac {m-2}{6}}{\binom {m+3}{m-6j}}B_{m-6j},&{\text{if }}m\equiv 2{\pmod {6}};\\-{\frac {m+3}{6}}-\sum \limits _{j=1}^{\frac {m-4}{6}}{\binom {m+3}{m-6j}}B_{m-6j},&{\text{if }}m\equiv 4{\pmod {6}}.\end{cases}}}$
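A direct Python transcription of this recurrence (added here as an illustration; the function name is this sketch's own, and only even indices are produced since the odd Bernoulli numbers beyond B1 vanish):

from fractions import Fraction
from math import comb

def bernoulli_even_ramanujan(n_max):
    # returns {m: B_m} for even m <= n_max, using Ramanujan's recurrence
    B = {}
    for m in range(0, n_max + 1, 2):
        head = Fraction(-(m + 3), 6) if m % 6 == 4 else Fraction(m + 3, 3)
        tail = sum(comb(m + 3, m - 6 * j) * B[m - 6 * j] for j in range(1, m // 6 + 1))
        B[m] = (head - tail) / comb(m + 3, m)
    return B

print({m: str(b) for m, b in bernoulli_even_ramanujan(12).items()})
# {0: '1', 2: '1/6', 4: '-1/30', 6: '1/42', 8: '-1/30', 10: '5/66', 12: '-691/2730'}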
### Von Staudt–Clausen theorem
The von Staudt–Clausen theorem was given by Karl Georg Christian von Staudt (von Staudt 1840) and Thomas Clausen (Clausen 1840) independently in 1840. The theorem states that for every n > 0,
${\displaystyle B_{2n}+\sum _{(p-1)\,\mid \,2n}{\frac {1}{p}}}$
is an integer. The sum extends over all primes p for which p − 1 divides 2n.
A consequence of this is that the denominator of B2n is given by the product of all primes p for which p − 1 divides 2n. In particular, these denominators are square-free and divisible by 6.
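For instance, a two-line Python check of the denominator statement (an illustration added here; bernoulli_denominator is a name invented for this sketch, and SymPy's prime generator is used for brevity):

from math import prod
from sympy import primerange

def bernoulli_denominator(two_n):
    # denominator of B_{2n}: the product of all primes p with (p - 1) dividing 2n
    return prod(p for p in primerange(2, two_n + 2) if two_n % (p - 1) == 0)

print(bernoulli_denominator(12))   # 2*3*5*7*13 = 2730, the denominator of B_12 = -691/2730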
### Why do the odd Bernoulli numbers vanish?
The sum
${\displaystyle \varphi _{k}(n)=\sum _{i=0}^{n}i^{k}-{\frac {n^{k}}{2}}}$
can be evaluated for negative values of the index n. Doing so will show that it is an odd function for even values of k, which implies that the sum has only terms of odd index. This and the formula for the Bernoulli sum imply that B2k + 1 − m is 0 for m even and 2k + 1 − m > 1; and that the term for B1 is cancelled by the subtraction. The von Staudt–Clausen theorem combined with Worpitzky's representation also gives a combinatorial answer to this question (valid for n > 1).
From the von Staudt–Clausen theorem it is known that for odd n > 1 the number 2Bn is an integer. This seems trivial if one knows beforehand that the integer in question is zero. However, by applying Worpitzky's representation one gets
${\displaystyle 2B_{n}=\sum _{m=0}^{n}(-1)^{m}{\frac {2}{m+1}}m!\left\{{n+1 \atop m+1}\right\}=0\quad (n>1{\text{ is odd}})}$
as a sum of integers, which is not trivial. Here a combinatorial fact comes to the surface which explains the vanishing of the Bernoulli numbers at odd index. Let Sn,m be the number of surjective maps from {1, 2, …, n} to {1, 2, …, m}; then ${\displaystyle S_{n,m}=m!\left\{{n \atop m}\right\}}$. The last equation can only hold if
${\displaystyle \sum _{{\text{odd }}m=1}^{n-1}{\frac {2}{m^{2}}}S_{n,m}=\sum _{{\text{even }}m=2}^{n}{\frac {2}{m^{2}}}S_{n,m}\quad (n>2{\text{ is even}}).}$
This equation can be proved by induction. The first two examples of this equation are
n = 4: 2 + 8 = 7 + 3,
n = 6: 2 + 120 + 144 = 31 + 195 + 40.
Thus the Bernoulli numbers vanish at odd index because some non-obvious combinatorial identities are embodied in the Bernoulli numbers.
### A restatement of the Riemann hypothesis
The connection between the Bernoulli numbers and the Riemann zeta function is strong enough to provide an alternate formulation of the Riemann hypothesis (RH) which uses only the Bernoulli number. In fact Marcel Riesz (Riesz 1916) proved that the RH is equivalent to the following assertion:
For every ε > 1/4 there exists a constant Cε > 0 (depending on ε) such that |R(x)| < Cεxε as x → ∞.
Here R(x) is the Riesz function
${\displaystyle R(x)=2\sum _{k=1}^{\infty }{\frac {k^{\overline {k}}x^{k}}{(2\pi )^{2k}\left({\frac {B_{2k}}{2k}}\right)}}=2\sum _{k=1}^{\infty }{\frac {k^{\overline {k}}x^{k}}{(2\pi )^{2k}\beta _{2k}}}.}$
Here ${\displaystyle n^{\overline {k}}}$ denotes the rising factorial power in the notation of D. E. Knuth. The numbers βn = Bn/n occur frequently in the study of the zeta function and are significant because βn is a p-integer for primes p where p − 1 does not divide n. The βn are called divided Bernoulli numbers.
## Generalized Bernoulli numbers
The generalized Bernoulli numbers are certain algebraic numbers, defined similarly to the Bernoulli numbers, that are related to special values of Dirichlet L-functions in the same way that Bernoulli numbers are related to special values of the Riemann zeta function.
Let χ be a Dirichlet character modulo f. The generalized Bernoulli numbers attached to χ are defined by
${\displaystyle \sum _{a=1}^{f}\chi (a){\frac {te^{at}}{e^{ft}-1}}=\sum _{k=0}^{\infty }B_{k,\chi }{\frac {t^{k}}{k!}}.}$
Apart from the exceptional B1,1 = 1/2, we have, for any Dirichlet character χ, that Bk,χ = 0 if χ(−1) ≠ (−1)k.
Generalizing the relation between Bernoulli numbers and values of the Riemann zeta function at non-positive integers, one has, for all integers k ≥ 1:
${\displaystyle L(1-k,\chi )=-{\frac {B_{k,\chi }}{k}},}$
where L(s,χ) is the Dirichlet L-function of χ (Neukirch 1999, §VII.2).
## Appendix
### Assorted identities
• Umbral calculus gives a compact form of Bernoulli's formula by using an abstract symbol B:
${\displaystyle S_{m}(n)={\frac {1}{m+1}}((\mathbf {B} +n)^{m+1}-B_{m+1})}$
where the symbol Bk that appears during binomial expansion of the parenthesized term is to be replaced by the Bernoulli number Bk (and B1 = +1/2). More suggestively and mnemonically, this may be written as a definite integral:
${\displaystyle S_{m}(n)=\int _{0}^{n}(\mathbf {B} +x)^{m}\,dx}$
Many other Bernoulli identities can be written compactly with this symbol, e.g.
${\displaystyle (\mathbf {B} +1)^{m}=B_{m}}$
• Let n be non-negative and even
${\displaystyle \zeta (n)={\frac {(-1)^{{\frac {n}{2}}-1}B_{n}(2\pi )^{n}}{2(n!)}}}$
• The nth cumulant of the uniform probability distribution on the interval [−1, 0] is Bn/n.
• Let n? = 1/n! and n ≥ 1. Then Bn is the following (n + 1) × (n + 1) determinant (Malenfant 2011):
{\displaystyle {\begin{aligned}B_{n}&=n!{\begin{vmatrix}1&0&\cdots &0&1\\2?&1&\cdots &0&0\\\vdots &\vdots &&\vdots &\vdots \\n?&(n-1)?&\cdots &1&0\\(n+1)?&n?&\cdots &2?&0\end{vmatrix}}\\[8pt]&=n!{\begin{vmatrix}1&0&\cdots &0&1\\{\frac {1}{2!}}&1&\cdots &0&0\\\vdots &\vdots &&\vdots &\vdots \\{\frac {1}{n!}}&{\frac {1}{(n-1)!}}&\cdots &1&0\\{\frac {1}{(n+1)!}}&{\frac {1}{n!}}&\cdots &{\frac {1}{2!}}&0\end{vmatrix}}\end{aligned}}}
Thus the determinant is σn(1), the Stirling polynomial at x = 1.
• For even-numbered Bernoulli numbers, B2p is given by the (p + 1) × (p + 1) determinant (Malenfant 2011):
${\displaystyle B_{2p}=-{\frac {(2p)!}{2^{2p}-2}}{\begin{vmatrix}1&0&0&\cdots &0&1\\{\frac {1}{3!}}&1&0&\cdots &0&0\\{\frac {1}{5!}}&{\frac {1}{3!}}&1&\cdots &0&0\\\vdots &\vdots &\vdots &&\vdots &\vdots \\{\frac {1}{(2p+1)!}}&{\frac {1}{(2p-1)!}}&{\frac {1}{(2p-3)!}}&\cdots &{\frac {1}{3!}}&0\end{vmatrix}}}$
• Let n ≥ 1. Then (Leonhard Euler)
${\displaystyle {\frac {1}{n}}\sum _{k=1}^{n}{\binom {n}{k}}B_{k}B_{n-k}+B_{n-1}=-B_{n}}$
• Let n ≥ 1. Then (von Ettingshausen 1827)
${\displaystyle \sum _{k=0}^{n}{\binom {n+1}{k}}(n+k+1)B_{n+k}=0}$
• Let n ≥ 0. Then (Leopold Kronecker 1883)
${\displaystyle B_{n}=-\sum _{k=1}^{n+1}{\frac {(-1)^{k}}{k}}{\binom {n+1}{k}}\sum _{j=1}^{k}j^{n}}$
• Let n ≥ 1 and m ≥ 1. Then (Carlitz 1968)
${\displaystyle (-1)^{m}\sum _{r=0}^{m}{\binom {m}{r}}B_{n+r}=(-1)^{n}\sum _{s=0}^{n}{\binom {n}{s}}B_{m+s}}$
• Let n ≥ 4 and
${\displaystyle H_{n}=\sum _{k=1}^{n}k^{-1}}$
the harmonic number. Then (H. Miki 1978)
${\displaystyle {\frac {n}{2}}\sum _{k=2}^{n-2}{\frac {B_{n-k}}{n-k}}{\frac {B_{k}}{k}}-\sum _{k=2}^{n-2}{\binom {n}{k}}{\frac {B_{n-k}}{n-k}}B_{k}=H_{n}B_{n}}$
• Let n ≥ 4. Yuri Matiyasevich found (1997)
${\displaystyle (n+2)\sum _{k=2}^{n-2}B_{k}B_{n-k}-2\sum _{l=2}^{n-2}{\binom {n+2}{l}}B_{l}B_{n-l}=n(n+1)B_{n}}$
• Faber–Pandharipande–Zagier–Gessel identity: for n ≥ 1,
${\displaystyle {\frac {n}{2}}\left(B_{n-1}(x)+\sum _{k=1}^{n-1}{\frac {B_{k}(x)}{k}}{\frac {B_{n-k}(x)}{n-k}}\right)-\sum _{k=0}^{n-1}{\binom {n}{k}}{\frac {B_{n-k}}{n-k}}B_{k}(x)=H_{n-1}B_{n}(x).}$
Choosing x = 0 or x = 1 results in the Bernoulli number identity in one or another convention.
• The next formula is true for n ≥ 0 if B1 = B1(1) = 1/2, but only for n ≥ 1 if B1 = B1(0) = −1/2.
${\displaystyle \sum _{k=0}^{n}{\binom {n}{k}}{\frac {B_{k}}{n-k+2}}={\frac {B_{n+1}}{n+1}}}$
• Let n ≥ 0. Then
${\displaystyle -1+\sum _{k=0}^{n}{\binom {n}{k}}{\frac {2^{n-k+1}}{n-k+1}}B_{k}(1)=2^{n}}$
and
${\displaystyle -1+\sum _{k=0}^{n}{\binom {n}{k}}{\frac {2^{n-k+1}}{n-k+1}}B_{k}(0)=\delta _{n,0}}$
• A reciprocity relation of M. B. Gelfand (Agoh & Dilcher 2008):
${\displaystyle (-1)^{m+1}\sum _{j=0}^{k}{\binom {k}{j}}{\frac {B_{m+1+j}}{m+1+j}}+(-1)^{k+1}\sum _{j=0}^{m}{\binom {m}{j}}{\frac {B_{k+1+j}}{k+1+j}}={\frac {k!m!}{(k+m+1)!}}}$ | 2019-12-13 14:18:15 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 98, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8684830665588379, "perplexity": 1333.583130940795}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-51/segments/1575540555616.2/warc/CC-MAIN-20191213122716-20191213150716-00187.warc.gz"} |
http://dmitrybrant.com/2007/10 | Monthly Archives: October 2007
• If you’re using Internet Explorer, the cache folder should be located at “C:\Documents and Settings\yourname\Local Settings\Temporary Internet Files“, where yourname is your Windows login name.
If you’re using Firefox, the cache folder should be located at “C:\Documents and Settings\yourname\Local Settings\Application Data\Mozilla\Firefox\Profiles\default\Cache“.
• It’s a bit complicated to actually find the video you want in the cache folder, since neither Internet Explorer and Firefox give cached items proper file extensions. The best you can do here is sort the files by size, and look for files of a video-worthy size (several megabytes). Under Firefox, the files named _CACHE_nnn_ are special files, and not videos. A good method of doing this would be to clear your browser’s cache, then go to YouTube, view the video you want (and only that video), then go to the cache folder: the largest file in the cache should be the video. Now copy it out of the cache folder and rename it with a “.flv” extension, and you’ve got it! | 2013-12-09 06:08:48 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6546091437339783, "perplexity": 4327.265631289907}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2013-48/segments/1386163915534/warc/CC-MAIN-20131204133155-00088-ip-10-33-133-15.ec2.internal.warc.gz"} |
http://blomquist.xyz/3dcollisions/content/Chapter4/robust_sat.html | # Robust SAT
The reason that our triangle - triangle collision has a false negative is because the two triangles are on the exact same plane. When this happens, the cross product of any of the vectors yields 0!
We can actually fix this issue. First, we have to know if the result of the cross product is 0. Remember, floating point errors tend to creep up, so we have to check against an epsilon. The easiest way to do this is to check the square magnitude of the vector
bool isZeroVector = crossVector.LengthSquared() < 0.0001f;
## Constructing a new axis
The key here is, if we are given two parallel vectors, to create a new vector that is perpendicular to both. For this all we need is a new axis to test.
We have the function TestAxis defined as so
bool TestAxis(Triangle triangle1, Triangle triangle2, Vector3 axis)
The key here is, instead of passing in the axis to test, we want to pass in the components that are used in the cross product to create the axis. The above would be rewritten like so
bool TestAxis(Triangle triangle1, Triangle triangle2, Vector3 A, Vector3 B, Vector3 C, Vector3 D) {
Vector3 axis = Vector3.Cross(A - B, C - D);
if (axis.LengthSquared() < 0.0001f) {
// Axis is zero, try other combination
Vector3 n = Vector3.Cross(A - B, C - A);
axis = Vector3.Cross(A - B, n);
if (axis.LengthSquared() < 0.0001f) {
// Axis is still zero, not a seperating axis
return false;
}
}
// The rest of the function
}
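The article leaves "the rest of the function" to the reader. For orientation only, here is a minimal sketch in Python (not the article's C#; test_axis, tri1 and tri2 are names invented for this sketch) of what that remainder usually does: project both triangles onto the candidate axis and report a separating axis when the projection intervals do not overlap.

def test_axis(tri1, tri2, axis):
    # tri1, tri2: sequences of three 3D points; axis: a 3D vector (need not be unit length)
    def dot(u, v):
        return u[0] * v[0] + u[1] * v[1] + u[2] * v[2]
    p1 = [dot(p, axis) for p in tri1]
    p2 = [dot(p, axis) for p in tri2]
    # disjoint projection intervals => this axis separates the triangles
    return max(p1) < min(p2) or max(p2) < min(p1)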
## With a triangle
We know a triangle has the following properties:
Vector3 edge1 = p1 - p0; // Y - X
Vector3 edge2 = p2 - p1; // Z - Y
Vector3 edge3 = p0 - p2; // X - Z
Vector3 faceNormal = Cross(edge1, edge2); // YX CROSS ZY
When you look at our triangle, everything is constructed out of 4 vectors. The face normal for example is constructed out of YX CROSS ZY, or p1 - p0 CROSS p2 - p1. So, for the axis that is the face normal, we would call the TestAxis method like so:
if (TestAxis(triangle1, triangle2, p1, p0, p2, p1))
Similarly, to construct the test axis out of the edges of the triangle, at some point the axis is found like this:
Vector3 axis = Cross(triangle1.edge1, triangle2.edge1)
This is repeated 9 times in all edge combinations. The above edge would be passed into the TestAxis function like this:
if (TestAxis(triangle1, triangle2, triangle1.p1, triangle1.p0, triangle2.p1, triangle2.p0))
Take note, all we're doing here is substituting.
This does mean that you can no longer hold a simple array of vectors called testAxis and loop through them. You now have to hold, for each test, the 4 vectors that were previously used to build that axis.
Implement this however you want, you can write it all out, or try to make a clever loop. Once you adjust your code to be a robust SAT test, the unit test for same plane triangles that has been failing until now will start to work. | 2023-03-22 00:48:54 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.3357100188732147, "perplexity": 1660.2799344182415}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296943747.51/warc/CC-MAIN-20230321225117-20230322015117-00021.warc.gz"} |
https://mathematica.stackexchange.com/questions/73361/problem-with-recursion-depth | # Problem with recursion depth
I want to solve a problem where I need to compute the Euler-Lagrange equation. However I receive a recursion problem, and I don't understand why.
\$RecursionLimit::reclim: Recursion depth of 1024 exceeded.
Everything executes fine except for the last three rows where this recursion problem occurs. I'm completely new to Mathematica, so probably I'm just missing something trivial. Anyhow, my code is below. Any ideas on how to solve this?
v1x = D[L1*Cos[φ1[t]]*Cos[φ2[t]], t];
v1y = D[L1*Sin[φ2[t]], t];
v1z = D[L1*Cos[φ2[t]]*Sin[φ1[t]], t];
v2x = v1x +
D[(1/2)*L2*Cos[φ1[t]]*
Cos[φ2[t] + φ3[t]], t];
v2y = v1y + D[(1/2)*L2*Sin[φ2[t] + φ3[t]], t];
v2z = v1z +
D[(1/2)*L2*Cos[φ2[t] + φ3[t]]*
Sin[φ1[t]], t];
v1 = v1x^2 + v1y^2 + v1z^2;
v2 = v2x^2 + v2y^2 + v2z^2;
T1 = (1/2)*I1*D[φ1[t], t]^2;
T2 = (1/2)*I2*D[φ2[t], t]^2;
T3 = (1/2)*I3*(D[φ2[t], t] + D[φ3[t], t])^2;
T4 = (1/2)*m2*v1;
T5 = (1/2)*m3*v2;
U1 = (1/2)*L1*m2*g*Sin[φ2[t]];
U2 = m3*g (L1*Sin[φ2[t]] + (1/2)*L2*
Sin[φ2[t] + φ3[t]]);
L = Simplify[T1 + T2 + T3 + T4 + T5 - U1 - U2]
L1 = D[D[L, D[φ1[t], t]], t] - D[L, φ1[t]];
L2 = D[D[L, D[φ2[t], t]], t] - D[L, φ2[t]];
L3 = D[D[L, D[φ3[t], t]], t] - D[L, φ3[t]];
• Welcome to Mathematica.SE! I suggest that: 1) You take the introductory Tour now! 2) When you see good questions and answers, vote them up by clicking the gray triangles, because the credibility of the system is based on the reputation gained by users sharing their knowledge. Also, please remember to accept the answer, if any, that solves your problem, by clicking the checkmark sign! 3) As you receive help, try to give it too, by answering questions in your area of expertise. – bbgodfrey Feb 5 '15 at 20:54
You define L1 in terms of L1, which leads to a recursion. A much simplified but completely analogous example is
a = a+1
When this line is evaluated,
1. Mathematica creates the definition a = a+1
2. The expression a=a+1 evaluates to a+1, i.e. its RHS
3. Now it's time to evaluate a+1 part by part. The first part is a. But a has a definition which now evaluates to a+1.
4. ... and so on ad infinitum.
So the question is: what are you actually trying to achieve with a definition where a symbol appears both on the left and right hand side of the assignment operator? | 2021-03-07 18:23:00 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5556881427764893, "perplexity": 2896.963228848653}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-10/segments/1614178378043.81/warc/CC-MAIN-20210307170119-20210307200119-00265.warc.gz"} |
https://www.coursehero.com/file/56327600/32-matrix-operationspptx/ | # 3.2 matrix operations.pptx - Operations on Matrices
Operations on Matrices Definition and Use Basic operations Multiplication Inverses
Definition of Matrix A matrix is an array of objects, usually numbers. You used some in your work in Unit 2. All of these are examples of matrices. The first two are “square”, the first 2x2 and the second 3x3. The third one is 3x2, the fourth one a “row” matrix at 1x3, and the fifth one a “column” matrix at 3x1.
Use of Matrices Sometimes, matrices are just lists of numbers and a matrix is the best way to organize them. More frequently, the numbers stand for something. You used matrices where the numbers stood for coordinate points; they can also stand for coefficients of equations, among other things.
Use of Matrices For example, the system of equations 4x1 − 5x2 = 13, 2x1 + x2 = 7 can be inserted into a matrix that looks like this:
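As a small illustration (added here, not from the slides), the same system written as a coefficient matrix and solved numerically with NumPy:

import numpy as np

A = np.array([[4.0, -5.0],
              [2.0,  1.0]])    # coefficient matrix
b = np.array([13.0, 7.0])      # right-hand side
print(np.linalg.solve(A, b))   # approx. [3.4286, 0.1429], i.e. x1 = 24/7, x2 = 1/7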
Basic Operations on Matrices Matrices that are the same size can be added or subtracted: = A matrix can also be multiplied easily by a constant: | 2022-01-22 18:43:38 | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8134498596191406, "perplexity": 605.4381456915365}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-05/segments/1642320303868.98/warc/CC-MAIN-20220122164421-20220122194421-00180.warc.gz"} |
https://a2i2.deakin.edu.au/2019/12/03/raspberry-pi-hardware-interactions-part-4-i2c-temperature-and-humidity-sensor/ | # Hardware Interactions: Part 4 – I2C Temperature and Humidity Sensor
Blog / Rhys Hill / December 3, 2019
This next post in our series on hardware interactions marks a departure from our usual examples. In the previous three posts we’ve focused on the Raspberry Pi but, this time around we’ll be looking at the ESP32 as we diverge from single board computers to explore a system on a chip (SoC). While the ESP32 does lose out to the Pi in raw power, it makes up for this by being smaller, faster, and cheaper.
The strengths of the ESP32 are highlighted when it comes to tasks like the example we have in this post. That is, communicating to other integrated circuits via the Inter-Integrated Circuit protocol (I2C or I squared C). We’ll examine this protocol by developing a simple temperature and humidity sensor. During this post we’ll step through,
1. how to wire up the circuit
2. how to configure the ESP32 to interact with the I2C bus,
3. how the I2C bus works more generally,
4. how these factors impact the software design decisions made when interacting with this kind of hardware
## Hardware Setup
If you would like to attempt this demonstration for yourself you will need,
You will also need to set up the toolchain required for working with ESP IDF.
### Basic Circuit
We’ll be using the 3V3 line from the ESP32 development board to power our sensor and the 0V line as our common ground. GPIO pins 4 and 14 are suitable for use as our clock and data line used for communication, but alternatives for these are available in the datasheet.
It should be noted that even though the above circuit shows two pull up resistors, these are optional in our case. The breakout board available from AdaFruit already incorporates these pull up resistors into its design. These resistors will become important if you are attaching extra devices to the same bus as the bus being free is indicated by both of these lines being high.
## ESP32 Configuration
For our example we’ll be using the freeRTOS kernel from Amazon which has become the defacto standard for embedded real time devices. Including freeRTOS and the I2C driver required is as simple as adding these includes to your our applications main file. We’ll also set up logging by including the esp_log library and defining a log tag.
#include <stdlib.h>
#include <esp_log.h>
#include <freertos/FreeRTOS.h>
#include <driver/i2c.h>
#define TAG "i2c_bus"
Now that we have access to our I2C driver we can configure the bus. To do this we create a config object and then change the settings to match our breadboard circuit. In our case we’re using GPIO 4 as the IO for our data line, SDA, and GPIO 14 as the IO for our clock line. As mentioned before, these two lines need to be pulled high to 3.3V which we can also reflect in our setup by enabling GPIO pull ups on these lines. Devices attached to a shared I2C bus can be either a master device, as in our case, or a slave device. Master meaning that they send commands to other devices and read results; slave meaning that they listen for and carry out commands and only write to the bus in response to requests from a master. The clock speed configures how fast the clock line oscillates and dictates the speed at which devices on the bus can exchange information. This will depend on the physical limitations of the particular sensor you are using, but 100kHz will work for the Si7021. With our config set we pass the config object to the I2C driver to load these options into memory.
i2c_config_t config;
config.sda_io_num = 4;
config.sda_pullup_en = GPIO_PULLUP_ENABLE;
config.scl_io_num = 14;
config.scl_pullup_en = GPIO_PULLUP_ENABLE;
config.mode = I2C_MODE_MASTER;
config.master.clk_speed = 100000;
esp_err_t error = i2c_param_config(I2C_NUM_0, &config);
if (error != ESP_OK) {
ESP_LOGI(TAG, "Failed to configure shared i2c bus: %s", esp_err_to_name(error));
return;
}
A common issue found when trying to communicate via this protocol is that the driver will not wait long enough for some devices to respond to commands and requests, which will result in a timeout error. To give ourselves a little more leeway we can extend the default timeout to something a bit more forgiving. Since the ESP32s global clock ticks at 80 MHz, the value we’re using in our example, 100k ticks, gives us a 1.25ms buffer.
$$\frac{100,000 \text{ ticks}}{80,000,000 \text{ ticks/s}}= 0.00125\text{s}$$
error = i2c_set_timeout(I2C_NUM_0, 100000);
if (error != ESP_OK) {
ESP_LOGI(TAG, "Failed set timeout for i2c bus: %s", esp_err_to_name(error));
return;
}
With our config loaded we can now install the I2C driver. You may have noticed that all of the functions we’ve been using return an esp_error_t. These errors are to let us know about the success or failure of each step. For our example we simply log errors if they occur, but the same method could be used to implement more robust error handling or retry logic.
error = i2c_driver_install(I2C_NUM_0, config.mode, 512, 512, 0);
if (error != ESP_OK) {
ESP_LOGI(TAG, "Failed to install driver for i2c bus: %s", esp_err_to_name(error));
}
Now that we have the bus set up, it’s time to pull out the datasheet for our Si7021 and figure out how to talk to this thing. The first thing we’ll need is the slave address of our device. Each I2C enabled device has a 7 bit address hardwired into its silicon. This specific series of 1s and 0s will be listened for by the device and everything it sees before this address will be ignored. This is how multiple devices are able to share the same bus. Next, we need to find the commands we’re going to use to retrieve values from the sensor. The important two for our example are, ‘Measure Relative Humidity, Hold Master Mode’ and ‘Measure Temperature, Hold Master Mode’. As the names suggest, these commands tell the sensor to take a measurement of either humidity or the temperature and respond with the result. Let’s define these commands so we can use them later.
// Commands for Si7021
#define SI_7021_ADDRESS 0x40 // 7-bit I2C slave address given in the Si7021 datasheet
#define SI_7021_MEASURE_HUMIDITY 0xE5
#define SI_7021_MEASURE_TEMPERATURE 0xE3
#define SI_7021_TIMEOUT 1000 / portTICK_RATE_MS
To take humidity measurement we need to first write the device slave address to let all the devices on the bus know that the following command is meant for our sensor. This is followed by a single bit indicating to the sensor that we are about to write some information to the bus. We can then write our measure humidity command, thereby triggering a measurement.
// Write measure humidity command
i2c_cmd_handle_t handle = i2c_cmd_link_create();
i2c_master_start(handle);
// Address the sensor, with the read/write bit set to "write"
i2c_master_write_byte(handle,
                      (SI_7021_ADDRESS << 1) | I2C_MASTER_WRITE,
                      I2C_MASTER_ACK);
i2c_master_write_byte(handle,
                      SI_7021_MEASURE_HUMIDITY,
                      I2C_MASTER_ACK);
i2c_master_stop(handle);
esp_err_t error = i2c_master_cmd_begin(I2C_NUM_0, handle, SI_7021_TIMEOUT);
i2c_cmd_link_delete(handle);
if (error != ESP_OK) {
    ESP_LOGI(TAG, "Failed to write humidity command: %s", esp_err_to_name(error));
}
Now that our sensor has taken a measurement we can read it from the bus. Again we start by writing the device slave address to give our sensor the all clear to start writing. This time the address is followed by a read bit to let the sensor know that it is expected to transmit data. We can see from our datasheet that the Si7021 produces two byte measurements, so we read in the next two bytes transmitted.
// Read two bytes from the temperature and humidity sensor
uint8_t humMSB;
uint8_t humLSB;
i2c_cmd_handle_t handle = i2c_cmd_link_create();
i2c_master_start(handle);
// Address the sensor again, this time with the read/write bit set to "read"
i2c_master_write_byte(handle,
                      (SI_7021_ADDRESS << 1) | I2C_MASTER_READ,
                      I2C_MASTER_ACK);
// Ack the first byte, Nack the second to signal the end of the read
i2c_master_read_byte(handle, &humMSB, I2C_MASTER_ACK);
i2c_master_read_byte(handle, &humLSB, I2C_MASTER_NACK);
i2c_master_stop(handle);
esp_err_t error = i2c_master_cmd_begin(I2C_NUM_0, handle, SI_7021_TIMEOUT);
i2c_cmd_link_delete(handle);
if (error != ESP_OK) {
    ESP_LOGI(TAG, "Failed to read humidity: %s", esp_err_to_name(error));
}
The final step is to turn these two bytes into something readable. The two bytes we’ve just read in form a 16 bit RH_Code. To translate this code into an actual relative humidity reading we apply the equation described in the datasheet,
$$\text{%RH} = \frac{125 * \text{RHCode}}{65536} – 6$$
double humidity = ((uint16_t) humMSB << 8) | (uint16_t) humLSB;
humidity *= 125;
humidity /= 65536;
humidity -= 6;
## Behind the Scenes
As you may have noticed, there’s a bit more going on here than writing a command and reading the response. We haven’t yet discussed the importance of the start and stop functions that are being called in our example, nor the read-write bit and ack and nack options being passed into functions. Let’s quickly get a few of these definitions out of the way before explaining the flow of data that we’ve just implemented.
Start condition – As we discussed early in this post, by default both lines of the I2C bus are pulled high. To indicate that communication is about to start the master device, in our case the ESP32, pulls the data line low while leaving the clock line high. This condition is set by the calls to i2c_master_start in our example.
Transmission – Can now occur until the stop condition is called. This clock line being pulled low triggers the output of 1 bit of data onto the bus. The clock then flips back to high triggering a read on the receiving end of the transmission. Once the clock line returns to low the data line is updated to the next bit and the process repeats until all information has been transmitted.
Stop condition – To indicate the end of a transmission the master device pulls the data line back up to its initial high position while the clock line is high. This frees the bus for other devices to use and is why we call i2c_master_stop.
The key to this protocol working is that the data line is never updated while the clock line is high unless it is to start or end of the transmission. Figure 2 below shows the timing involved in a transmission over the I2C bus for any number of bits.
Now for the flow of information we’re actually sending in our example. Let’s wrap up the last few definitions.
Read bit – The device slave address which starts communication with a device is 7 bits long. The final bit of this byte signifies the direction of communication. When starting a read a 1 is written to this bit.
Write bit – Similarly a 0 is written to this final bit when starting a write to a device.
Ack – After each byte is transmitted to a device the device can respond with an optional acknowledgement bit. In the same way, our master device can elect to write that same acknowledgement bit each time it reads a byte from a slave. In the case of an Ack this bit is a 0.
Nack – Another option after reading a byte is to respond with a Nack, a single bit of 1. This lets the slave device know that the byte just read is the final byte expected by the master.
This gives us our final information flow as shown in Figure 3 below,
## What’s Next
Now that we’ve got our sensor reporting humidity data why not dig through the datasheet a bit further and try to implement taking a temperature measurement for yourself. The data flow will follow the same pattern we’ve worked out here in Figure 3. You just need to write a new command and figure out how to convert the two byte temperature code into a readable temperature. Or better yet, why not try applying what we’ve covered here to implement the interface for other I2C devices. | 2023-02-08 21:04:59 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.268857479095459, "perplexity": 1765.8524300979134}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-06/segments/1674764500904.44/warc/CC-MAIN-20230208191211-20230208221211-00822.warc.gz"} |
https://mathhelpboards.com/threads/polynomial-types.3618/ | # [SOLVED]Polynomial types
#### karush
##### Well-known member
$2x^{-3}-2x^{2}+3$
Decide whether the function is a polynomial function. If so, state its degree, type, and leading coefficient.
well, I presume it is a polynomial since it is the sum of powers in the variable x. Its degree is 2 since that is the highest power, but I don't know its type, since it is not quadratic or cubic, and it has a vertical asymptote. And of course the leading coefficient is 2.
couldn't find the answer in the book?
Last edited:
#### Bacterius
##### Well-known member
MHB Math Helper
Polynomials cannot have negative powers of $x$, so it's not a polynomial. Polynomials are continuous everywhere, differentiable everywhere, are well-behaved and have no asymptotes.
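A quick, optional way to confirm this (added here; it uses SymPy, which is not mentioned in the thread):

from sympy import symbols

x = symbols('x')
expr = 2*x**-3 - 2*x**2 + 3
print(expr.is_polynomial(x))            # False, because of the negative exponent
print((-2*x**2 + 3).is_polynomial(x))   # True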
See the Wikipedia article, it says "non-negative integer exponents" | 2020-10-24 17:22:43 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7263809442520142, "perplexity": 850.4933570126864}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-45/segments/1603107884322.44/warc/CC-MAIN-20201024164841-20201024194841-00451.warc.gz"} |
http://www.apmonitor.com/wiki/index.php/PmWiki/OtherVariables?action=print | # PmWiki: OtherVariables
$FmtV This variable is an array that is used for string substitutions at the end of a call to FmtPageName(). For each element in the array, the "key" (interpreted as a string) will be replaced by the corresponding "value". The variable is intended to be a place to store substitution variables that have frequently changing values (thus avoiding a rebuild of the variable cache making FmtPageName() faster). Also see $FmtP. Values of $FmtV are set by the internal functions FormatTableRow, LinkIMap, HandleBrowse, PreviewPage, HandleEdit, PmWikiAuth, and PasswdVar, apparently to set values for system generated string substitutions like PageText. $FmtP
This variable is an array that is used for pattern substitutions near the beginning of a call to FmtPageName. For each element in the array, the "key" (interpreted as a pattern) will be replaced by the corresponding value evaluated for the name of the current page. This is for instance used to handle $-substitutions that depend on the pagename passed to FmtPageName(). Also see $FmtV. From robots.php: If $EnableRobotCloakActions is set, then a pattern is added to $FmtP to hide any "?action=" url parameters in page urls generated by PmWiki for actions that robots aren't allowed to access. This can greatly reduce the load on the server by not providing the robot with links to pages that it will be forbidden to index anyway.
$FmtPV This variable is an array that is used for defining Page Variables. New variables can be defined with $FmtPV['$VarName'] = 'variable definition'; which can be used in markup with {$VarName}. Please note that the contents of $FmtPV['$VarName'] are eval()ed to produce the final text for $VarName, so the contents must be a PHP expression which is valid at the time of substitution. In particular, this does not work:

# This doesn't work
$FmtPV['$MyText'] = "This is my text.";  # WARNING: Doesn't work!

The problem is that the text This is my text. is not a valid PHP expression. To work it would need to be placed in quotes, so that what actually gets stored in $FmtPV['$MyText'] is "This is my text." which is a valid PHP expression for a text string. Thus the correct way to do this would be with an extra set of quotes:

# This will work
$FmtPV['$MyText'] = '"This is my text."';

This also has implications for how internal PHP or PmWiki variables are accessed. To have the page variable $MyVar produce the contents of the internal variable $myvar, many folks try the following, which does not work:

# This doesn't work either!
$myvar = SomeComplexFunction();
$FmtPV['$MyVar'] = $myvar;  # WARNING: Doesn't work!

There are several correct ways to do this, depending on whether you need the value of the $myvar variable as it was at the time the $FmtPV entry was created, or at the time that a particular instance of $MyVar is being rendered on a page. For most simple page variables that don't change during the processing of a page it's more efficient to set the value when the entry is created:

$myvar = SomeComplexFunction();
$FmtPV['$MyVar'] = "'" . $myvar . "'";  # capture the contents of $myvar

NOTE: If $myvar should contain single quotes, the above won't work as is, and you'll need to process the variable to escape any internal quotes.

For more complex cases where an internal variable may have different values at different places in the page (possibly due to the effects of other markup), you need to make the $FmtPV entry an explicit reference to the global value of the variable (and the variable had better be global), like this:

global $myvar;
$FmtPV['$MyVar'] = '$GLOBALS["myvar"]';

Finally, there's nothing to stop you from simply having the evaluation of the $FmtPV entry execute a function to determine the replacement text:

# add page variable {$Today}, formats today's date as yyyy-mm-dd
$FmtPV['$Today'] = 'strftime("%Y-%m-%d", time() )';

Once again, please note that the values of the elements of $FmtPV are eval()ed, so always sanitize any user input. The following is very insecure:

$FmtPV['$Var'] = $_REQUEST['Var'];  # critically insecure, allows PHP code injection

You should sanitize the user input to contain only expected values, or make sure the value is a quoted string, for example:

# we only expect numeric values of Var
$FmtPV['$Var'] = intval($_REQUEST['Var']);

# properly escaped quoted string
$FmtPV['$Var'] = '"' . addslashes($_REQUEST['Var']) . '"';

See Cookbook:MoreCustomPageVariables for more examples of how to use $FmtPV.
$MaxPageTextVars

This variable prevents endless loops in accidental recursive PageTextVariables which could lock down a server. Default is 500, which means that each PageTextVariable from one page can be displayed up to 500 times in one wiki page. If you need to display it more than 500 times, set in config.php something like:

$MaxPageTextVars = 10000; # ten thousand times
$PageCacheDir

Enables the cache of most of the HTML for pages with no conditionals. The variable contains the name of a writable directory where PmWiki can cache the HTML output to speed up subsequent displays of the same page. Default is empty, which disables the cache. See also $PageListCacheDir.
# Enable HTML caching in work.d/
\$PageCacheDir = 'work.d/'; | 2018-06-20 21:17:13 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.27205345034599304, "perplexity": 1939.3121095169472}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-26/segments/1529267863886.72/warc/CC-MAIN-20180620202232-20180620222232-00277.warc.gz"} |
http://rikiheck.blogspot.com/ | ## Tuesday, November 20, 2018
### Kids These Days
A few weeks ago, a student at Brown, Lauren Black, interviewed me for a podcast she was doing about children's beliefs about the universe. The result is here, at Now Here This. The one I'm in is titled The Conference Room in the Sky.
## Saturday, November 3, 2018
### Sir Michael Anthony Eardley Dummett
A wonderful email I got from a past student reminded me of how much I miss my old BPhil supervisor, Sir Michael Dummett. I won't say much about him here. Philosophically, my work speaks to his influence, I hope, and I had my say, among many others, in the remembrances that Ernie Lepore assembled when Michael died. I also wrote about his impact on philosophy of mathematics for Philosophia Mathematica.
Michael was way ahead of his time, in so many ways. To mention just a few: His distinction between ingredient sense and assertoric content; his emphasis on indefinitely extensible concepts; his insistence, way back, that theories of meaning (for natural languages) must be theories of truth; that both must be theories of understanding; and that those in turn must be theories of what competent speakers know. Dummett, despite himself, was a neo-Davidsonian a decade or more before James Higginbotham, another of my teachers, would bring that sort of view into the center of philosophy of language and, for that matter, natural language semantics.
## Tuesday, October 30, 2018
### Anne Koedt, "The Myth of the Vaginal Orgasm"
Anne Koedt's essay "The Myth of the Vaginal Orgasm" (1970) is a classic of second wave feminism, an important example of the concern feminists of that era had with sexuality, but also an intellectual precursor of political lesbianism and the divisive debates that surrounded that topic. One can find 'reprints' all over the web, and the essay was reprinted in a collection of essays, Radical Feminism, edited by Koedt that was published in 1973. But the version published in that book, and (from what I can tell) 'reprinted' elsewhere, is not quite the same as the original.
I know this because I was fortunate enough to find a copy of the original on abebooks.com. It was published by the New England Free Press (operating out of Boston) on two double-letter (8.5"x22") pages, printed both sides, and folded into a letter-sized (8.5"x11") pamphlet (with no staples, at least in mine). Cost: 10 cents, about 70 cents in 2018. It's the kind of thing one would have found in 'radical' bookstores back in those days.
Because of the historical importance of this work, it seems worth making a copy available online. So here's a DjVu and a PDF.
(If anyone should have good reason to object to my making this available, please let me know, and I'll be happy to remove it.)
## Saturday, October 20, 2018
### Corner Quotes in LaTeX
I'm posting this just because I had a hard time, today, finding the original source for a macro I've been using for a while in LaTeX. The macro in question typesets 'corner quotes', such as:
⌜A ∧ B⌝
The corners themselves are not hard to create, since LaTeX has \ulcorner and \urcorner macros. That will usually work fine, but there are issues involving the height and spacing of the corners that can arise in some cases. A macro due to Sam Buss solves these problems. (I discovered it here.) I've put the macro in a style file, godelnum.sty, which you can download here. Here's the contents:
\newbox\gnBoxA
\newdimen\gnCornerHgt
\setbox\gnBoxA=\hbox{$\ulcorner$}
\global\gnCornerHgt=\ht\gnBoxA
\newdimen\gnArgHgt
\def\Godelnum #1{%
\setbox\gnBoxA=\hbox{$#1$}%
\gnArgHgt=\ht\gnBoxA%
\ifnum \gnArgHgt<\gnCornerHgt
\gnArgHgt=0pt%
\else
\advance \gnArgHgt by -\gnCornerHgt%
\fi
\raise\gnArgHgt\hbox{$\ulcorner$} \box\gnBoxA %
\raise\gnArgHgt\hbox{$\urcorner$}}
Usage is just: \Godelnum{A \wedge B}, and the like. Note that one can do this outside math, since the macro inserts \$s around the argument. (Probably \ensuremath would be better.)
## Friday, October 12, 2018
### Newly Published: Logicism, Ontology, and the Epistemology of Second-Order Logic
In Ivette Fred and Jessica Leech, eds, Being Necessary: Themes of Ontology and Modality from the Work of Bob Hale (Oxford: Oxford University Press), pp. 140-69 (PDF here)
Abstract:
In two recent papers, Bob Hale has attempted to free second-order logic of the 'staggering existential assumptions' with which Quine famously attempted to saddle it. I argue, first, that the ontological issue is at best secondary: the crucial issue about second-order logic, at least for a neo-logicist, is epistemological. I then argue that neither Crispin Wright's attempt to characterize a `neutralist' conception of quantification that is wholly independent of existential commitment, nor Hale's attempt to characterize the second-order domain in terms of definability, can serve a neo-logicist's purposes. The problem, in both cases, is similar: neither Wright nor Hale is sufficiently sensitive to the demands that impredicativity imposes. Finally, I defend my own earlier attempt to finesse this issue, in "A Logic for Frege's Theorem", from Hale's criticisms.
And from the acknowledgements:
It is the peculiar tradition of our tribe to express our respect for other members by highlighting our disagreements with them. So, in case it is not clear, let me just say explicitly how much I admire Bob Hale’s work. I learned a lot from him over the years—both in conversation and from his written work—and greatly enjoyed the time we were able to spend together. Bob’s enthusiastic support for me and my work, early in my career, was particularly important to me. So I am honored to be able to contribute to this volume and thank Ivette and Jessica for the invitation.
I'm particularly sad, for myself, that Bob passed before we had a chance to discuss these issues one more time....
## Thursday, September 13, 2018
### Tarski: A Semantic Proof of Incompleteness, and a Possible Error
In re-reading Tarski's paper "The Semantic Conception of Truth and the Foundations of Semantics", for my course on theories of truth, I was struck by some remarks he makes in footnote 17:
...[I]n view of the elementary nature of the notion of provability, a precise definition of this notion requires only rather simple logical devices. In most cases, those logical devices which are available in the formalized discipline itself (to which the notion of provability is related) are more than sufficient for this purpose. We know, however, that as regards the definition of truth just the opposite holds. Hence, as a rule, the notions of truth and provability cannot coincide; and since every provable sentence is true, there must be true sentences which are not provable.
It was not immediately obvious to me what argument Tarski was making here. It seems worth spelling it out in some detail.
The two central premises are of course clear enough. First, we know that in any theory of sufficient strength (e.g., any theory extending Robinson arithmetic Q), the notion of provability can be formally defined. More precisely, the relation "P is a proof of S in T" can be 'represented' in Q in a familiar sense, and provability in T can then be defined as the existential quantification of this relation. Second, we have Tarski's theorem: For any sufficiently expressive language L, such as the language of arithmetic, there can be no formula Tr(x) that defines truth in L.
What's puzzling about this is what's often puzzling about Tarski's remarks on such topics: The remarks about provability are most familiarly applied to theories, such as Q; but Tarski's theorem is most familiarly applied to languages, such as the language of arithmetic. Tarski famously conflates these two notions in many of his writings on truth. And, in fact, it seems to me that Tarski's argument here is only correct if it is primarily one about languages.
Suppose we try to interpret it as one about theories. There is no formula Tr(x) such that, in PA, we can prove all instances of "Tr(S) iff S". But no matter how we interpret the remarks about provability, it will not then follow that "truth and provability cannot coincide". It's perfectly possible that Tr(x) should have the same extension as Pr(x) even though we cannot prove that fact in PA.
So Tarski must mean something like the following. Consider e.g. the language of arithmetic, or any other sufficiently expressive language. In that language, we can define the notion of provability in PA and, indeed, in any formal theory stated in that language: That is, there is, for each such theory, a formula PrT(x) that is true of all and only the Gödel numbers of T-provable sentences. But we know from Tarski's theorem that there is no formula Tr(x) that is true of all and only the Gödel numbers of true sentences of L. Hence, those two sets cannot coincide, no matter what formal theory T might be.
As Tarski mentions in note 18, this is no real improvement over Gödel. Most of the machinery that Gödel develops for the purposes of his proof is needed for this one: the definition of provability and the diagonal lemma. It is perhaps worth mentioning, however, that there is no need to suppose that any particular theory proves anything about T-provability, and no hypothesis of the consistency of T is needed here, either.
There's another puzzling remark that Tarski makes in this paper, and I am not sure this one can be salvaged. The remark is one Tarski makes during the discussion of 'essential richness':
If the condition of “essential richness” is not satisfied, it can usually be shown that an interpretation of the meta-language in the object-language is possible; that is to say, with any given term of the meta-language a well-determined term of the object-language can be correlated in such a way that the assertible sentences of the one language turn out to be correlated with assertible sentences of the other. As a result of this interpretation, the hypothesis that a satisfactory definition of truth has been formulated in the meta-language turns out to imply the possibility of reconstructing in that language the antinomy of the liar; and this in turn forces us to reject the hypothesis in question. (pp. 351-2)
I take there to be here an assertion of the following claim. Suppose that a theory M is (relatively) interpretable in another theory O. Then truth for the language of O cannot be defined in M (since then the liar would be reproducible in M). So understood, the claim is false. Ali Enayat and Albert Visser showed that PA plus a Tarski-style truth-theory for the language of arithmetic is interpretable in PA. (A proof of the same result is given in my paper "Consistency and the Theory of Truth". I should have mentioned all this there.)
Tarski's unclarity about 'language' vs 'theory' makes it unclear, however, exactly what he meant. So, I take his claim to concern theories because I don't know of any coherent notion of interpretation for languages, and of course Tarski is largely responsible for the notion of interpretation to which he is here alluding. (The paper in which Tarski first introduces and studies this notion would not be published until 1953, however: nine years later.) Moreover, his talk of "reconstructing in that language the antinomy of the liar" certainly sounds like talk of provability. But perhaps there is something else he had in mind.
## Friday, July 6, 2018
### Pasta on the Grill
One of our "go to" summer dinners has become what we call "pasta on the grill". It's basically a roasted tomato sauce, which we started making when we started growing tomatoes. It's a fast and easy way to use some of them, and we freeze several jars of it every year to use over the winter. It's simple and delicious and easily varied. I'll give the basic idea here and mention some variations.
### Ingredients
• 2-3 pounds of tomatoes, whatever kind you have available
• Half a large Vidalia onion, or a single large yellow onion
• 4-5 cloves of garlic
• A small eggplant (we usually use Asian styles)
• A red or yellow pepper
• 1/2 a jalapeño or other hot pepper (optional)
• 1/2 cup chopped fresh herbs (basil, oregano, thyme, whatever)
• Olive oil
• One can tomato paste (optional)
Varying the types of tomatoes and onions and herbs will affect the flavor a lot, as will omitting the garlic, which I do from time to time. You can also use zucchini or summer squash in addition to or in place of the eggplant. The jalapeño adds a nice bit of heat, if you like that, or you can use Thai basil or spicy oregano. And so forth.
### Directions
Prepare your grill. We have a gas one, so that is easy. I tend to cook this over pretty high heat, but you'll want to experiment with that. (You can also do this in the oven, if you wish. Preheat it to 450F.)
Chop and otherwise prepare the veggies and herbs, and put them in a big bowl. Add the olive oil and stir. You want just enough to coat the veggies. Line a 13x9 disposable aluminum baking pan with foil (for ease of cleanup and re-usability of the pan) and pour the veggies into it. Cover the top with more foil, and put the whole thing onto the grill. Cook until the veggies are tender, roughly 30-45 minutes.
I often find that the sauce is a bit watery at this point. If so, then I pour it into an appropriately sized pot and add some tomato paste---half a can usually suffices, but sometimes I need a full can---and simmer for about 10 minutes, just to cook the paste. | 2018-12-10 12:13:43 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6128501296043396, "perplexity": 1636.4978441130884}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-51/segments/1544376823322.49/warc/CC-MAIN-20181210101954-20181210123454-00405.warc.gz"} |
https://www.gradesaver.com/textbooks/math/precalculus/precalculus-10th-edition/chapter-5-exponential-and-logarithmic-functions-5-3-one-to-one-functions-inverse-functions-5-3-assess-your-understanding-page-282/79 | ## Precalculus (10th Edition)
Published by Pearson
# Chapter 5 - Exponential and Logarithmic Functions - 5.3 One-to-One Functions; Inverse Functions - 5.3 Assess Your Understanding - Page 282: 79
#### Answer
$x=-4$
#### Work Step by Step
The base is the same on the two sides of the equation (and it is not 1), hence the two sides are equal if the exponents are equal. Hence $x=3x+8$. Solve the equation: \begin{align*} x&=3x+8\\ x-3x&=8\\ -2x&=8\\ \frac{-2x}{-2}&=\frac{8}{-2}\\ x&=-4 \end{align*}
After you claim an answer you’ll have 24 hours to send in a draft. An editor will review the submission and either publish your submission or provide feedback. | 2021-10-28 10:38:45 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9196843504905701, "perplexity": 2266.669541276298}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-43/segments/1634323588284.71/warc/CC-MAIN-20211028100619-20211028130619-00279.warc.gz"} |
https://woosterphysicists.scotblogs.wooster.edu/ | ## Table of Nuclides
As of 2019, we have identified or synthesized 118 distinct elements with Z protons, but about 2900 distinct nuclides with N neutrons (where atom is to element as nucleus is to nuclide).
The start of my version of the table of nuclides is below, where number of protons Z increases toward 2 o’clock, number of neutrons N increases toward 11 o’clock, and atomic mass A = Z + N increases toward 12 o’clock on average (because more neutrons than protons are needed to bind large nuclei). Rainbow colors code lifetimes t from short (violet) to long (red). For example, the heavy hydrogens are very short lived. The whole chart is a very tall 880 KB PDF table of nuclides. Enjoy!
Start of table of atoms. Rainbow colors code lifetimes, violet (short) to red (long). Number of protons increases toward 2 o’clock, number of neutrons increases toward 11 o’clock.
## Intrepid-Surveyor
Fifty years ago, Apollo 12 landed within sight of another spacecraft, a dramatic demonstration of pinpoint landing capability. While Dick Gordon orbited Luna in the command module Yankee Clipper, Pete Conrad and Al Bean left the lunar module Intrepid and walked over to the robotic Surveyor, which had landed over two years earlier. They retrieved parts of Surveyor and returned them to Earth for engineering analysis. Bean's photograph of Conrad at Surveyor with Intrepid on the horizon is a space exploration icon. Recently, the Lunar Reconnaissance Orbiter photographed the landing site and revealed Surveyor and Intrepid's descent stage connected by dark tracks in the lunar regolith left by the astronauts.
Al Bean photographed Pete Conrad at the Surveyor 3 spacecraft with the lunar module Intrepid on the horizon, November 20, 1969
2011 Lunar Reconnaissance Orbiter photograph of the Apollo 12 landing site including the astronauts’ tracks from two moonwalks
## Relaxing Fermat
In 1637, while reading a copy of Diophantus’s Arithmetica, Pierre de Fermat famously scribbled
“Cubum autem in duos cubos, aut quadratoquadratum in duos quadratoquadratos & generaliter nullam in infinitum ultra quadratum potestatem in duos eiusdem nominis fas est dividere cuius rei demonstrationem mirabilem sane detexi. Hanc marginis exiguitas non caperet.”
which roughly translates to
“It is impossible to separate a cube into two cubes, or a quartic into two quartics, or in general, a power higher than the second into two like powers. I have discovered a truly marvelous proof of this, which this margin is too narrow to contain.”
In modern notation, the equation
$$x^n + y^n = z^n$$
has no positive integer solutions for exponents $n > 2$. Although Fermat did leave a proof for the case $n = 4$, 358 years passed before Andrew Wiles published his proof of the general case in 1995.
Relaxing Fermat’s constraints to allow non-integers greatly expands the number of solutions. The looping animation shows all solutions for $1 \le n < \infty$ and $1 \le \{x,y,z\} \le 11$. All points on the arcs
$$y_{z,n}[x] = (z^n - x^n)^{1/n}$$
are solutions, and red dots indicate integer solutions. Watch the famous Pythagorean triple $\{3,4,5\}$ flash by for $n = 2$. Integer solutions are visibly harder to find for large finite $n$. Many more solutions exist for $n < 1$.
Points along arcs are solutions to the generalized Fermat equation; red points are integer solutions
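A small brute-force check of those red integer dots is easy to run. The sketch below is my addition (the post itself only shows the animation); the search limit of 11 is chosen to match the plotted window.

# Brute-force search for integer solutions of x^n + y^n = z^n with
# 1 <= x <= y <= z <= 11, mirroring the red dots in the animation.
def integer_solutions(n, limit=11):
    hits = []
    for x in range(1, limit + 1):
        for y in range(x, limit + 1):        # x <= y avoids duplicate pairs
            for z in range(y, limit + 1):
                if x**n + y**n == z**n:
                    hits.append((x, y, z))
    return hits

for n in range(1, 6):
    print(n, integer_solutions(n))
# n = 2 finds the Pythagorean triples (3, 4, 5) and (6, 8, 10);
# n = 3, 4, 5 find none, as Fermat's Last Theorem guarantees.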
## Stainless Steel Starship
Welders in a Texas swamp have built a starship. But don’t bet against SpaceX.
Starship is a prototype upper stage for a next-generation, fully reusable, two-stage-to-orbit launch vehicle designed to enable the human exploration of the solar system and the colonization of Mars. It’s made from stainless steel. (A little carbon converts iron to hard steel; a little chromium converts steel to corrosion-resistant stainless steel.) At cryogenic temperatures, grade 301 stainless steel has higher strength-to-weight and strength-to-cost ratios than carbon fiber reinforced polymer, and it has a higher melting temperature.
Starship will dissipate orbital energy by entering a planetary atmosphere like a sky diver, belly first, its fore and aft fins rapidly moving to control its descent prior to a tail-first rocket-powered landing. Strong electric motors powered by Tesla batteries will flap the fins.
Starship is powered by next-generation Raptor engines, the first full-flow staged combustion rocket engines to fly. In these efficient closed cycle engines, no propellant is wasted: all the oxidizer (and some fuel) power the oxidizer turbopump and all the fuel (and some oxidizer) power the fuel turbopump, which together pump gaseous oxidizer and fuel into the combustion chamber to combust and thrust. The oxidizer is liquid oxygen; the fuel is liquid methane, the primary component of natural gas, which can be manufactured from the martian (or terrestrial) atmosphere.
SpaceX Stainless Steel Starship Prototype
## After the Moonwalk
Iconic is Neil Armstrong’s photograph of Buzz Aldrin during the first moon walk, with Armstrong reflected in Aldrin’s visor. Much less well-known is this pair of photographs taken just after the moon walk. To my eyes, Armstrong seems exhausted but happy; Aldrin seems satisfied … and over his shoulder, almost casually, is a window, and outside the window is the lunar horizon, with its stunning airless black sky at day! Fifty years later, I still imagine A & A trying to catch a few zzzs … in hammocks … in their home … on the moon.
Neil Armstrong and Buzz Aldrin in the LEM after the first moonwalk, Monday, July 21, 1969
## “Contact Light”
Our TV is broken, so Aunt Nora invites us to her apartment. (Aunt Nora isn’t really our aunt, but she introduced our parents to each other, so that’s what we call her.) My brother Jim and I lie on the floor close to the TV, while the adults sit on the couch. We watch NBC not CBS, so we miss Cronkite’s commentary. The late afternoon video is a simple animation; the famous 16-mm film — only later synchronized with the audio — would return to Earth with the astronauts 4 days later.
The tension is palpable. The cartoon lander reaches the surface at the expected time, but Aldrin’s monotone readouts continue. Absence of video heightens the audio. Mission control radios “60 seconds” of fuel remaining. Then “30 seconds”. I hold my breath. At last, Aldrin reports “Contact light” — we have touched the moon — followed by Armstrong’s famous, “Houston … Tranquility Base here, the Eagle has landed”. Of the landing site, my mother observes, “They’ve already named it”.
No one wants to cook, so we go to McDonald’s for dinner. As we drive, I see a small shop with photos of the three astronauts in its window. The streets are still. The world seems stopped.
10:56:15 PM EDT, Sunday, July 20, 1969
As Collins orbits the moon solo, Armstrong and Aldrin forgo a scheduled sleep period, moving forward the moonwalk. Finally video — live from the surface of the moon — shows a LEM landing leg, first inverted but quickly rectified. Armstrong describes the surface as “almost like a powder”. Again I hold my breath, a lump in my throat. “Okay. I’m going to step off the LEM now … that’s one small step for [a] man, one giant leap for mankind.” I don’t hear the indefinite article, but I immediately grasp the meaning. Armstrong reads the plaque, “We came in peace for all mankind”. Aldrin practices locomotion, which “would get rather tiring”. The president phones. We leave Aunt Nora’s as the astronauts prepare to return to the LEM.
The next morning I sit on my living room floor reading two newspapers: The New York Times, with its simple banner headline “Men Walk on Moon” — which still brings me tears of joy, triumph, and wonder as I write this 50 years later — and the local newspaper, with its astonished “Now Do You Believe!”.
My Monday morning newspapers, July 21, 1969
## Wooster Epicycles
A vector is the sum of its components, a mechanical vibration is a combination of its normal mode motions, a quantum state is a superposition of its eigenstates, and any “nice” function is a Fourier sum of real or complex sinusoids, $e^{i \varphi} = \cos \varphi + i \sin \varphi$.
The animation below traces the Wooster W in epicycles of 100 circles-moving-on-circles in the complex plane. Algebraically, the trace is a complex discrete Fourier series $\sum c_n e^{i n \omega t} =\sum r_n e^{i (n\omega t + \theta_n)}$, where $r_n$ are the circle radii, $\theta_n$ are carefully chosen phase shifts, $\omega$ is the fundamental angular frequency, and $t$ is time.
Using Fourier analysis, any reasonable path can be traversed by a moon orbiting a moon orbiting a moon orbiting … a planet orbiting a star
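A minimal sketch of the same idea (my own toy example using a square path, not the actual Wooster-W code): sample a closed path as complex numbers, take its discrete Fourier transform, and rebuild the path from the 100 largest epicycles c_n exp(i n omega t).

import numpy as np

# toy path: a unit square traversed once, sampled at 400 points
t = np.linspace(0, 1, 400, endpoint=False)
corners = np.array([1 + 1j, -1 + 1j, -1 - 1j, 1 - 1j, 1 + 1j])
path = (np.interp(t * 4, np.arange(5), corners.real)
        + 1j * np.interp(t * 4, np.arange(5), corners.imag))

c = np.fft.fft(path) / path.size                     # complex Fourier coefficients c_n
harmonics = np.fft.fftfreq(path.size, d=1 / path.size)  # harmonic numbers n

# keep only the 100 largest epicycles, as in the animation
keep = np.argsort(-np.abs(c))[:100]
recon = sum(c[n] * np.exp(2j * np.pi * harmonics[n] * t) for n in keep)

print("max reconstruction error:", np.abs(recon - path).max())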
## Redefining SI
Today the SI (Système international d’unités) base units are redefined. The following are now exact. Memorize these numbers!
Cs-133 transition frequency constant $Δν_{\text{Cs}} = 9\,192\,631\,770~\text{s}^{−1}$ defines the second.
Then light speed constant $c = 299\,792\,458~\text{m}\cdot\text{s}^{−1}$ defines the meter.
Then Planck’s constant $h = 6.626\,070\,15\times 10^{−34}~\text{kg}\cdot\text{m}^{2}\cdot\text{s}^{−1}$ defines the kilogram.
Then electron charge constant $e = 1.602\,176\,634\times 10^{−19}~\text{A}\cdot{\text{s}}$ defines the Ampere.
Then Boltzmann’s constant $k = 1.380\,649\times 10^{−23}~\text{kg}\cdot\text{m}^{2}\cdot\text{K}^{−1}⋅\text{s}^{−2}$ defines the Kelvin.
And Avogadro’s constant $N_{\text{A}} = 6.022\,140\,76\times 10^{23}~\text{mol}^{−1}$ defines the mole.
And luminous efficacy constant $K_{\text{cd}} = 683~\text{cd}\cdot\text{sr}\cdot\text{s}^{3}\cdot\text{kg}^{−1}\cdot\text{m}^{−2}$ defines the candela.
(Where “sr” is the steradian or square radian, the 3D analogue of the 2D radian.) Discussion continues about the mole and the candela, including whether they should even be base units. The new definitions break the relationship between the C-12 mass, the dalton, the kilogram, and Avogadro’s constant, and the candela is arguably a photo-biological quantity.
I wish my phone number were 919-263-1770.
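For convenience, the seven exact values listed above can be collected in one place; the snippet below is my addition (not part of the post) and derives the reduced Planck constant as one example of a quantity that is now fixed by definition.

import math

# The seven defining constants, exactly as listed above (exact by definition).
SI_DEFINING_CONSTANTS = {
    "Delta_nu_Cs": 9_192_631_770,      # Hz, Cs-133 hyperfine transition frequency
    "c":           299_792_458,        # m/s, speed of light
    "h":           6.626_070_15e-34,   # J*s, Planck constant
    "e":           1.602_176_634e-19,  # C, elementary charge
    "k":           1.380_649e-23,      # J/K, Boltzmann constant
    "N_A":         6.022_140_76e23,    # 1/mol, Avogadro constant
    "K_cd":        683,                # lm/W, luminous efficacy at 540 THz
}

# Example: the reduced Planck constant follows from h alone.
hbar = SI_DEFINING_CONSTANTS["h"] / (2 * math.pi)
print(f"hbar = {hbar:.8e} J*s")   # about 1.0545718e-34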
I set the alarm for 8:55 AM. Brutal, but I wanted to watch live the National Science Foundation Event Horizon Telescope news conference. I was expecting the first image of a black hole, and the EHT team did not disappoint. But the black hole was not the Milky Way’s Sgr A*, but M87*, a thousand times further but a thousand times larger (billions rather than millions of solar masses).
For Astronomy Table lunch at Kitt’s Soup & Bread, I quickly created the graphic below to illustrate various radii of a mass $M$ Schwarzschild black hole, a good approximation to this rotating Kerr black hole. The event horizon with reduced circumference $R_s = 2 G M / c^2$ is the point of no return, the causal disconnection from which even light can’t escape. The innermost stable circular orbit at $3R_s$ marks the inner edge of the accretion disk. Massless particles like photons can orbit even closer, at the $1.5R_s$ photon sphere, where you can see your back by looking straight ahead! The ring of light (mainly 1.3 mm synchrotron radiation) in the EHT photo comes from photons with impact parameter $\sim 2.6R_s$ that just graze the photon sphere and can orbit multiple times before spiraling in or out.
Schwarzschild (“black shield”) black hole with event horizon, unstable photon sphere, grazing photon sphere, and innermost stable circular orbit | 2019-12-10 14:11:57 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 28, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.3630552887916565, "perplexity": 4488.498150663992}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-51/segments/1575540527620.19/warc/CC-MAIN-20191210123634-20191210151634-00104.warc.gz"} |
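To put rough numbers on the sketch, here is a quick calculation (my addition); the M87* mass of about 6.5 billion solar masses is an assumed figure from the EHT results, not something stated in the post.

G     = 6.674e-11        # m^3 kg^-1 s^-2, gravitational constant
c     = 2.998e8          # m/s
M_sun = 1.989e30         # kg
M     = 6.5e9 * M_sun    # assumed M87* mass

R_s = 2 * G * M / c**2   # Schwarzschild radius (event horizon)
radii = {
    "event horizon R_s":        R_s,
    "photon sphere 1.5 R_s":    1.5 * R_s,
    "grazing photons ~2.6 R_s": 2.6 * R_s,
    "ISCO 3 R_s":               3.0 * R_s,
}
AU = 1.496e11  # m
for name, r in radii.items():
    print(f"{name:28s} {r:.2e} m  ({r/AU:.0f} au)")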
https://studynova.com/quiz/physics/measurement-and-uncertainty-quiz/ | # Measurement and Uncertainty Quiz
## Make sure to try these on a piece of paper first
Question #1-1 Paper 1 Difficulty: Easy
The radius of a sphere is r=12.37 m. What is the volume of the sphere correct to two significant figures?
A) $7928.62 \ \textrm{m}^{3}$
B) $7920 \ \textrm{m}^{3}$
C) $7928 \ \textrm{m}^{3}$
D) $7900 \ \textrm{m}^{3}$
Question #1-2 Paper 2 Difficulty: Medium
A ball is thrown in the air and 5 different students are individually measuring the time it takes to fall back down using stopwatches. The times obtained by each student are the following:
6.2 s, 6.0 s, 6.4 s, 6.1 s, 5.8 s
a) What is the uncertainty of the results?
b) How should the resulting time be expressed?
Question #1-3 Paper 2 Difficulty: Medium
A bullet travels a distance of $l=154\pm0.5$ m in the time $t=0.4\pm0.05$ s.
a) Calculate the fractional uncertainty for the speed of the bullet.
b) Calculate the percentage uncertainty for the speed of the bullet.
c) Write down the speed of the bullet using the absolute uncertainty.
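After working Question 1-3 on paper, one possible numerical check (my addition, using the usual rule that fractional uncertainties add under division) is:

# For v = l/t, fractional uncertainties add: dv/v = dl/l + dt/t.
l, dl = 154, 0.5      # m
t, dt = 0.4, 0.05     # s

v = l / t                  # 385 m/s
frac = dl / l + dt / t     # fractional uncertainty
print(f"v = {v} m/s")
print(f"(a) fractional uncertainty = {frac:.3f}")
print(f"(b) percentage uncertainty = {frac * 100:.1f} %")
print(f"(c) v = {v:.0f} ± {frac * v:.0f} m/s")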
Question #1-4 Paper 1 Difficulty: Medium
The volume of a pyramid with base length l, base width w, and height h is given by $V=\frac{lwh}{3}$. The volume of the pyramid was measured with an uncertainty of 12%, while the base length and base width were measured with an uncertainty of 4%. What is the uncertainty of the height of the pyramid?
A) 2%
B) 3%
C) 4%
D) 6%
Question #1-5 Paper 2 Difficulty: Medium
The current passing through a resistor is $I=3\pm0.1\ \textrm A$ and the resistance of the resistor is $R=13\pm0.5\ \Omega$. The electrical power, measured in watt (W), supplied to the resistor is given by $P=RI^{2}$.
a) Write down the value of the supplied power correct to one significant figure.
b) Find the percentage uncertainty for the current passing through the resistor and its resistance.
c) Find the absolute uncertainty for the electrical power. | 2020-01-18 14:31:12 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6317617297172546, "perplexity": 671.3044129669762}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-05/segments/1579250592636.25/warc/CC-MAIN-20200118135205-20200118163205-00169.warc.gz"} |
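A similar check is possible for Question 1-5 (again my addition, using %unc(P) = %unc(R) + 2·%unc(I) for P = RI²):

I, dI = 3, 0.1        # A
R, dR = 13, 0.5       # ohm

P = R * I**2                 # 117 W, i.e. about 1e2 W to one significant figure
pct_I = dI / I * 100         # about 3.3 %
pct_R = dR / R * 100         # about 3.8 %
pct_P = pct_R + 2 * pct_I    # about 10.5 %
print(f"(a) P = {P} W")
print(f"(b) %unc(I) = {pct_I:.1f} %, %unc(R) = {pct_R:.1f} %")
print(f"(c) delta P = {pct_P / 100 * P:.0f} W")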
http://crypto.stackexchange.com/questions/6406/why-are-rsa-keys-encoded-with-asn-1-for-tls | # Why are RSA keys encoded with ASN.1 for TLS?
Browser vendors use ASN.1 encoding for RSA certificates in the TLS protocol. RSA public keys are just numbers, so why do we need to encode them as something else? That increases the risk of security problems.
9821347676528476512348612390874073765227653408545634205496835 (note this is not a valid public key, just randomish typing).
What does that mean? Big or little endian? Hex or decimal? Specifically for RSA, where is $e$, where is $N$? What is this public key authorized to do (encrypt, sign, etc)? Who has signed this key? To whom is the key linked?
While a public RSA key is "just a number" there are a lot of reasons to encode the key in a standard way so that every computer everywhere knows how to understand it, what it is authorized to do, who it belongs to, etc.
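To make the point concrete, here is a toy sketch (my addition, not from the answer) of DER's tag-length-value framing for a single INTEGER; real RSA keys wrap several such integers, plus algorithm identifiers, inside nested SEQUENCEs.

# DER wraps every value as tag-length-content, so both sides know exactly
# how to read back "just a number": byte order, sign, and where it ends.
def der_encode_integer(n: int) -> bytes:
    """Minimal DER INTEGER encoding for small non-negative integers
    (content shorter than 128 bytes, short-form length only)."""
    body = n.to_bytes((n.bit_length() + 7) // 8 or 1, "big")
    if body[0] & 0x80:            # a set high bit would read as negative,
        body = b"\x00" + body     # so prepend a zero byte
    return bytes([0x02, len(body)]) + body

# The common RSA public exponent e = 65537 encodes as 02 03 01 00 01:
print(der_encode_integer(65537).hex())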
what is big or little endian ? we can always use hex as an standard no need to say. for e and N we can send a text file and let put first line be e second be N. public key authorized to encrypt clien't session key and send it to service provider (just think about it what is SSL for ...). i don't see any reason to send something like javascript which make security problem for user and sadly nobody will block SSL on their side... – roger Feb 20 '13 at 18:06
@roger, but there is a need to say exactly how things are encoded. For example if I say the answer is 10 what does that mean? If it is in decimal, then it is ten; if it is hex, it is sixteen; binary, two; octal, eight; and so on. Even just saying e is on one line, N is on the next, that is an encoding. Sure if public keys are only ever used to encrypt client session key's fine, but public keys are also used to validate digital signatures, etc. – mikeazo Feb 20 '13 at 18:12
♦ we can use hex always there is no problem with just using hex as an standard. and for using a text file that first line is e second line is N thats really safe its not an encoding. the reason i hate ASN.1 is that there is already exploits that use ASN.1 to remote code execution! for what we mostly use is just Encrypt for browser SSL. the signature is not very important for us – roger Feb 20 '13 at 18:33
@roger ASN.1 does not allow for arbitrary code execution, it's just a way (which is already used in many other standards!) to unambiguously code data, like a sequence of big integers (as we need to do, among other things). The parsing library can be bad and have bugs, but it's not the encoding that is the problem. Data is just data... And javascript has nothing to do with it. (I don't understand that remark at all) – Henno Brandsma Feb 20 '13 at 19:04
There are exploits that target buggy implementations of ASN.1, but not ASN.1 itself. In the same way, there are also exploits for XML implementations, but that does not mean that XML is bad. There could be also exploits for you custom hex encoding, no matter how simple. – SquareRootOfTwentyThree Feb 27 '13 at 18:53
show 1 more comment | 2014-04-18 21:19:43 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.38022109866142273, "perplexity": 1417.6304455285701}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-15/segments/1397609535095.9/warc/CC-MAIN-20140416005215-00308-ip-10-147-4-33.ec2.internal.warc.gz"} |
https://livrepository.liverpool.ac.uk/516/ | # Three loop anomalous dimension of the second moment of the transversity operator in the (MS)over-bar and RI ' schemes
Gracey, JA ORCID: 0000-0002-9101-2853
(2003) Three loop anomalous dimension of the second moment of the transversity operator in the (MS)over-bar and RI ' schemes. NUCLEAR PHYSICS B, 667 (1-2). 242 - 260. | 2021-10-19 11:33:55 | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9253544211387634, "perplexity": 4774.426408092355}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-43/segments/1634323585265.67/warc/CC-MAIN-20211019105138-20211019135138-00409.warc.gz"} |
https://wikivisually.com/wiki/Great_snub_icosidodecahedron | # Great snub icosidodecahedron
Great snub icosidodecahedron
Type Uniform star polyhedron
Elements F = 92, E = 150
V = 60 (χ = 2)
Faces by sides (20+60){3}+12{5/2}
Wythoff symbol |2 5/2 3
Symmetry group I, [5,3]+, 532
Index references U57, C88, W116
Dual polyhedron Great pentagonal hexecontahedron
Vertex figure
3.3.3.3.5/2
Bowers acronym Gosid
In geometry, the great snub icosidodecahedron is a nonconvex uniform polyhedron, indexed as U57. It can be represented by a Schläfli symbol sr{5/2,3}, and Coxeter-Dynkin diagram .
This polyhedron is the snub member of a family that includes the great icosahedron, the great stellated dodecahedron and the great icosidodecahedron.
## Cartesian coordinates
Cartesian coordinates for the vertices of a great snub icosidodecahedron are all the even permutations of
(±2α, ±2, ±2β),
(±(α−βτ−1/τ), ±(α/τ+β−τ), ±(−ατ−β/τ−1)),
(±(ατ−β/τ+1), ±(−α−βτ+1/τ), ±(−α/τ+β+τ)),
(±(ατ−β/τ−1), ±(α+βτ+1/τ), ±(−α/τ+β−τ)) and
(±(α−βτ+1/τ), ±(−α/τ−β−τ), ±(−ατ−β/τ+1)),
with an even number of plus signs, where
α = ξ−1/ξ
and
β = −ξ/τ + 1/τ² − 1/(ξτ),
where τ = (1+√5)/2 is the golden mean and ξ is the negative real root of ξ³ − 2ξ = −1/τ, or approximately −1.5488772. Taking the odd permutations of the above coordinates with an odd number of plus signs gives another form, the enantiomorph of the other one.
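A quick numerical check of these quantities (my addition, using NumPy rather than anything from the article):

import numpy as np

tau = (1 + np.sqrt(5)) / 2

# xi is the negative real root of xi^3 - 2*xi = -1/tau
roots = np.roots([1, 0, -2, 1 / tau])
xi = min(r.real for r in roots if abs(r.imag) < 1e-9)
print(f"xi    = {xi:.7f}")              # about -1.5488772

alpha = xi - 1 / xi
beta = -xi / tau + 1 / tau**2 - 1 / (xi * tau)
print(f"alpha = {alpha:.7f}, beta = {beta:.7f}")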
The circumradius for unit edge length is
${\displaystyle R={\frac {1}{2}}{\sqrt {\frac {2-x}{1-x}}}=0.64502\dots }$
where ${\displaystyle x}$ is the appropriate root of ${\displaystyle x^{3}+2x^{2}={\Big (}{\tfrac {1\pm {\sqrt {5}}}{2}}{\Big )}^{2}}$. The four positive real roots of the sextic in ${\displaystyle R^{2},}$
${\displaystyle 4096R^{12}-27648R^{10}+47104R^{8}-35776R^{6}+13872R^{4}-2696R^{2}+209=0}$
are the circumradii of the snub dodecahedron (U29), great snub icosidodecahedron (U57), great inverted snub icosidodecahedron (U69), and great retrosnub icosidodecahedron (U74), respectively.
## Related polyhedra
### Great pentagonal hexecontahedron
Great pentagonal hexecontahedron
Type Star polyhedron
Face
Elements F = 60, E = 150
V = 92 (χ = 2)
Symmetry group I, [5,3]+, 532
Index references DU57
dual polyhedron Great snub icosidodecahedron
The great pentagonal hexecontahedron is a nonconvex isohedral polyhedron and dual to the uniform great snub icosidodecahedron. It has 60 intersecting irregular pentagonal faces, 120 edges, and 92 vertices. | 2018-12-14 03:51:13 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 5, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7638153433799744, "perplexity": 10890.260768348724}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-51/segments/1544376825349.51/warc/CC-MAIN-20181214022947-20181214044447-00522.warc.gz"} |
https://sage.math.clemson.edu:34567/home/pub/243/ | # LINEAR PROGRAMMING
For more detailed notes and interesting problems, see: http://mathsci2.appstate.edu/~hph/SageMath/
## Let's begin with a simple problem:
A doll factory wants to plan how many Barbie and Ken dolls to manufacture in a week to maximize company profit. A Barbie doll earns $6.00 in profit and is made of 12 ounces of plastic for her body and 5 ounces of nylon for her hair. A Ken doll earns$6.50 and is made of 14 ounces of plastic. Each doll goes in a box made of 4 ounces of cardboard. The company can only get one weekly shipment of raw materials, including 100,000 ounces of plastic, 30,000 ounces of nylon and 35,000 ounces of cardboard.
Setting this up is easy:
The first task is to identify the variables. The Barbie and Ken problem asks us that in the first sentence - “How many Barbie and Ken dolls ...?” so we let:
B = the number of Barbies to make per week
K = the number of Kens to make per week
The objective of a linear programming problem will always be to maximize or minimize a quantity. In the first sentence of the problem we see "maximize company profit." Profit is $6.00 for each Barbie and $6.50 for each Ken, so the total amount of money, Z, that they get will be 6B + 6.50K. We can state this as:
Maximize Z = 6B + 6.5K
Now we have to translate all the limiting conditions or constraints.
1) Plastic is in short supply; the information about plastic says we must use less than or equal to 100,000 oz. 12B is the amount of plastic we use in making Barbies. Similarly, 14K is the amount of plastic we use in making Kens. So 12B + 14K is the total amount of plastic used, and it must be less than 100,000:
12B + 14K ≤ 100,000
2) Nylon is used only in Barbie dolls, at 5 oz. per doll. That gives:
5B ≤ 30,000
3) Cardboard is in short supply just like the plastic. We proceed the same way:
4B + 4K ≤ 35,000
The complete mathematical model is below. The last pair of constraints is to remind us that making negative numbers of dolls is not realistic. Even the obvious must be stated in the mathematical formulation.
B = the number of Barbies to make per week,
K = the number of Kens to make per week.
Maximize Z = 6B + 6.5K (profit)
Subject to: 12B + 14K ≤ 100,000 (plastic)
5B ≤ 30,000 (nylon)
4B + 4K ≤ 35,000 (cardboard)
B ≥ 0 and K ≥ 0 (non-negativity)
So how do we solve this? We need to calculate the (B,K) pair with the largest profit that still fits within the constraints. A point that satisfies all the constraints is called feasible. The feasible region is the set of points that satisfy all the inequalities. Let’s start by sketching the feasible region. (Note that this would be difficult with more than two variables.)
We will graph B ≥ 0 and K ≥ 0 by using only the positive parts of the axes. Now we graph the rest of the inequalities on the same coordinate system, shading the side that doesn’t work; the region with no shading will be the feasible region!
var('B, K')
plastic = implicit_plot(12*B+14*K==100000, (B, 0, 10000), (K, 0, 10000))
nylon=implicit_plot(5*B==30000, (B, 0, 10000), (K, 0, 10000))
cardboard=implicit_plot(4*B+4*K==35000, (B, 0, 10000), (K, 0, 10000))
region=region_plot([12*B+14*K<100000, 5*B<30000, 4*B+4*K<35000], (B,0,10000), (K,0,10000), incol="white", outcol="yellow")
show(region+plastic+nylon+cardboard)
Let’s look at the plastic constraint. There’s no room left in this constraint when 12B + 14K = 100,000. Graphically, this happens on the line we drew for the plastic constraint. So where on the graph will constraints have no room left? On the border of the region. Exactly which part of the border? We need to use the objective (which is also a line) to test where we leave the feasible region. Consider a few of the graphs below, on which the objective is drawn for various Z-values.
object20000 = implicit_plot(6*B+6.5*K==20000, (B, 0, 10000), (K, 0, 10000))
show(object20000+region)

object40000 = implicit_plot(6*B+6.5*K==40000, (B, 0, 10000), (K, 0, 10000))
show(object40000+region)

object49000 = implicit_plot(6*B+6.5*K==49000, (B, 0, 10000), (K, 0, 10000))
show(object49000+region)

object55000 = implicit_plot(6*B+6.5*K==55000, (B, 0, 10000), (K, 0, 10000))
show(object55000+region)
As we raise the objective value, the line moves up through the feasible region -- 20000 and 40000 are too low; 49000 just touches the region; 55000 is too high.
In the first two graphs, a portion of the objective line is within the region. In the third graph, the line has been pushed up until it touches exactly one point of the region (a corner). The fourth graph shows that increasing the objective value more results in the line no longer intersecting the region. So it appears that the objective value can be increased until the objective line intersects the region at a point, namely a corner. This is the fundamental idea for the method we’ll use to solve these problems - find the corner of the region and the answer will be the one that is "best."
From the graph above, we know the best is the corner where the vertical line (nylon) crosses plastic. Let's solve for that corner:
solve([12*B+14*K==100000, 5*B==30000], (B,K))
6*6000+6.5*2000 #The objective value!
Solving a two variable linear programming problem graphically is straight-forward. It is also possible to solve a three variable problem graphically, but the corner points become more difficult to locate visually. It is impossible to solve a problem with more than three variables using our graphical method. The corner points of the feasible region must be found somehow using algebra and then checked for the optimal value. The Simplex algorithm developed by Danzig solves the problem using this idea.
#Call up the algorithm and define the variables:
KenBarbie = MixedIntegerLinearProgram()
B, K = KenBarbie['B'], KenBarbie['K']
#Set up the objective: KenBarbie.set_objective(6*B + 6.5*K) | 2021-12-07 06:24:29 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.4828040897846222, "perplexity": 2020.6430422370331}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-49/segments/1637964363336.93/warc/CC-MAIN-20211207045002-20211207075002-00257.warc.gz"} |
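The notebook breaks off here; a plausible completion of the setup (my sketch, using the same Sage MixedIntegerLinearProgram interface as above) adds the three constraints and solves:

#Add the constraints (plastic, nylon, cardboard):
KenBarbie.add_constraint(12*B + 14*K <= 100000)
KenBarbie.add_constraint(5*B <= 30000)
KenBarbie.add_constraint(4*B + 4*K <= 35000)

#Solve and read off the optimal plan:
print(KenBarbie.solve())                                 # expected 49000.0, matching the graph
print(KenBarbie.get_values(B), KenBarbie.get_values(K))  # expected 6000.0 and 2000.0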
http://mathhelpforum.com/calculus/118750-volume-water-spherical-tank-funtion-depth-double-integral.html | # Thread: volume of water in spherical tank funtion of depth double integral
1. ## volume of water in spherical tank funtion of depth double integral
Please if anyone could help I would appreciate it so much
A water tank is in the shape of a sphere of radius a. Find the volume of the water as a function of the depth of the water h in the tank.
I think that the best way to solve it would be to use double integrals.... I cannot figure it out however, any help at all I would appreciate.
Thank you so much
2. I don't believe you need to use a double integral, just partition the sphere on the interval 0 to the h(depth) of the tank, and to find the volume you calculate the integral of the area on the interval 0 to h I believe, can someone confirm this as I am not 100%
Possible consider $\int_0^h\pi r^2 dh$
I based this off if the tank was standing on the x-y plane, with y having relation to h and your partitions are being taken from 0 to some point h on the h-axis so to speak which can be related to some depth h, which the change in thickness of your partitions will be $dh$ Also note to determine your radius it needs to be in terms of h, instead of the usually given function that is in terms of x which remember I stated h has relation to y-axis, since your radius is a, I assume your integral would look like this
$\int_0^h\pi a^2 dh$
3. ## unsure
Okay I think that if the depth is 0 the volume should be 0 and when the depth is 2a (the tank is full), the volume would be 4/3*pi*a^3. when i tried to use the integral that you suggested I might be doing something wrong but i cannot find get that answer. any help?
Thanks so much for your answer i appreciate all the work that you have done to try and solve it i think that maybe you are right but I'm not sure if i am just doing something wrong!
4. r is a function of h
where
$r(h) = a - |h - a|$

$r(0) = 0$

$r(a) = a$

$r(2a) = 0$
5. I am confused. How did you get this? and if what you are saying is right then the integral above cannot be the correct integral becuase r(2a)=0, right?
thank you so much!
Originally Posted by mooshazz
r is a function of h
where
$r(h) = a - |h - a|$

$r(0) = 0$

$r(a) = a$

$r(2a) = 0$
6. Originally Posted by seipc
I am confused. How did you get this? and if what you are saying is right then the integral above cannot be the correct integral becuase r(2a)=0, right?
thank you so much!
apperently i was wrong about r(h)
$r(h) = \sqrt{a^2-(a-h)^2} = \sqrt{2ah-h^2}$
but the values on the edge i gave u are still the same
and the integral is not zero,
you need to calculate
$\int_0^h \pi(2ak-k^2)\, dk = \pi\left(a k^2 - \frac{k^3}{3}\right)\Big|_0^h = \pi a h^2 - \frac{\pi h^3}{3}$
7. Okay i follow you all the way through this time, however, i am still confused on one thing. I know that the answer has to be 4/3*pi*a^3 when its at 2a (full) but i dont get that answer.... am i just doing something wrong.
Thank you soooo much!
Originally Posted by mooshazz
apperently i was wrong about r(h)
$r(h) = \sqrt{a^2-(a-h)^2} = \sqrt{2ah-h^2}$
but the values on the edge i gave u are still the same
and the integral is not zero,
you need to calculate
$\int_0^h \pi(2ak-k^2)\, dk = \pi\left(a k^2 - \frac{k^3}{3}\right)\Big|_0^h = \pi a h^2 - \frac{\pi h^3}{3}$
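A quick check (added here, not part of the original thread): at full depth $h = 2a$ the formula from the previous post does give the volume of the sphere,

$V(2a) = \pi a (2a)^2 - \frac{\pi (2a)^3}{3} = 4\pi a^3 - \frac{8\pi a^3}{3} = \frac{4}{3}\pi a^3$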
http://www.gift.co.za/saskatchewan-weather-mljpz/8a0e1f-slope-of-tangent-line-formula | A tangent line is a line that touches a curve at a single point, called the point of tangency, and has the same slope as the curve at that point. The slope is the inclination, positive or negative, of a line. The idea is an old one: Euclid treated the line touching a circle (Greek ephaptoménē) in Book III of the Elements (c. 300 BC), and for a circle of radius 1 centered at the origin, the set of points (x, y) with x^2 + y^2 = 1, the tangent at a point is perpendicular to the radius drawn to that point.

For a curve given by a function y = f(x), the slope of the tangent line at x = a is the derivative evaluated there, m = f'(a). Equivalently, it is the limit of the slopes of secant lines. A secant line is a straight line joining two points on the function, and its slope is the difference quotient [f(a + h) - f(a)] / h; letting h approach zero turns the secant into the tangent. For example, at a = 2 the secant slope is [f(2 + h) - f(2)] / h, and the tangent slope is the limit of this quantity as h approaches zero.

Once the slope is known, the tangent line itself follows from the point-slope form y - y1 = m(x - x1), where (x1, y1) is the point of tangency, which must lie both on the curve and on the line. The line can also be written in slope-intercept form y = mx + b. The normal line is the line through the same point perpendicular to the tangent, so when f'(x) is not zero its slope is the negative reciprocal -1/f'(x).

Worked examples:

- For the curve y = x - x^2 at (1, 0): the derivative is 1 - 2x, so the slope at x = 1 is -1 and the tangent line is y = 1 - x.
- For f(x) = x^2 + 10x + 16: the slope of the tangent at a general point x = x0 is f'(x0) = 2x0 + 10, so at x0 = 4 the slope is 18.
- For f(x) = 1/(2x + 1) at x = 1: the derivative is -2/(2x + 1)^2, so the slope is -2/9 at the point (1, 1/3).
- For f(x) = -1/x at the point (3, -1/3): the derivative is 1/x^2, so the slope is 1/9 and the tangent line is x - 9y - 6 = 0.
- Horizontal tangents occur where the derivative is zero. For y = x^2 - 2x - 3, dy/dx = 2x - 2, so 2x - 2 = 0 gives x = 1, and the point (1, -4) is where the tangent is parallel to the x-axis. A vertical tangent corresponds to an undefined slope, as on a vertical line, which cannot be written in the form y = f(x).
- The same recipe gives the tangent to y = x^3 - 2x^2 + x - 3 at x = 1, and it extends to polar curves once dy/dx is expressed in polar coordinates, and to conic sections: the tangent to the ellipse x^2/a^2 + y^2/b^2 = 1 at a point P1 = (x1, y1) on it is b^2 x1 x + a^2 y1 y = a^2 b^2, since b^2 x1^2 + a^2 y1^2 = a^2 b^2 is the condition that P1 lies on the ellipse.

The tangent line is also useful as an approximation. Near x = a, the tangent line approximation L(x) = f(a) + f'(a)(x - a) stays close to f(x); for instance, if a function passes through (2, -1), the tangent line approximation at that point can be used to estimate a nearby value such as f(2.07). When the function is known only numerically, the slope can be estimated the way a graphing calculator or a spreadsheet such as Microsoft Excel would do it: pick a small h, compute the difference quotient, and use it as the tangent slope at the given x-value.
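The spreadsheet approach described above is easy to mimic in a few lines of code. The snippet below is an illustrative sketch rather than anything from the original page: it estimates the tangent slope with a difference quotient and builds the slope-intercept form of the tangent line, using the example function f(x) = x^2 + 10x + 16 at x0 = 4 from the worked examples (where the exact slope is f'(4) = 18).

```python
def slope_estimate(f, a, h=1e-6):
    # Difference quotient [f(a + h) - f(a)] / h with a small h:
    # a numerical estimate of the tangent slope f'(a).
    return (f(a + h) - f(a)) / h


def tangent_line(f, a, h=1e-6):
    # Return (m, b) so that y = m*x + b approximates the tangent line at x = a.
    m = slope_estimate(f, a, h)
    b = f(a) - m * a  # point-slope form rearranged into slope-intercept form
    return m, b


def f(x):
    # Example function taken from the worked examples above.
    return x**2 + 10*x + 16


m, b = tangent_line(f, 4)
print(round(m, 3), round(b, 3))  # approximately 18.0 and 0.0, since f(4) = 72 = 18 * 4
```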
https://argoprep.com/blog/k8/remainder-theorem/ | # Remainder Theorem
## Polynomials and the Remainder Theorem
If your teacher presented you with the remainder theorem formula out of the blue, what would you think? Would you know how to use it? At first glance it does not look like something you would see in a math class; it looks more like a secret code. It is, however, just a compact way of writing down what happens when you divide polynomials.
The remainder theorem is a statement about polynomials. By definition, a polynomial is an expression built from variables and coefficients that are combined using addition, subtraction, and multiplication, with whole-number exponents on the variables.
Simply put, it is an expression with letters and numbers in it. Sometimes, there are more letters than numbers, but the letters represent numbers.
Now, keep in mind that a theorem, such as the remainder theorem, is a statement that has been proven true. The point is the general fact it guarantees, not just a quick numerical answer to one problem.
Before we get into the remainder theorem, let’s find out what exactly a polynomial is. Look at the polynomial below. As you can see in the example below, there are variables (letters) and coefficients (numbers) as well as exponents.
The coefficients are the numbers (or sometimes letters that stand for fixed numbers) written in front of the variables. The variable is almost always a letter, and it represents a number whose value is not yet known.
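The post's example appears as an image, so it is not reproduced here. As a stand-in (a made-up example, not the original one), consider

$$4x^3 + 2x^2 - 7x + 5$$

Here the variable is $$x$$, the coefficients are 4, 2, and -7, the exponents are 3 and 2, and 5 is the constant term.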
Now, one other thing you will need to know about the remainder theorem is that a linear factor is involved. So, what is a linear factor? In layman's terms, a linear factor is a simple expression like the one shown below: a polynomial of degree 1, meaning the variable appears only to the first power.
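The image of the linear factor is likewise not reproduced here. A typical linear factor looks like $$x + 2$$ or, more generally, $$x - c$$; the variable appears only to the first power, which is exactly what degree 1 means.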
To recap, you have now met the two ingredients of the remainder theorem: the polynomial and the linear factor. Now it is time to dive into the formula itself.
## Remainder Theorem Formula
Let's look again at the remainder theorem formula. In this formula, a polynomial is divided by a linear factor. The p(x) stands for a polynomial in the variable x. With the remainder theorem, you are trying to find the remainder of that division problem.
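The formula itself is shown as an image in the original post. Written out with the names used below (the polynomial p(x), the linear factor (x - c), the quotient q(x), and the remainder r), the division statement is the standard one:

$$p(x) = (x - c) \cdot q(x) + r$$

and the remainder theorem says you can find the remainder without doing the division at all: $$r = p(c)$$, the value you get by plugging c into the polynomial.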
Let’s review a division problem to understand the terms first before we go any deeper into this polynomial division problem.
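The original post illustrates the terms with a picture of a long division problem. As a simple numeric stand-in: in $$17 \div 5 = 3$$ with a remainder of 2, the number 17 is the dividend, 5 is the divisor, 3 is the quotient, and 2 is the remainder, so $$17 = 5 \cdot 3 + 2$$. The polynomial formula above has exactly the same shape.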
FACT: Think of working a polynomial remainder theorem problem like a long division problem!
The polynomial division problem uses the same terms. The first part, p(x), is the dividend: a polynomial in the variable x. The linear factor (x - c) is the divisor, where c stands for a constant; it could be any number.

The q(x) is the quotient, r is the remainder, and x is the variable.
EXAMPLE
Now it is time to work through an example using this formula. The polynomial is below.
$$p(x) = 2x^2 + 3x + 1$$

The linear factor is $$x + 2$$.

So, we will divide $$2x^2 + 3x + 1$$ by $$x + 2$$.
To set it up, place the polynomial $$2x^2 + 3x + 1$$ under the division bracket as the dividend and the linear factor $$x + 2$$ outside as the divisor, just like a long division problem with numbers.

First, divide the leading terms: $$2x^2$$ divided by $$x$$ is $$2x$$, so $$2x$$ is the first term of the quotient. Multiply back: $$2x$$ times $$x$$ is $$2x^2$$, and $$2x$$ times 2 is $$4x$$.

Now subtract $$2x^2 + 4x$$ from $$2x^2 + 3x$$. The $$2x^2$$ terms cancel, and $$3x$$ minus $$4x$$ is $$-x$$. Bring down the $$+1$$, which leaves $$-x + 1$$.

Next, divide the leading terms again: $$-x$$ divided by $$x$$ is $$-1$$, so $$-1$$ is the second term of the quotient. Multiply back: $$-1$$ times $$x + 2$$ is $$-x - 2$$.

Finally, subtract $$-x - 2$$ from $$-x + 1$$. Subtracting a negative is the same as adding, so this becomes $$-x + 1 + x + 2$$. The $$-x$$ and $$+x$$ cancel, and what is left is 3. There is nothing more to bring down, so 3 is the remainder.

The final answer is a quotient of $$2x - 1$$ with a remainder of 3. You can check it with the remainder theorem itself: the divisor $$x + 2$$ is $$x - c$$ with $$c = -2$$, and $$p(-2) = 2(-2)^2 + 3(-2) + 1 = 8 - 6 + 1 = 3$$, which matches the remainder.
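If you want to double-check a division like this, a short program can do the bookkeeping. The sketch below is illustrative and not part of the original lesson: it divides a polynomial by (x - c) using synthetic division and then confirms that the remainder equals p(c), which is exactly what the remainder theorem promises.

```python
def synthetic_division(coeffs, c):
    # Divide a polynomial by (x - c) using synthetic division.
    # coeffs lists the coefficients from the highest power down,
    # e.g. 2x^2 + 3x + 1 -> [2, 3, 1].
    row = [coeffs[0]]
    for a in coeffs[1:]:
        row.append(a + c * row[-1])
    # The last number produced is the remainder; the rest is the quotient.
    return row[:-1], row[-1]


def evaluate(coeffs, x):
    # Evaluate the polynomial at x using Horner's method.
    value = 0
    for a in coeffs:
        value = value * x + a
    return value


# Divide 2x^2 + 3x + 1 by x + 2, which is (x - c) with c = -2.
p = [2, 3, 1]
c = -2
quotient, remainder = synthetic_division(p, c)
print(quotient, remainder)          # [2, -1] 3  ->  quotient 2x - 1, remainder 3
print(evaluate(p, c) == remainder)  # True: p(-2) equals the remainder
```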
## Why do we use the remainder theorem?
Like many people, you may wonder where in the world you are going to use this. First, if you are taking an Algebra or other math class, then obviously you will use it there. For the most part, this is not a skill that you will be using every day.
Instead, you will most likely use it only once in a great while. One place it does come up is when you know the area of a flat, rectangular surface and one of its dimensions and need to find the other.
The area plays the role of the polynomial and the width plays the role of the linear factor. When you divide the area by the width, you get the length as the quotient, possibly with a remainder, just like the polynomial division in the remainder theorem.
Let’s look at a word problem using this theorem.
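The original post presents its word problem as an image, so here is a stand-in example along the same lines (the numbers are made up for illustration). Suppose a rectangular garden covers an area of $$x^2 + 5x + 6$$ square feet and its width is $$x + 2$$ feet. Dividing the area by the width gives a length of $$x + 3$$ feet with a remainder of 0, and the remainder theorem confirms it without any dividing: plugging in $$x = -2$$ gives $$(-2)^2 + 5(-2) + 6 = 4 - 10 + 6 = 0$$. A remainder of 0 means the width divides the area evenly, so $$x + 2$$ is a factor of the area.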
FACT: Remember that when you subtract in a long division step, the subtraction changes the sign of every term in the expression being subtracted. In the problem above, that is exactly where subtracting $$-x - 2$$ turned into adding $$x + 2$$.