text
stringlengths 104
605k
|
---|
<meta http-equiv="refresh" content="1; url=/nojavascript/"> Mendel's Second Set of Experiments | CK-12 Foundation
You are reading an older version of this FlexBook® textbook: CK-12 Biology Concepts Go to the latest version.
# 3.3: Mendel's Second Set of Experiments
Created by: CK-12
%
Progress
Practice Mendel's Second Set of Experiments
Progress
%
Round and green, round and yellow, wrinkled and green, or wrinkled and yellow?
Can two traits be inherited together? Or are all traits inherited separately? Mendel asked these questions after his first round of experiments.
### Mendel’s Second Set of Experiments
After observing the results of his first set of experiments, Mendel wondered whether different characteristics are inherited together. For example, are purple flowers and tall stems always inherited together? Or do these two characteristics show up in different combinations in offspring? To answer these questions, Mendel next investigated two characteristics at a time. For example, he crossed plants with yellow round seeds and plants with green wrinkled seeds. The results of this cross, which is a dihybrid cross , are shown in Figure below .
This chart represents Mendel’s second set of experiments. It shows the outcome of a cross between plants that differ in seed color (yellow or green) and seed form (shown here with a smooth round appearance or wrinkled appearance). The letters R, r, Y, and y represent genes for the characteristics Mendel was studying. Mendel didn’t know about genes, however. Genes would not be discovered until several decades later. This experiment demonstrates that 9/16 were round yellow, 3/16 were wrinkled yellow, 3/16 were round green, and 1/16 were wrinkled green.
#### F1 and F2 Generations
In this set of experiments, Mendel observed that plants in the F1 generation were all alike. All of them had yellow and round seeds like one of the two parents. When the F1 generation plants self-pollinated, however, their offspring—the F2 generation—showed all possible combinations of the two characteristics. Some had green round seeds, for example, and some had yellow wrinkled seeds. These combinations of characteristics were not present in the F1 or P generations.
#### Law of Independent Assortment
Mendel repeated this experiment with other combinations of characteristics, such as flower color and stem length. Each time, the results were the same as those in Figure above . The results of Mendel’s second set of experiments led to his second law. This is the law of independent assortment . It states that factors controlling different characteristics are inherited independently of each other.
### Summary
• After his first set of experiments, Mendel researched two characteristics at a time. This led to his law of independent assortment. This law states that the factors controlling different characteristics are inherited independently of each other.
### Practice I
Use this resource to answer the questions that follow.
• http://www.hippocampus.org/Biology $\rightarrow$ Biology for AP* $\rightarrow$ Search: Mendel's Law of Independent Assortment
1. What is a dihybrid cross? Give an example.
2. What would a YYRR plant look like?
3. When did Mendel observe a 9:3:3:1 ratio in the F 2 generation?
4. What does Mendel's second law state?
### Review
1. What was Mendel investigating with his second set of experiments? What was the outcome?
2. State Mendel’s second law.
3. If a purple-flowered, short-stemmed plant is crossed with a white-flowered, long-stemmed plant, would all of the purple-flowered offspring also have short stems? Why or why not?
### Vocabulary Language: English Spanish
dihybrid cross
dihybrid cross
A cross between F1 offspring of two individuals that differ in two traits of particular interest.
law of independent assortment
law of independent assortment
Mendel’s second law of inheritance; states that factors controlling different characteristics are inherited independently of each other.
Feb 24, 2012
Dec 17, 2014
|
# Matrix =
Printable View
• Jul 20th 2006, 07:27 AM
kwtolley
Matrix =
Code:
[1 5] [4 -2] [-2 3] [0 1]
My answer is
Code:
[5 3] [-2 4]
Thanks for checking my answer.
• Jul 20th 2006, 10:03 AM
CaptainBlack
Quote:
Originally Posted by kwtolley
Code:
[1 5] [4 -2] [-2 3] [0 1]
My answer is
Code:
[5 3] [-2 4]
Thanks for checking my answer.
Can't be as the top left element of the product is the dot product of the
first row of the first matrix and the first column of the second, hence it
is:
$[1,\ 5] \cdot [4,\ 0]=4$,
but your answer has $5$ for this element.
RonL
• Jul 20th 2006, 11:45 AM
Quick
Quote:
Originally Posted by kwtolley
Code:
[1 5] [4 -2] [-2 3] [0 1]
My answer is
Code:
[5 3] [-2 4]
Thanks for checking my answer.
This is correct if you're adding the matrices together.
• Jul 20th 2006, 12:18 PM
CaptainBlack
Quote:
Originally Posted by Quick
This is correct if you're adding the matrices together.
I suppose that is possible, maybe I lost something in reformatting the
question that make this interpretation more likely.
RonL
• Jul 20th 2006, 08:02 PM
kwtolley
no adding
This problem is written just like I posted it, so I'm not sure if I should have added, but I did. Thanks as always for checking.
• Jul 20th 2006, 08:34 PM
CaptainBlack
Quote:
Originally Posted by kwtolley
This problem is written just like I posted it, so I'm not sure if I should have added, but I did. Thanks as always for checking.
If there is no addition sign between the matrices then the question is
asking for the matrix prooduct, which is what my answer refered to.
RonL
|
# GPU Execution Profiling of the Generated Code
This example shows you how to generate an execution profiling report for the generated CUDA® code by using the `gpucoder.profile` function.
The GPU Coder profiler runs a software-in-the-loop (SIL) execution that produces execution-time metrics for the tasks and kernels in the generated code. This example generates an execution profiling report for the Fog Rectification example from GPU Coder. For more information, see Fog Rectification.
### Third-Party Prerequisites
• CUDA enabled NVIDIA® GPU.
• NVIDIA CUDA toolkit and driver.
• NVIDIA Nsight™ Systems. For information on the supported versions of the compilers and libraries, see Third-Party Hardware.
• Environment variables for the compilers and libraries. For setting up the environment variables, see Setting Up the Prerequisite Products.
• The profiling workflow of this example depends on the profiling tools from NVIDIA that accesses GPU performance counters. From CUDA toolkit v10.1, NVIDIA restricts access to performance counters to only admin users. To enable GPU performance counters to be used by all users, see the instructions provided in Permission issue with Performance Counters (NVIDIA).
### Verify GPU Environment
To verify that the compilers and libraries necessary for running this example are set up correctly, use the `coder.checkGpuInstall` function.
```envCfg = coder.gpuEnvConfig('host'); envCfg.BasicCodegen = 1; envCfg.Quiet = 1; coder.checkGpuInstall(envCfg);```
### Fog Rectification Algorithm
To improve the foggy input image, the algorithm performs fog removal and then contrast enhancement. The diagram shows the steps of both these operations.
This example takes a foggy RGB image as input. To perform fog removal, the algorithm estimates the dark channel of the image, calculates the airlight map based on the dark channel, and refines the airlight map by using filters. The restoration stage creates a defogged image by subtracting the refined airlight map from the input image.
Then, the Contrast Enhancement stage assesses the range of intensity values in the image and uses contrast stretching to expand the range of values and make features stand out more clearly.
`type fog_rectification.m`
```function [out] = fog_rectification(input) %#codegen % Copyright 2017-2019 The MathWorks, Inc. coder.gpu.kernelfun; % restoreOut is used to store the output of restoration restoreOut = zeros(size(input),'double'); % Changing the precision level of input image to double input = double(input)./255; %% Dark channel Estimation from input darkChannel = min(input,[],3); % diff_im is used as input and output variable for anisotropic diffusion diff_im = 0.9*darkChannel; num_iter = 3; % 2D convolution mask for Anisotropic diffusion hN = [0.0625 0.1250 0.0625; 0.1250 0.2500 0.1250; 0.0625 0.1250 0.0625]; hN = double(hN); %% Refine dark channel using Anisotropic diffusion. for t = 1:num_iter diff_im = conv2(diff_im,hN,'same'); end %% Reduction with min diff_im = min(darkChannel,diff_im); diff_im = 0.6*diff_im ; %% Parallel element-wise math to compute % Restoration with inverse Koschmieder's law factor = 1.0./(1.0-(diff_im)); restoreOut(:,:,1) = (input(:,:,1)-diff_im).*factor; restoreOut(:,:,2) = (input(:,:,2)-diff_im).*factor; restoreOut(:,:,3) = (input(:,:,3)-diff_im).*factor; restoreOut = uint8(255.*restoreOut); restoreOut = uint8(restoreOut); %% % Stretching performs the histogram stretching of the image. % im is the input color image and p is cdf limit. % out is the contrast stretched image and cdf is the cumulative prob. % density function and T is the stretching function. p = 5; % RGB to grayscale conversion im_gray = im2gray(restoreOut); [row,col] = size(im_gray); % histogram calculation [count,~] = imhist(im_gray); prob = count'/(row*col); % cumulative Sum calculation cdf = cumsum(prob(:)); % finding less than particular probability i1 = length(find(cdf <= (p/100))); i2 = 255-length(find(cdf >= 1-(p/100))); o1 = floor(255*.10); o2 = floor(255*.90); t1 = (o1/i1)*[0:i1]; t2 = (((o2-o1)/(i2-i1))*[i1+1:i2])-(((o2-o1)/(i2-i1))*i1)+o1; t3 = (((255-o2)/(255-i2))*[i2+1:255])-(((255-o2)/(255-i2))*i2)+o2; T = (floor([t1 t2 t3])); restoreOut(restoreOut == 0) = 1; u1 = (restoreOut(:,:,1)); u2 = (restoreOut(:,:,2)); u3 = (restoreOut(:,:,3)); % Replacing the value from look up table out1 = T(u1); out2 = T(u2); out3 = T(u3); out = zeros([size(out1),3], 'uint8'); out(:,:,1) = uint8(out1); out(:,:,2) = uint8(out2); out(:,:,3) = uint8(out3); return ```
### Generate Execution Profiling Report
To generate an execution profiling report, create a code configuration object with a dynamic library (`'dll'`) build type. Because the gpucoder.profile function accepts only an Embedded Coder™ configuration object, enable the option to create a `coder.EmbeddedCodeConfig` configuration object.
```cfg = coder.gpuConfig('dll','ecoder',true); cfg.GpuConfig.MallocMode = 'discrete';```
Run `gpucoder.profile` with the default threshold value of zero seconds. If the generated code has a lot of CUDA API or kernel calls, it is likely that each call constitutes only a small proportion of the total time. In such cases, set a low (non-zero) threshold value to generate a meaningful profiling report. It is not advisable to set number of executions value to a very low number (less than 5) because it does not produce an accurate representation of a typical execution profile.
```inputImage = imread('foggyInput.png'); inputs = {inputImage}; designFileName = 'fog_rectification'; gpucoder.profile(designFileName, inputs, ... 'CodegenConfig', cfg, 'Threshold', 0, 'NumCalls', 10);```
```Code generation successful: View report ### Starting SIL execution for 'fog_rectification' To terminate execution: clear fog_rectification_sil Execution profiling data is available for viewing. Open Simulation Data Inspector. Execution profiling report available after termination. ### Host application produced the following standard error (stderr) messages: Warning: LBR backtrace method is not supported on this platform. DWARF backtrace method will be used. Collecting data... ### Stopping SIL execution for 'fog_rectification' ```
### Code Execution Profiling Report for the `fog_rectification` Function
The code execution profiling report provides metrics based on data collected from a SIL execution. Execution times are calculated from data recorded by instrumentation probes added to the SIL or PIL test harness or inside the code generated for each component. For more information, see View Execution Times (Embedded Coder).
These numbers are representative. The actual values depend on your hardware setup. This profiling was done using MATLAB R2022b on a machine with an 6 core, 3.5GHz Intel® Xeon® CPU, and an NVIDIA TITAN XP GPU
#### Summary
This section gives information about the creation of the report.
#### Profiled Sections of Code
This section contains information about profiled code sections. The report contains time measurements for:
• The `entry_point_fn``_initialize` function, for example, `fog_rectification_initialize`.
• The entry-point function, for example, `fog_rectification`.
• The `entry_point_fn``_terminate` function, for example, `fog_rectification_terminate`.
• The section column lists the names of the function from which code is generated.
• Maximum execution time is the longest time between start and end of code section.
• Average Execution Time is the average time between start and end of code section.
• Maximum Self Time is the maximum execution time, excluding time in child sections.
• Average Self Time is the average execution time, excluding time in child sections.
• Calls indicate the number of calls to the code section.
• To view execution-time metrics for a code section in the Command Window, on the corresponding row, click the icon .
• To display measured execution times, click the Simulation Data Inspector icon . You can use the Simulation Data Inspector to manage and compare plots from various executions.
• To display the execution-time distribution, click the icon .
By default, the report displays time in milliseconds (${10}^{-3}$ seconds). You can specify the time unit and numeric display format. For example, to display time in microseconds (${10}^{-6}$ seconds), use the `report` (Embedded Coder) command:
```executionProfile=getCoderExecutionProfile('fog_rectification'); report(executionProfile, ... 'Units', 'Seconds', ... 'ScaleFactor', '1e-06', ... 'NumericFormat', '%0.3f')```
```ans = '/local-ssd/lnarasim/MATLAB/ExampleManager/lnarasim.Bdoc22b.j1984243/gpucoder-ex87489778/codegen/dll/fog_rectification/html/orphaned/ExecutionProfiling_f31bfb52dfefde93.html' ```
The report displays time in seconds only if the timer is calibrated, that is, the number of timer ticks per second is known. On a Windows® machine, the software determines this value for a SIL simulation. On a Linux® machine, you must manually calibrate the timer. For example, if your processor speed is 3.5 GHz, specify the number of timer ticks per second:
`executionProfile.TimerTicksPerSecond = 3.5e9;`
#### Execution Times in Percentages
This section provides function execution times as percentages of caller function and total execution times, which can help you to identify performance bottlenecks in generated code.
#### GPU Profiling Trace for fog_rectification
Section 4 shows the complete trace of GPU calls that have a runtime higher than the threshold value. A snippet of the profiling trace is shown.
#### GPU Profiling Summary for fog_rectification
Section 5 in the report shows the summary of GPU calls that are shown in section 4. The `cudaFree` is called 15 times per run of `fog_rectification` and the average time taken by 15 calls of `cudaFree` over 9 runs of `fog_rectification` is 1.3790 milliseconds. This summary is sorted in descending order of time taken to give the users an idea which GPU call is taking the maximum time.
#### Definitions
This section provides descriptions of some metrics.
|
# Tag Info
15
One of my papers was just posted to arXiv and addresses this question: optimally solving the Rubik's Cube is NP-complete.
14
EDIT: I quickly completed the amateur proof that I started a few months ago and never finished. You can download it in PDF format on my blog ... nobody has checked it yet, so refutations, comments and suggestions are welcome. I don't know if there is an official proof, but a few months ago I built the gadgets to mimic a planar 3-CNF formula; for example ...
8
With m eggs and k measurements the most floors that can be checked is exactly $$n(m,k)={k \choose 0} + {k \choose 1} + \ldots + {k \choose m},$$ (maybe $\pm 1$ depending on the exact def). Proof is trivial by induction. This expression has no closed form inverse but gives good asymptotic.
7
Two such puzzles that I know about are: Unruly. This website has an online library of puzzles and solutions and a generator for puzzles of arbitrary size. Masyu. This website has a library of puzzles and solutions. It also links to several variants of the puzzle. Actually, the page where the Unruly puzzle is found lists more such puzzles, some of which ...
7
To meet condition 1, $n$ must be even, so let's assume that it is. Then we can automatically achieve both conditions 1 and 2 by making an $n/2\times n/2$ matrix whose entries are $2\times 2$ submatrices in one of the two patterns \left( \begin{array}{cc} 0 & 1 \\ 1 & 0 \end{array} \right),\quad \left( \begin{array}{cc} 1 & 0 \\ 0 & 1 \end{...
6
Even without the hash function, this is basically just 1-dimensional Weisfeiler-Leman with individualization of a single vertex. Neuen & Schweitzer (STOC '18, arXiv) gave examples with an exponential $2^{\Omega(n)}$ lower bound for a much stronger family of algorithms, namely those for which one can iteratively individualize & refine, and even use $k$...
6
I thought of this weird reduction (chances that it is wrong are high :-). Idea: reduction from Hamiltonian path on grid graphs with degree $\leq 3$; each node of the planar graph can be shifted in such a way that every "row" ($y$ value) and every "column" ($x$ value) contains at most one node. The graph can be scaled and each node can be replaced by a square ...
3
Just a partial self-answer: I think the problem is NP-complete. I found 3 gadgets (each one occupies $16 \times 16$ cells) that allows to build a Net game equivalent to a grid graph of degree $\leq 3$ and that should have a solution iif the original graph has an Hamiltonian cycle. The figure shows four different configurations of the gadget equivalent to a ...
2
Determining that $20$ is the diameter (God's number) of the Rubik's Cube Group $G$ under the half-turn metric with Singmaster generating set $s=\langle U, U', U^2, D, D', D^2,\cdots\rangle$ was a wonderful result. I'm curious about follow-up questions, such as determining how many half-turn twists $m$ it would take to get the cube fully "mixed" to $\epsilon$...
2
For my PhD (wow! that was long ago... i'm getting old..). I worked on a few different problems (and CSP or SAT modelled them). Of the kind you are interested in: sudokus edge matching puzzles The phd is at: https://www.tdx.cat/handle/10803/8122 And I should have code (generators) lying around somewhere.
2
In the most authoritative reference on PPAD-complete problems, there is no PPAD-complete puzzle mentioned.
2
This isn't really an answer to your questions, but I think it would help in understanding the problem (or the way to answer your questions is) to write out a formal statement and proof of the solution. The proof you've presented doesn't say what is being proven, and as written is certainly incorrect. It states (final bullet point) that the inductive ...
2
In my comment above I said perhaps $\Theta(\min_{k \le m} kn^{\frac{1}{k}})$ is a tight bound. I'm not sure about the lower bound, but since you just want an explanation for what $k$ means, I can explain the intuition using the upper bound. As you guessed, $k$ is the number of eggs actually used. That explains the $\min$ on the outside. Now once we've ...
1
This is to report an implementation of completed-sudoku compact encoding (similar to suggestion by Zurui Wang 9/14/11). The input is the top row and 1st 3 digits of the 2nd row. These are reduced to 1-9! and 1-120 and combined to <= 4.4x10^7. These are used as givens to count lexicographically all the partial sukokus of 30 digits up to the matching ...
Only top voted, non community-wiki answers of a minimum length are eligible
|
# On every infinite-dimensional Banach space there exists a discontinuous linear functional.
On every infinite-dimensional Banach space there exists a discontinuous linear functional.
Assuming the axiom of choice, every vector space has a basis. With an infinite basis, I can define on a countable subset $\{e_n:n\in\mathbb{N}\}$ a function $f(e_n)=n\|e_n\|$ and let $f(x)=1$ for all other basis vectors.
Then this determines an unbounded linear functional, which is therefore discontinuous.
But this argument, a, applies to any infinite-dimensional normed spaces, b, relies on the assumption of the axiom of choice.
Is there a smart answer that does make use of the condition that the space in question is a Banach space, and even better, avoids the use of axiom of choice?
-
No. There are models of $\mathsf{ZF+\lnot AC}$ in which every linear transformation from a Banach space to a normed space is automatically continuous. In particular this is true for linear functionals.
In such models, it follows, every linear functional has to be continuous.
An example for these models are Solovay's model, or models of $\mathsf{ZF+AD}$.
-
If OP knew only half of the technical terms here he wouldn't have asked this question, and especially not in this way. – Martin Jan 27 '13 at 14:24
@Martin: What technical terms? linear? transformation? Banach space? – Asaf Karagila Jan 27 '13 at 14:25
@Martin: Maybe you mean $\mathsf{ZF}$ and the symbol $\lnot$? – Asaf Karagila Jan 27 '13 at 14:27
model, $\mathsf{ZF + \lnot AC}$, Solovay's model, $\mathsf{ZF+AD}$. – Martin Jan 27 '13 at 14:30
Thank you. I think I roughly know what you mean--under the ZF axioms and "not AC", linear functionals must be continuous, so the assumption of AC is necessary. Does this also mean that it is impossible to write down explicitly a discontinuous linear functional for some specific infinite-dimensional Banach space? – Montez Jan 27 '13 at 18:58
|
# an autodidact meets a dilettante…
‘Rise above yourself and grasp the world’ Archimedes – attribution
## a little about the chemistry of water and its presence on Earth
So I now know, following my previous post, a little more than I did about how water’s formed from molecular hydrogen and oxygen – you have to break the molecular bonds and create new ones for H2O, and that requires activation energy, I think. But I need to explore all of this further, and I want to do so in the context of a fascinating question, which I’m hoping is related – why is there so much water on Earth’s surface?
When Earth was first formed, from planetesimals energetically colliding together, generating lots of heat (which may have helped with the creation of H2O, but not in liquid form??) there just doesn’t seem to have been a place for water, which would’ve evaporated into space, wouldn’t it? Presumably the still-forming, virtually molten Earth had no atmosphere.
The most common theory put out for Earth’s water is bombardment in the early days by meteors of a certain type, carbonaceous chondrites. These meteors were formed further out from the sun, where water would have frozen. Carbonaceous chondrites are known to contain the same ratio of heavy water to ‘normal’ water as we find on Earth. Heavy water is formed with deuterium, an isotope of hydrogen containing a neutron as well as the usual proton. Obviously there had to have been plenty of these collisions over a long period to create our oceans. Comets have been largely ruled out because, of the comets we’ve examined, the deuterium/hydrogen ratio is about double that of the chondrites, though some have argued that those comets may be atypical. Also there’s some evidence that the D/H ratio of terrestrial water has changed over time.
So there are still plenty of unknowns about the history of Earth’s water. Some argue that volcanism, along with other internal sources, was wholly or partly responsible – water vapour is one of the gases produced in eruptions, which then condensed and fell as rain. Investigation of moon rocks has revealed a D/H ratio similar to that of chondrites, and also that of Earth (yes, there’s H2O on the moon, in various forms). This suggests that, since it has become clear that the Moon and Earth are of a piece, water has been there on both from the earliest times. Water ice detected in the asteroid belt and elsewhere in the solar system provides further evidence of the abundance of this hardy little molecule, which enriches the hypotheses of researchers.
But I’m still mystified by how water is formed from molecular, or diatomic, hydrogen and oxygen. It occurs to me, thanks to Salman Khan, that having a look at the structural formulae of these molecules, as well as investigating ‘activation energy’, might help. I’ve filched the ‘Lewis structure’ of water from Wikipedia.
It shows that hydrogen atoms are joined to oxygen by a single bond, the sharing of a pair of electrons. They’re called polar covalent bonds, as described in my last post on the topic. H2 also binds the two hydrogen atoms with a single covalent bond, while O2 is bound in a double covalent bond. (If you’re looking for a really comprehensive breakdown of the electrochemical structure of water, I recommend this site).
So, to produce water, you need enough activation energy to break the bonds of H2 and O2 and create the bonds that form H2O. Interestingly, I’m currently reading The Emerald Planet, which gives an example of the kind of activation energy required. The Tunguska event, an asteroid visitation in the Siberian tundra in 1908, was energetic enough to rip apart the bonds of molecular nitrogen and oxygen in the surrounding atmosphere, leaving atomic nitrogen and oxygen to bond into nitric oxide. But let’s have a closer look at activation energy.
So, according to Wikipedia:
In chemistry and physics, activation energy is the energy which must be available to a chemical or nuclear system with potential reactants to result in: a chemical reaction, nuclear reaction, or various other physical phenomena.
This stuff gets complicated and mathematical very quickly, but activation energy (Ea) is measured in either joules (or kilojoules) per mole or kilocalories per mole. A mole, as I’ve learned from Khan, is the number of atoms there are in 12g of carbon-12. So what? Well, that’s just a way of translating atomic mass units (amu) to grams (one gram equals one mole of amu).
The point is though that we can measure the activation energy, which, in the case of molecular reactions, is going to be more than the measurable change between the initial and final conditions. Activation energy destabilises the molecules, bringing about a transition state in which usually stable bonds break down, freeing the molecules to create new bonds – something that is happening throughout our bodies at every moment. When molecular oxygen is combined with molecular hydrogen in a confined space, all that’s required is the heat from a lit match to start things off. This absorption of energy is called an endothermic reaction. Molecules near the fire break down into atoms, which recombine into water molecules, a reaction which releases a lot of energy, creating a chain of reactions until all the molecules are similarly recombined. From this you can imagine how water could have been created in abundance during the fiery early period of our solar system’s evolution.
I’ll end with more on the structure of water, for my education.
As a liquid, water has a structure in which the H-O-H angle is about 106°. It’s a polarised molecule, with the negative charge on the oxygen being around 70% of an electron’s negative charge, which is neutralised by a corresponding positive charge shared by the two hydrogen atoms. These values can change according to energy levels and environment. As opposite charges attract, different water molecules attract each other when their H atoms are oriented to other O atoms. The British Chemistry professor Martin Chaplin puts it better than I could:
This attraction is particularly strong when the O-H bond from one water molecule points directly at a nearby oxygen atom in another water molecule, that is, when the three atoms O-H O are in a straight line. This is called ‘hydrogen bonding’ as the hydrogen atoms appear to hold on to both O atoms. This attraction between neighboring water molecules, together with the high-density of molecules due to their small size, produces a great cohesive effect within liquid water that is responsible for water’s liquid nature at ambient temperatures.
We’re all very grateful for that nature.
Written by stewart henderson
September 24, 2018 at 10:32 am
Posted in chemistry, science, water
Tagged with , , ,
## exploring oxygen
I’ve been reading David Beerling’s fascinating but demanding book The Emerald Planet, essentially a history of plants, and their contribution to our current life-sustaining atmosphere, and it has inspired me to get a handle on atmospheric oxygen in general and the properties of this rather important diatomic molecule. Demanding because, as always, basic science doesn’t come naturally to me so I have to explain it to myself in great detail to really pin it down, and then I forget. For example, I don’t have any understanding of oxidation right now, though I’ve read about it, and probably written about it, and more or less understood it, many times. Things fall apart, and then we fall apart…
Okay, let me pull myself together. Oxygen is a highly reactive gas, combining with other elements readily in a number of ways. A bushfire is an example of oxidation, in which free oxygen is ‘consumed’ rapidly, reacting with carbon in the dry wood to produce carbon dioxide, among other gases. This is also called combustion. Rust is a slower form of oxidation, in which iron reacts with oxygen to form iron oxide. So I think that’s basically what oxidation is, the trapping of ‘free’ oxygen into other gases or compounds, think carbon monoxide, sulphur dioxide, hydrogen peroxide, etc etc. Not to mention its reaction with hydrogen to form water, that stuff that makes up more than half our bodily mass.
Well, I’m wrong. Oxidation doesn’t have to involve oxygen at all. Which I think is criminally confusing. Yes, fire and rust are examples of oxidation reactions, but so is a reaction between hydrogen and fluorine gas to produce hydrofluoric acid (it’s actually a redox reaction – hydrogen is being oxidised and fluorine is being reduced). According to this presumably reliable definition, ‘oxidation is the loss of electrons during a reaction by a molecule, atom or ion’. Reduction is the opposite. The reason it’s called oxidation is historical – oxygen, the gas that Priestley and Lavoisier famously argued over, was the first gas known to engage in this sort of behaviour. Basically, oxygen oxidises other elements, getting them to hand over their electrons – it’s an electron thief.
Oxygen has six valence electrons, so needs another two to feel ‘complete’. It’s diatomic in nature, existing around us as O2. I’m not sure how that works – if each individual atom wants two electrons, to make eight electrons in its outer shell for stability, why would it join with another oxygen to complete this outer shell, and then some? That makes for another four electrons. Are they now valence electrons? Apparently not, in this stable diatomic form. Here’s an expert’s attempt to explain this, from Quora
For oxygen to have a full outer shell it must have 8 electrons in it. But it only has 6 electrons in its valence shell. Each oxygen atom is actively seeking to get more electrons to complete its valence shell. If no other atoms except oxygen atoms are available, each oxygen atom will try to wrestle extra valence electrons from another oxygen atom. So if one oxygen atom merges with another, they “share” electrons, giving both a full outer shell and ultimately being virtually unreactive.
For a while this didn’t make sense to me, mathematically. Atomic oxygen has eight electrons around one nucleus. Six in the outer, ‘valence’ shell. Molecular oxygen has 16 electrons around two nuclei. What’s the configuration to make it stable? Presumably both nuclei still have 2 electrons configured in their first shells, that makes 12 electrons to make for a stable configuration, which doesn’t seem to work out. Did it have something to do with ‘sharing’? Are the shells configured now around both nuclei instead of separately around each nucleus? What was I missing here? Another expert on the same website writes this:
[The two oxygen atoms combine to] create dioxygen, a molecule (O2) in which both oxygen atoms have 8 valence electrons, so they are happy (the molecule is stable).
But what about the extra electrons? It seems I’d have to give up on understanding and take the experts’ word, and I hate that. However, the Khan academy has come to the rescue. In video 14 of his chemistry series, Khan explains that the two atoms share two pairs of electrons, so yes, sharing was the key. So each atom can ‘kind of pretend’, in Khan’s words, that they have eight valence electrons. And this is a covalent bond, unlike an ionic bond which combines metals with non-metals, such as sodium and chloride.
Anyway, moving on. One of the most important features of oxygen, as mentioned, is its role in water – which is about 89% oxygen by weight. But how do these two elements – diatomic molecules as we find them in our environment – actually come together to form such a very different substance?
Well. According to this video, when H2 and O2, and presumably other molecules, are formed
electrons lose energy to form the new orbitals, the energy gets away as a photon, and then the new orbitals are stuck that way, they can’t undo themselves until the missing energy comes back.
This set me on my heels when I heard it, I’d never heard anything like it before, possibly because photon stuff tends to belong to physics rather than chemistry, though photosynthesis rather undoes that argument…
So, sticking with this video (from Brigham Young University Physics Department), to create water from H2 and O2 you need to give them back some of that lost energy, in the form of ‘activation energy’, e.g by ‘striking a match’. The video turns out to be kind of funny/scary, and it again involves photons. After the explosion, water vapour was found condensing on the inside of the glass through which hydrogen was pumped and ignited…
Certainly the demonstration was memorable (and there are a few of these explosive vids online), but I think I need more theory. Hopefully I’ll get back to it, but it seems to have much to do with the strong covalent bonds that form, for example, molecular hydrogen. It requires a lot of energy to break them.
Once formed, water is very stable because the oxygen’s six valence electrons get two extras, one from each of the hydrogens, while the hydrogens get an extra electron each. The atoms are stuck together in a type of bonding called polar covalent. Oxygen is more electronegative than hydrogen, meaning it attracts electrons more strongly – the negative charge is polarised at the oxygen, giving that part of the molecule a partial negative charge, with a proportional positive charge at the hydrogens. I might explore the effects of this polarity in another post.
The percentage of oxygen in our atmosphere seems stable at 21% – that’s to say, it appears to be the same now as when I was born, but that’s not a lot of time, geologically. The issue of oxygen levels in our atmosphere over geological time is complex and contested, but the usual story is that something happened with the prokaryotic life forms that had evolved in the oceans billions of years ago, some kind of mutation which enabled a bacterial species to capture and harness solar energy. This green mutation, cyanobacteria, gave off gaseous oxygen as a waste product – a disaster for other life forms due to its highly reactive nature. The photosynthesising cyanobacteria, however, multiplied rapidly, oxygenising the ocean. Oxygen reacted with the ocean’s iron, creating layers of rust (iron oxide) on the ocean floor, later visible on land through tectonic forces over the eons. Gradually over time, other organisms evolved that were adapted to the new oxygen-rich atmosphere. It became an energy source, which in turn produced its own waste product, carbon dioxide. This created a near-perfect cycle, as cyanobacteria required CO2 as well as water and sunlight to produce oxygen (and sugar). Other photosynthesising water-based and land-based life forms, plants in particular, emerged. In fact, these life forms had harnessed cyanobacteria as chloroplasts, a process known as endosymbiosis.
I’ll end this bitsy post with the apparent fact, according to this Inverse article, that our oxygen levels are actually falling, and have been for near a million years, and that’s leaving aside the far greater effects due to human activity (fossil fuel burning consumes oxygen and releases CO2). Of course oxygen is very vastly more abundant in the atmosphere than CO2, and the change is minuscule on the overall scale of things (unlike the change we’re making to CO2 levels). It will make much more of a difference in the oceans however, where the lack of dissolved oxygen is creating dead zones. The article explains:
The primary contributor to these apocalyptic scenes is fertilizer runoff from agriculture, which causes algal blooms, providing a great feast for bacteria that consume oxygen. The abundance of these bacteria cause O2 levels to plummet, and if they go low enough, organisms that need it to survive swim away or die.
Just another of the threats to sea-life caused by humans.
Written by stewart henderson
September 16, 2018 at 4:20 pm
Posted in environment, science
Tagged with , ,
## the continuing story of South Australia’s energy solutions
In a very smart pre-election move, our state Premier Jay Weatherill has announced that there’s a trial under way to install Tesla batteries with solar panels on over 1,000 SA Housing Trust homes. The ultimate, rather ambitious aim, is to roll this out to 50,000 SA homes, thus creating a 250MW power plant, in essence. And not to be outdone, the opposition has engaged in a bit of commendable me-tooism, with a similar plan, actually announced last October. This in spite of the conservative Feds deriding SA labor’s ‘reckless experiments’ in renewables.
Initially the plan would be offered to public housing properties – which interests me, as a person who’s just left a solarised housing association property for one without solar. I’m in community housing, a subset of public housing. Such a ‘virtual’ power plant will, I think, make consumers more aware of energy resources and consumption. It’s a bit like owning your own bit of land instead of renting it. And it will also bring down electricity prices for those consumers.
This is a really important and exciting development, adding to and in many ways eclipsing other recently announced developments in SA, as written about previously. It will be, for a time at least, the world’s biggest virtual power plant, lending further stability to the grid. It’s also a welcome break for public housing tenants, among the most affected by rising power bills (though we’ll have to wait and see if prices do actually come down as a result of all this activity).
And the announcements and plans keep coming, with another big battery – our fourth – to be constructed in the mid-north, near Snowtown. The 21MW/26MWh battery will be built alongside a 44MW solar farm in the area (next to the big wind farm).
South Australia’s wind farms
Now, as someone not hugely well-versed in the renewable energy field and the energy market in general, I rely on various websites, journalists and pundits to keep me honest, and to help me make sense of weird websites such as this one, the apparent aim of which is to reveal all climate scientists as delusionary or fraudsters and all renewable energy as damaging or wasteful. Should they (these websites) be tackled or ignored? As a person concerned about the best use of energy, I think probably the latter. Anyway, one journalist always worth following is Giles Parkinson, who writes for Renew Economy, inter alia. In this article, Parkinson focuses on FCAS (frequency control and ancillary services), a set of network services overseen by AEMO, the Australian Energy Market Operator. According to Parkinson and other experts, the provision of these services has been a massive revenue source for an Australian ‘gas cartel’, which has been rorting the system at the expense of consumers, to the tune of many thousands of dollars. Enter the big Tesla battery , officially known as the Hornsdale Power Reserve (HPR), and the situation has changed drastically, to the benefit of all:
Rather than jumping up to prices of around $11,500 and$14,000/MW, the bidding of the Tesla big battery – and, in a major new development, the adjoining Hornsdale wind farm – helped (after an initial spike) to keep them at around \$270/MW.
This saved several million dollars in FCAS charges (which are paid by other generators and big energy users) in a single day.
And that’s not the only impact. According to state government’s advisor, Frontier Economics, the average price of FCAS fell by around 75 per cent in December from the same month the previous year. Market players are delighted, and consumers should be too, because they will ultimately benefit. (Parkinson)
As experts are pointing out, the HPR is largely misconceived as an emergency stop-gap supplier for the whole state. It has other, more significant uses, which are proving invaluable. Its effect on FCAS, for example, and its ultra-ultra-quick responses to outages at major coal-fired generators outside of the state, and ‘its smoothing of wind output and trading in the wholesale market’. The key to its success, apparently, is its speed of effect – the ability to switch on or off in an instant.
Parkinson’s latest article is about another SA govt announcement – Australia’s first renewable-hydrogen electrolyser plant at Port Lincoln.
I’ve no idea what that means, but I’m about to find out – a little bit. I do know that once-hyped hydrogen hasn’t been receiving so much support lately as a fuel – though I don’t even understand how it works as a fuel. Anyway, this plant will be ten times bigger than one planned for the ACT as part of its push to have its electricity provided entirely by renewables. It’s called ‘green hydrogen’, and the set-up will include a 10MW hydrogen-fired gas turbine (the world’s largest) driven by local solar and wind power, and a 5MW hydrogen fuel cell. Parkinson doesn’t describe the underlying technology, so I’ll have a go.
It’s all about electrolysis, the production of hydrogen from H2O by the introduction of an electric current. Much of what follows comes from a 2015 puff piece of sorts from the German company Siemens. It argues, like many, that there’s no universal solution for electrical storage, and, like maybe not so many, that large-scale storage can only be addressed by pumped hydro, compressed air (CAES) and chemical storage media such as hydrogen and methane. Then it proceeds to pour cold water on hydro – ‘the potential to extend its current capacity is very limited’ – and on CAES ‘ – ‘has limitations on operational flexibility and capacity. I know nothing about CAES, but they’re probably right about hydro. Here’s their illustration of the process they have in mind, from generation to application.
Clearly the author of this document is being highly optimistic about the role of hydrogen in end-use applications. Don’t see too many hydrogen cars in the offing, though the Port Lincoln facility, it’s hoped, will produce hydrogen ‘that can be used to power fuel cell vehicles, make ammonia, generate electricity in a turbine or fuel cell, supply industry, or to export around the world’.
So how does electrolysis (of water) actually work? The answer, of course, is this:
2 H2O(l) → 2 H2(g) + O2(g); E0 = +1.229 V
Need I say more? On the right of the equation, E0 = +1.229 V, which basically means it takes 1.23 volts to split water. As shown above, Siemens is using PEM (Proton Exchange Membrane, or Polymer Electrolyte Membrane) electrolysis, though alkaline water electrolysis is another effective method. Not sure which which method is being used here.
In any case, it seems to be an approved and robust technology, and it will add to the variety of ‘disruptive’ and innovative plans and processes that are creating more regionalised networks throughout the state. And it gives us all incentives to learn more about how energy can be produced, stored and utilised.
Written by stewart henderson
February 14, 2018 at 4:50 pm
|
# How can the heat capacity of a lead sinker be determined?
Jan 17, 2015
Here's how you'd go about doing this.
In order to determine the specific heat ("c"_("sinker")) of a lead sinker, you will need to use a calorimeter. Let's say you have a lead sinker that weighs $\text{5 g}$ . You'll start by placing this lead sinker in boling water for 15-20 minutes.
Place $\text{200 mL}$ of distilled water in the calorimeter. We will assume that, after 20 minutes, the lead sinker is now at the same temperature as the boiling water, $\text{100"^@"C}$.
The water in the calorimeter is at room temperature, say $\text{25"^@"C}$. Place the hot lead sinker into the calorimeter and record, using a thermometer, the highest temperature of the mixture.
From this point on, you'll use the heat absorbed by the water to determine the specific heat of the lead sinker.
Since this is presumably a lab assignment, I'll just use the conceptual equations to solve for "c"_("sinker").
The heat absorbed by the water is equal to
${q}_{\text{absorbed") = "m"_("water") * "c"_("water}} \cdot \Delta T$, where
"c"_("water") - the specific heat of water, $4.18$ "J"/("g" * ^@"C");
$\Delta T$ - the difference between the final and the initial temperature of the water - will be positive in water's case, since it absorbs heat from the hot lead sinker - its final temperature will be bigger than $\text{25"^@"C}$.
The heat absorbed by the water must be equal to the heat given off by the sinker (in absolute terms), so
${q}_{\text{sinker") = -q_("water}}$
We know that ${q}_{\text{sinker}}$ is equal to
${q}_{\text{sinker") = "m"_("sinker") * "c"_("Sinker") * DeltaT_("sinker}}$, where
"m"_("sinker") - the mass of the sinker - in our example, 5 g.
$\Delta {T}_{\text{sinker}}$ - the difference btween the final and the initial temperature of the sinker - will be negative this time, since the sinker will cool off - its final temperature will be way smaller than $\text{100"^@"C}$.
Therefore, the value for the sinker's specific heat will be
${c}_{\text{sinker") = (-q_("water"))/("m"_("sinker") * DeltaT_("sinker}}$
Usually, you'll get something around $0.125$-$0.130$ "J"/("g" * ^@"C") for lead's specific heat.
A good example of this particular density are on this site: https://wikis.engrade.com/specificheat
|
/ Experiment-HEP FERMILAB-PUB-11-216-E
Model-independent measurement of $\boldsymbol{t}$-channel single top quark production in $\boldsymbol{p\bar{p}}$ collisions at $\boldsymbol{\sqrt{s}=1.96}$ TeV
Pages: 8
Abstract: We present a model-independent measurement of $t$-channel electroweak production of single top quarks in $\ppbar$ collisions at $\sqrt{s}=1.96\;\rm TeV$. Using $5.4\;\rm fb^{-1}$ of integrated luminosity collected by the D0 detector at the Fermilab Tevatron Collider, and selecting events containing an isolated electron or muon, missing transverse energy and one or two jets originating from the fragmentation of $b$ quarks, we measure a cross section $\sigma({\ppbar}{\rargap}tqb+X) = 2.90 \pm 0.59\;\rm (stat+syst)\; pb$ for a top quark mass of $172.5\;\rm GeV$. The probability of the background to fluctuate and produce a signal as large as the one observed is $1.6\times10^{-8}$, corresponding to a significance of 5.5 standard deviations.
Note: * Temporary entry *
Total numbers of views: 1085
Numbers of unique views: 540
|
Mon 20 Jan 2020 16:18 - 16:40 at Maurepas - Decidability and complexity Chair(s): Kathrin Stark
We formalise undecidability results concerning higher-order unification in the simply-typed $\lambda$-calculus with $\beta$-conversion in Coq. We prove the undecidability of general higher-order unification by reduction from Hilbert’s tenth problem, the solvability of Diophantine equations, following a proof by Dowek. We sharpen the result by establishing the undecidability of second-order and third-order unification following proofs by Goldfarb and Huet, respectively.
Goldfarb’s proof for second-order unification is by reduction from Hilbert’s tenth problem. Huet’s original proof uses the Post correspondence problem (PCP) to show the undecidability of third-order unification. We simplify and formalise his proof as a reduction from modified PCP. We also verify a decision procedure for first-order unification.
All proofs are carried out in the setting of synthetic undecidability and rely on Coq’s built-in notion of computation.
#### Mon 20 Jan
15:35 - 16:40: CPP 2020 - Decidability and complexity at Maurepas Chair(s): Kathrin StarkSaarland University, Germany 15:35 - 15:56Talk Verified Programming of Turing Machines in CoqYannick ForsterSaarland University, Fabian KunzeSaarland University, Maximilian WuttkeSaarland University DOI Pre-print Media Attached 15:56 - 16:18Talk A Functional Proof Pearl: Inverting the Ackermann HierarchyLinh TranNational University of Singapore, Anshuman MohanNational University of Singapore, Aquinas HoborNational University of Singapore DOI Pre-print Media Attached 16:18 - 16:40Talk Undecidability of Higher-Order Unification Formalised in CoqSimon SpiesSaarland University, Yannick ForsterSaarland University DOI Pre-print Media Attached
|
Contests Virtual Contests Problems Submit Runs Status Rank List Forum
1987. Faulty Odometer
Time Limit: 1.0 Seconds Memory Limit: 65536K
Total Runs: 616 Accepted Runs: 414
You are given a car odometer which displays the miles traveled as an integer. The odometer has a defect, however: it proceeds from the digit 3 to the digit 5, always skipping over the digit 4. This defect shows up in all positions (the one's, the ten's, the hundred's, etc.). For example, if the odometer displays 15339 and the car travels one mile, odometer reading changes to 15350 (instead of 15340).
### Input
Each line of input contains a positive integer in the range 1..999999999 which represents an odometer reading. (Leading zeros will not appear in the input.) The end of input is indicated by a line containing a single 0. You may assume that no odometer reading will contain the digit 4.
### Output
Each line of input will produce exactly one line of output, which will contain: the odometer reading from the input, a colon, one blank space, and the actual number of miles traveled by the car.
### Sample Input
13
15
2003
2005
239
250
1399
1500
999999
0
### Output for Sample Input
13: 12
15: 13
2003: 1461
2005: 1462
239: 197
250: 198
1399: 1052
1500: 1053
999999: 531440
Source: Rocky Mountain 2005
Submit List Runs Forum Statistics
Tianjin University Online Judge v1.3.0
Maintance:Fxz. Developer: SuperHacker, G.D.Retop, Fxz
|
## Sunday, March 23, 2014
### Gtk3 hit-a-hint: mouseless navigation module
I present gtkhah, a gtk module for hit-a-hint mouseless navigation of gtk3 applications.
It's been inspired by the various hit-a-hint browser extensions.
Usage at the project homepage. Here's a quick screenshot of Gedit:
## Monday, January 20, 2014
### Intersection test and optimal allocation of 2D axis-aligned boxes using linear programming
Either-or trick
In this solution, we will make use of the either-or trick. The constraints in a program are all in and, but we want some of them to be in or.
Consider the following logic formula: $$a\ge b\vee c\ge d$$. It can be written with the following and constraints: \begin{alignedat}{2}\, & M\cdot x\; & +a\; & \ge b\\ \, & M\cdot\left(1-x\right)\; & +c\; & \ge d\\ \, & \, & x\; & \in\left\{ 0,1\right\} \end{alignedat} where $$M \gt 0$$ is big (depending on the specific context as we'll see later).
When $$x=0$$, then $$a \ge b$$ must hold, while the second constraint is always satisfied. When $$x=1$$, then $$c \ge d$$ must hold, while the first constraint is always satisfied.
In other words, either one of the two constraint must hold for a given value of $$x$$. If neither do, then the original or formula is not satisfied.
Empty intersection test
Given two AABB boxes $$i,j$$ at position $$x_i,y_i$$ and $$x_j,y_j$$ with size $$w_i,h_i$$ and $$w_j,h_j$$ respectively, we want to constraint our program such that their intersection is empty.
This can be expressed with the following logic formula:
$\left(x_{i}\ge x_{j}+w_{j}\vee x_{j}\ge x_{i}+w_{i}\right)\vee\left(y_{i}\ge y_{j}+h_{j}\vee y_{j}\ge y_{i}+h_{i}\right)$
We can encode the above expression with 3 either-or tricks using 3 binary variables:
\begin{alignedat}{3}\; & M\cdot b_{1}\; & \, & +M\cdot b_{3} & +x_{i}\; & \ge x_{j}+w_{j}\\ \; & M\cdot\left(1-b_{1}\right)\; & \, & +M\cdot b_{3} & +x_{j}\; & \ge x_{i}+w_{i}\\ \; & M\cdot b_{2}\; & \, & +M\cdot\left(1-b_{3}\right) & +y_{i}\; & \ge y_{j}+w_{j}\\ \; & M\cdot\left(1-b_{2}\right)\; & \, & +M\cdot\left(1-b_{3}\right) & +y_{j}\; & \ge y_{i}+w_{i}\\ \, & \, & \, & \; & b_{1},b_{2},b_{3}\; & \in\left\{ 0,1\right\} \end{alignedat} Real problem
Given a fixed area of size $$(cols)\times(rows)$$ and a set of boxes with fixed width $$w_i$$ for each $$i=1..n$$ (where $$n$$ is the number of boxes), find the optimal allocation $$x_i, y_i, h_i$$ for each box $$i=1..n$$ such that we cover all the area and maximize the size of the boxes fairly.
To achieve fair allocation we choose to maximize the minimum height of the boxes. For example, if we had an area of size $$5\times100$$, and two boxes of width $$5$$ each, we prefer them to have height $$50$$ and $$50$$ respectively, rather than $$1$$ and $$99$$.
This problem can be encoded as follows: $max\; minh+C\cdot{\displaystyle \sum_{i=1}^{n}h_{i}}$ $subject\;to:$ $$\forall i=1..n:\; minh\leq h_{i}$$
$\forall i=1..n-1,\, j=i+1..n:$ \begin{alignedat}{3}\; & M\cdot b_{1}^{\left(ij\right)}\; & \, & +M\cdot b_{3}^{\left(ij\right)} & +x_{i}\; & \ge x_{j}+w_{j}\\ \; & M\cdot\left(1-b_{1}^{\left(ij\right)}\right)\; & \, & +M\cdot b_{3}^{\left(ij\right)} & +x_{j}\; & \ge x_{i}+w_{i}\\ \; & M\cdot b_{2}^{\left(ij\right)}\; & \, & +M\cdot\left(1-b_{3}^{\left(ij\right)}\right) & +y_{i}\; & \ge y_{j}+w_{j}\\ \; & M\cdot\left(1-b_{2}^{\left(ij\right)}\right)\; & \, & +M\cdot\left(1-b_{3}^{\left(ij\right)}\right) & +y_{j}\; & \ge y_{i}+w_{i}\\ \, & \, & \, & \; & b_{1}^{\left(ij\right)},b_{2}^{\left(ij\right)},b_{3}^{\left(ij\right)}\; & \in\left\{ 0,1\right\} \end{alignedat}
$\forall i=1..n:$ \begin{alignedat}{1}x_{i}+w_{i}\; & \le cols\\ y_{i}+h_{i}\; & \leq rows\\ 0\leq x_{i}\; & \leq cols-1\\ 0\leq y_{i}\; & \leq rows-1\\ 1\leq h_{i}\; & \leq rows \end{alignedat} Equation $$(1)$$ is used to get the minimum height of the boxes together with the objective function. Equations $$(2)$$ ensure that the boxes do not overlap. Equations $$(3)$$ ensure that the boxes don't lie outside of the allocation area.
In the objective function we also add a second component, which ensures that on equal $$minh$$ we chose the allocation that fills the remaining space. This component must thus be $$< 1$$ to not bias the $$minh$$ component.
Now, what are the values of $$M$$ and $$C$$? The constant $$M$$ must be big enough to make an inequality always true, so:
\begin{gather*} \forall i,j:\; M\ge x_{j}+w_{j}-x_{i}\\ M\ge\max\left\{ x_{j}+w_{j}-x_{i}\right\} \\ M\ge\max\left\{ x_{j}+w_{j}\right\} -\min\left\{ x_{i}\right\} \\ M\ge cols \end{gather*} Since it must hold also for rows, $$M\ge\max\left\{ cols,rows\right\}$$. The result is very intuitive if you look at the original inequalities. We can choose a value of $$M=cols+rows$$.
The constant $$C$$ instead must be such that: \begin{gather*} C\cdot{\displaystyle \sum_{i=1}^{n}h_{i}}<1\\ C\cdot n\cdot\max\left\{ h_{i}\right\} <1\\ C\cdot n\cdot rows<1\\ C<\frac{1}{n\cdot rows} \end{gather*} This result is also very intuitive. We can choose $$C=\frac{1}{2\cdot n\cdot rows}$$ to avoid numerical instability.
Demonstration
I've prepared a GMPL program to allocate $$10$$ boxes with width $$3,4,10,7,4,8,2,9,8,5$$ in an area of $$10\,cols\times100\, rows$$. GLPK does not converge, so you can limit the time to 1 second (or more) in order to get a feasible but suboptimal solution.
You can visualize the result in the screenshot below of the two solutions while running for 1 second and 5 second respectively. After 60 seconds the solution didn't improve.
Please note that there is no relationship between the colors of the two tests.
Relaxing the integer constraint on $$x,y,h,minh$$ might also give you better solutions in less time.
You can download the gist below and run it as: glpsol --tmlim 5 -m aabb_alloc.mod:
## Friday, September 06, 2013
Have you ever wanted to do case expr of regexhere -> ... ?
You can do almost that with view patterns!
{-# LANGUAGE ViewPatterns #-}
import Text.Regex.Posix
-- Helper
pat :: String -> String -> [[String]]
pat p s = s =~ p
-- Function with matching
foo :: String -> String
foo (pat "foo(bar|baz)" -> [[_,x]]) = x
foo _ = "no!"
main :: IO ()
main = do
print $foo "foobar" print$ foo "foobaz"
print $foo "yes?" The above code will print bar baz no! . Have fun! ## Wednesday, July 03, 2013 ### Found my google reader alternative: the old reader So I'd like to share with you my google reader alternative: theoldreader. I've tried so far yoleo, feedly, goread and diggreader. Each of them is missing the following must have things for me: • The "next" button. I don't like using keyboard bindings when reading feeds. • Search through all feeds • Social by sharing/liking news • Exporting OPML Additionally, theoldreader has a great (disabled by default) feature: clicking on a news will scroll to it. That is very handy if you don't want to stick your mouse clicking on the next button. The only bad point of theoldreader is that there's an import queue, I'm currently in position 1566 (1481 after about 15 minutes). Other than that, it's perfect for my needs and resembles most of google reader. Update: the search only looks for terms into post titles, not post contents :-( ## Friday, June 28, 2013 ### Inline commenting system for blog sites One side of the idea: Lately I've seen some comment systems that inline comments in the text itself, see an example here. Other side of the idea: I've implemented a couple of unsupervised algorithms for news text extraction from web pages in Javascript, and they work generally well. They're also capable of splitting the news text into paragraphs and avoid image captions or other unrelated text. The idea: Merge the two ideas, as a new commenting service (like Disqus) for blogs, in a modern and interesting way. 1. The algorithms are capable of extracting the news text from your blog and splitting it into paragraphs. 2. For each paragraph, add a seamless event so that when the user overs a paragraph with the mouse, a comment action button appears. 3. When clicking the comment button a popup will appear showing a text box at the top where you can write the comment, and the list of comments at the bottom that are related to that paragraph only. (I please you, let the user order comments by date.) 4. Optional killer feature: together with the text box, you can also choose to add two buttons: "This is true", "This is false", so that there's a kind of voting system about the truth value of that paragraph from the readers perspective. 5. Once you send the comment, that comment is tied to that paragraph. 6. Give the possibility to create comments the old way, in case the user does not want to strictly relate the comment to a paragraph. 7. Give the possibility to show all the comments at the bottom of the page (button "Show all comments"). If a comment is related to a paragraph, insert an anchor to the paragraph. Given that the blogger shouldn't add any additional HTML code for the paragraphs because of the unsupervised news extraction algorithms, the comment service API will not differ in any way than the current ones being used. Problems of this approach: • There's no clear chronology for the user between comments of two different paragraphs. In my opinion, this shouldn't be a big deal, as long as comments between two paragraphs are unrelated. • If a paragraph is edited/deleted, what happens to the comments? One solution is to not show them in the text, but add a button "Show obsolete comments" at the buttom of the page, or rather show them unconditionally. • The unsupervised algorithm may fail to identify the post. In that case, the blogger may fall back to manually define the DOM element containing the article. ## Wednesday, June 05, 2013 ### Accessing simple private fields in PHP A way of accessing private fields in PHP is by changing the accessibility of the fields themselves with http://php.net/manual/en/reflectionproperty.setaccessible.php. However this approach requires PHP 5.3. For PHP < 5.3 there's another subtle way of accessing private properties: function getPrivateProperty($fixture, $propname) { try {$arr = (array)$fixture; } catch (Exception$e) {
}
$class = get_class($fixture);
$privname = "\0$class\0$propname"; return$arr[$privname]; } Usage is pretty straightforward, pass in the object and the property name as string. The property must be private and must be convertible to array. ## Saturday, April 06, 2013 ### Build Vala applications with Shake build system I'm going to introduce you a very nice alternative to make: the Shake build system, by setting up a builder for your Vala application project. First of all, you need to know that Shake is a library written in Haskell, and it's meant to be a better replacement for make. Let's start by installing cabal and then shake: apt-get install cabal-install cabal update cabal install shake TL;DR; this is the final Build.hs file: #!/usr/bin/env runhaskell import Development.Shake import Development.Shake.FilePath import Development.Shake.Sys import Control.Applicative hiding ((*>)) app = "bestValaApp" sources = words "file1.vala file2.vala file3.vala" packages = words "gtk+-3.0 glib-2.0 gobject-2.0" cc = "cc" valac = "valac" pkgconfig = "pkg-config" -- derived csources = map (flip replaceExtension ".c") sources cobjects = map (flip replaceExtension ".o") csources main = shakeArgs shakeOptions$ do
want [app]
app *> \out -> do
need cobjects
pkgconfigflags <- pkgConfig $["--libs"] ++ packages sys cc "-fPIC -o" [out] pkgconfigflags cobjects cobjects **> \out -> do let cfile = replaceExtension out ".c" need [cfile] pkgconfigflags <- pkgConfig$ ["--cflags"] ++ packages
sys cc "-ggdb -fPIC -c -o" [out, cfile] pkgconfigflags
csources *>> \_ -> do
let valapkgflags = prependEach "--pkg" packages
need sources
sys valac "-C -g" valapkgflags sources
-- utilities
prependEach x = foldr (\y a -> x:y:a) []
pkgConfig args = (words . fst) <$> (systemOutput pkgconfig args) Just tweak app, sources and packages to match your needs, chmod +x Build.hs then run ./Build.hs . Explanation. The words function splits a string by spaces to get a list of strings, e.g. ["file1.vala", "file2.vala", "file3.vala"]. The csources variable maps .vala file names to .c file names. Same goes for cobjects. It's the equivalent of$(subst .vala,.c,$(SOURCES)) you'd do with make. There it comes the main. The shakeArgs shakeOptions part will run shake with default options. Shake provides handy command line options similar to make, run ./Build.hs -h for help. The want [app] tells shake we want to build the app object by default. That's equivalent to the usual first make rule all:$(APP).
Then we define how to build the executable app with app *> \out -> do. We tell shake the dependencies with need cobjects. This is similar to $(APP):$(COBJECTS) in make but not equivalent. In shake dependencies are not static like in many other build systems. This is one of the most interesting shake features.
The rest is quite straightforward to understand.
Then we define how to build each .o object with cobjects **> \out -> do. Here the out variable contains the actual .o required to be built, equivalent to \$@ in make. Then we need [cfile], in order to simulate %.o: %.c like in make.
One more feature shake has out-of-the-box that make doesn't is how to generate more files from a single command. With make you'd use a .stamp file due to valac generating several .c files out of .vala files. Then use the .stamp as dependency.
With shake instead we consistently define how to build .c files with csources *>> \_ -> do, then shake will do the rest.
The shake project is very active. You can read this tutorial to learn Haskell basics, and the reference docs of shake. The author homepage has links to cool presentations of the shake build system.
|
# Find $f(x)$ if $f\left(\frac{x+y}{3}\right)=\frac{2+f(x)+f(y)}{3}$
$f : \mathbb{R} \to \mathbb{R}$ is a differentiable function satisfying $$f\left(\frac{x+y}{3}\right)=\frac{2+f(x)+f(y)}{3}$$
if $f'(0)=2$, find the function
My Try:
we have $$f\left(\frac{x+y}{3}\right)-\frac{f(y)}{3}=\frac{2+f(x)}{3}$$ $\implies$
$$\frac{f\left(\frac{x+y}{3}\right)-\frac{f(y)}{3}}{\frac{x}{3}}=\frac{2+f(x)}{x}$$
Now taking Limit $x \to 0$ we have
$$\lim_{x \to 0}\frac{f\left(\frac{x+y}{3}\right)-\frac{f(y)}{3}}{\frac{x}{3}}=\lim_{x \to 0}\frac{2+f(x)}{x}$$
$\implies$
$$f'\left(\frac{y}{3}\right)=\lim_{x \to 0}\frac{2+f(x)}{x}$$
Now since LHS to be finite , we need $0/0$ form in RHS, hence $f(0)=-2$
Now by L'Hopital's Rule we get
$$f'\left(\frac{y}{3}\right)=f'(0)=2$$
Integrating we get
$$3f\left(\frac{y}{3}\right)=2y+c$$
Putting $y=0$ we get $c=-6$
So
$$3f\left(\frac{y}{3}\right)=2y-6$$
So $$f(y)=2y-2$$
Hence $$f(x)=2x-2$$
But this function is not satisfying given functional equation.
What went wrong?
• The limit which you claimed to be $f'(y/3)$ is not, except perhaps if you implicitly assume the linearity of the map $f$, which would be a contradiction with its defining equation. By the way, setting $x=y=0$ in the defining equation, one gets $f(0) = 2$. In fact, the affine map $a(x) = 2x+2$ solves the equation for $f$ and the derivative constraint. – Jordan Payette Jan 25 '18 at 3:31
Using $x=y=0$ in the functional equation we can see that $f(0)=2$. Further note that $$f(x+h)=\frac{2+f(3x)+f(3h)}{3}, f(x) =\frac{2+f(3x)+f(0)}{3}$$ so that $$\frac{f(x+h) - f(x)} {h} =\frac{f(3h)-f(0)}{3h}$$ Taking limits as $h\to 0$ we get $f'(x) =f'(0)=2$ and thus $f(x) =2x+c$ and from $f(0)=2$ we get $c=2$ so that $f(x) =2x+2$. You can check that it satisfies the functional equation.
One readily verifies that the affine map $a(x) = 2x+2$ is a differentiable function which solves both the defining equation for $f$ and the derivative constraint $f'(0) = 2$.
Let's prove now the uniqueness of this solution. The defining equation can be rewritten $f\left( \frac{x+y}{3} \right) - \frac{f(x) + f(y)}{3} = \frac{2}{3}$. Differentiating with respect to $x$, we get $f'\left( \frac{x+y}{3} \right) - f'(x)=0$ regardless of the values $x$ and $y$, so that $f'(x)$ is a constant. The constraint $f'(0)=2$ thus yields $f'(x) \equiv 2$, which integrates to $f(x) = 2x+c$. Substituting this result into the defining equation for $f$ readily yields $c=2$.
Remark: Paramanand Singh's answer is better than mine in that it proves that $f$ is differentiable at every point by only using its defining equation and the assumption that it is differentiable at $0$. In that respect it is quite instructive. Be aware however that such a proof is in general rather difficult, hence the frequent assumption that $f$ is differentiable ; one doesn't need then to rely on the definition of a derivative, but can rather merely differentiate the defining equation as I did above.
|
Home » » Consider the following reaction equation: SO42-(aq) + 2H+(aq) + ye- → SO42-(...
# Consider the following reaction equation: SO42-(aq) + 2H+(aq) + ye- → SO42-(...
### Question
Consider the following reaction equation:
SO42-(aq) + 2H+(aq) + ye- → SO42-(aq) + H2O(l).
The value of y in the equation is
A) 2
B) 3
C) 4
D) 6
### Explanation:
From the reaction, the total charge on the left side is equal to the total charge on the right side.
-2 + 2 + y = -2
y = -2
The value of y is 2.
## Dicussion (1)
• From the reaction, the total charge on the left side is equal to the total charge on the right side.
-2 + 2 + y = -2
y = -2
The value of y is 2.
|
Home > Type 1 > Statistics Type One Error
# Statistics Type One Error
## Contents
Brandon Foltz 247,512 views 27:06 Statistics 101: Null and Alternative Hypotheses - Part 2 - Duration: 18:10. The answer to this may well depend on the seriousness of the punishment and the seriousness of the crime. Cambridge University Press. This is not necessarily the case– the key restriction, as per Fisher (1966), is that "the null hypothesis must be exact, that is free from vagueness and ambiguity, because it must http://comunidadwindows.org/type-1/statistics-type-i-type-ii-error.php
Moulton, R.T., “Network Security”, Datamation, Vol.29, No.7, (July 1983), pp.121–127. So remember I True II False share|improve this answer edited Jul 7 '12 at 12:48 cardinal♦ 17.6k56497 answered Jul 7 '12 at 11:59 Dr. ISBN1584884401. ^ Peck, Roxy and Jay L. On the other hand, if the system is used for validation (and acceptance is the norm) then the FAR is a measure of system security, while the FRR measures user inconvenience
## Type 1 Error Example
Brandon Foltz 67,120 views 37:43 Statistics 101: Type I and Type II Errors - Part 1 - Duration: 24:55. A negative correct outcome occurs when letting an innocent person go free. Statistical test theory In statistical test theory, the notion of statistical error is an integral part of hypothesis testing. We say, well, there's less than a 1% chance of that happening given that the null hypothesis is true.
The probability that an observed positive result is a false positive may be calculated using Bayes' theorem. Sign in to report inappropriate content. Perhaps the most widely discussed false positives in medical screening come from the breast cancer screening procedure mammography. Type 1 Error Calculator Changing the positioning of the null hypothesis can cause type I and type II errors to switch roles.
Loading... Sign in Transcript Statistics 162,385 views 428 Like this video? In the same paper[11]p.190 they call these two sources of error, errors of typeI and errors of typeII respectively. Image source: Ellis, P.D. (2010), “Effect Size FAQs,” website http://www.effectsizefaq.com, accessed on 12/18/2014.
A test's probability of making a type I error is denoted by α. Type 1 Error Psychology This is why the hypothesis under test is often called the null hypothesis (most likely, coined by Fisher (1935, p.19)), because it is this hypothesis that is to be either nullified One consequence of the high false positive rate in the US is that, in any 10-year period, half of the American women screened receive a false positive mammogram. Let's say that this area, the probability of getting a result like that or that much more extreme is just this area right here.
## Probability Of Type 1 Error
Watch Queue Queue __count__/__total__ Find out whyClose Type I and Type II Errors StatisticsLectures.com SubscribeSubscribedUnsubscribe15,25815K Loading... share|improve this answer answered Aug 12 '10 at 23:02 J. Type 1 Error Example https://t.co/HfLr26wkKJ https://t.co/31uK66OL6i 2h ago 1 retweet 6 Favorites [email protected] How are customers benefiting from all-flash converged solutions? Probability Of Type 2 Error The design of experiments. 8th edition.
This value is the power of the test. news Please try again later. An example of a null hypothesis is the statement "This diet has no effect on people's weight." Usually, an experimenter frames a null hypothesis with the intent of rejecting it: that As a result of the high false positive rate in the US, as many as 90–95% of women who get a positive mammogram do not have the condition. Type 3 Error
It can never find anything! Add to Want to watch this again later? In other words, the probability of Type I error is α.1 Rephrasing using the definition of Type I error: The significance level αis the probability of making the wrong decision when have a peek at these guys ProfRobBob 13,052 views 26:35 Testing of Hypothesis - Duration: 43:47.
CRC Press. Power Statistics Security screening Main articles: explosive detection and metal detector False positives are routinely found every day in airport security screening, which are ultimately visual inspection systems. restate everything in the form of the Null.
## For related, but non-synonymous terms in binary classification and testing generally, see false positives and false negatives.
If we reject the null hypothesis in this situation, then our claim is that the drug does in fact have some effect on a disease. Table of error types Tabularised relations between truth/falseness of the null hypothesis and outcomes of the test:[2] Table of error types Null hypothesis (H0) is Valid/True Invalid/False Judgment of Null Hypothesis Now remember the word "art" or "$\alpha$rt" says that $\alpha$ is the probability of Rejecting a True null hypothesis and the psuedo word "baf" or "$\beta$af" says that $\beta$ is the Types Of Errors In Accounting Type I error A typeI error occurs when the null hypothesis (H0) is true, but is rejected.
You Are What You Measure Featured Why Is Proving and Scaling DevOps So Hard? Related terms See also: Coverage probability Null hypothesis Main article: Null hypothesis It is standard practice for statisticians to conduct tests in order to determine whether or not a "speculative hypothesis" Working... check my blog So we will reject the null hypothesis.
The probability of making a type I error is α, which is the level of significance you set for your hypothesis test. A typeI occurs when detecting an effect (adding water to toothpaste protects against cavities) that is not present. Privacy Legal Contact United States EMC World 2016 - Calendar Access Submit your email once to get access to all events. jbstatistics 122,223 views 11:32 Statistics 101: Calculating Type II Error - Part 1 - Duration: 23:39.
So in the end, it really doesn't get me anywhere. –Thomas Owens Aug 12 '10 at 23:07 5 +1, I like. @Thomas: Given an "innocent until proven guilty" system, you Let’s go back to the example of a drug being used to treat a disease. Sign in to make your opinion count. Some authors (Andrew Gelman is one) are shifting to discussing Type S (sign) and Type M (magnitude) errors.
They are also each equally affordable. You might also enjoy: Sign up There was an error. Cambridge University Press. Required fields are marked *Comment Current [email protected] * Leave this field empty Notify me of followup comments via e-mail.
Since it's convenient to call that rejection signal a "positive" result, it is similar to saying it's a false positive. I highly recommend adding the “Cost Assessment” analysis like we did in the examples above. This will help identify which type of error is more “costly” and identify areas where additional However I think that these will work! Brandon Foltz 55,039 views 24:55 Type I Errors, Type II Errors, and the Power of the Test - Duration: 8:11.
current community blog chat Cross Validated Cross Validated Meta your communities Sign up or log in to customize your list. Joint Statistical Papers. Thank you very much. Reply Bill Schmarzo says: April 16, 2014 at 11:19 am Shem, excellent point!
So for example, in actually all of the hypothesis testing examples we've seen, we start assuming that the null hypothesis is true. I'm thinking this might work for me. –Thomas Owens Aug 12 '10 at 21:42 2 it's sort of like how in elementary school kids would ask "are you not not
|
# All Questions
54 views
### Change in Wavelength of a Photon Relation to Energy (specifically Compton Effect)
Given a photon dropping from $\lambda_1$ to $\lambda_2$, its energy will drop from $\frac{hc}{(\lambda_1)}$ to $\frac{hc}{(\lambda_2)}$. However, I was wondering if there is any significance in the ...
55 views
### Axisymmetric fluid flow
I'm having trouble with a boundary condition. In a fluid mechanics problem, I have flow at $z = \infty$ flowing into a solid plate at $z = 0$ and then flowing radially, and the problem is given as ...
21 views
### Does the strength of a smell depend on the volume or surface area of an agglomerate of material?
I just switched from Timberland to Adidas shoes, and noticed a horrible trait of the Adidas tread: instead of being composed of externally-protruding bumps like the Timberland, it is composed of ...
42 views
### Is a track in a cloud chamber made by one particle or many particles?
Is the track in a cloud chamber made by one particle interacting with many particles each emiting a photon or is the track made by many particles interacting on the same trajectory? or is it a cascade ...
60 views
### Source of beating phenomena of a Michelson interferometer?
I was discussing the reason why we see beating from a Michelson interferometer, and one of my friend said it 's because the light have different frequencies, therefore, they would be out of phase. ...
40 views
### Visible Lines for Hydrogen
Say H atoms are excited to 4th level, n=4 that is, how many lines do we see? How to decide the number of the lines?
96 views
### Equilibrium - uniform circular motion
Maybe this is a bit of a silly question, but let us pretend we have a pendulum in a ideal universe with no friction, drag, or anomalous forces there to affect it. Additionally, our pendulum is ...
43 views
### Yearly onshore wind turbine energy production [closed]
I am trying to work out the yearly energy production of an onshore Siemens SWT-3.0-101, hub height 94m, total height 144,5. Here is the power curve at each wind speed level worked out through a ...
467 views
### How to find the phase constant? [closed]
I was given this velocity-vs-time graph of a particle in simple harmonic motion: I determined the amplitude to be $A = 1.15$ m, which Mastering Physics confirmed is correct. Then I was asked to ...
76 views
### Atomic Orbitals, Singlets, Triplets
Why are singlet states representative of the S orbitals, and similarly why are triplet states representative of P orbitals? I learned about these states as resulting from having a total spin of a ...
110 views
82 views
### Matter Waves Interference
When an EM wave diffracts, I can imagine that its EM field interacts with the charges in a certain obstacle thus inducing a wave behaviour on the charges of the matter that will interact with the EM ...
149 views
### How can I calculate out the 'specific charge' of an atom? [closed]
I know that it's charge/mass. But what steps do I take to calculate the specific charge of say, carbon-12? What about ions too?
46 views
### What defines the interaction strength of a particle (massless or not) with matter?
Generally, talking about photons, the shorter the wavelength, the higher the interaction with matter. I doubt that I really understand why this happens. What about other massless particles? And ...
69 views
### Modeling sound resonance in an arbitrary cavity
I am trying to solve a challenging problem, and I'm hoping for some advice on how to proceed. I want to model sound waves in a cavity for the purpose of determining resonance. The plan is to answer ...
60 views
### Is it valid to calculate mass of electron using the speed of the wave packet and energy
A gaussian wave packet. It's peak is moving at speed v. We know the energy of the packet is E. Can I deduce the mass of the electron using $m=2E/v^2$
41 views
### Emf of a rod sliding on two conducting rails [closed]
The following is the question. Actually, I do not know how to find the answer. I have tried to find out the solution or hints from the lecture note given to me but it is quite difficult to ...
107 views
### What really happens with Time Dilation? [duplicate]
I know if you move your time moves slower than someone who is stationary, by Lorentz's transformation. However, I don't get how this happens. What does it mean when time moves slower? How does it ...
84 views
### Image formed in a compound light microscope
I am trying to understand whether the image formed in a compound light microscope is at infinity or not. I get conflicting answers everywhere I look.
73 views
### Do electric fields generated by plane charges lose intensity over distance? If not, why?
Sparknotes' studyguide for the SAT II: Physics test says that for a point charge (1-dimensional, e.g. an electron), the formula for intensity of the generated electric field is given by ...
49 views
### Force experienced on two particles in a rotating system?
I've a system of two particles of the same mass who rotate in a circle about the centre of mass of the two particles. Is the force experienced by the particles $F=MV^{2}/r$ or should I use ...
54 views
### A question about electric field
I would like to understand why is it the charge density while dealing with currents is $\mathop{\mathrm{div}}(E)/4\pi$, while when dealing with insulators is $-\mathop{\mathrm{div}}(E)/4\pi$? Thank ...
64 views
### Moment of inertia of a disc with masses attached at the rim
Does the moment of inertia of a disc with some masses attached at the rim be the same as one without the attached masses? Or is it necessary to use parallel axis theorem to incorporate the moment of ...
119 views
### Normal reaction on wheels when car is taking a turn
I recently read in a book that for a car taking a turn on a horizontal surface, the normal reaction of the road on the outer wheels is always greater than the normal reaction on the inner wheels. ...
136 views
### Are all elementary particles of the same type exactly the same?
Are all elementary particles of the same type EXACTLY the same? Is there some variation in what an electron is, for example, or are they all the same?
29 views
### Want to achieve maximum magnetic force & find out the max point [closed]
Hi Sir/ Madam Want to achieve maximum magnetic force & find out the max point for this setting, want to know is the strongest magnetic force must in the 'dot' area ( middle poin of setting ) ...
58 views
### Can matter be excited into energy and then be turned back into matter?
I was wondering the other day about teletransportation (human). And I had the idea that as far as I know, matter is energy. So I was wondering if it's possible to excite matter so it turns into ...
66 views
### Why must the final state be stationary?
I faced the following sentences: We consider a gravitational collapse taking place in this spacetime. The singularity theorems assure us that a singularity will form. The assumption that the ...
172 views
### Conservation of Angular Momentum and linear velocity
I have a problem where a cylinder is moving on a horizontal surface, starting with velocity $v_0$. It is given that its radius is $10\text{cm}$, its mass is $200\text g$ and the coefficient of ...
41 views
### Air insulation cavity depth: rule of thumb to avoid convection
I want to know more about optimising an air gap between two surfaces, such as window panes or a building's walls. I was taught that as a general rule, a 6cm air gap is the optimum. Any larger and ...
119 views
### Time dilation in a gravitational field and the equivalence principle
A clock near the surface of the earth will run slower than one on the top of the mountain. If the equivalence principal tells us that being at rest in a gravitational field is equivalent to being in ...
66 views
### Concentration of photons: How many can you slow down in a given unit of space?
Being able to slow down light to the speed of a bicicle as shown in this vid: http://www.youtube.com/watch?v=EK6HxdUQm5s Led to a question: Given a unit of space e.g. 1cm^3, and then you slowed down ...
67 views
### How Can There Be Magnetic Force Without Velocity [duplicate]
Suppose there is a space with constant magnetic field, and a charged particle is moving in that space with a constant velocity, ofcourse it experiences a magnetic force and gets deflected. But the ...
52 views
### Will the heat flow of Joule heat be different, if the Joule heat is dissipated in a material that has a temperature gradient beforehand?
Let us assume one dimensional heat transfer, for example a finite length wire starting at point $0$ and ending at point $\ell$. If the current passes the wire, the Joule heat $I^{2}R$ will be ...
42 views
### Exciting Surface Plasmons using ATR
I'm very new to the topic of surface plasmons and I have been reading about different methods of exciting them. There is one method in which a prism is set up to allow phase matching of an incident ...
42 views
### Splitting of degenerate states due to perturbation
In band structures we see avoided crossing when we have degenerate eigenstates (caused by perturbation due to potential energy). However along some direction in first Brillouin zone, even though the ...
100 views
### E&M question in example in my book [closed]
So I still not sure how to apply like Right-hand rule (RHR) in this setup in problem like the one in the following so I tried to do RHR in order to get the direction but it didn't work out. This is an ...
128 views
### Missing centrifugal acceleration
I am trying to get correct equations for acceleration of a point in reference frame A, given position, velocity and acceleration in rotating reference frame B. Let $\mathbf{x}_A(t)$, ...
47 views
### Is it possible for a material to have a polarization field greater than the applied field?
In the case of a dielectric (LIH at least, since that is all I've studied), the polarization field is always less than the applied field. In the case of a conductor, the polarization field is equal ...
274 views
### Wave packets and the derivation of Schrodinger's equation
I studied in my class, that a plane progressive wave cannot be used to represent the wave nature of a particle as it is not square integrable. Also, the phase velocity can get above the value of $c$, ...
67 views
### Discord for partially decohered bell state
To illustrate discord and its use, Zurek in his paper on discord (NB pdf) gives example of a partially decohered bell state i.e. \rho_{AB}=\frac{1}{2}(|00\rangle\langle 00|+|11\rangle\langle 11|) + ...
121 views
### Probability in Quantum Mechanics: General
How do I find the most probable value of position of a (non-Gaussian) wave function? Is it the same value as the expectation value of the position? And is it true that the most probable value of ...
102 views
### Write a parabolic equation in kinematics
How might I go about writing a parabolic equation in standard form $ax^2 + bx + c$ given all of the following measurements: $X_0, X_f, Y_o, Y_f$: the initial and final x and y positions. ...
81 views
### Can the phrase “Terminal Velocity” be used to describe non-gravity situations?
According to Wikipedia: [Terminal Velocity] is the velocity of the object when the sum of the drag force (Fd) and buoyancy equals the downward force of gravity (FG) acting on the object. Since ...
82 views
### Partition functions in $\phi^{4}$ theory
The partition function in a $\phi^{4}$ theory is written Z[J]=\int D\phi \, e^{-\int d^{4}x \left(\frac{1}{2}\left[(\nabla ...
In my introductory physics class, $V$ is the symbol for electric potential (joules per coulomb) and $U$ is the symbol for electric potential energy (joules). Since the Schrodinger equation is the ...
|
Using binomial theorem, write down the expansions of the following | Filo
Class 11
Math
JEE Main Questions
Binomial Theorem
545
150
Using binomial theorem, write down the expansions of the following:
545
150
Connecting you to a tutor in 60 seconds.
|
# Zone Types
If a deformable block analysis is required, models with different densities of block zoning should be evaluated once the block cutting and boundary location have been established. The block zone generate command or block zone generate-new command is used to specify the zoning density and zone type.
The highest density of zoning should be in regions of high stress or strain gradients (e.g., in the vicinity of excavations). It is not advisable to have large jumps in zone size between adjacent blocks. For reasonable accuracy, the ratio between zone volumes in adjacent polyhedra should not exceed roughly 4:1.
Tetrahedra
The block zone generate edgelength command or the block zone generate-new command will automatically create tetrahedral zones within an arbitrarily shaped convex polyhedron. If block-cutting results in blocks that are long and thin, it is recommended that these blocks be further cut and joined before generating zones. By doing this, zones with an aspect ratio closer to unity can be generated. i Block Metrics describes methods that can be used to evaluate the quality of blocks before zoning.
In general, plasticity results from tetrahedral zones are not as accurate as those from hexahedral or higher order zones (see below). 3DEC assumes that the important mechanisms that lead to large displacements are related to movement on joints and faults. If very accurate plasticity calculations are required, it is recommended that hexahedra be used. If this is not possible, due to the geometry of the problem, then it is highly recommended that nodal mixed discretization is turned on using the block zone nodal-mixed-discretization. The nodal mixed discretization scheme produces more accurate plasticity results in most cases. Details on this formulation can be found in i Deformable Block Motion.
A single 2×2×2 cubic block zoned with tetrahedra using the following command is shown in Figure 1.
block zone generate edgelength 0.5
Figure 1: Example block meshed with tetrahedral zones.
Hexahedra
The block zone generate hexahedra command only generates zones within six-sided polyhedra. This command creates mixed-discretization (m-d) zones (two overlays of five tetrahedral zones) that provide better accuracy for problems involving failure and collapse of the intact blocks. (See i Deformable Block Motion in the Theory and Background section for details.) Hexahedral zones will provide more accurate results in problems involving plastic failure of the intact material. However, because these can only be used in 6-sided blocks, their applicability in “real-world” problems is limited.
Tetrahedral zones can provide reasonable accuracy for certain failure modes (e.g., confined compression loading). However, this type of zoning does not produce an accurate prediction for collapse loads in bearing capacity problems. Comparisons of results using block zone generate hexahedra versus block zone generate edgelength for plasticity analysis are given in the following verification problems: i Rough Square Footing on a Cohesive Frictionless Material and i Cylindrical Hole in an Infinite Mohr-Coulomb Medium.
Note that hexahedral zones cannot be used with joint fluid flow.
Be aware that 3DEC actually generates ten overlapping tetrahedra for each hexahedral zone. The underlying calculations are performed by mixing the tetrahedra as described in i Deformable Block Motion, but when plotting zones, you will see the tetrahedra, each with its own stress and strain. Similarly, when you give the command block zone list, information about the tetrahedral zones will be listed, rather than the hexahedra.
The same block as above is zoned with the following command to create a single hexahedral zone. The resulting block plot is shown in Figure 2.
block zone generate hexahedra divisions 1 1 1
Figure 2: Example block with a single hexahedral zone (plotted as overlapping tets).
Higher-Order Tetrahedra
The block zone generate higher-order-tetra command produces high-order tetrahedral zones that can be used in blocks that cannot be zoned with m-d (hex) zoning. These zones have additional gridpoint nodes and are more accurate than the standard tetrahedral zoning. An extra gridpoint is added to the center of each zone edge. The command may be used only after the blocks have been zoned using the normal block zone generate commands. The model configure hotetra command must be given before any blocks are created.
The plasticity solution for the high-order zones is more accurate than for the normal constant-strain tetrahedral zones. However, there are some drawbacks. The high-order elements are incompatible with the free-field boundary and with joint fluid flow. In addition, the high-order zones are stored and plotted as an assembly of eight tetrahedra. The following commands will create a single tetrahedral block with a single higher-order tetrahedral zone in it. Figure 3 shows how the zone is actually plotted as eight smaller tets.
model config hotetra
block create tet 0,0,0 1,0,0 0.7,0.7,0 0.5,0.35,0.7
block zone generate edgelength 1
block zone generate high-order-tetra
Figure 3: An example block with a single higher-order tetrahedral zone (plotted as eight sub-tets).
By default, a separate stress value is plotted in each sub-tet. However, stresses in the high-order zones are projected to the gridpoints to give accurate stress results (see block gridpoint list stress). The projected stresses can also be plotted by choosing Stress from the contour-by menu of the block plot item.
It is tempting to use the high order tetrahedra (rather than regular tetrahedra) so that a more accurate plasticity solution can be obtained in deformable blocks. However, the accuracy of the contact force calculations is actually diminished due to the presence of the midside nodes. The error increases as the stiffness of the joints increase (relative to the zone stiffness).
Figure 4 shows a simple block model for testing the joint behavior. This model results in a joint made up of a single triangular zone face, with sixgridpoints. The zones are assigned a Young’s modulus of 50 GPa. The model is then loaded to produce a stress of 1 MPa in the zones and on the joint.
For a joint stiffness of 1 GPa/m, the results are good and the joint stress is close to 1 MPa (Figure 5). When the joint stiffness is increased, it is clear that the midside nodes exhibit forces that are too high, while the corner nodes shows forces that are too low (Figure 6). The data file for this test is shown below. For a model with stiff joints, using regular tetrahedra with block zone nodal-mixed-discretization on is recommended.
Figure 4: Model used to test accuracy of joint force calculations with higher-order tetrahedral zones.
Figure 5: Normal stress on the joint with normal stiffness = 1 GPa/m.
Figure 6: Normal stress on the joint with normal stiffness = 100 GPa/m.
program log on
program log-file 'hotet-jkn-1.log'
;
model new
;
model configure highorder
;
block tolerance 0.001
;
block create brick -1 1
block cut joint-set
block cut joint-set dip 90 dip-direction 45 origin 0 0 0
block delete range position-x 0 1
;
block zone generate edgelength 5
block zone generate high-order-tetra
;
block zone cmodel assign elastic
block zone property density 0.0025 young 50000 poiss 0.25
block contact property stiffness-normal 1000 stiffness-shear 100000 friction 40
block insitu stress 0 0 0 0 0 0
block contact property stiffness-normal 1000 stiffness-shear 100000 friction 40
block gridpoint apply velocity-z 0 range position-z -1
block gridpoint apply velocity-x 0 range position-x -1
block gridpoint apply velocity-y 0 range position-y -1
;
block gridpoint apply velocity-z -1e-3 range position-z 1
;
block history displacement-x position 1 1 1
block history displacement-y position 1 1 1
block history displacement-z position 1 1 1
model save 'hotetra-joints1'
;
; zdisp (top) = 1.04e-3, for szz=1
model mechanical timestep fix 2e-5
model large-strain off
model cycle 52000
;
block gridpoint apply velocity-z 0.0 range position-z 1
model cycle 500
block contact list force
;
model save 'hotet-jkn1'
program log off
;
;
;
;
|
Yolo4mecuite 3 years ago Solve the system of inequalities by using the substitution method. Check each solution and show work.
$y \le \times - 2 2. Yolo4mecuite and then : x + 3y \ge 6$
|
### If A = [731-2-41591], find (AT)T.
Exercise 2.2 | Q 6 | Page 46
#### QUESTION
If A = $\left[\begin{array}{ccc}7& 3& 1\\ -2& -4& 1\\ 5& 9& 1\end{array}\right]$, find (AT)T.
#### SOLUTION
A = $\left[\begin{array}{ccc}7& 3& 1\\ -2& -4& 1\\ 5& 9& 1\end{array}\right]$
∴ AT = $\left[\begin{array}{ccc}7& -2& 5\\ 3& -4& 9\\ 1& 1& 1\end{array}\right]$
∴ (AT)T = $\left[\begin{array}{ccc}7& 3& 1\\ -2& -4& 1\\ 5& 9& 1\end{array}\right]$ = A.
Concept: Algebra of Matrices
Balbharati Mathematics and Statistics 1 (Commerce) 12th Standard HSC Maharashtra State Board
Chapter 2 Matrices
Exercise 2.2 | Q 6 | Page 46
|
### Cache-Timing Attacks on RSA Key Generation
Alejandro Cabrera Aldaya, Cesar Pereida García, Luis Manuel Alvarez Tapia, and Billy Bob Brumley
##### Abstract
During the last decade, constant-time cryptographic software has quickly transitioned from an academic construct to a concrete security requirement for real-world libraries. Most of OpenSSL's constant-time code paths are driven by cryptosystem implementations enabling a dedicated flag at runtime. This process is perilous, with several examples emerging in the past few years of the flag either not being set or software defects directly mishandling the flag. In this work, we propose a methodology to analyze security-critical software for side-channel insecure code path traversal. Applying our methodology to OpenSSL, we identify three new code paths during RSA key generation that potentially leak critical algorithm state. Exploiting one of these leaks, we design, implement, and mount a single trace cache-timing attack on the GCD computation step. We overcome several hurdles in the process, including but not limited to: (1) granularity issues due to word-size operands to the GCD function; (2) bulk processing of desynchronized trace data; (3) non-trivial error rate during information extraction; and (4) limited high-confidence information on the modulus factors. Formulating lattice problem instances after obtaining and processing this limited information, our attack achieves roughly a 27% success rate for key recovery using the empirical data from 10K trials.
Available format(s)
Publication info
DOI
10.13154/tches.v2019.i4.213-242
Keywords
applied cryptographypublic key cryptographyRSAside-channel analysistiming attackscache-timing attacksOpenSSLCVE-2018-0737
Contact author(s)
aldaya @ gmail com
History
2020-03-20: last of 2 revisions
See all versions
Short URL
https://ia.cr/2018/367
CC BY
BibTeX
@misc{cryptoeprint:2018/367,
author = {Alejandro Cabrera Aldaya and Cesar Pereida García and Luis Manuel Alvarez Tapia and Billy Bob Brumley},
title = {Cache-Timing Attacks on RSA Key Generation},
howpublished = {Cryptology ePrint Archive, Paper 2018/367},
year = {2018},
doi = {10.13154/tches.v2019.i4.213-242},
note = {\url{https://eprint.iacr.org/2018/367}},
url = {https://eprint.iacr.org/2018/367}
}
Note: In order to protect the privacy of readers, eprint.iacr.org does not use cookies or embedded third party content.
|
## Algebra 2 Common Core
The first step is to factor out the x, so that the polynomial is now $x(x^2+7x+10)$. You then factor the $(x^2+7x+10)$ into (x+2)(x+5). Note 2x5=10 and 2+5=7. Thus the final answer x(x+5)(x+2).
|
• ### Recent IR-Improved results for LHC/FCC physics(1801.03303)
Jan. 10, 2018 hep-ph
We present recent developments in the theory and application of IR-improved QED$\otimes$QCD resummation methods, realized by MC event generator methods, for LHC and FCC physics scenarios.
• We report on the measurement of the all-particle cosmic ray energy spectrum with the High Altitude Water Cherenkov (HAWC) Observatory in the energy range 10 to 500 TeV. HAWC is a ground based air-shower array deployed on the slopes of Volcan Sierra Negra in the state of Puebla, Mexico, and is sensitive to gamma rays and cosmic rays at TeV energies. The data used in this work were taken from 234 days between June 2016 to February 2017. The primary cosmic-ray energy is determined with a maximum likelihood approach using the particle density as a function of distance to the shower core. Introducing quality cuts to isolate events with shower cores landing on the array, the reconstructed energy distribution is unfolded iteratively. The measured all-particle spectrum is consistent with a broken power law with an index of $-2.49\pm0.01$ prior to a break at $(45.7\pm0.1$) TeV, followed by an index of $-2.71\pm0.01$. The spectrum also respresents a single measurement that spans the energy range between direct detection and ground based experiments. As a verification of the detector response, the energy scale and angular resolution are validated by observation of the cosmic ray Moon shadow's dependence on energy.
• The first observation of the rare decay$B^0_s \to \phi\pi^+\pi^-$ and evidence for $B^0 \to \phi\pi^+\pi^-$ are reported, using $pp$ collision data recorded by the LHCb detector at centre-of-mass energies $\sqrt{s} = 7$ and 8~TeV, corresponding to an integrated luminosity of $3{\mbox{\,fb}^{-1}}$. The branching fractions in the $\pi^+\pi^-$ invariant mass range $400<m(\pi^+\pi^-)<1600{\mathrm{\,Me\kern -0.1em V\!/}c^2}$ are $[3.48\pm 0.23\pm 0.17\pm 0.35]\times 10^{-6}$ and $[1.82\pm 0.25\pm 0.41\pm 0.14]\times 10^{-7}$ for $B^0_s \to \phi\pi^+\pi^-$ and $B^0 \to \phi\pi^+\pi^-$ respectively, where the uncertainties are statistical, systematic and from the normalisation mode $B^0_s \to \phi\phi$. A combined analysis of the $\pi^+\pi^-$ mass spectrum and the decay angles of the final-state particles identifies the exclusive decays $B^0_s \to \phi f_0(980)$, $B_s^0 \to \phi f_2(1270)$, and $B^0_s \to \phi\rho^0$ with branching fractions of $[1.12\pm 0.16^{+0.09}_{-0.08}\pm 0.11]\times 10^{-6}$, $[0.61\pm 0.13^{+0.12}_{-0.05}\pm 0.06]\times 10^{-6}$ and $[2.7\pm 0.7\pm 0.2\pm 0.2]\times 10^{-7}$, respectively.
• The first observation of the rare decay$B^0_s \to \phi\pi^+\pi^-$ and evidence for $B^0 \to \phi\pi^+\pi^-$ are reported, using $pp$ collision data recorded by the LHCb detector at centre-of-mass energies $\sqrt{s} = 7$ and 8~TeV, corresponding to an integrated luminosity of $3{\mbox{\,fb}^{-1}}$. The branching fractions in the $\pi^+\pi^-$ invariant mass range $400<m(\pi^+\pi^-)<1600{\mathrm{\,Me\kern -0.1em V\!/}c^2}$ are $[3.48\pm 0.23\pm 0.17\pm 0.35]\times 10^{-6}$ and $[1.82\pm 0.25\pm 0.41\pm 0.14]\times 10^{-7}$ for $B^0_s \to \phi\pi^+\pi^-$ and $B^0 \to \phi\pi^+\pi^-$ respectively, where the uncertainties are statistical, systematic and from the normalisation mode $B^0_s \to \phi\phi$. A combined analysis of the $\pi^+\pi^-$ mass spectrum and the decay angles of the final-state particles identifies the exclusive decays $B^0_s \to \phi f_0(980)$, $B_s^0 \to \phi f_2(1270)$, and $B^0_s \to \phi\rho^0$ with branching fractions of $[1.12\pm 0.16^{+0.09}_{-0.08}\pm 0.11]\times 10^{-6}$, $[0.61\pm 0.13^{+0.12}_{-0.05}\pm 0.06]\times 10^{-6}$ and $[2.7\pm 0.7\pm 0.2\pm 0.2]\times 10^{-7}$, respectively.
• The first observation of the rare decay$B^0_s \to \phi\pi^+\pi^-$ and evidence for $B^0 \to \phi\pi^+\pi^-$ are reported, using $pp$ collision data recorded by the LHCb detector at centre-of-mass energies $\sqrt{s} = 7$ and 8~TeV, corresponding to an integrated luminosity of $3{\mbox{\,fb}^{-1}}$. The branching fractions in the $\pi^+\pi^-$ invariant mass range $400<m(\pi^+\pi^-)<1600{\mathrm{\,Me\kern -0.1em V\!/}c^2}$ are $[3.48\pm 0.23\pm 0.17\pm 0.35]\times 10^{-6}$ and $[1.82\pm 0.25\pm 0.41\pm 0.14]\times 10^{-7}$ for $B^0_s \to \phi\pi^+\pi^-$ and $B^0 \to \phi\pi^+\pi^-$ respectively, where the uncertainties are statistical, systematic and from the normalisation mode $B^0_s \to \phi\phi$. A combined analysis of the $\pi^+\pi^-$ mass spectrum and the decay angles of the final-state particles identifies the exclusive decays $B^0_s \to \phi f_0(980)$, $B_s^0 \to \phi f_2(1270)$, and $B^0_s \to \phi\rho^0$ with branching fractions of $[1.12\pm 0.16^{+0.09}_{-0.08}\pm 0.11]\times 10^{-6}$, $[0.61\pm 0.13^{+0.12}_{-0.05}\pm 0.06]\times 10^{-6}$ and $[2.7\pm 0.7\pm 0.2\pm 0.2]\times 10^{-7}$, respectively.
• ### ${\cal{KK}}\text{MC-hh}$: Resummed Exact ${\cal O}(\alpha^2L)$ EW Corrections in a Hadronic MC Event Generator(1608.01260)
Sept. 27, 2016 hep-ph
We present an improvement of the MC event generator Herwiri2, where we recall the latter MC was a prototype for the inclusion of CEEX resummed EW corrections in hadron-hadron scattering at high cms energies. In this improvement the new exact ${\cal O}(\alpha^2L)$ resummed EW generator ${\cal{KK}}$ MC 4.22, featuring as it does the CEEX realization of resummation in the EW sector, is put in union with the Herwig parton shower environment. The {\rm LHE} format of the attendant output event file means that all other conventional parton shower environments are available to the would-be user of the resulting new MC. For this reason (and others -- see the text) we henceforth refer to the new improvement of the Herwiri2 MC as ${\cal{KK}}\text{MC-hh}$. Since this new MC features exact ${\cal O}(\alpha)$ pure weak corrections from the DIZET EW library and features the CEEX and the EEX YFS-style resummation of large multiple photon effects, it provides already the concrete path to 0.05\% precision on such effects if we focus on the EW effects themselves. We therefore show predictions for observable distributions and comparisons with other approaches in the literature. This MC represents an important step in the realization of the exact amplitude-based $QED\otimes QCD$ resummation paradigm. Independently of this latter observation, the MC rigorously quantifies important EW effects in the current LHC experiments.
• ### Spectral characteristics of Mrk 501 during the 2012 and 2014 flaring states(1509.04458)
Sept. 15, 2015 astro-ph.HE
Observations at Very High Energies (VHE, E > 100 GeV) of the BL Lac object Mrk 501 taken with the High Energy Stereoscopic System (H.E.S.S.) in four distinct periods between 2004 and 2014 are presented, with focus on the 2012 and 2014 flaring states. The source is detected with high significance above $\sim$ 2 TeV in $\sim$ 13.1 h livetime. The observations comprise low flux states and strong flaring events, which in 2014 show a flux level comparable to the 1997 historical maximum. Such high flux states enable spectral variability and flux variability studies down to a timescale of four minutes in the 2-20 TeV energy range. During the 2014 flare, the source is clearly detected in each of these bins. The intrinsic spectrum is well described by a power law of index $\Gamma=2.15\pm0.06$ and does not show curvature in this energy range. Flux dependent spectral analyses show a clear harder-when-brighter behaviour. The high flux levels and the high sensitivity of H.E.S.S. allow studies in the unprecedented combination of short timescales and an energy coverage that extends significantly above 20 TeV. The high energies allow us to probe the effect of EBL absorption at low redshifts, jet physics and LIV. The multiwavelength context of these VHE observations is presented as well.
• ### Competition of Commodities for the Status of Money in an Agent-based Model(1412.2124)
Dec. 5, 2014 q-fin.ST
In this model study of the commodity market, we present some evidence of competition of commodities for the status of money in the regime of parameters, where emergence of money is possible. The competition reveals itself as a rivalry of a few (typically two) dominant commodities, which take the status of money in turn.
• ### On multiple gluon exchange in J/psi hadroproduction(1408.5249)
Aug. 22, 2014 hep-ph
We consider a contribution to J/psi hadroproduction in which the meson production is mediated by three-gluon partonic state, with two gluons coming from the target and one gluon from the projectile. This mechanism involves double gluon density and thus it enters at a non-leading twist, but it is enhanced at large energies due to large double gluon density at small x. The three-gluon contribution to J/psi hadroproduction is calculated within perturbative QCD in the k_T factorization framework, and it is found to provide a significant correction to the standard leading twist cross-section at the energies of the Tevatron or the LHC. The results are given as differential p_T-dependent cross-sections for J/psi polarization components.
• The discovery of rapidly variable Very High Energy (VHE; E > 100 GeV) gamma-ray emission from 4C +21.35 (PKS 1222+216) by MAGIC on 2010 June 17, triggered by the high activity detected by the Fermi Large Area Telescope (LAT) in high energy (HE; E > 100 MeV) gamma-rays, poses intriguing questions on the location of the gamma-ray emitting region in this flat spectrum radio quasar. We present multifrequency data of 4C +21.35 collected from centimeter to VHE during 2010 to investigate the properties of this source and discuss a possible emission model. The first hint of detection at VHE was observed by MAGIC on 2010 May 3, soon after a gamma-ray flare detected by Fermi-LAT that peaked on April 29. The same emission mechanism may therefore be responsible for both the HE and VHE emission during the 2010 flaring episodes. Two optical peaks were detected on 2010 April 20 and June 30, close in time but not simultaneous with the two gamma-ray peaks, while no clear connection was observed between the X-ray an gamma-ray emission. An increasing flux density was observed in radio and mm bands from the beginning of 2009, in accordance with the increasing gamma-ray activity observed by Fermi-LAT, and peaking on 2011 January 27 in the mm regime (230 GHz). We model the spectral energy distributions (SEDs) of 4C +21.35 for the two periods of the VHE detection and a quiescent state, using a one-zone model with the emission coming from a very compact region outside the broad line region. The three SEDs can be fit with a combination of synchrotron self-Compton and external Compton emission of seed photons from a dust torus, changing only the electron distribution parameters between the epochs. The fit of the optical/UV part of the spectrum for 2010 April 29 seems to favor an inner disk radius of <6 gravitational radii, as one would expect from a prograde-rotating Kerr black hole.
• ### Fast Beam Condition Monitor for CMS: performance and upgrade(1405.1926)
May 8, 2014 hep-ex, physics.ins-det
The CMS beam and radiation monitoring subsystem BCM1F (Fast Beam Condition Monitor) consists of 8 individual diamond sensors situated around the beam pipe within the pixel detector volume, for the purpose of fast bunch-by-bunch monitoring of beam background and collision products. In addition, effort is ongoing to use BCM1F as an online luminosity monitor. BCM1F will be running whenever there is beam in LHC, and its data acquisition is independent from the data acquisition of the CMS detector, hence it delivers luminosity even when CMS is not taking data. A report is given on the performance of BCM1F during LHC run I, including results of the van der Meer scan and on-line luminosity monitoring done in 2012. In order to match the requirements due to higher luminosity and 25 ns bunch spacing, several changes to the system must be implemented during the upcoming shutdown, including upgraded electronics and precise gain monitoring. First results from Run II preparation are shown.
• ### Electroweak vector boson production at the LHC as a probe of mechanisms of diffraction(1110.1825)
Sept. 24, 2013 hep-ph
We show that the double diffractive electroweak vector boson production in the $pp$ collisions at the LHC is an ideal probe of QCD based mechanisms of diffraction. Assuming the resolved Pomeron model with flavor symmetric parton distributions, the $W$ production asymmetry in rapidity equals exactly zero. In other approaches, like the soft color interaction model, in which soft gluon exchanges are responsible for diffraction, the asymmetry is non-zero and equal to that in the inclusive $W$ production. In the same way, the ratio of the $W$ to $Z$ boson production is independent of rapidity in the models with resolved Pomeron in contrast to the predictions of the soft color interaction model.
• ### The Formation of Condensation on Cherenkov Telescope Mirrors(1307.6313)
July 24, 2013 astro-ph.IM, astro-ph.HE
The mirrors of imaging atmospheric Cherenkov telescopes are different from those of conventional astronomical telescopes in several ways, not least in that they are exposed to the elements. One of the issues which may arise is condensation forming on the mirrors during observing under certain atmospheric conditions, which has important consequences for the operation of the telescopes. This contribution discusses why telescope mirrors suffer condensation and describes the atmospheric conditions and mirror designs which are likely to be problematic.
• ### Defect states and excitations in a Mott insulator with orbital degrees of freedom: Mott-Hubbard gap versus optical and transport gaps in doped systems(1302.0020)
July 24, 2013 cond-mat.str-el
We address the role played by charged defects in doped Mott insulators with active orbital degrees of freedom. It is observed that defects feature a rather complex and rich physics, which is well captured by a degenerate Hubbard model extended by terms that describe crystal-field splittings and orbital-lattice coupling, as well as by terms generated by defects such as the Coulomb potential terms that act both on doped holes and on electrons within occupied orbitals at undoped sites. We show that the multiplet structure of the excited states generated in such systems by strong electron interactions is well described within the unrestricted Hartree-Fock approximation, once the symmetry breaking caused by the onset of magnetic and orbital order is taken into account. Furthermore, we uncover new spectral features that arise within the Mott-Hubbard gap and in the multiplet spectrum at high energies due to the presence of defect states and strong correlations. These features reflect the action on electrons/holes of the generalized defect potential that affects charge and orbital degrees of freedom, and indirectly also spin ones. The present study elucidates the mechanism behind the Coulomb gap appearing in the band of defect states and investigates the dependence on the electron-electron interactions and the screening by the orbital polarization field. As an illustrative example of our general approach, we present explicit calculations for the model describing three t_2g orbital flavors in the perovskite vanadates doped by divalent Sr or Ca ions, such as in La_(1-x)Sr_xVO_3 and Y_(1-x)Ca_xVO_3 systems. We analyze the orbital densities at vanadium ions in the vicinity of defects, and the excited defect states which determine the optical and transport gaps in doped systems.
• The Cherenkov Telescope Array (CTA) is a planned observatory for very-high energy gamma-ray astronomy. It will consist of several tens of telescopes of different sizes, with a total mirror area of up to 10,000 square meters. Most mirrors of current installations are either polished glass mirrors or diamond-turned aluminium mirrors, both labour intensive technologies. For CTA, several new technologies for a fast and cost-efficient production of light-weight and reliable mirror substrates have been developed and industrial pre-production has started for most of them. In addition, new or improved aluminium-based and dielectric surface coatings have been developed to increase the reflectance over the lifetime of the mirrors compared to those of current Cherenkov telescope instruments.
• ### FlashCam: A fully digital camera for the Cherenkov Telescope Array(1307.3677)
July 13, 2013 physics.ins-det, astro-ph.IM
FlashCam is a Cherenkov camera development project centered around a fully digital trigger and readout scheme with smart, digital signal processing, and a "horizontal" architecture for the electromechanical implementation. The fully digital approach, based on commercial FADCs and FPGAs as key components, provides the option to easily implement different types of triggers as well as digitization and readout scenarios using identical hardware, by simply changing the firmware on the FPGAs. At the same time, a large dynamic range and high resolution of low-amplitude signals in a single readout channel per pixel is achieved using compression of high amplitude signals in the preamplifier and signal processing in the FPGA. The readout of the front-end modules into a camera server is Ethernet-based using standard Ethernet switches. In its current implementation, data transfer and backend processing rates of ~3.8 GBytes/sec have been achieved. Together with the dead-time-free front end event buffering on the FPGAs, this permits the cameras to operate at trigger rates of up to several tens of kHz. In the horizontal architecture of FlashCam, the photon detector plane (PDP), consisting of photon detectors, preamplifiers, high voltage-, control-, and monitoring systems, is a self-contained unit, which is interfaced through analogue signal transmission to the digital readout system. The horizontal integration of FlashCam is expected not only to be more cost efficient, it also allows PDPs with different types of photon detectors to be adapted to the FlashCam readout system. This paper describes the FlashCam concept, its verification process, and its implementation for a 12 m class CTA telescope with PMT-based PDP.
• ### On the eclipsing cataclysmic variable star HBHA 4705-03(1306.4462)
June 19, 2013 astro-ph.SR
We present observations and analysis of a new eclipsing binary HBHA 4705-03. Using decomposition of the light curve into accretion disk and hot spot components, we estimated photometrically the mass ratio of the studied system to be q=0.62 +-0.07. Other fundamental parameters was found with modeling. This approach gave: white dwarf mass M_1 = (0.8 +- 0.2) M_sun, secondary mass M_2=(0.497 +- 0.05) M_sun, orbital radius a=1.418 R_sun, orbital inclination i = (81.58 +- 0.5) deg, accretion disk radius r_d/a = 0.366 +- 0.002, and accretion rate dot{M} = (2.5 +- 2) * 10^{18}[g/s], (3*10^{-8} [M_sun/yr]). Power spectrum analysis revealed ambiguous low-period Quasi Periodic Oscillations centered at the frequencies f_{1}=0.00076 Hz, f_2=0.00048 Hz and f_3=0.00036 Hz. The B-V=0.04 [mag] color corresponds to a dwarf novae during an outburst. The examined light curves suggest that HBHA 4705-03 is a nova-like variable star.
• ### Large-scale radio continuum properties of 19 Virgo cluster galaxies The influence of tidal interactions, ram pressure stripping, and accreting gas envelopes(1304.1279)
April 4, 2013 astro-ph.CO
Deep scaled array VLA 20 and 6cm observations including polarization of 19 Virgo spirals are presented. This sample contains 6 galaxies with a global minimum of 20cm polarized emission at the receding side of the galactic disk and quadrupolar type large-scale magnetic fields. In the new sample no additional case of a ram-pressure stripped spiral galaxy with an asymmetric ridge of polarized radio continuum emission was found. In the absence of a close companion, a truncated HI disk, together with a ridge of polarized radio continuum emission at the outer edge of the HI disk, is a signpost of ram pressure stripping. 6 out of the 19 observed galaxies display asymmetric 6cm polarized emission distributions. Three galaxies belong to tidally interacting pairs, two galaxies host huge accreting HI envelopes, and one galaxy had a recent minor merger. Tidal interactions and accreting gas envelopes can lead to compression and shear motions which enhance the polarized radio continuum emission. In addition, galaxies with low average star formation rate per unit area have a low average degree of polarization. Shear or compression motions can enhance the degree of polarization. The average degree of polarization of tidally interacting galaxies is generally lower than expected for a given rotation velocity and star formation activity. This low average degree of polarization is at least partly due to the absence of polarized emission from the thin disk. Ram pressure stripping can decrease whereas tidal interactions most frequently decreases the average degree of polarization of Virgo spiral galaxies. We found that moderate active ram pressure stripping has no influence on the spectral index, but enhances the global radio continuum emission with respect to the FIR emission, while an accreting gas envelope can but not necessarily enhances the radio continuum emission with respect to the FIR emission.
• ### Radio continuum observations of the Leo Triplet at 2.64 GHz(1303.5335)
March 21, 2013 astro-ph.CO, astro-ph.GA
Aims. The magnetic fields of the member galaxies NGC 3628 and NGC 3627 show morphological peculiarities, suggesting that interactions within the group may be caused by stripping of the magnetic field. This process could supply the intergalactic space with magnetised material, a scenario considered as a possible source of intergalactic magnetic fields (as seen eg. in the Taffy pairs of galaxies). Additionally, the plumes are likely to be the tidal dwarf galaxy candidates. Methods. We performed radio continuum mapping observations at 2.64 GHz using the 100-m Effelsberg radio telescope. We obtained total power and polarised intensity maps of the Triplet. These maps were analysed together with the archive data, and the magnetic field strength (as well as the magnetic energy density) was estimated. Results. Extended emission was not detected either in the total power or the polarised intensity maps. We obtained upper limits of the magnetic field strength and the energy density of the magnetic field in the Triplet. We detected emission from the easternmost clump and determined the strength of its magnetic field. In addition, we measured integrated fluxes of the member galaxies at 2.64 GHz and estimated their total magnetic field strengths. Conclusions. We found that the tidal tail hosts a tidal dwarf galaxy candidate that possesses a detectable magnetic field with a non-zero ordered component. Extended radio continuum emission, if present, is weaker than the reached confusion limit. The total magnetic field strength does not exceed 2.8 {\mu}G and the ordered component is lower than 1.6 {\mu}G.
• Vela X is a region of extended radio emission in the western part of the Vela constellation: one of the nearest pulsar wind nebulae (PWNe), and associated with the energetic Vela pulsar (PSR B0833-45). Extended very-high-energy (VHE) $\gamma$-ray emission (HESS $\mathrm{J0835\mhyphen 455}$) was discovered using the H.E.S.S. experiment in 2004. The VHE $\gamma$-ray emission was found to be coincident with a region of X-ray emission discovered with ${\it ROSAT}$ above 1.5 keV (the so-called \textit{Vela X cocoon}): a filamentary structure extending southwest from the pulsar to the centre of Vela X. A deeper observation of the entire Vela X nebula region, also including larger offsets from the cocoon, has been performed with H.E.S.S. This re-observation was carried out in order to probe the extent of the non-thermal emission from the Vela X region at TeV energies and to investigate its spectral properties. In order to increase the sensitivity to the faint $\gamma$-ray emission from the very extended Vela X region, a multivariate analysis method combining three complementary reconstruction techniques of Cherenkov-shower images is applied for the selection of $\gamma$-ray events. The analysis is performed with the On/Off background method, which estimates the background from separate observations pointing away from Vela X; towards regions free of $\gamma$-ray sources but with comparable observation conditions. The $\gamma$-ray surface brightness over the large Vela X region reveals that the detection of non-thermal VHE $\gamma$-ray emission from the PWN HESS $\mathrm{J0835\mhyphen 455}$ is statistically significant over a region of radius 1.2$^{\circ}$ around the position $\alpha$ = 08$^{\mathrm{h}}$ 35$^{\mathrm{m}}$ 00$^{\mathrm{s}}$, $\delta$ = -45$^{\circ}$ 36$^{\mathrm{\prime}}$ 00$^{\mathrm{\prime}\mathrm{\prime}}$ (J2000).
• ### A dynamical model for the Taffy galaxies UGC 12914/5(1209.6052)
Sept. 26, 2012 astro-ph.CO, astro-ph.GA
The spectacular head-on collision of the two gas-rich galaxies of the Taffy system, UGC 12914/15, gives us a unique opportunity to study the consequences of a direct ISM-ISM collision. To interpret existing multi-wavelength observations, we made dynamical simulations of the Taffy system including a sticky particle component. To compare simulation snapshots to HI and CO observations, we assume that the molecular fraction of the gas depends on the square root of the gas volume density. For the comparison of our simulations with observations of polarized radio continuum emission, we calculated the evolution of the 3D large-scale magnetic field for our simulations. The induction equations including the time-dependent gas-velocity fields from the dynamical model were solved for this purpose. Our simulations reproduce the stellar distribution of the primary galaxy, UGC 12914, the prominent HI and CO gas bridge, the offset between the CO and HI emission in the bridge, the bridge isovelocity vectors parallel to the bridge, the HI double-line profiles in the bridge region, the large line-widths (~200 km/s) in the bridge region, the high field strength of the bridge large-scale regular magnetic field, the projected magnetic field vectors parallel to the bridge and the strong total power radio continuum emission from the bridge. The stellar distribution of the secondary model galaxy is more perturbed than observed. The observed distortion of the HI envelope of the Taffy system is not reproduced by our simulations which use initially symmetric gas disks. The model allows us to define the bridge region in three dimensions. We estimate the total bridge gas mass (HI, warm and cold H2) to be 5 to 6 10^9 M_sun, with a molecular fraction M_H2/M_HI of about unity (abrigded).
• ### Rotation-stimulated structures in the CN and C3 comae of comet 103P/Hartley 2 around the EPOXI encounter(1204.2429)
April 11, 2012 astro-ph.EP
In late 2010 a Jupiter Family comet 103P/Hartley 2 was a subject of an intensive world-wide investigation. On UT October 20.7 the comet approached the Earth within only 0.12 AU, and on UT November 4.6 it was visited by NASA's EPOXI spacecraft. We joined this international effort and organized an observing campaign. The images of the comet were obtained through narrowband filters using the 2-m telescope of the Rozhen National Astronomical Observatory. They were taken during 4 nights around the moment of the EPOXI encounter. Image processing methods and periodicity analysis techniques were used to reveal transient coma structures and investigate their repeatability and kinematics. We observe shells, arc-, jet- and spiral-like patterns, very similar for the CN and C3 comae. The CN features expanded outwards with the sky-plane projected velocities between 0.1 to 0.3 km/s. A corkscrew structure, observed on November 6, evolved with a much higher velocity of 0.66 km/s. Photometry of the inner coma of CN shows variability with a period of 18.32+/-0.30 h (valid for the middle moment of our run, UT 2010 Nov. 5.0835), which we attribute to the nucleus rotation. This result is fully consistent with independent determinations around the same time by other teams. The pattern of repeatability is, however, not perfect, which is understendable given the suggested excitation of the rotation state, and the variability detected in CN correlates well with the cyclic changes in HCN, but only in the active phases. The revealed coma structures, along with the snapshot of the nucleus orientation obtained by EPOXI, let us estimate the spin axis orientation. We obtained RA=122 deg, Dec=+16 deg (epoch J2000.0), neglecting at this point the rotational excitation.
• ### A search for the analogue to Cherenkov radiation by high energy neutrinos at superluminal speeds in ICARUS(1110.3763)
March 8, 2012 hep-ex
The OPERA collaboration has claimed evidence of superluminal {\nu}{_\mu} propagation between CERN and the LNGS. Cohen and Glashow argued that such neutrinos should lose energy by producing photons and e+e- pairs, through Z0 mediated processes analogous to Cherenkov radiation. In terms of the parameter delta=(v^2_nu-v^2_c)/v^2_c, the OPERA result implies delta = 5 x 10^-5. For this value of \delta a very significant deformation of the neutrino energy spectrum and an abundant production of photons and e+e- pairs should be observed at LNGS. We present an analysis based on the 2010 and part of the 2011 data sets from the ICARUS experiment, located at Gran Sasso National Laboratory and using the same neutrino beam from CERN. We find that the rates and deposited energy distributions of neutrino events in ICARUS agree with the expectations for an unperturbed spectrum of the CERN neutrino beam. Our results therefore refute a superluminal interpretation of the OPERA result according to the Cohen and Glashow prediction for a weak current analog to Cherenkov radiation. In particular no superluminal Cherenkov like e+e- pair or gamma emission event has been directly observed inside the fiducial volume of the "bubble chamber like" ICARUS TPC-LAr detector, setting the much stricter limit of delta < 2.5 10^-8 at the 90% confidence level, comparable with the one due to the observations from the SN1987A.
• ### Electromagnetic Calorimeter for HADES(1109.5550)
Nov. 28, 2011 nucl-ex
We propose to build the Electromagnetic calorimeter for the HADES di-lepton spectrometer. It will enable to measure the data on neutral meson production from nucleus-nucleus collisions, which are essential for interpretation of dilepton data, but are unknown in the energy range of planned experiments (2-10 GeV per nucleon). The calorimeter will improve the electron-hadron separation, and will be used for detection of photons from strange resonances in elementary and HI reactions. Detailed description of the detector layout, the support structure, the electronic readout and its performance studied via Monte Carlo simulations and series of dedicated test experiments is presented. The device will cover the total area of about 8 m^2 at polar angles between 12 and 45 degrees with almost full azimuthal coverage. The photon and electron energy resolution achieved in test experiments amounts to 5-6%/sqrt(E[GeV]) which is sufficient for the eta meson reconstruction with S/B ratio of 0.4% in Ni+Ni collisions at 8 AGeV. A purity of the identified leptons after the hadron rejection, resulting from simulations based on the test measurements, is better than 80% at momenta above 500 MeV/c, where time-of-flight cannot be used.
• ### Ram pressure stripping of the multiphase ISM and star formation in the Virgo spiral galaxy NGC 4330(1111.5236)
Nov. 22, 2011 astro-ph.CO
It has been shown that the Virgo spiral galaxy NGC 4330 shows signs of ongoing ram pressure stripping in multiple wavelengths: at the leading edge of the interaction, the Halpha and dust extinction curve sharply out of the disk; on the trailing side, a long Halpha/UV tail has been found which is located upwind of a long HI tail. We complete the multiwavelength study with IRAM 30m HERA CO(2-1) and VLA 6 cm radio continuum observations of NGC 4330. The data are interpreted with the help of a dynamical model including ram pressure and, for the first time, star formation. Our best-fit model reproduces qualitatively the observed projected position, radial velocity of the galaxy, the molecular and atomic gas distribution and velocity field, and the UV distribution in the region where a gas tail is present. However, the observed red UV color on the windward side is currently not reproduced by the model. Based on our model, the galaxy moves to the north and still approaches the cluster center with the closest approach occurring in ~100 Myr. In contrast to other Virgo spiral galaxies affected by ram pressure stripping, NGC 4330 does not show an asymmetric ridge of polarized radio continuum emission. We suggest that this is due to the relatively slow compression of the ISM and the particular projection of NGC 4330. The observed offset between the HI and UV tails is well reproduced by our model. Since collapsing and starforming gas clouds decouple from the ram pressure wind, the UV-emitting young stars have the angular momentum of the gas at the time of their creation. On the other hand, the gas is constantly pushed by ram pressure. The reaction (phase change, star formation) of the multiphase ISM (molecular, atomic, ionized) to ram pressure is discussed in the framework of our dynamical model.
|
## Involve: A Journal of Mathematics
• Involve
• Volume 5, Number 1 (2012), 15-24.
### A generalization of modular forms
#### Abstract
We prove a transformation equation satisfied by a set of holomorphic functions with rational Fourier coefficients of cardinality $2ℵ0$ arising from modular forms. This generalizes the classical transformation property satisfied by modular forms with rational coefficients, which only applies to a set of cardinality $ℵ0$ for a given weight.
#### Article information
Source
Involve, Volume 5, Number 1 (2012), 15-24.
Dates
Revised: 3 July 2011
Accepted: 4 August 2011
First available in Project Euclid: 20 December 2017
https://projecteuclid.org/euclid.involve/1513733445
Digital Object Identifier
doi:10.2140/involve.2012.5.15
Mathematical Reviews number (MathSciNet)
MR2924310
Zentralblatt MATH identifier
1284.11076
#### Citation
Haque, Adam. A generalization of modular forms. Involve 5 (2012), no. 1, 15--24. doi:10.2140/involve.2012.5.15. https://projecteuclid.org/euclid.involve/1513733445
#### References
• T. M. Apostol, Modular functions and Dirichlet series in number theory, 2nd ed., Graduate Texts in Mathematics 41, Springer, New York, 1990.
• T. Jech, Set theory, 2nd ed., Springer, Berlin, 1997.
• K. Ono, The web of modularity: arithmetic of the coefficients of modular forms and $q$-series, CBMS Regional Conference Series in Mathematics 102, American Mathematical Society, Providence, RI, 2004.
• G. Shimura, Elementary Dirichlet series and modular forms, Springer, New York, 2007.
|
Lecture notes! Intro to Quantum Information Science
Someone recently wrote that my blog is “too high on nerd whining content and too low on actual compsci content to be worth checking too regularly.” While that’s surely one of the mildest criticisms I’ve ever received, I hope that today’s post will help to even things out.
In Spring 2017, I taught a new undergraduate course at UT Austin, entitled Introduction to Quantum Information Science. There were about 60 students, mostly CS but also with strong representation from physics, math, and electrical engineering. One student, Ewin Tang, made a previous appearance on this blog. But today belongs to another student, Paulo Alves, who took it upon himself to make detailed notes of all of my lectures. Using Paulo’s notes as a starting point, and after a full year of procrastination and delays, I’m now happy to release the full lecture notes for the course. Among other things, I’ll be using these notes when I teach the course a second time, starting … holy smokes … this Wednesday.
I don’t pretend that these notes break any new ground. Even if we restrict to undergrad courses only (which rules out, e.g., Preskill’s legendary notes), there are already other great quantum information lecture notes available on the web, such as these from Berkeley (based on a course taught by, among others, my former adviser Umesh Vazirani and committee member Birgitta Whaley), and these from John Watrous in Waterloo. There are also dozens of books—including Mermin’s, which we used in this course. The only difference with these notes is that … well, they cover exactly the topics I’d cover, in exactly the order I’d cover them, and with exactly the stupid jokes and stories I’d tell in a given situation. So if you like my lecturing style, you’ll probably like these, and if not, not (but given that you’re here, there’s hopefully some bias toward the former).
The only prerequisite for these notes is some minimal previous exposure to linear algebra and algorithms. If you read them all, you might not be ready yet to do research in quantum information—that’s what a grad course is for—but I feel good that you’ll have an honest understanding of what quantum information is all about and where it currently stands. (In fact, where it already stood by the late 1990s and early 2000s, but with many comments about the theoretical and experimental progress that’s been made since then.)
Also, if you’re one of the people who read Quantum Computing Since Democritus and who was disappointed by the lack of basic quantum algorithms in that book—a function of the book’s origins, as notes of lectures given to graduate students who already knew basic quantum algorithms—then consider these new notes my restitution. If nothing else, no one can complain about a dearth of basic quantum algorithms here.
I welcome comments, bugfixes, etc. Thanks so much, not only to Paulo for transcribing the lectures (and making the figures!), but also to Patrick Rall and Corey Ostrove for TA’ing the course, to Tom Wong and Supartha Podder for giving guest lectures, and of course, to all the students for making the course what it was.
• Lecture 1: Course Intro, Church-Turing Thesis (3 pages)
• Lecture 2: Probability Theory and QM (5 pages)
• Lecture 3: Basic Rules of QM (4 pages)
• Lecture 4: Quantum Gates and Circuits, Zeno Effect, Elitzur-Vaidman Bomb (5 pages)
• Lecture 5: Coin Problem, Inner Products, Multi-Qubit States, Entanglement (5 pages)
• Lecture 6: Mixed States (6 pages)
• Lecture 7: Bloch Sphere, No-Cloning, Wiesner’s Quantum Money (6 pages)
• Lecture 8: More on Quantum Money, BB84 Quantum Key Distribution (5 pages)
• Lecture 9: Superdense Coding (2 pages)
• Lecture 10: Teleportation, Entanglement Swapping, GHZ State, Monogamy (5 pages)
• Lecture 11: Quantifying Entanglement, Mixed State Entanglement (4 pages)
• Lecture 12: Interpretation of QM (Copenhagen, Dynamical Collapse, MWI, Decoherence) (10 pages)
• Lecture 13: Hidden Variables, Bell’s Inequality (5 pages)
• Lecture 14: Nonlocal Games (7 pages)
• Lecture 15: Einstein-Certified Randomness (4 pages)
• Lecture 16: Quantum Computing, Universal Gate Sets (8 pages)
• Lecture 17: Quantum Query Complexity, Deutsch-Jozsa (8 pages)
• Lecture 18: Bernstein-Vazirani, Simon (7 pages)
• Lecture 19: RSA and Shor’s Algorithm (6 pages)
• Lecture 20: Shor, Quantum Fourier Transform (8 pages)
• Lecture 21: Continued Fractions, Shor Wrap-Up (4 pages)
• Lecture 22: Grover (9 pages)
• Lecture 23: BBBV, Applications of Grover (7 pages)
• Lecture 24: Collision and Other Applications of Grover (6 pages)
• Lecture 25: Hamiltonians (10 pages)
• Lecture 26: Adiabatic Algorithm (10 pages)
• Lecture 27: Quantum Error Correction (8 pages)
• Lecture 28: Stabilizer Formalism (9 pages)
• Lecture 29: Experimental Realizations of QC (9 pages)
And by popular request, here are the 2017 problem sets!
I might post solutions at a later date.
Note: If you’re taking the course in 2018 or a later year, these sets should be considered outdated and for study purposes only.
Here’s a 184-page combined file. Thanks so much to Robert Rand, Oscar Cunningham, Petter S, and Noon van der Silk for their help with this.
If it wasn’t explicit: these notes are copyright Scott Aaronson 2018, free for personal or academic use, but not for modification or sale.
I’ve freely moved material between lectures so that it wasn’t arbitrarily cut across lecture boundaries. This is one of the reasons why some lectures are much longer than others.
I apologize that some of the displayed equations are ugly. This is because we never found an elegant way to edit equations in Google Docs.
If you finish these notes and are still hankering for more, try my Quantum Complexity Theory or Great Ideas in Theoretical Computer Science lecture notes, or my Barbados lecture notes. I now have links to all of them on the sidebar on the right.
111 Responses to “Lecture notes! Intro to Quantum Information Science”
1. John G Faughnan Says:
On page 1 of lecture 2 you use a * superscript, but notes don’t say what it means. (PDF pages could use page numbers :-).
2. Scott Says:
John #1: I’m looking at page 1 of lecture 2, and I don’t see the superscript you’re referring to. In general, though, a superscript * means complex conjugate.
3. Rand Says:
Awesome!
Can we get these in a single PDF? Would be nice to have a single document to reference. (I would combine them for myself, but I imagine that I’m not the only one who would like a single PDF.)
Also, I’m disappointed that John Watrous’ amazing lecture notes didn’t get a shout-out. They got me started in quantum computing and I still point people to them if they want to understand the basics of the subject: https://cs.uwaterloo.ca/~watrous/LectureNotes/CPSC519.Winter2006/all.pdf
4. Scott Says:
Rand #3: Indeed, looking again at Watrous’s notes just now reminded me of how awesome they are! I added a shout-out to them in the post.
Collating everything into a single PDF is a great idea (which had occurred to me as well), but I don’t know a super-convenient way to do it. If you do and were going to do it anyway, any chance you could email me the result and I’ll link to it from the post, with thanks?
5. Sniffnoy Says:
Scott: Quick note, the Watrous link is currently broken due to smart quotes.
6. Job Says:
Suppose you are given an implementation of a function f(x)=y with an associated probability distribution p(y) that is periodic.
That is, p(y) = p(y + d), for all possible y. And we want to find d.
First of all, this variant of period-finding is not in NP, right?
Also, would a quantum computer be able to solve this problem given a quantum implementation of f(x)?
7. Petter S Says:
Here is the combined PDF: https://www.dropbox.com/s/d8fpyzkhwtke6ym/scott.pdf?dl=0
If anyone wants to regenerate it (after e.g. updates from Scott), here is the script: https://www.dropbox.com/s/3mmgscw9wggx0lf/scott.py?dl=0
8. Oscar Cunningham Says:
I made a combined version and emailed it to you.
9. jonas Says:
> At a hate site that I’ve decided no longer to mention by name (or even check, effective today),
I think these promises work better if they have some definite deadline. This could be some specific calendar date (like 2022-08-26), or some other event whose time you can objectively decide and won’t feel tempted cheat with it.
For example, I decided that while the twitch.io website has its nice, and has its good side, it is addictive, took too much time of my life and caused me to stay up too late during nights. So I swore that I won’t visit twitch.io until the time I next replace the central components of my primary home computer (motherboard, cpu, RAM) and the new components start to work. This is a deadline that will take a while but not forever, will motivate me to shop for a new computer (which is a necessary but unpleasant task), and even ties in to the subject of the promise (twitch.io is a website for streaming videos live, and with a more powerful computer I’ll be able to stream a video and work on other tasks that are intensive in computer resources at the same time, so using twitch.io will disrupt me less).
10. Andrei Says:
What is a good reference for the “superquantum” theories mentioned in Lecture 14 i.e. theories strictly between Tsirelson’s bound and the no-communication theorem? (I only know of the papers of Bradler et al. that you asked about on Physics.SE some years ago.)
11. Scott Says:
Sniffnoy #5: Thanks, fixed! The perils of trying to update HTML on an iPhone…
12. Scott Says:
Petter S #7 and Oscar #8: Thanks so much! (I see that Robert Rand and Noon van der Silk have also made combined files, so now I’ll need to choose which one to link to… 😀 )
13. Scott Says:
Incidentally, Petter S: Thanks so much for your Python script that automatically regenerates a combined file! I would like the ability to run that script myself, since there have already been edits and I foresee more.
So, I set up a Python installation on my computer. But the script didn’t work, because I was missing the PyPDF2 library. So then I downloaded the PyPDF2 library. But I have no idea how to make the Python installation realize that the PyPDF2 library is there. When I run its “setup.py” file, it just says
RESTART: C:\pypdf\setup.py
(This, in a nutshell, is why I became a theoretical computer scientist—because around the age of 19, I somehow completely lost whatever ability I’d ever had to deal with stuff like this. 🙂 )
14. Scott Says:
jonas #9: Yes, you’re right. Let’s give it a 3-year deadline, to be optionally extended (same as I did with Lubos Motl, on the opposite extreme of the ideological spectrum, when his attacks on me started interfering with my life).
15. Scott Says:
Andrei #10: The terms to Google are “nonlocal boxes” or “Popescu-Rohrlich (PR) boxes.” See for example this 2005 paper, which showed that PR boxes would render communication complexity trivial. If you work backwards and forwards from that paper (e.g. using Google Scholar), you can probably find most of the relevant literature.
Thanks for putting up your course notes.
In Lecture 2 you give a brief discussion of why the Schroedinger atomic model resolves the radiation problem of the old quantum theory.
This is a point which caused me confusion and I feel is not well explained in most text books and courses, and often overlooked completely, even though it is very straightforward.
The question of why mathematically the model does not describe electrons spiralling into the nucleus, is actually irrelevant. If it predicted electron distributions which were kinematically stable, but which ought to be very lossy according to classical electromagnetic theory, then there would be a contradiction, just as there is for the Bohr-Sommerfeld model.
The real reason why the Schroedinger model resolves the problem is simply because it predicts electron densities for the time eigenstates states which are constant in time, and probability currents which are d.c., so the dipoles, higher poles and current distributions of these states are predicted by classical EM theory not to radiate.
17. Jake Argent Says:
I have a question about the unitary matrix in page 10, with rows [1 0] and [0 i].
When I multiply that matrix with [1 0] column, it gives me [1 0] column. So shouldn’t it be interpreted that it maps |0> to |0> and |1> to i|1> ?
Or in other words, the given mapping should be realized by the matrix of rows [0 1] and [i 0].
Am I getting this wrong?
18. Scott Says:
Jake #17: That was a mistake. It’s fixed now; thanks!
19. Phylliida Says:
These are amazing!! Thank you so much! I’ve wanted to pick up Quantum Computing for a while but kept running into people assuming I already had a basic idea of how quantum stuff worked (I don’t). I understand algorithms and modern computational complexity research but quantum stuff were always just weird and confusing to me. When I found tutorials, alongside this people kept discussing stuff that weren’t relevant algorithmically, using gross weird notation. I had this problem with Quantum Computing since Democritus.
I finally found the Quantum Computation section in Computational Complexity: A Modern Approach. It was the first bit of content of quantum computation I found that didn’t require knowing anything about quantum mechanics, yet explained Bell’s Inequality, Grover, Simon, and Shor’s algorithms in ways I could understand (most of which didn’t even use complex numbers! Just negative amplitudes). I’ve been going through your lecture notes now and they seem to be equally as approachable, which is nice 🙂
They do seem like they get on a lot of tangents though. I guess that is nice for a comprehensive picture, but I wish there were more guides like in “Computational Complexity: A Modern Approach” that only discussed the concepts needed for quantum algorithms in clean ways that didn’t require any knowledge about quantum computing beforehand.
Anyway, still, thanks for sharing, they are enjoyable reads 🙂
20. Scott Says:
Phylliida #19: Allow me to recommend the time-honored technique of skipping the parts that aren’t relevant to you. 🙂
21. pat Says:
You can try the auto-latex equations extension for Latex in google docs:
22. b Says:
There’s not enough nerd whining in this post. Too much compsci.
23. Daniel Seita Says:
Scott #13
From the path, it looks like you are using Windows. Ubuntu might be easier to deal with for running Python code.
24. Israel Says:
Listened to your Strachey Lecture on Quantum Supremacy.
It was witty, entertaining and informative.
Thank you.
My best wishes to you and your loved ones.
PS: Please ignore the haters. They will be consumed by their own hatred.
Mitzvah gedola lihiyot b’simcha tamid. ( even when it is very, very hard )
25. James Gallagher Says:
These are standard (ie boring) lecture notes, so Ok, just teach the prerequisites. But why try to indoctrinate newbies with all kinds of non science?
Your lecture on “interpretations” fails to even mention the most obvious “interpretation”, that nature is just random and that is that.
The fact that we have unitary evolution and preservation of the 2-norm is just simple observational evidence, after that, once you allow randomness then Gleason’s Theorem gives you the Born Rule as the only sensible mathematical law for understanding observations.
If one is trying to construct the Born rule from their interpretation, then they are already lost, the only proper question left to answer, 100 years after Solvay is, “where does the randomness come from?”
(not, “how do we mimic the Born Rule with a silly deterministic system” (or dumb dynamical collapse model))
26. Michael Says:
Thanks Scott, the notes look great!
Flicking through, it looks as though a diagram on page 122 has slid off the page.
27. Petter S Says:
Scott #13: “pip install PyPDF2” should hopefully make it work. Then you need to do the same for “requests”.
28. Petter S Says:
Scott #13: So you lost the ability to deal with stuff like this? I don’t believe you! You still write papers in Latex, right? 🙂
29. Egg Syntax Says:
Have you considered doing video recordings of the lectures themselves? That would pair beautifully with these lecture notes! Might be too late for the first week, but it seems like the university’s AV Department would be willing to do that for you. Barring that, even phone recordings of the lectures would be a treat!
30. Scott Says:
James Gallagher #25: There’s a reason why “nature is just random and that is that” isn’t called an “interpretation”—because it doesn’t even try to answer the glaring question, of how Nature decides exactly when to suspend unitary evolution and introduce randomness (or no decision ever needs to be made, it being all a matter of perspective—as for example the MWIers say, and ironically enough modern Copenhagenists also say) (or if all the random choices were already made at the beginning of time, as the orthodox Bohmians say).
If you find the notes boring (or if you already know this material), then don’t read them. While I may or may not have done them justice, I certainly think the intellectual developments being covered were some of the most exciting ones of the past half-century, in any part of math, science, engineering, or philosophy, and I was gratified that many of the students thought so too.
31. Scott Says:
Michael #26: Thanks! That was a strange display error that wasn’t present in the Google Doc, and only appeared when creating the PDF. It’s fixed now.
32. Scott Says:
Petter S #27-28: I use Scientific Workplace (a graphical front-end for LaTeX), but I still need to waste days of every year messing around with LaTeX packages and fonts, and I despise every minute of it.
Fortunately, I was pointed to a wonderful program called PDFtk, which lets me concatenate PDFs super-quickly without needing to mess around with Python scripts—so my learning how to do the latter has been (again) deferred to another day! 😀
33. Scott Says:
Egg Syntax #29: When I post video lectures, people complain about my stuttering and mannerisms (oddly enough, my lecture style seems fine in person; it just doesn’t translate to video). When I don’t post video lectures, people complain that they’re not available. So I can’t win either way. I probably won’t do video this year, but maybe in a future year, once the delivery is more fluid?
34. fred Says:
This is awesome! Thank you so much!
35. Ben Bevan Says:
Great stuff Scott. Always enjoy your approach as it’s often quite different from the other sources out there. Really nice to have your take on this material.
36. lewikee Says:
Thank you so much for this, Scott. Remarkably succinct for the amount of (counter-intuitive!) information being presented.
37. fred Says:
In lesson 2, it’s written than column state [1/2 0 0 1/2] isn’t feasible, but right after a transformation (controlled not) is introduced that leads to that state.
38. Joe Says:
Scott this is great! I’m exactly the person you described — I’ve read QCSD but I wanted some more basics too. Any chance you will upload the problem sets you used for this course? I’d love some exercises to make sure I’m following along properly.
39. lewikee Says:
fred #37: Yeah I noticed that. The only way I could reconcile the (apparent?) contradiction was that it said the vector could not be a tensor product. Perhaps the latter, transformed vector is not technically a tensor product?
40. fred Says:
lewikee #39
Oh, I see – if you start flipping the two coins/bits based on some rules (depending on the combined values), you end up with a state that can’t (always) be obtained from combining the separate (independent) vectors for each coin/bit.
41. Scott Says:
Joe #38: OK, homework sets are now posted!
42. Scott Says:
fred #37: Yeah, the resolution is simply that in this case, the CNOT gate maps a product distribution to a distribution that we previously proved can’t be a product distribution, but is instead correlated. I edited the notes to clarify.
43. Jonathan Paulson Says:
Minor typo, I think. Lecture 3 page two has “And we define as the inner product of ket x with ket y”. Should it be “bra x with ket y”?
44. Jonathan J Paulson Says:
Minor typo? Lecture 3 page 2 the line “Remember: the way we change quantum states is by applying linear transformations” has U transforming a column vector into a row vector. Shouldn’t the output still be a column vector?
45. Ryan O'Donnell Says:
I’m teaching a Quantum Computation and Information course at CMU this semester and will have videos 🙂 I’m not an expert in the topic like Scott, but perhaps you may still like it. Check out my YouTube (click on my name, above) starting in about a week…
46. Confused Says:
Hey Scott, I have a general question about quantum information, I am sure it would require a lot to actually answer but hopefully there is some sort of vague answer that suffices.
So, take the stance of “it from qubit” as a genuine claim of ontology, that everything physical in the universe can be described in terms of two-valued quantum systems. My question is basically, how does the reduction of all quantum systems to two-valued systems work?
As an example, if we want to apply “it from qubit” in that way, we would say “to explain the physical process of taking an egg and frying it, we would need to describe a very large quantum state that describes the system in it’s original state, and then apply a series of gates that transform that system into the desired output state”. My question is really just, when real life quantum mechanical systems are so much more complicated than the simple two-valued systems like electron spin or photon polarization, how is the reduction actually done?
Is it just a coding argument, in the same way that computer science, as well as mathematical logic and metamathematics, encodes information? I am familiar with and understand how Turing machines can only deal with 1s and 0s but those 1s and 0s encode sets of natural numbers… is the “it from qubit” ontology also just an encoding of physical systems as qubits? I am still confused about the idea even if the answer is just “yes”.
47. James Gallagher Says:
Scott #30
oh come on, Nature just does the random choices at Planck timescales, so several trillion trillion trillion random choices every second, each one followed by the entire universe updating via a unitary evolution.
There are probably many universes existing like this with different unitary evolution rules, and many not existing for more than a small timescale because they have non-unitary updates which cause infinite or zero states.
And only a few with a sufficiently stable unitary evolution law to allow ~13 billion years of evolution and Solar Systems to form etc
I mean, come on, if you’re gonna teach MWI and Penrose gravitational collapse models, then teach this crap too
48. Scott Says:
James #47: So Nature makes random choices once per Planck time. Then presumably the wavefunction is also collapsing in some local basis once per Planck time (since that’s the only time random choices are needed in QM)? But that would lead to a drastically different world than the one we observe, a world where the double-slit experiment and other basic quantum phenomena (which happen over MUCH longer timescales than a Planck time) wouldn’t work. Or alternatively, if the once-per-Planck-time random choices are simply “saved up” for whenever a wavefunction reduction happens, then what DOES determine when (and in which basis) reduction happens? You’ve then simply pushed the measurement problem somewhere else and made zero progress on it. Either way, this proposal is dead on arrival.
49. Scott Says:
Confused #46: The spins of electrons and the polarizations of photons are literally qubits. But I don’t know anyone who believes that ALL the basic constituents of our best description of reality must literally be qubits. At the least, some of them would presumably be qutrits (3-dimensional quantum systems) or other higher-dimensional things, and more generally, the Hilbert space need not have any “canonical” factorization into qubit-like components (as the Hilbert spaces of boson and fermion occupation numbers already don’t).
For a computer scientist like me, though, the more important question is whether the Hilbert space of the observable universe is finite- or infinite-dimensional. The holographic entropy bound, together with the assumption that the dark energy is a cosmological constant, would imply that it’s finite (roughly 10122) dimensional. And if so, then the state of the universe could always be thought of as a finite collection of qubits—via “coding,” as you put it.
Of course, this doesn’t imply that ideas from quantum information will actually be useful for fundamental physics, but that’s something that an increasing number of high-energy physicists separately believe is true, for example because of the insights that emerged from the study of the black-hole information problem and entanglement entropy in AdS/CFT.
50. Ashley Says:
Scott,
Ah, problem sets! Thank you!!
Also, consider this as putting my hand up for you posting the solution set too.
51. jonas Says:
Re “Confused” #46: Funnily, David Madore just complained in “http://www.madore.org/~david/weblog/d.2018-08-12.2545.html#d.2018-08-12.2545” that after studying some quantum electrodynamics, he still doesn’t understand how the real world (not counting its relativistic part) is encoded into a quantum mechanical system. It is possible that some physicists do understand this, but it’s so complicated that David just haven’t learned enough yet. That more terrible possibility is that even the particle physicists don’t know, but I really hope that’s not the case.
52. Jaikrishnan Janardhanan Says:
Scott,
It would be nice if you could kindly post these notes to AMS Open Notes:
https://www.ams.org/open-math-notes
53. fred Says:
Scott, how many qubits those 60M$(UT supercomputer grant) could have bought you? 54. Scott Says: fred #53: I hadn’t heard about that supercomputer grant. In any case,$60M could fund a pretty nice experimental quantum computing effort, but you wouldn’t want to put me in charge of it. 🙂
55. Scott Says:
Jonathan Paulson #43: That’s actually slightly subtle, since the definition of “inner product” could involve automatically conjugating-transposing one of the two vectors. In any case, edited to clarify.
56. Scott Says:
Jonathan Paulson #44 and others: OK, I fixed this. But unfortunately, doing nice equations in Google Docs turns out to be difficult. Paulo found a hacky way to do it (inserting the equations as images), and he’s no longer available. As a result, many of the equations are already ugly, and if I need to edit them, then they become even uglier.
I’ll tell you what: if anyone is willing to go through all 29 lecture notes (as Google Docs) and beautify the equations, I’m willing to offer $500 for that. 57. Adam H Says: Thank you for this generous primer Scott! I think you are becoming the Richard Feynman, the great explainer, of Computation(as well as making huge contributions to your field just like he did in physics). 58. James Gallagher Says: Scott #48 haha, didn’t we do this argument ~5 years ago? I just hope you’re well and in good condition to tackle the new academic year starting soon, you should be in a great mood with all the positivity towards quantum computing coming from major funding sources in your own government. You ought to be a major player in the coming years in this quantum computing renaissance. And whether it succeeds or not, what it discovers will be scientific progress for humankind, which is usually a very good thing. Just move on from silly incidents like the tip-jar debacle and social media herds swarming on what they perceive as an easy weak target. None of these things matter compared to the importance of the work you do and the career and family life you have. But anyway, you’re wrong in your argument about collapse per ~planck-time, since unless it’s mathematically wrong, it’s not wrong, and since a random collapse per ~planck-time model is virtually mathematically indistinguishable from a deterministic many-worlds model it can’t be wrong. It will be wrong when science proves the EM spectrum to be finer than planck scales, or a several million qubit quantum computer models the microscopic world correctly. 59. Scott Says: James #58: No, once again, one collapse per Planck time would have empirical consequences that are dramatically different from what we observe. It is distinguishable from Many-Worlds, because it would rule out the practical observation of interference effects, but we do see interference effects (which is how we know about QM in the first place). The proposal is flat-out wrong. Having said that, I do indeed vastly prefer someone who’s stubbornly wrong about QM than someone who stubbornly considers me a terrible person. 🙂 So I appreciate your comments. And yes, it’s an exciting time for the field, and yes, with teaching just having started yesterday, and with a couple research papers urgently needing to be written, there are indeed much more exciting things to think about than people being mean on social media. 60. fred Says: Regarding the MWI, if there’s a qubit measurement with Prob(1) = 2/3 and Prob(0) = 1/3. Supposedly the current branch where the measurement is performed would “split” into one branch where the qubit == 1 and one branch where the qubit == 0. But those two branches aren’t exactly created “equal” (because of the difference in probability). So how does each branch remember/record that the probabilities were 2/3 and 1/3? Is this information preserved in the wave function of the entire universe for each branch? Is it preserved in the brain of the experimenter? 😛 61. Pat Says: I’ll fix your equations for you if you like. 62. Semiclassical Says: To provide some context regarding the “1 collapse per Planck time” discussion: In the original GRW theory, the mean localization frequency is f=10^-16 s^-1 and a localization accuracy of 10^-5 cm. Quoting from the Stanford entry on collapse theories (section 5 of https://plato.stanford.edu/entries/qm-collapse/): “It follows that a microscopic system undergoes a localization, on average, every hundred million years, while a macroscopic one undergoes a localization every 10^−7 seconds.” (They don’t clarify what definitions of micro/macro are being used, but this can presumably be found in the original papers.) Note that the microscopic rate of localization (1 per hundred million years) is 10s of orders of magnitude different slower than “1 per Planck time”~ 1 per 10^-51 years. The latter macroscopic rate (1 per 100 nanoseconds) is very fast on human time scales, but is still many orders of magnitude short of “trillion trillion trillions of localizations per second”; for that, you’d need to scale up to a much much larger system. So the localization rates in the GRW theory are vastly, vastly different than “1 per Planck time”. The upshot is that a rapid rate of localization is not needed to get a realistic proposal: the sheer number of particles present in a macroscopic system means that a little localization goes a long way. By contrast, ‘one collapse per Planck time’ involves so much localization as to be entirely unrealistic and not in accord with any realistic proposal of spontaneous localization. It may be a cute slogan, but quantitatively it is simply not credible. 63. fred Says: Semiclassical #62 What’s really interesting is that those theories seem testable experimentally (in the near future). 64. James Gallagher Says: Scott #59 If the interference exists mathematically then that is good enough. If you take a vector of 100 complex numbers (say), then randomly change the phase (say) of one of them and then evolve the vector by multiplying by a 100×100 unitary matrix then you have a new vector of 100 complex numbers. But we don’t know what it is until we look at it, since the (phase) change was random. Now, without looking, we allow another random change and a 2nd evolution via multiplication by the unitary matrix. We then have yet another vector of 100 complex numbers, but we do not know what it is until we look at it. In fact it is in a superposition of all possible two time evolution steps with a phase change allowed on any of the 100 complex numbers (That’s a big superposition) Now, if this goes on for billions of iterations, there is still only one vector of 100 complex numbers, but we have no idea what it is until we look. It is, mathematically in an incredibly huge superposition, with all the possible interference effects possible depending on which branch the evolution took. And yes there are worlds, with tiny probabilities, where in a double slit experiment the photons all end up on one side, same as with deterministic many worlds, but the most probable outcomes are the ones we usually observe, the interference effects are a result of a mathematical calculation which just uses superpositions and schrodinger (unitary) evolution. Note also, that an observation of only a small part of the vector will still leave a large part in (mathematical) superposition, local observations don’t fix the global state of the universe. I’m just suggesting that the superpositions are only a mathematical reality, due to randomness at planck timescales, not actual real branching of many worlds. At least in my (silly?) model we have a speed limit, due to the finite time it takes for a random jump, in deterministic models the universe all happens at once, which is a shame really. 65. Scott Says: James #64: I confess that none of that made the slightest bit of sense to me. But even supposing it’s meaningful, and the failure is on my end—even then, isn’t it simpler just to use standard QM? I feel like the burden would still be on you to show what we gain by adopting the elaborate-sounding picture you describe. 66. James Gallagher Says: Scott #65 Natural origin of a Speed of Light limit? 67. Bhavishya Says: Unrelated: The link to Ryan O’Donnell is broken. The correct link -> http://www.contrib.andrew.cmu.edu/~ryanod/ 68. fred Says: “Say we’re at a quantum airport and there’s a piece of unattended tipping jar which could have a 5$ bill in it, but reaching into the jar could alert the police.
How do we take the 5 dollar bill without getting arrested?”
69. Scott Says:
fred #68: No, doesn’t work. The analogue of the Elitzur-Vaidman bomb experiment would only tell you non-invasively whether the \$5 is there, not take it if it is.
70. fred Says:
Scott #69
I figured this was too good to be true…
Regarding the beginning of lecture 5, with the biased coin.
Can’t we simulate classically the qubit solution, by counting only multiples of (1/eps) radiants?
Wouldn’t that require just log(1/eps) bits (instead of log(1/eps^2) bits)?
71. Scott Says:
fred #70: log(1/ε) and log(1/ε2) are the same thing, up to changing the base of the log.
72. JimV Says:
In real events such as a partially-silvered glass pane which reflects 75% of photons in a beam-splitter experiment and transmits 25%, there are many branches, one for each possible photon trajectory. In 75% of those branches the photon was reflected and in 25% it was transmitted–or that’s the way I see it, anyway.
I’m not sure if this is what James G. has in mind, but it sounds a bit like this: any quantum experiment such as two-slit interference could be simulated by programming the quantum effects according to the rules of QM in a computer, correct? (Using a random or pseudo-random number generator.) What if we postulate that the universe itself is running that simulation? (Granted, no computer in our universe could possible do all the calculations for all the observable particles, but maybe the universe itself can.)
73. James Gallagher Says:
JimV #72
Yes that’s is what I mean, but I’m a little surprised it’s not even considered an obvious possible mechanism, especially since MWI is accepted by people so easily!
When I first thought of this idea about 6 or 7 years ago I excitedly posted my thoughts on Lubos Motl’s blog, but didn’t get much feedback, same as on Scott’s blog about 5 years ago. So although I personally couldn’t see anything wrong I assumed it wasn’t something worth making a fuss about.
In any case, I was much more concerned about deriving 3D space from such a mechanism, since that is the obvious thing we observe that does not exist in a large dimensional Hilbert Space. And I noticed a peculiar result about unitary evolution followed by subtraction of the previous state.
ie if we have the evolution rule U(t+dt) = ( exp(h x Anti_Symmetric_Matrix) – I ) x U(t), where h is a real number and I is the Identity matrix, then there exists an h which gives you a stable period-3 global oscillation which is an attractor for measure 1 of initial states
(this is actually an easy undergraduate (or advanced high school) level result, but not something I had seen before)
The idea was, that because of the constant random collapses we need to discard the entire previous state of the universe at each evolution step, and I bet if had suggested this in the 1960s when they were fixing QED etc by subtracting infinities it would have got some interest.
However, we now know that the renormalisation is justified by a well understood mathematical mechanism due to Wilson, so it sounds a bit silly to suggest we “subtract the entire Universe at each step”
(Also, we get a pretty boring universe unless we introduce a (small) subspace which avoids the period-3 global attractor and allows for particles – eg a 6-dimensional subspace which is just the eigenspace of a certain eigenvalue determined by h)
74. wolfgang Says:
>> What if we postulate that the universe itself is running that simulation?
I you really believe in the m.w.i. then you believe that there are (infinitely ?) many branches where quantum computers simulate your experience …
75. Scott Says:
Pat #61: Seriously? Thank you so much!! Shoot me an email, and I’ll give you the link to the Dropbox folder.
76. Jonathan J Paulson Says:
Typo in Lecture 17, page 6: “If
you want no possibility of error, then it’s hard to see that this is the best you can do.” Should be “not hard to see that this is the best you can do”
77. JimV Says:
RE: “If you really believe in the m.w.i. then you believe that there are (infinitely ?) many branches where quantum computers simulate your experience …” [Wolfgang]
I don’t believe in (wouldn’t bet my life on) either the MWI or the simulation method, I just think they are quite interesting ideas to think about. Why do people on the Internet make assumptions about my unstated beliefs which are always wrong? Is it something about my face?
Anyway, there have been only a finite number of events in our light cone since the Big Bang with a finite number of quantum possibilities each. (He says, happening to think that like motion (see Zeno), matter (see Democritus), and energy (see QM), every physical quantity is not continuous but comes in finite numbers of discrete increments; although he wouldn’t bet his life on that either; almost, though.)
78. Jeremy Says:
Hi Scott, Thanks for the notes!
I read the first four parts, and it was breezy for me, but I found myself confused by the quantum bomb detector section and I had to reread it a couple times. Maybe the only difference was my level of prior knowledge?
Anyways, there were a couple of things that I thought could be improved:
First, emphasize that the 0 and 1 are not the outcomes of the query, but are in fact themselves representing different queries. The idea of treating “no query” as a query is surprising to someone thinking classically! Using “query” in both senses of the word in the same phrase is confusing. When I read it through the first couple times, I just assumed that 1 and 0 were the outcomes of the query.
Second, when you say upgrade to a qbit, and give the equation |b> = alpha|0> + beta|1>, it’s not clear that the |0> and |1> correspond to the 0 an 1 in the classical set above. Since we have been discussing qbits on a more abstract level earlier using the ket notation, it led me astray into thinking that it was not a simple linear combination of the above states. Maybe you could use the ket notation in the classical equation too?
Third, in the exposition of how to perform the rotations to do the quantum bomb measurement, you make the assumption that your starting state is |0>, without explicitly stating it.
Finally (I think this is the only genuine bug that I spotted), you said to repeat the operation pi/2 times, which I think it should have been pi/(2 epsilon) times.
Thank you again for sharing this and I hope you take not offense at my suggestions, they are truly out of appreciation!
79. wolfgang Says:
>> physical quantity is not continuous but comes in finite numbers of discrete increments
I think it is helpful to clarify what we are talking about.
Most people discuss m.w.i. in the context of quantum mechanics in a classical (flat) background. In this case, a Geiger counter measuring the decay of a radioactive source would “branch” into a continuum of possible worlds, parametrized by the classical time parameter.
If you assume “finite numbers of increments” you may refer to something like the Bekenstein bound, i.e. quantum gravity. We don’t really know how to describe quantum gravity and what its m.w.i. would look like. Are worlds “branching” in some “emergent time”?
It is not clear to me how one would have to think about the “light cone since the big bang” in this description.
String theory is a possible description of quantum gravity and it seems to come with a multiverse of possible worlds, perhaps realized in eternal inflation. In this case the world would be much larger than what we assume to be the visible patch since the big bang.
I would not know how to estimate the number of quantum events in this case …
80. fred Says:
from lesson 5 –
“That’s because particles need to be close to become entangled, but once they’re entangled you can separate them to an arbitrary distance and they’ll stay entangled.”
But how does that work in light of the fact that QM is time reversible? (two particles flying from infinity are actually entangled, until they meet and become disentangled?)
81. Scott Says:
fred #80: Yes. If two particles flying in from infinity happened to be entangled with each other and not with anything else—something that I don’t think has ever been observed in a cosmological setting (though maybe there are models of early-universe cosmology that predict the existence of such pairs??)—in that case, depending on the Hamiltonian and so forth, they could become unentangled when they met, as indeed we know must be possible by reversibility.
Of course in practice, the thermodynamically vastly more likely thing is that long before this happens, the entanglement will become unobservable due to interactions between one or both of these particles and their environments.
82. Scott Says:
Jonathan #76: Thanks! Fixed.
83. Scott Says:
Jeremy #78: Thanks!! I fixed the error with π/(2ε), and will circle back to the other things later.
84. fred Says:
in chapter 12
“We live in the classical world, where objects have definite locations, the objects can be measured without disturbing them”
But isn’t this only true if we have some quantization?
If space and energy are continuum, by the action==reaction principle, it’s impossible to measure anything without disturbance.
And since any location is a real number, any measurement can never be at full precision.
On the other hand, if we drop the continuum and assume quantization, we can meet the requirements, but don’t we end up with a world that’s basically some version of The Game of Life?
85. fred Says:
chapter 12
“On their face, all these views seem contradictory to our understanding of physics, which
relies on reductionism : each atom just keeps obeying the same simple equations, regardless of how big or complicated a system the atom might be part of”
But what about something like superconductivity or BEC?
In the sense that individual atoms lose their identity and the behavior of the whole can’t be understood by summing the behavior of the parts.
86. asdf Says:
The notes look cool! Unrelated: is this interesting?
https://arstechnica.com/science/2018/09/engineering-tour-de-force-births-programmable-optical-quantum-computer/
87. Scott Says:
asdf #86: It looks cool! Though clearly there’s quite a bit of catching up for optics to get to where superconducting and trapped-ions are now…
88. Nathan Pellegrin Says:
The notes are great – thanks for posting! #14 is fantastic. It is helpful to me in digesting recent results on teleportation of quantum gates. Any plans for an audiobook version?
89. mjgeddes Says:
Excellent Scott! Based on these lecture notes, I realized I was missing some important concepts in the field of computational complexity; I’m trying to learn and I’ve added them to my wiki-books.
Just wanted to mention that I’ve had another surge of big ideas Scott! 😉
What I think I’ve realized is that mathematics may not work in the way many think! In physics, more complex things are always built-up from less complex building blocks , so far as we know (that’s the meaning of methodological reductionism). But does *mathematics* have to work this way? Can more complex mathematical constructions always be decomposed into simpler ones? Constructivists think yes, but I think no!
Now it’s clear that you *can* put mathematical concepts on a ‘ladder of abstraction’, where you rank things from less abstract to most abstract , just as you can for physics concepts. But in the case of math , my dawning realization is that something strange is happening – reductionism may not work for math!
For instance, algebra seems to be the most concrete domain, analysis is more abstract and mathematical logic is the most abstract. On the basis of this ‘ladder of abstraction’ one might be tempted to apply methodological reductionism and try to pick one of these domains as the ‘foundation of math’ (the most fundamental level). But I think it doesn’t work!
Rather, the top- and bottom-levels of abstraction seem to rely on each other in some sort of peculiar fashion , which refutes methodological reductionism! That is to say, analysis *doesn’t* seem to be fundamental, but paradoxically, mathematical logic *and* algebra *do* both seem to be fundamental! (The most abstract *and* the least abstract math domains seem to depend on each other in a peculiar circular loop). This refutes methodological reductionism!
This is related to Bayesianism and the ‘Less Wrong’ crowd, who strongly favor Bayesian inference as the foundation for rationality. Now I think they’ve based this on methodological reductionism. The ladder of abstraction for computer science seems to be ‘Probability&Stats’ (most concrete) -> ‘Computational Complexity’ (middle-level) and ‘Computational Logic’ (most abstract). So they regard Bayesian inference as fundamental based on this ladder of abstraction, and they think that the more abstract domains of computer science are all ‘reducible’ to Bayes (more complex math built-up from simpler math). But if methodological reductionism fails for math, as I now strongly suspect, this falsifies the Bayesian philosophy!
Some seriously heavy mathematics philosophy to mull over on the weekend 😉
90. fred Says:
In chapter 9.
To confirm, Alice’s qubit in the example is the second one, right?
The gates she supposedly applies to her qubit only touch the second one (although her change of phase seems to be affecting both qubits, but I guess phase is special?).
91. fred Says:
In chapter 14
“The experiments don’t quite get to 85% success probability, given the usual
difficulties that afflict quantum experiments.”
If sharing two entangled qubit and doing two independent measurements on them is that complicated, how can we hope to ever build a QC with vast amounts of qubits?
92. Scott Says:
fred #91:
1) In QC, you at least don’t have the severe difficulty that you have in loophole-free Bell tests, of spreading entanglement across a mile or more.
2) Quantum fault-tolerance shows you don’t need perfect gates, just ~99.9% perfect.
3) But yes, building a universal QC is staggeringly hard. That people can now do much simpler things, that a few years ago they couldn’t do, means that at least there’s clear progress in the right direction!
93. Scott Says:
fred #90: No, Alice’s qubit is supposed to be the first one. Were they switched somewhere?
And no, phase is not really special, it’s just another unitary transformation. But you can apply a phase gate to any qubit of |0…0⟩+|1…1⟩ to get |0…0⟩-|1…1⟩ — think about it; it follows from linearity. That has no effect on the local density matrices of any of the qubits you didn’t act on—and for that matter, no effect on the local density matrix of the qubit you did act on! But it has a global effect that you could see if you measured all the qubits together.
94. Gabriel Says:
Hi Scott, What do you think about the story surrounding Theodore Hill’s “greater male variability” paper?
95. fred Says:
Regarding superdeterminism, ‘t Hooft is definitely correct when he points out
“In quantum physics, there’s a notion of counterfactual measurement. You measure what happens if I put the polarizer this way, and then you ask, what if I had it that way? In my opinion, that is basically illegal. There’s only one thing you can measure.”
but that’s just a fraction of what would be needed to show that a classical world can fake out the results of QM (this seems far-fetched for sure).
I just don’t understand the claim that “QM just works” and that questioning it is always some fool’s errand… what about addressing the elephant in the room: making QM work with gravity?
96. fred Says:
Scott,
I often see you refer to the maximum amount of info that can be stored in a given region of space (related to the study of blackholes, etc).
Similarly, can we ask how much (quantum) randomness can be generated from a given region of space?
I’m not sure the question makes sense since you also often mention that quantum information can never be destroyed – is there something being conserved as we keep generating randomness from a closed region of space?
97. Scott Says:
Gabriel #94: I think we can safely reach the conclusion that it’s a travesty, without needing to pass judgment about the correctness or importance of the paper’s technical content.
Even if everyone agreed that a paper was 100% wrong, trivial, plagiarized, whatever—even then, once it’s published, you can’t just “disappear” it from the record; that’s a flagrant and almost Soviet-level violation of academic norms. You can only retract a paper after a long and careful review process. (I know people who are currently advocating such a process with one of Joy Christian’s papers—which he managed to get into a journal!—claiming to have disproved the Bell inequality. But I personally have no stomach for such a fight, feeling that Joy’s claims have by now been completely unmasked as crackpot to anyone who knows anything, and also that the retraction process would take years … as it should.)
In the case at hand, from what little I know, it seems like there are plausible arguments that NYJM maybe shouldn’t have accepted the paper in the first place—e.g., because the model it proposes is just not that interesting, or the paper isn’t written to a high enough standard. I couldn’t say without reading it more carefully, and also learning more about NYJM’s scope and standards. In any case, I appreciate that people like Tim Gowers have been honestly grappling with the paper’s arguments, refusing to avail themselves of that all-purpose modern refutation, “I can’t even.”
On the other hand, it also seems clear that NYJM took the extraordinary step of “disappearing” the paper not for mundane reasons of technical quality, but because they’d been successfully intimidated by prominent people who simply wanted to place the “Greater Male Variability Hypothesis” beyond the pale of discussion. It seems clear, moreover, that the GMVH is taken extremely seriously by experts in evolutionary biology, going back to Darwin himself—certainly if one restricts the application of the hypothesis to non-human animals, rather than to hot-button debates involving humans. And I read the paper, and found no inflammatory language; the paper just summarizes various positions people have taken on the GMVH and then goes on to study its mathematical model. So I think that anyone concerned about academic freedom is absolutely right to be terrified.
If there’s one bright spot for the author, it’s that his work has now been rocketed from the obscurity it would’ve surely otherwise enjoyed to people all over the world reading and debating it, due to a predictable Streisand effect.
98. fred Says:
Regarding the Magic Square Game in chapter 14, has the quantum solution been implemented?
If so, wouldn’t that be enough to prove quantum supremacy? (because it’s proven that there is no classical approach to get prob 10).
99. fred Says:
I guess the definition of what is quantum supremacy is a bit of a moving goal post…
In a sense, showing that one can use the wave function associated with a qubit to store implicitly extra information, which has to be stored explicitly in a classical computer, is enough to show quantum supremacy. So the example of the flawed coin flip using a single qubit is enough! (but can’t be generalized to complex problems)
“We’ll call a set of quantum gates universal if, by composing gates from, you can
approximate any unitary transformation on any number of qubits to any desired precision. Note that, if is finite, then approximation is all we can hope for, because there are uncountably many unitary transformations, but only a countable infinity of quantum circuits built out of gates.”
Is this a flaw that’s specific to QC? Or is there an analogy in classical computers?
Do we know for sure this isn’t going to make it impractical to realize many quantum algorithms?
100. Scott Says:
fred #98-99: When people use the phrase “quantum supremacy,” they really mean “quantum computational supremacy.” They emphatically do not mean: “demonstrating any quantum phenomenon whatsoever that has no classical explanation.” If that’s all they meant, then “quantum supremacy” would already have been demonstrated by around 1910. But we’re talking specifically about asymptotic computational speedups! Accept no substitutes or imitators.
Also, the “flaw” that you can’t perfectly realize an arbitrary unitary transformation using a discrete set of gates, but can only approximate it to any desired precision, has a precise analogue in classical probabilistic algorithms. Can you program your computer to set a certain variable to “1” with probability exactly 1/e? Not in ≤T steps you can’t, if T is any uniform upper bound on the program’s running time! But it doesn’t matter, because you can get the probability so close to 1/e that the discrepancy makes no difference. And the analogous issue in quantum computation doesn’t matter either, for the same reason. This is not a matter of opinion; it follows from theorems proved by Bernstein and Vazirani in 1993, and by Solovay and Kitaev a few years later. For more, see Nielsen & Chuang or nearly any other QC textbook or lecture notes (including mine 🙂 ).
101. Andrew Says:
Nice! Thanks a million for doing this, between this effort, “Burn Math Class”, “Our Mathematical Universe” and “The Particle at the End of the Universe”, I might actually learn some of this stuff eventually. I will now celebrate with a wild bout of uninformed speculation:
1.) Big Bang happens. At this point, there is no “time”.
2.) First observer appears. Not quite sure “what” this is, exactly, but basically it coincides with the appearance of the Higgs field. At this point or somewhere thereabouts, “time” happens (and so does quantization of fields, at least for observers “inside” time).
3.) So, there are now two halves of physical reality, separated by (this is a dumb name) a “time membrane”. Basically, on one side is everything we observe as espoused by the Standard Model, QFT, and so on. On the other side, there is no time and therefore no quantization of the constituent fields. This “latent” energy distributed through the various fields doesn’t manifest on our side as particles, but still can affect the Universe (i.e. this is dark energy and dark matter).
4.) We can observe the crossing of this “time membrane” in experiments. Weird things happen w.r.t. time because, on the other side, there is no time and therefore things “evolve” instantaneously.
5.) And, as icing on top, the imposition of “time” requires huge amounts of energy, which accounts for the difference between ZPE and the Cosmological constant. Basically, time is the energy that prevents the Universe from evolving in a wavefunction immediately from the Big Bang to a statistical distribution of the various quantum fields in space. It introduces a sort of “viscosity” or back pressure that prevents this unitary evolution. The total energy released by the Big Bang is thus ZPE + Cosmological constant, and the difference between the two is in essence the amount of energy “locked up” by this imposition of time.
There, I got that out. Now I can continue to learn the real stuff. Thanks again!
102. mjgeddes Says:
Scott,
(1) I think that both “Probability&Stats” *and* “Computational Logic” form an ‘abstract duality’. If both fields are equally fundamental to computer science (but at opposite ends of the ladder of abstraction), then the idea is that the field of “Computational Complexity” is a sort of *composite* (middle-out) of the other two fields (the middle between the two poles).
(2) There are ‘ladders of abstractions’ for each of the 3 main areas of Computer Science , (a) “Probability&Stats”, (b) Computational Complexity Theory, ( c ) Computational Logic.
If we ‘zoom-in’ on the field of “Probability&Stats” first (decomposing the field into more fine-grained sub-domains), I propose this ordering:
Probability theory (bottom) — Statistics (middle) — Stochastic processes (top)
In the other direction, ‘zooming-in’ on the field of “Computational Logic” (decomposing the field into sub-domains), I’m proposing:
Formal language theory (top) — Type theory (middle) — Modal logic (bottom)
And for “Computational Complexity Theory” my proposed decomposition and ordering is:
Automata (top) — Complexity classes (middle) — Information&Coding Theory (bottom)
Zooming out again and looking at the 3 main areas of CS a whole, I propose this global ordering:
Probability&Stats (bottom) – Computational Complexity (middle ) -Computational Logic (top)
For these 3 main areas, I decomposed into a total of 9 sub-fields (3 sub-fields for each), and given the orderings I postulated, “Complexity Classes” are dead-center on the ladder of abstraction.
We then view this as an ‘abstract duality’, with “Probability&Stats”/”Computational Logic” being the two poles, such that one could construct ‘Complexity Classes” *both* from the top-down (starting from formal languages) *and* from the bottom-up (starting from probabilities).
Thus, everything converges to computational complexity theory! All roads lead to the ‘Complexity Classes’… 🙂
103. fred Says:
Something just came up about some perplexing thought experiment in QM:
https://www.sciencedaily.com/releases/2018/09/180918114438.htm
104. Andrew Says:
@fred #103 — I propose an experiment, ’cause I’m curious (and maybe this has already been done), that goes something like this:
Set up a double-slit apparatus, like in this paper: http://iopscience.iop.org/article/10.1088/1367-2630/15/3/033018
Fire individual electrons (let’s say 10k) and construct a data set that records, for each electron, a.) number of electron (1st fired, 2nd fired, 3rd fired, and so on), b.) time detected, c.) x position, d.) y position, e.) z offset/position. This can be done from the .mov files included with the above experiment, along with a value for the “z” offset.
Now, move the apparatus 1mm in the x or y direction, and conduct the experiment again. Continue doing this to cover a “square” of 1 cm x 1 cm in the x/y plane, stopping every millimeter to re-do the experiment. After the whole square is “covered”, move the apparatus backwards or forwards 1mm and re-do the square. Repeat this process, moving in the “z” direction 1mm each time, until the experiment has been conducted within a 1 cm cube.
Also set up a “control” where the same experiment is run, the same number of electrons are fired, but the apparatus stays in the same physical position the whole time instead of moving after every 10k electrons are fired.
With that data, then, we can conduct a statistical analysis to determine if location affected the outcome in any way. For instance, where does the 4th electron fired tend to land, or is it completely random? Are all the events actually random, or are there any underlying patterns?
If my concept of continuous QFT fields is correct, then differences in field energies “outside” of the quantized values may somehow “skew” the results — not in a way that affects the overall scattering pattern, but perhaps in some other way. I, for one, would like to know how “random” events are placed within the confines of what’s predicted by the wave function.
105. skishore Says:
In Lecture 4, is the “Root NOT” matrix correct? Rotating twice maps |0> to |1> but |1> to -|0>. I think a true Root NOT needs to have imaginary coefficients.
106. Michael Says:
At various locations in lecture 19, the notes are missing the symbols p and q for the two primes.
107. Medved Says:
Sorry if it has been fixed in separate lectures — at page 43 in combined pdf there seems to be a misprint in the 3th formula — |111> instead of |101>.
108. Sheikh Abdul Raheem Ali Says:
In Lecture 5 (in both the combined and separate pdfs) there’s an error that makes the words around the kets appear in the wrong order. My guess is that LaTeX messed up? Anyway, something to look at.
Thanks for the notes!
109. Sheikh Abdul Raheem Ali Says:
Oh, and typo in Lecture 4, under Elitzur-Vaidman bomb, after making a query with a classical bit, “But then we either learn find nothing”.
110. Quantum computing since Democritus – Computação e Informação Quântica Says:
[…] link, notas de aula do curso “Quantum Information Science“. São 29 aulas, cobrindo muitos tópicos importantes deste […]
111. Shtetl-Optimized » Blog Archive » Quantum Computing Lecture Notes 2.0 Says:
[…] years ago, I posted detailed lecture notes on this blog for my Intro to Quantum Information Science undergrad course at UT Austin. Today, with […]
|
Connect with us
# saviaLAB: SaviaCBD, MelatonCBD, TurmeriCBD and SaviaCBD Balm
Published
on
SaviaLAB became a reality when its co-founder, Matt Castellon was diagnosed with ulcerative colitis. Having tried different treatments only to see them fail, left the co-founder in distress. It was his wife who recommended the likes of CBD, curcumin and other natural ingredients, which seemed to work in his favor.
Together with Ian Wolff, the duo created saviaLAB, which has been prided as helping the lives of millions by simply retorting to the combination of development and production of the finest plants and advanced technology.
To see how saviaLAB’s approach has fulfilled their mission in endorsing wellness achieved by nature’s best, here’s an overview of the essentials currently offered.
## What Has saviaLAB Made Available to Consumers?
Currently, consumers can choose between four different CBD-infused products, all tending to different health-related concerns. The following will break down each product in relation to the intentions behind them along with the chosen ingredients.
### saviaCBD (25 mg) |CALM ($79) The saviaCBD’s aim is to induce a sense of calmness in consumers. Available in 10 or 25mg of CBD per serving, the source of hemp used here (like all other products) entails a full spectrum source. While CBD is the star, it isn’t the only contributor to calmness, as other phytocannabinoids and terpenes have been deemed as making a difference as well. ### melatonCBD (25 mg) | NIGHT ($89)
The melatonCBD was designed to ensure that consumers attain restful sleep. In addition to the use of CBD, 1 gram of melatonin per dose has also been included. Melatonin is naturally secreted in the pineal gland within the body. It is responsible for advising the body of when to wake up and sleep. Some natural foods that carry melatonin include grapes, cherries and teas (i.e. chamomile) among others.
### turmeriCBD (25 mg) | ACTIVE ($89) The turmeriCBD was created with the intentions of ridding one of pain and inflammation. The unique ingredient here is the use of turmeric. This is a smart move considering that it is heavy in curcumins, substances known for halting molecules’ ability to induce different forms of inflammation in the body. ### saviaCBD Premium Hemp Balm (1000mg of Hemp extract) |$89
Compared to the aforementioned three, this respective product is a topical solution that permits consumers to apply the solution directly onto paining areas. In addition to the use of CBD, this Hemp Balm also combines beeswax and a number of essential oils that support one’s skin health.
## saviaLAB Final Thoughts
Overall, saviaLAB’s claims of retorting to science, technological advancements and nature is evident in the products offered. For instance, the ingredients combined with CBD work in conjunction to promote a healthier self – most of which are well known pairings concluded by existing, scientific studies.
Next, their reliance on nanotechnology is helpful as it increases the body’s absorption capacities and bioavailability of the embedded nutrients.
Finally, their level of transparency is not comparable to the average CBD product, as they not only have their lab reports available online, but a QR scan code is attached on each product for immediate results as one pleases. To explore the different facets of saviaLAB visit them at https://www.savialab.shop
Abby Veronika is one of the leading researchers here and resides in Southern California where she is a student at a local college to study Communications. While she is a best-selling women’s romance novel author on Amazon, her real expertise and passion lives within the cannabis, health and nutrition space.
|
# Chapter 9 - Section 9.5 - Absolute Convergence; The Ratio and Root Tests - Exercises - Page 515: 4
Converges
#### Work Step by Step
Consider $a_n=\dfrac{2^{n+1}}{n3^{n-1}}$ Now, $l=\lim\limits_{n \to \infty} |\dfrac{a_{n+1}}{a_{n+1}} |=\lim\limits_{n \to \infty}|\dfrac{\dfrac{2^{n+2}}{(n+1)3^{n}}}{\dfrac{2^{n+1}}{n3^{n-1}}}|$ Thus, we have $l=\lim\limits_{n \to \infty}|\dfrac{2n}{3(n+1)}|=\lim\limits_{n \to \infty}|\dfrac{2n}{3n+3}|=\dfrac{2}{3} \lt 1$ Hence, the series Converges absolutely by the ratio test.
After you claim an answer you’ll have 24 hours to send in a draft. An editor will review the submission and either publish your submission or provide feedback.
|
# How do you solve using gaussian elimination or gauss-jordan elimination, x-2y-z=2, 2x-y+z=4, -x+y-2z=-4?
Feb 16, 2018
$x = 1 , y = - 1 , z = 1$
#### Explanation:
To solve the matrix equation $A x = b$ we start by writing the augmented matrix $\left(A | b\right)$ :
$\left(\begin{matrix}1 & - 2 & - 1 & 2 \\ 2 & - 1 & 1 & 4 \\ - 1 & 1 & - 2 & - 4\end{matrix}\right)$
usually we would put a vertical line to separate the fourth column (the column vector $b$) from the first three (which represent $A$, but couldn't figure out how to do it!
The aim in Gauss elimination is to carry out row operations on the augmented matrix until $A$ is converted to reduced row-echelon form. To do this we first subtract twice the first row from the second (denote this by ${R}_{2} - 2 {R}_{1}$ ) and ${R}_{3} + {R}_{1}$
$\left(\begin{matrix}1 & - 2 & - 1 & 2 \\ 0 & 3 & 3 & 0 \\ 0 & - 1 & - 3 & - 2\end{matrix}\right)$
Divide the second row by 3 $\left({R}_{2} / 3\right)$ :
$\left(\begin{matrix}1 & - 2 & - 1 & 2 \\ 0 & 1 & 1 & 0 \\ 0 & - 1 & - 3 & - 2\end{matrix}\right)$
${R}_{3} + {R}_{2}$
$\left(\begin{matrix}1 & - 2 & - 1 & 2 \\ 0 & 1 & 1 & 0 \\ 0 & 0 & - 2 & - 2\end{matrix}\right)$
$\left({R}_{3} / \left\{- 2\right\}\right)$
$\left(\begin{matrix}1 & - 2 & - 1 & 2 \\ 0 & 1 & 1 & 0 \\ 0 & 0 & 1 & 1\end{matrix}\right)$
This completes the Gauss-elimination process. We can now carry out back-substitution to determine the required solution:
$z = 1$
$y + z = 0 \implies y = - 1$
$x - 2 y - z = 2 \implies x = 2 y + z + 2 = 2 \times \left(- 1\right) + 1 + 2 = 1$
So, the solution is $x = 1 , y = - 1 , z = 1$
To carry out Gauss Jordan elimination, the steps are the same until we reach
$\left(\begin{matrix}1 & - 2 & - 1 & 2 \\ 0 & 1 & 1 & 0 \\ 0 & - 1 & - 3 & - 2\end{matrix}\right)$
${R}_{1} + 2 {R}_{2} , {R}_{3} + {R}_{2}$
$\left(\begin{matrix}1 & 0 & 1 & 2 \\ 0 & 1 & 1 & 0 \\ 0 & 0 & - 2 & - 2\end{matrix}\right)$
$\left({R}_{3} / \left\{- 2\right\}\right)$
$\left(\begin{matrix}1 & 0 & 1 & 2 \\ 0 & 1 & 1 & 0 \\ 0 & 0 & 1 & 1\end{matrix}\right)$
${R}_{1} - {R}_{3} , {R}_{2} - {R}_{3}$
$\left(\begin{matrix}1 & 0 & 0 & 1 \\ 0 & 1 & 0 & - 1 \\ 0 & 0 & 1 & 1\end{matrix}\right)$
from this we can immediately read off $x = 1 , y = - 1 , z = 1$
|
# Limit of a Lebesgue integral
What is the value of:
$$\lim_{n\to\infty}\sqrt{n}\int_0^{1}(1-t^2)^ndt$$
I think I have to use the Theorem of dominated convergence
• What have you done so far? People are much more likely to help if they see your dedication to the question. Oct 16 '13 at 16:51
• Yes, but I wouldn't ask here If I knew how to start... I'm totally lost. Oct 16 '13 at 16:55
• Have you tried computing the integral? At least for the first few $n$ and maybe even for general $n$? Oct 16 '13 at 17:01
• A little partial integration gave me ($A_n$ denoting the integral part) $$A_n = A_{n-1} + \frac{2}{n} \int_0^1 t (1-t^2)^n dt = A_{n-1} + \frac{2}{n(n+1)} A_{n+1}$$ So we have a recurrence relation of $$A_{n+1} = \frac{n(n+1)}{2} (A_n - A_{n-1})$$ Oct 16 '13 at 17:04
You can find the value of the limit by squeezing, since: $$\sqrt{n}\int_{0}^{1}(1-t^2)^n dt\leq \sqrt{n}\int_{0}^{1}e^{-nt^2}dt = \int_{0}^{\sqrt{n}}e^{-t^2}dt<\int_{0}^{+\infty}e^{-t^2}dt=\frac{\sqrt{\pi}}{2},$$ and the differences between the first and the second term, the third and the fourth, are $O\left(\frac{1}{\sqrt{n}}\right).$ Through the $t=\cos\theta$ substitution you can also recognize the Wallis product in the LHS.
• Perhaps more quickly, you could substitute $t' = t/\sqrt{n}$. Oct 16 '13 at 17:19
From my comment: \begin{align*} A_n & = \int_0^1 (1-t^2)^n dt = \int_0^1 (1-t^2)^{n-1} dt - \int_0^1 t^2 (1-t^2)^{n-1} dt \\ & = A_{n-1} - \left( \underbrace{\frac{1}{n} \left[ t^2 (1-t^2)^n \right]_0^1}_{=0} - \frac{2}{n} \int_0^1 t (1-t^2)^n dt \right) \\ & = A_{n-1} + \frac{2}{n} \left( \underbrace{\frac 1 {n+1} \left[ t(1-t^2)^{n+1} \right]_0^1}_{=0} - \frac{1}{n+1} \int_0^1 (1-t^2)^{n+1} \right) \\ & = A_{n-1} - \frac 2 {n(n+1)} A_{n+1} \end{align*} So $$A_{n+1} = \frac{n(n+1)} 2 (A_{n-1} - A_n)$$ With $A_0 = 1, A_1 = \frac{2}{3}$.
|
# Multikink scattering in the $\phi^6$ model revisited
@inproceedings{Adam2022MultikinkSI,
title={Multikink scattering in the \$\phi^6\$ model revisited},
author={Christoph Adam and Patrick Dorey and Alberto Garc{\'i}a Mart{\'i}n-Caro and Miguel Huidobro and K. Oleś and T. Romańczukiewicz and Ya. M. Shnir and Andrzej Wereszczynski},
year={2022}
}
• Published 19 September 2022
• Physics
Antikink-kink (¯KK) collisions in the φ 6 model exhibit resonant scattering although the φ 6 kinks do not support any bound states to which energy could be transferred. In Phys. Rev. Lett. 107 (2011) 091602 it was conjectured that, instead, energy is transferred to a collective bound mode of the full ¯KK configuration. Here we present further strong evidence for this conjecture. Further, we construct a collective coordinate model (CCM) for ¯KK scattering based on this collective bound mode…
## References
SHOWING 1-10 OF 52 REFERENCES
For kink–antikink scattering within the φ4 non-linear field theory in one space and one time dimension resonance type configurations emerge when the relative velocity between kink and antikink falls
• Physics
Physical Review D
• 2020
Kink-antikink scattering in the $\phi^4$ model is investigated in the limit when the static inter-soliton force vanishes. We observe the formation of spectral walls and, further, identify a new
• Mathematics
Physical review letters
• 2011
Although the single-kink solutions for this model do not possess an internal vibrational mode, simulations reveal a resonant scattering structure, thereby providing a counterexample to the standard belief that the existence of such a mode is a necessary condition for multibounce resonances in general kink-antikink collisions.
Inelastic kink-antikink collisions are investigated in the two-dimensional ¢' model. It is shown that a bound state of the kink and antikink is formed when the colliding velocity V is less than a
• Physics
• 2014
We study kink scattering processes in the ( 1 + 1 ) -dimensional φ 6 model in the framework of the collective coordinate approximation. We find critical values of the initial velocities of the
• Physics
SIAM J. Appl. Dyn. Syst.
• 2005
A detailed mathematical explanation of a phenomenon known as the two-bounce resonance observed in collisions between kink and antikink traveling waves of the $\phi^4$ equations of mathematical physics, including the origin of several mathematical assumptions needed by previous researchers are provided.
|
# In a triangle ABC, E is the mid-point of median AD.
Question. In a triangle $A B C, E$ is the mid-point of median $A D .$ Show that $\operatorname{ar}(B E D)=\frac{1}{4} \operatorname{ar}(A B C)$
Solution:
$\mathrm{AD}$ is the median of $\triangle \mathrm{ABC}$. Therefore, it will divide $\triangle \mathrm{ABC}$ into two triangles of equal areas.
$\therefore$ Area $(\triangle \mathrm{ABD})=$ Area $(\triangle \mathrm{ACD})$
$\Rightarrow$ Area $(\Delta A B D)=\frac{1}{2}$ Area $(\triangle A B C) \ldots$(1)
In $\triangle A B D, E$ is the mid-point of $A D$. Therefore, $B E$ is the median.
$\therefore$ Area $(\triangle B E D)=$ Area $(\triangle A B E)$
$\Rightarrow \operatorname{Area}(\Delta B E D)=\frac{1}{2}$ Area $(\triangle A B D)$
$\Rightarrow$ Area $(\Delta B E D)=\frac{1}{2} \times \frac{1}{2}$ Area $(\triangle A B C)$ [From equation (1)]
$\Rightarrow$ Area $(\Delta B E D)=\frac{1}{4}$ Area $(\triangle A B C)$
|
<img src="https://d5nxst8fruw4z.cloudfront.net/atrk.gif?account=iA1Pi1a8Dy00ym" style="display:none" height="1" width="1" alt="" />
# 1.17: Lesson Seventeen
Difficulty Level: At Grade Created by: CK-12
## Matrixes
1. A matrix can help you sort out sounds and letters. A matrix looks like a big square divided up into smaller squares, like this:
2. A matrix has columns and rows. Columns run up and down on the page — like the stone columns in front of a big building. Rows run across the page — like a row of people on a bench. So we can label our matrix this way:
Left Column Right Column
Top Row
Bottom Row
3. We can also number the little squares:
Left Column Right Column
Top Row Square #1 Square #2
Bottom Row Square #4 Square #3
4. Squares #1 and #2 make up the top row. Which two squares make up the bottom row? #3 and #4
5. Squares #1 and #3 make up the left column. Which two squares make up the right column? #2 and #4
6. The left column and the top row overlap in Square #1. In what square do the left column and the bottom row overlap? Square #3
7. What column and row overlap in square #4? Right column and bottom row
Teaching Notes.
1. Two-dimensional matrixes like the four-square models introduced in this lesson are used extensively in upcoming lessons. They are a very powerful tool for helping students solve the kinds of problems posed for them in the Basic Speller. Nearly always in solving these problems the students must notice how two different conditions either do or do not occur together. Two-dimensional matrixes make that job easier.
Because matrixes are so important to upcoming lessons, it is crucial that the students understand the basic concepts introduced in this lesson: What a column is. What a row is. How a square is created when a column and a row overlap. Most students seem to catch on to the basic idea of matrixes very readily. If anyone is having trouble, you might find it useful to point out that they operate just like a multiplication table. In fact, a multiplication table is nothing but a two-dimensional matrix with a lot of rows and columns:
234562468101236912151848121620245101520253061218243036\begin{align*}& && 2 && 3 && 4 && 5 && 6\\ &2 && 4 && 6 && 8 && 10 && 12\\ &3 && 6 && 9 && 12 && 15 && 18\\ &4 && 8 && 12 && 16 && 20 && 24\\ &5 && 10 && 15 && 20 && 25 && 30\\ &6 && 12 && 18 && 24 && 30 && 36\end{align*}
You might point out that these matrixes are all over the place: Your attendance sheet is probably a two-dimensional matrix, so too any progress charts you may keep on the bulletin board. A monthly calendar is a two-dimensional matrix; it is just that we usually don't bother to label the rows. The columns are labeled with the days of the week.
An informal matrix hunt might turn up some surprising examples. And such hunts are quite powerful teaching and learning strategies since the ability to identify a new, and perhaps slightly different, instance is an excellent sign of mastery of the general concept.
### Notes/Highlights Having trouble? Report an issue.
Color Highlighted Text Notes
Please to create your own Highlights / Notes
Show Hide Details
Description
Authors:
Tags:
Subjects:
|
Home > Statistics > Multiple equation models: Estimation and marginal effects using gsem
## Multiple equation models: Estimation and marginal effects using gsem
Starting point: A hurdle model with multiple hurdles
In a sequence of posts, we are going to illustrate how to obtain correct standard errors and marginal effects for models with multiple steps.
Our inspiration for this post is an old Statalist inquiry about how to obtain marginal effects for a hurdle model with more than one hurdle (http://www.statalist.org/forums/forum/general-stata-discussion/general/1337504-estimating-marginal-effect-for-triple-hurdle-model). Hurdle models have the appealing property that their likelihood is separable. Each hurdle has its own likelihood and regressors. You can estimate each one of these hurdles separately to obtain point estimates. However, you cannot get standard errors or marginal effects this way.
In this post, we show how to get the marginal effects and standard errors for a hurdle model with two hurdles using gsem. gsem is ideal for this purpose because it allows us to estimate likelihood-based models with multiple equations.
The model
Suppose we are interested in the mean spending on dental care, given the characteristic of the individuals. Some people spend zero dollars on dental care in a year, and some people spend more than zero dollars. Only the individuals that cross a hurdle are willing to spend a positive amount on dental care. Hurdle models allow the characteristics of the individuals that spend a positive amount and those who spend zero to differ.
There could be more than one hurdle. In the dental-care spending example, the second hurdle could be insurance coverage: uninsured, basic insurance, or premium insurance. We model the first hurdle of spending zero or a positive amount by a probit. We model the second hurdle of insurance level using an ordered probit. Finally, we model the positive amount spent using an exponential-mean model.
We are interested in the marginal effects for the mean amount spent for someone with premium insurance, given individual characteristics. The expression for this conditional mean is
\begin{eqnarray*}
\exp\left(X_e\beta_e\right)
\end{eqnarray*}
The conditional mean accounts for the probabilities of being in different threshold levels and for the expenditure preferences among those spending a positive amount.We use the subscripts $$p$$, $$o$$, and $$e$$ to emphasize that the covariates and coefficients related to the probit, ordered probit, and exponential mean are different.
Below we will use gsem to estimate the model parameters from simulated data. spend is a binary outcome for whether an individual spends money on dental care, insurance is an ordered outcome indicating insurance level, and expenditure corresponds to the amount spent on dental care.
. gsem (spend <- x1 x2 x4, probit)
> (insurance <- x3 x4, oprobit)
> (expenditure <- x5 x6 x4, poisson),
> vce(robust)
note: expenditure has noncount values;
you are responsible for the family(poisson) interpretation
Iteration 0: log pseudolikelihood = -171938.67
Iteration 1: log pseudolikelihood = -79591.213
Iteration 2: log pseudolikelihood = -78928.015
Iteration 3: log pseudolikelihood = -78925.126
Iteration 4: log pseudolikelihood = -78925.126
Generalized structural equation model Number of obs = 10,000
Response : spend
Family : Bernoulli
Response : insurance
Family : ordinal
Response : expenditure
Family : Poisson
Log pseudolikelihood = -78925.126
----------------------------------------------------------------------------
| Robust
| Coef. Std. Err. z P>|z| [95% Conf. Interval]
---------------+------------------------------------------------------------
spend <- |
x1 | .5189993 .0161283 32.18 0.000 .4873884 .5506102
x2 | -.4755281 .02257 -21.07 0.000 -.5197646 -.4312917
x4 | .5300193 .0187114 28.33 0.000 .4933455 .566693
_cons | .4849085 .0288667 16.80 0.000 .4283308 .5414862
---------------+------------------------------------------------------------
insurance <- |
x3 | .299793 .0084822 35.34 0.000 .2831681 .3164178
x4 | -.2835648 .0135266 -20.96 0.000 -.3100765 -.2570531
---------------+------------------------------------------------------------
expenditure <- |
x5 | -.2992792 .0192201 -15.57 0.000 -.3369499 -.2616086
x6 | .319377 .0483959 6.60 0.000 .2245229 .4142312
x4 | .448041 .0252857 17.72 0.000 .3984819 .4976001
_cons | 1.088217 .0375369 28.99 0.000 1.014646 1.161788
---------------+------------------------------------------------------------
insurance |
/cut1 | -1.28517 .0236876 -54.26 0.000 -1.331596 -1.238743
/cut2 | -.2925979 .0216827 -13.49 0.000 -.3350951 -.2501006
/cut3 | .7400875 .0230452 32.11 0.000 .6949198 .7852552
----------------------------------------------------------------------------
The estimated probit parameters are in the spend equation. The estimated ordinal-probit parameters are in the insurance equation. The estimated expenditure parameters are in the expenditure equation. We could have obtained these point estimates using probit, oprobit, and poisson. With gsem, we do this jointly and obtain correct standard errors when computing marginal effects. In the case of the poisson model, we are using gsem to obtain an exponential mean and should interpret the outcomes from a quasilikelihood perspective. Because of the quasilikelihood nature of the problem, we use the vce(robust) option.
The average of the marginal effect of x4 is
\begin{equation*}
\frac{1}{N}\sum_{i=1}^N \frac{\partial \hat{E}\left(\text{expenditure}_i|X_i, {\tt insurance}_i\right)}{\partial {\tt x4}_i}
\end{equation*}
and we estimate it by
. margins, vce(unconditional) predict(expression(normal(eta(spend))*
> normal(eta(insurance)-_b[insurance_cut2:_cons])*
> exp(eta(expenditure)))) dydx(x4)
Average marginal effects Number of obs = 10,000
Expression : Predicted normal(eta(spend))*
normal(eta(insurance)-_b[insurance_cut2:_cons])* e,
predict(expression(normal(eta(spend))*
normal(eta(insurance)-_b[insurance_cut2:_cons])*
exp(eta(expenditure))))
dy/dx w.r.t. : x4
---------------------------------------------------------------------------
| Unconditional
| dy/dx Std. Err. z P>|z| [95% Conf. Interval]
----------+----------------------------------------------------------------
x4 | .5382276 .0506354 10.63 0.000 .4389841 .6374711
---------------------------------------------------------------------------
We used the expression() option to write an expression for the expected value of interest and predict() and eta() to denote the linear predictions for each model. We use the vce(unconditional) option to allow the covariates to be random instead of fixed. In other words, we are estimating a population effect instead of a sample effect.
Final considerations
We illustrated how to use gsem to obtain the estimates and standard errors for a multiple hurdle model and its marginal effect. In subsequent posts, we will obtain these results using other Stata tools.
Appendix
Below is the code used to produce the data.
clear
set seed 111
set obs 10000
// Generating exogenous variables
generate x1 = rnormal()
generate x2 = int(3*rbeta(2,3))
generate x3 = rchi2(1)-2
generate x4 = ln(rchi2(4))
generate x5 = rnormal()
generate x6 = rbeta(2,3)>.6
// Generating unobservables
generate ep = rnormal() // for probit
generate eo = rnormal() // for ordered probit
generate e = rnormal() // for lognormal equation
// Generating linear predictions
generate xbp = .5*(1 + x1 - x2 + x4)
generate xbo = .3*(1 + x3 - x4)
generate xbe = .3*(1 - x5 + x6 + x4)
// Generating outcomes
generate spend = xbp + ep > 0
generate yotemp = xbo + eo
generate insurance = yotemp
generate yexp = exp(xbe + e)
replace insurance = 1 if yotemp < -1
replace insurance = 2 if yotemp> -1 & yotemp<0
replace insurance = 3 if yotemp> 0 & yotemp <1
replace insurance = 4 if yotemp>1
generate expenditure = spend*insurance*yexp
Categories: Statistics Tags:
|
# Algebra
Algebra Level 1
If $$b = 2$$ and $$a + a + \dfrac{b}{2} = 3,$$ then find the value of $$a$$.
×
|
### Lei Mao
Machine Learning, Artificial Intelligence, Computer Science.
# Hungarian Matching Algorithm
### Introduction
The Hungarian matching algorithm is a combinatorial optimization algorithm that solves the assignment linear-programming problem in polynomial time. The assignment problem is an interesting problem and the Hungarian algorithm is difficult to understand.
In this blog post, I would like to talk about what assignment is and give some intuitions behind the Hungarian algorithm.
### Minimum Cost Assignment Problem
#### Problem Definition
In the matrix formulation, we are given a nonnegative $n \times n$ cost matrix, where the element in the $i$-th row and $j$-th column represents the cost of assigning the $j$-th job to the $i$-th worker. We have to find an assignment of the jobs to the workers, such that each job is assigned to one worker, each worker is assigned one job, and the total cost of assignment is minimum.
Finding a brute-force solution for this problem takes $O(n!)$ because the number of valid assignments is $n!$. We really need a better algorithm, which preferably takes polynomial time, to solve this problem.
#### Maximum Cost Assignment VS Minimum Cost Assignment
Solving a maximum cost assignment problem could be converted to solving a minimum cost assignment problem, and vice versa. Suppose the cost matrix is $c$, solving the maximum cost assignment problem for cost matrix $c$ is equivalent to solving the minimum cost assignment problem for cost matrix $-c$. So we will only discuss the minimum cost assignment problem in this article.
#### Non-Square Cost Matrix
In practice, it is common to have a cost matrix which is not square. But we could make the cost matrix square, fill the empty entries with $0$, and apply the Hungarian algorithm to solve the optimal cost assignment problem.
### Hungarian Matching Algorithm
#### Algorithm
Brilliant has a very good summary on the Hungarian algorithm for adjacency cost matrix. Let’s walk through it.
1. Subtract the smallest entry in each row from all the other entries in the row. This will make the smallest entry in the row now equal to 0.
2. Subtract the smallest entry in each column from all the other entries in the column. This will make the smallest entry in the column now equal to 0.
3. Draw lines through the row and columns that have the 0 entries such that the fewest lines possible are drawn.
4. If there are nn lines drawn, an optimal assignment of zeros is possible and the algorithm is finished. If the number of lines is less than nn, then the optimal number of zeroes is not yet reached. Go to the next step.
5. Find the smallest entry not covered by any line. Subtract this entry from each row that isn’t crossed out, and then add it to each column that is crossed out. Then, go back to Step 3.
Note that we could not tell the algorithm time complexity from the description above. The time complexity is actually $O(n^3)$ where $n$ is the side length of the square cost adjacency matrix.
#### Example
The Brilliant Hungarian algorithm for adjacency cost matrix also comes with a good example. We slightly modified the example by making the cost matrix non-square.
Company Cost for Musician Cost for Chef Cost for Cleaners
Company A 108 125 150
Company B 150 135 175
To avoid duplicating the solution on Brilliant, instead of solving it manually, we will use the existing SciPy linear sum assignment optimizer to solve, and verified using a brute force solver.
# hungarian.py
from typing import List, Tuple
import itertools
import numpy as np
from scipy.optimize import linear_sum_assignment
def linear_sum_assignment_brute_force(
cost_matrix: np.ndarray,
maximize: bool = False) -> Tuple[List[int], List[int]]:
h = cost_matrix.shape[0]
w = cost_matrix.shape[1]
if maximize is True:
cost_matrix = -cost_matrix
minimum_cost = float("inf")
if h >= w:
for i_idices in itertools.permutations(list(range(h)), min(h, w)):
row_ind = i_idices
col_ind = list(range(w))
cost = cost_matrix[row_ind, col_ind].sum()
if cost < minimum_cost:
minimum_cost = cost
optimal_row_ind = row_ind
optimal_col_ind = col_ind
if h < w:
for j_idices in itertools.permutations(list(range(w)), min(h, w)):
row_ind = list(range(h))
col_ind = j_idices
cost = cost_matrix[row_ind, col_ind].sum()
if cost < minimum_cost:
minimum_cost = cost
optimal_row_ind = row_ind
optimal_col_ind = col_ind
return optimal_row_ind, optimal_col_ind
if __name__ == "__main__":
cost_matrix = np.array([[108, 125, 150], [150, 135, 175]])
row_ind, col_ind = linear_sum_assignment_brute_force(
cost_matrix=cost_matrix, maximize=False)
minimum_cost = cost_matrix[row_ind, col_ind].sum()
print(
f"The optimal assignment from brute force algorithm is: {list(zip(row_ind, col_ind))}."
)
print(f"The minimum cost from brute force algorithm is: {minimum_cost}.")
row_ind, col_ind = linear_sum_assignment(cost_matrix=cost_matrix,
maximize=False)
minimum_cost = cost_matrix[row_ind, col_ind].sum()
print(
f"The optimal assignment from Hungarian algorithm is: {list(zip(row_ind, col_ind))}."
)
print(f"The minimum cost from Hungarian algorithm is: {minimum_cost}.")
### Final Remarks
I rarely applaud for deep learning algorithms. Conventional algorithms such as the Hungarian matching algorithm are truly amazing.
|
Categories
# Angles of Star | AMC 8, 2000 | Problem 24
Try this beautiful problem from GeometryAMC-8, 2000 ,Problem-24, based triangle. You may use sequential hints to solve the problem.
Try this beautiful problem from Geometry from AMC-8, 2000, Problem-24, based on angles of Star
## Angles of Star | AMC-8, 2000 | Problem 24
If $\angle A = 20^\circ$ and $\angle AFG =\angle AGF$, then $\angle B+\angle D =$
• $90$
• $70$
• $80$
### Key Concepts
Geometry
Star
Triangle
Answer:$80$
AMC-8, 2000 problem 24
Pre College Mathematics
## Try with Hints
Find the $\angle AFG$
Can you now finish the problem ……….
sum of the angles of a Triangle is $180^\circ$
can you finish the problem……..
we know that the sum of the angles of a Triangle is $180^\circ$
In the $\triangle AGF$ we have,$(\angle A +\angle AGF +\angle AFG) =180^\circ$
$\Rightarrow 20^\circ +2\angle AFG=180^\circ$(as $\angle A =20^\circ$ & $\angle AFG=\angle AGF$)
$\Rightarrow \angle AFG=80^\circ$ i.e $\angle EFD=\angle 80^\circ$
So the $\angle BFD=\frac{360^\circ -80^\circ-80^\circ}{2}=100^\circ$
Now in the $\triangle BFD$,$(\angle BFD +\angle B +\angle D$)=$180^\circ$
$\Rightarrow \angle B +\angle D=180^\circ -100^\circ$
$\Rightarrow \angle B +\angle D=80^\circ$
|
# Plotting translated vector fields from user-defined functions
I have two time- and space-dependent vector fields, one which is:
$$\mathbf A(x,y,t) = -y \, t \, \hat{\mathbf x} + x \, t \, \hat{\mathbf y}, \tag 1$$
and another $$\mathbf B(x,y,t)$$ which is $$\mathbf A$$ translated $$3/2$$ units in the positive $$\hat{\mathbf x}$$ direction and $$3/2$$ units in the positive $$\hat{\mathbf y}$$ direction, so:
$$\mathbf B(x,y,t) = - \left( y - \dfrac{3}{2} \right) \, t \, \hat{\mathbf x} + \left( x - \dfrac{3}{2} \right) \, t \, \hat{\mathbf y}. \tag 2$$
I want to plot $$\mathbf B$$ evaluated at $$t = 1$$ in Mathematica. I tried two different ways but only one worked. The first one is directly from the expression (2):
$$\mathbf B(x,y,1) = - \left( y - \dfrac{3}{2} \right) \, \hat{\mathbf x} + \left( x - \dfrac{3}{2} \right) \, \hat{\mathbf y}. \tag 3$$
The resulting plot correctly shows the translation in both axes:
StreamPlot[{3/2 - y, -(3/2) + x}, {x, -5, 5}, {y, -5, 5}]
The second way I plot $$\mathbf B(x,y,1)$$ is by first defining $$\mathbf A$$ and then translating it. However, the resulting plot only shows $$\mathbf A$$ translated in the $$\hat{\mathbf x}$$ direction, even though B[1] gives the same expression as (3):
Clear[A, B];
A[t_] := {-y*t, x*t};
B[t_] := ReplaceAll[ReplaceAll[A[t], x -> x - 3/2], y -> y - 3/2];
B[1]
StreamPlot[B[1], {x, -5, 5}, {y, -5, 5}]
Note that if you use the function VectorPlot instead of StreamPlot, the error is still present in the second method.
Why the second method didn't work? I want to fix it because I'm also using Mathematica to automatically translate the field $$\mathbf A$$.
Given that B[1] is the same expression as (3), I think the problem is related with user-defining B.
• replace B[1] with Evaluate@B[1]?
– kglr
Sep 7, 2021 at 20:54
The simplest solution is to use StreamPlot[Evaluate[B[1]],...].
At each point the plot is effectively evaluating
Block[{x = x0, y = y0}, B[1]]
to determine what the vector is. Suppose $$x_0=1$$ and $$y_0=2$$, then the evaluation of B[1] at that point amounts to:
In[34]:= ReplaceAll[ReplaceAll[{-2*1, 1*1}, 1 -> 1 - 3/2], 2 -> 2 - 3/2]
Out[34]= {-2, -1/2}
instead of the expected {-1/2, -1/2}. Using Evaluate forces the symbolic calculations to be computed before evaluating at numeric coordinates.
• Thanks for explaining why using Evaluate solves the problem! The Wiki has very few examples. Sep 7, 2021 at 21:03
|
# Use technology to construct the confidence intervals for the population variance sigma^{2} and the population standard deviation sigma. Assume the sam
Use technology to construct the confidence intervals for the population variance ${\sigma }^{2}$ and the population standard deviation $\sigma$. Assume the sample is taken from a normally distributed population. $c=0.99,s=37,n=20$ The confidence interval for the population variance is (?, ?). The confidence interval for the population standard deviation is (?, ?)
You can still ask an expert for help
• Questions are typically answered in as fast as 30 minutes
Solve your problem for the price of one coffee
• Math expert for every subject
• Pay only if we can solve it
Asma Vang
Step 1 Solution: We need to construct the 99% confidence interval for the population variance. We have been provided with the following information about the sample variance and sample size: ${s}^{2}=1369$
$n=20$ Step 2 The critical values for $\alpha =0.01$ and $df=19$ degrees of freedom are ${X}_{L}^{2}={X}_{1-\frac{\alpha }{2},n-1}^{2}=6.844$ and ${X}_{U}^{2}={X}_{1-\frac{\alpha }{2},n-1}^{2}=38.5823$. The corresponding confidence interval is computed as shown below: $CI\left(Variance\right)=\left(\frac{\left(n-1\right){s}^{2}}{{X}_{\frac{\alpha }{2},n-1}^{2}},\frac{\left(n-1\right){s}^{2}}{{X}_{1-\frac{\alpha }{2},n-1}^{2}}\right)$
$\left(\frac{\left(20-1\right)×1369}{38.5823},\frac{\left(20-1\right)×1369}{6.844}$
$=\left(674.17,3800.5711\right)$ Now that we have the limits for the confidence interval, the limits for the 99% confidence interval for the population standard deviation are obtained by simply taking the squared root of the limits of the confidence interval for the variance, so then: CI(Standard Deviation) $=\left(\sqrt{674.17},\sqrt{3800.5711}\right)=\left(25.9648,61.6488\right)$ Therefore, based on the data provided, the 99% confidence interval for the population variance is $674.17<{\sigma }^{2}<3800.5711$, and the 99% confidence interval for the population standard deviation is $25.9648<\sigma <61.6488$. Step 3 Ci for population variance (674.17 , 3800.57) Ci for population standard deviation ( 25.96 , 61.65)
|
It's not required, but please consider linking to this page or the main page from your site if you like this product.
Before downloading or using this product, make sure you understand and accept the terms of the license.
This plugin allows you to include Google AdSense advertisings into your wiki page. It is possible to configure whether or not ads are shown to admins and/or logged-in users.
The plugin also exports a function for use with your template, so you will have to insert the following code into your template (main.php), somewhere inside of the <head></head> tags.
<?php
?>
Note: Inserting the code above is required, not optional.
Template Authors Note: You can insert the above code and make your template "Google AdSense Ready", even if your users do not use Google AdSense (or have this plugin.)
## Install
As a plugin all you need to do is unpack the file into the lib/plugins/ directory (you should end up with a lib/plugins/googleads folder.)
To upgrade, remove the original lib/plugins/googleads folder, and install the new version as instructed above. You may wish to make a note of your google analytics code first though.
## What's New
August 30, 2009
• Terence has taken over the project from Bernd and the project will be hosted on this site from here on.
April 9, 2007
• Don't display ads on login/registration pages
Mar 14, 2007
• Initial release (Based on Google Analytics plugin.)
|
# Many-worlds interpretation
The quantum-mechanical "Schrödinger's cat" paradox according to the many-worlds interpretation. In this interpretation, every event is a branch point; the cat is both alive and dead, even before the box is opened, but the "alive" and "dead" cats are in different branches of the universe, both of which are equally real, but which do not interact with each other.[1]
The many-worlds interpretation is an interpretation of quantum mechanics that asserts the objective reality of the universal wavefunction and denies the actuality of wavefunction collapse. Many-worlds implies that all possible alternate histories and futures are real, each representing an actual "world" (or "universe"). In lay terms, the hypothesis states there is a very large—perhaps infinite[2]—number of universes, and everything that could possibly have happened in our past, but did not, has occurred in the past of some other universe or universes. The theory is also referred to as MWI, the relative state formulation, the Everett interpretation, the theory of the universal wavefunction, many-universes interpretation, or just many-worlds.
The original relative state formulation is due to Hugh Everett in 1957.[3][4] Later, this formulation was popularized and renamed many-worlds by Bryce Seligman DeWitt in the 1960s and 1970s.[1][5][6][7] The decoherence approaches to interpreting quantum theory have been further explored and developed,[8][9][10] becoming quite popular. MWI is one of many multiverse hypotheses in physics and philosophy. It is currently considered a mainstream interpretation along with the other decoherence interpretations, collapse theories (including the historical Copenhagen interpretation),[11] and hidden variable theories such as the Bohmian mechanics.
Before many-worlds, reality had always been viewed as a single unfolding history. Many-worlds, however, views reality as a many-branched tree, wherein every possible quantum outcome is realised.[12] Many-worlds reconciles the observation of non-deterministic events, such as the random radioactive decay, with the fully deterministic equations of quantum physics.
In many-worlds, the subjective appearance of wavefunction collapse is explained by the mechanism of quantum decoherence, and this is supposed to resolve all of the correlation paradoxes of quantum theory, such as the EPR paradox[13][14] and Schrödinger's cat,[1] since every possible outcome of every event defines or exists in its own "history" or "world".
{{#invoke: Sidebar | collapsible }}
## Outline
File:Hugh-Everett.jpg
Hugh Everett (1930–1982) was the first physicist who proposed the many-worlds interpretation (MWI) of quantum physics, which he termed his "relative state" formulation.
Although several versions of many-worlds have been proposed since Hugh Everett's original work,[4] they all contain one key idea: the equations of physics that model the time evolution of systems without embedded observers are sufficient for modelling systems which do contain observers; in particular there is no observation-triggered wave function collapse which the Copenhagen interpretation proposes. Provided the theory is linear with respect to the wavefunction, the exact form of the quantum dynamics modelled, be it the non-relativistic Schrödinger equation, relativistic quantum field theory or some form of quantum gravity or string theory, does not alter the validity of MWI since MWI is a metatheory applicable to all linear quantum theories, and there is no experimental evidence for any non-linearity of the wavefunction in physics.[15][16] MWI's main conclusion is that the universe (or multiverse in this context) is composed of a quantum superposition of very many, possibly even non-denumerably infinitely[2] many, increasingly divergent, non-communicating parallel universes or quantum worlds.[7]
The idea of MWI originated in Everett's Princeton Ph.D. thesis "The Theory of the Universal Wavefunction",[7] developed under his thesis advisor John Archibald Wheeler, a shorter summary of which was published in 1957 entitled "Relative State Formulation of Quantum Mechanics" (Wheeler contributed the title "relative state";[17] Everett originally called his approach the "Correlation Interpretation", where "correlation" refers to quantum entanglement). The phrase "many-worlds" is due to Bryce DeWitt,[7] who was responsible for the wider popularisation of Everett's theory, which had been largely ignored for the first decade after publication. DeWitt's phrase "many-worlds" has become so much more popular than Everett's "Universal Wavefunction" or Everett–Wheeler's "Relative State Formulation" that many forget that this is only a difference of terminology; the content of both of Everett's papers and DeWitt's popular article is the same.
The many-worlds interpretation shares many similarities with later, other "post-Everett" interpretations of quantum mechanics which also use decoherence to explain the process of measurement or wavefunction collapse. MWI treats the other histories or worlds as real since it regards the universal wavefunction as the "basic physical entity"[18] or "the fundamental entity, obeying at all times a deterministic wave equation".[19] The other decoherent interpretations, such as consistent histories, the Existential Interpretation etc., either regard the extra quantum worlds as metaphorical in some sense, or are agnostic about their reality; it is sometimes hard to distinguish between the different varieties. MWI is distinguished by two qualities: it assumes realism,[18][19] which it assigns to the wavefunction, and it has the minimal formal structure possible, rejecting any hidden variables, quantum potential, any form of a collapse postulate (i.e., Copenhagenism) or mental postulates (such as the many-minds interpretation makes).
Decoherent interpretations of many-worlds using einselection to explain how a small number of classical pointer states can emerge from the enormous Hilbert space of superpositions have been proposed by Wojciech H. Zurek. "Under scrutiny of the environment, only pointer states remain unchanged. Other states decohere into mixtures of stable pointer states that can persist, and, in this sense, exist: They are einselected."[20] These ideas complement MWI and bring the interpretation in line with our perception of reality.
Many-worlds is often referred to as a theory, rather than just an interpretation, by those who propose that many-worlds can make testable predictions (such as David Deutsch) or is falsifiable (such as Everett) or by those who propose that all the other, non-MW interpretations, are inconsistent, illogical or unscientific in their handling of measurements; Hugh Everett argued that his formulation was a metatheory, since it made statements about other interpretations of quantum theory; that it was the "only completely coherent approach to explaining both the contents of quantum mechanics and the appearance of the world."[21] Deutsch is dismissive that many-worlds is an "interpretation", saying that calling it an interpretation "is like talking about dinosaurs as an 'interpretation' of fossil records."[22]
## Interpreting wavefunction collapse
As with the other interpretations of quantum mechanics, the many-worlds interpretation is motivated by behavior that can be illustrated by the double-slit experiment. When particles of light (or anything else) are passed through the double slit, a calculation assuming wave-like behavior of light can be used to identify where the particles are likely to be observed. Yet when the particles are observed in this experiment, they appear as particles (i.e., at definite places) and not as non-localized waves.
### Many-minds
{{#invoke:main|main}} The many-minds interpretation is a multi-world interpretation that defines the splitting of reality on the level of the observers' minds. In this, it differs from Everett's many-worlds interpretation, in which there is no special role for the observer's mind.[72]
## Reception
There is a wide range of claims that are considered "many-worlds" interpretations. It was often claimed by those who do not believe in MWI[74] that Everett himself was not entirely clear[75] as to what he believed; however, MWI adherents (such as DeWitt, Tegmark, Deutsch and others) believe they fully understand Everett's meaning as implying the literal existence of the other worlds. Additionally, recent biographical sources make it clear that Everett believed in the literal reality of the other quantum worlds.[22] Everett's son reported that Hugh Everett "never wavered in his belief over his many-worlds theory".[76] Also Everett was reported to believe "his many-worlds theory guaranteed him immortality".[77]
One of MWI's strongest advocates is David Deutsch.[78] According to Deutsch, the single photon interference pattern observed in the double slit experiment can be explained by interference of photons in multiple universes. Viewed in this way, the single photon interference experiment is indistinguishable from the multiple photon interference experiment. In a more practical vein, in one of the earliest papers on quantum computing,[79] he suggested that parallelism that results from the validity of MWI could lead to "a method by which certain probabilistic tasks can be performed faster by a universal quantum computer than by any classical restriction of it". Deutsch has also proposed that when reversible computers become conscious that MWI will be testable (at least against "naive" Copenhagenism) via the reversible observation of spin.[56]
Asher Peres was an outspoken critic of MWI; for example, a section in his 1993 textbook had the title Everett's interpretation and other bizarre theories. In fact, Peres not only questioned whether MWI is really an "interpretation", but rather, if any interpretations of quantum mechanics are needed at all. Indeed, an interpretation can be regarded as a purely formal transformation, which adds nothing to the rules of the quantum mechanics. Peres seems to suggest that positing the existence of an infinite number of non-communicating parallel universes is highly suspect per those who interpret it as a violation of Occam's razor, i.e., that it does not minimize the number of hypothesized entities. However, it is understood that the number of elementary particles are not a gross violation of Occam's Razor, one counts the types, not the tokens. Max Tegmark remarks that the alternative to many-worlds is "many words", an allusion to the complexity of von Neumann's collapse postulate. On the other hand, the same derogatory qualification "many words" is often applied to MWI by its critics who see it as a word game which obfuscates rather than clarifies by confounding the von Neumann branching of possible worlds with the Schrödinger parallelism of many worlds in superposition.
MWI is considered by some to be unfalsifiable and hence unscientific because the multiple parallel universes are non-communicating, in the sense that no information can be passed between them. Others[56] claim MWI is directly testable. Everett regarded MWI as falsifiable since any test that falsifies conventional quantum theory would also falsify MWI.[21]
According to Martin Gardner, the "other" worlds of MWI have two different interpretations: real or unreal; he claims that Stephen Hawking and Steve Weinberg both favour the unreal interpretation.[80] Gardner also claims that the nonreal interpretation is favoured by the majority of physicists, whereas the "realist" view is only supported by MWI experts such as Deutsch and Bryce DeWitt. Hawking has said that "according to Feynman's idea", all the other histories are as "equally real" as our own,[81] and Martin Gardner reports Hawking saying that MWI is "trivially true".[82] In a 1983 interview, Hawking also said he regarded the MWI as "self-evidently correct" but was dismissive towards questions about the interpretation of quantum mechanics, saying, "When I hear of Schrödinger's cat, I reach for my gun." In the same interview, he also said, "But, look: All that one does, really, is to calculate conditional probabilities—in other words, the probability of A happening, given B. I think that that's all the many worlds interpretation is. Some people overlay it with a lot of mysticism about the wave function splitting into different parts. But all that you're calculating is conditional probabilities."[83] Elsewhere Hawking contrasted his attitude towards the "reality" of physical theories with that of his colleague Roger Penrose, saying, "He's a Platonist and I'm a positivist. He's worried that Schrödinger's cat is in a quantum state, where it is half alive and half dead. He feels that can't correspond to reality. But that doesn't bother me. I don't demand that a theory correspond to reality because I don't know what it is. Reality is not a quality you can test with litmus paper. All I'm concerned with is that the theory should predict the results of measurements. Quantum theory does this very successfully."[84] For his own part, Penrose agrees with Hawking that QM applied to the universe implies MW, although he considers the current lack of a successful theory of quantum gravity negates the claimed universality of conventional QM.[65]
### Polls
Advocates of MWI often cite a poll of 72 "leading cosmologists and other quantum field theorists"[85] conducted by the American political scientist David Raub in 1995 showing 58% agreement with "Yes, I think MWI is true".[86]
The poll is controversial: for example, Victor J. Stenger remarks that Murray Gell-Mann's published work explicitly rejects the existence of simultaneous parallel universes. Collaborating with James Hartle, Gell-Mann is working toward the development a more "palatable" post-Everett quantum mechanics. Stenger thinks it's fair to say that most physicists dismiss the many-world interpretation as too extreme, while noting it "has merit in finding a place for the observer inside the system being analyzed and doing away with the troublesome notion of wave function collapse".[87]
Max Tegmark also reports the result of a "highly unscientific" poll taken at a 1997 quantum mechanics workshop.[88] According to Tegmark, "The many worlds interpretation (MWI) scored second, comfortably ahead of the consistent histories and Bohm interpretations." Such polls have been taken at other conferences, for example, in response to Sean Carroll's observation, "As crazy as it sounds, most working physicists buy into the many-worlds theory"[89] Michael Nielsen counters: "at a quantum computing conference at Cambridge in 1998, a many-worlder surveyed the audience of approximately 200 people... Many-worlds did just fine, garnering support on a level comparable to, but somewhat below, Copenhagen and decoherence." However, Nielsen notes that it seemed most attendees found it to be a waste of time: Asher Peres "got a huge and sustained round of applause… when he got up at the end of the polling and asked ‘And who here believes the laws of physics are decided by a democratic vote?’"[90]
A 2005 poll of fewer than 40 students and researchers taken after a course on the Interpretation of Quantum Mechanics at the Institute for Quantum Computing University of Waterloo found "Many Worlds (and decoherence)" to be the least favored.[91]
A 2011 poll of 33 participants at an Austrian conference found 6 endorsed MWI, 8 "Information-based/information-theoretical", and 14 Copenhagen;[92] the authors remark that the results are similar to Tegmark's 1998 poll.
## Speculative implications
Speculative physics deals with questions which are also discussed in science fiction.
### Quantum suicide thought experiment
{{#invoke:main|main}} Quantum suicide, as a thought experiment, was published independently by Hans Moravec in 1987[93][94] and Bruno Marchal in 1988[95][96] and was independently developed further by Max Tegmark in 1998.[97] It attempts to distinguish between the Copenhagen interpretation of quantum mechanics and the Everett many-worlds interpretation by means of a variation of the Schrödinger's cat thought experiment, from the cat's point of view. Quantum immortality refers to the subjective experience of surviving quantum suicide regardless of the odds.[98]
### Weak coupling
Another speculation is that the separate worlds remain weakly coupled (e.g., by gravity) permitting "communication between parallel universes". A possible test of this using quantum-optical equipment is described in a 1997 Foundations of Physics article by Rainer Plaga.[72] It involves an isolated ion in an ion trap, a quantum measurement that would yield two parallel worlds (their difference just being in the detection of a single photon), and the excitation of the ion from only one of these worlds. If the excited ion can be detected from the other parallel universe, then this would constitute direct evidence in support of the many-worlds interpretation and would automatically exclude the orthodox, "logical", and "many-histories" interpretations. The reason the ion is isolated is to make it not participate immediately in the decoherence which insulates the parallel world branches, therefore allowing it to act as a gateway between the two worlds, and if the measure apparatus could perform the measurements quickly enough before the gateway ion is decoupled then the test would succeed (with electronic computers the necessary time window between the two worlds would be in a time scale of milliseconds or nanoseconds, and if the measurements are taken by humans then a few seconds would still be enough). R. Plaga shows that macroscopic decoherence timescales are a possibility. The proposed test is based on technical equipment described in a 1993 Physical Review article by Itano et al.[99] and R. Plaga says that this level of technology is enough to realize the proposed inter-world communication experiment. The necessary technology for precision measurements of single ions already exists since the 1970s, and the ion recommended for excitation is 199Hg+. The excitation methodology is described by Itano et al. and the time needed for it is given by the Rabi flopping formula[100]
Such a test as described by R. Plaga would mean that energy transfer is possible between parallel worlds. This does not violate the fundamental principles of physics because these require energy conservation only for the whole universe and not for the single parallel branches.[72] Neither the excitation of the single ion (which is a degree of freedom of the proposed system) leads to decoherence, something which is proven by Welcher Weg detectors which can excite atoms without momentum transfer (which causes the loss of coherence).[101]
The proposed test would allow for low-bandwidth inter-world communication, the limiting factors of bandwidth and time being dependent on the technology of the equipment. Because of the time needed to determine the state of the partially decohered isolated excited ion based on Itano et al.'s methodology, the ion would decohere by the time its state is determined during the experiment, so Plaga's proposal would pass just enough information between the two worlds to confirm their parallel existence and nothing more. The author contemplates that with increased bandwidth, one could even transfer television imagery across the parallel worlds.[72] For example, Itano et al.'s methodology could be improved (by lowering the time needed for state determination of the excited ion) if a more efficient process were found for the detection of fluorescence radiation using 194 nm photons.[72]
A 1991 article by J.Polchinski also supports the view that inter-world communication is a theoretical possibility.[102] Other authors in a 1994 preprint article also contemplated similar ideas.[103]
The reason inter-world communication seems like a possibility is because decoherence which separates the parallel worlds is never fully complete,[104][105] therefore weak influences from one parallel world to another can still pass between them,[104][106] and these should be measurable with advanced technology. Deutsch proposed such an experiment in a 1985 International Journal of Theoretical Physics article,[107] but the technology it requires involves human-level artificial intelligence.[72]
### Similarity to modal realism
The many-worlds interpretation has some similarity to modal realism in philosophy, which is the view that the possible worlds used to interpret modal claims exist and are of a kind with the actual world. Unlike the possible worlds of philosophy, however, in quantum mechanics counterfactual alternatives can influence the results of experiments, as in the Elitzur–Vaidman bomb-testing problem or the Quantum Zeno effect. Also, while the worlds of the many-worlds interpretation all share the same physical laws, modal realism postulates a world for every way things could conceivably have been.
### Time travel
The many-worlds interpretation could be one possible way to resolve the paradoxes [78] that one would expect to arise if time travel turns out to be permitted by physics (permitting closed timelike curves and thus violating causality). Entering the past would itself be a quantum event causing branching, and therefore the timeline accessed by the time traveller simply would be another timeline of many. In that sense, it would make the Novikov self-consistency principle unnecessary.
## Many-worlds in literature and science fiction
A map from Robert Sobel's novel For Want of a Nail, an artistic illustration of how small events – in this example the branching or point of divergence from our timeline's history is in October 1777 – can profoundly alter the course of history. According to the many-worlds interpretation every event, even microscopic, is a branch point; all possible alternative histories actually exist.[1]
The many-worlds interpretation (and the somewhat related concept of possible worlds) has been associated to numerous themes in literature, art and science fiction.
Some of these stories or films violate fundamental principles of causality and relativity, and are extremely misleading since the information-theoretic structure of the path space of multiple universes (that is information flow between different paths) is very likely extraordinarily complex. Also see Michael Clive Price's FAQ referenced in the external links section below where these issues (and other similar ones) are dealt with more decisively.
Another kind of popular illustration of many-worlds splittings, which does not involve information flow between paths, or information flow backwards in time considers alternate outcomes of historical events. According to the many-worlds interpretation, all of the historical speculations entertained within the alternate history genre are realized in parallel universes.[1]
The many-worlds interpretation of reality was anticipated with remarkable fidelity in Olaf Stapledon’s 1937 science fiction novel Star Maker, in a paragraph describing one of the many universes created by the Star Maker god of the title. "In one inconceivably complex cosmos, whenever a creature was faced with several possible courses of action, it took them all, thereby creating many distinct temporal dimensions and distinct histories of the cosmos. Since in every evolutionary sequence of the cosmos there were very many creatures, and each was constantly faced with many possible courses, and the combinations of all their courses were innumerable, an infinity of distinct universes exfoliated from every moment of every temporal sequence in this cosmos."
## Notes
1. Bryce Seligman DeWitt, Quantum Mechanics and Reality: Could the solution to the dilemma of indeterminism be a universe in which all possible outcomes of an experiment actually occur?, Physics Today, 23(9) pp 30–40 (September 1970) "every quantum transition taking place on every star, in every galaxy, in every remote corner of the universe is splitting our local world on earth into myriads of copies of itself." See also Physics Today, letters followup, 24(4), (April 1971), pp 38–44
2. {{#invoke:Citation/CS1|citation |CitationClass=journal }}
3. Hugh Everett Theory of the Universal Wavefunction, Thesis, Princeton University, (1956, 1973), pp 1–140
4. {{#invoke:Citation/CS1|citation |CitationClass=journal }}
5. Cecile M. DeWitt, John A. Wheeler eds, The Everett–Wheeler Interpretation of Quantum Mechanics, Battelle Rencontres: 1967 Lectures in Mathematics and Physics (1968)
6. Bryce Seligman DeWitt, The Many-Universes Interpretation of Quantum Mechanics, Proceedings of the International School of Physics "Enrico Fermi" Course IL: Foundations of Quantum Mechanics, Academic Press (1972)
7. Bryce Seligman DeWitt, R. Neill Graham, eds, The Many-Worlds Interpretation of Quantum Mechanics, Princeton Series in Physics, Princeton University Press (1973), ISBN 0-691-08131-X Contains Everett's thesis: The Theory of the Universal Wavefunction, pp 3–140.
8. H. Dieter Zeh, On the Interpretation of Measurement in Quantum Theory, Foundation of Physics, vol. 1, pp. 69–76, (1970).
9. Wojciech Hubert Zurek, Decoherence and the transition from quantum to classical, Physics Today, vol. 44, issue 10, pp. 36–44, (1991).
10. Wojciech Hubert Zurek, Decoherence, einselection, and the quantum origins of the classical, Reviews of Modern Physics, 75, pp 715–775, (2003)
11. The Many Worlds Interpretation of Quantum Mechanics
12. David Deutsch argues that a great deal of fiction is close to a fact somewhere in the so called multiverse, Beginning of Infinity, p. 294
13. Bryce Seligman DeWitt, R. Neill Graham, eds, The Many-Worlds Interpretation of Quantum Mechanics, Princeton Series in Physics, Princeton University Press (1973), ISBN 0-691-08131-X Contains Everett's thesis: The Theory of the Universal Wavefunction, where the claim to resolves all paradoxes is made on pg 118, 149.
14. Hugh Everett, Relative State Formulation of Quantum Mechanics, Reviews of Modern Physics vol 29, (July 1957) pp 454–462. The claim to resolve EPR is made on page 462
15. Steven Weinberg, Dreams of a Final Theory: The Search for the Fundamental Laws of Nature (1993), ISBN 0-09-922391-0, pg 68–69
16. Steven Weinberg Testing Quantum Mechanics, Annals of Physics Vol 194 #2 (1989), pg 336–386
17. John Archibald Wheeler, Geons, Black Holes & Quantum Foam, ISBN 0-393-31991-1. pp 268–270
18. Everett 1957, section 3, 2nd paragraph, 1st sentence
19. Everett [1956]1973, "Theory of the Universal Wavefunction", chapter 6 (e)
20. {{#invoke:Citation/CS1|citation |CitationClass=journal }}
21. Everett
22. Peter Byrne, The Many Worlds of Hugh Everett III: Multiple Universes, Mutual Assured Destruction, and the Meltdown of a Nuclear Family, ISBN 978-0-19-955227-6
23. "Whether you can observe a thing or not depends on the theory which you use. It is the theory which decides what can be observed." Albert Einstein to Werner Heisenberg, objecting to placing observables at the heart of the new quantum mechanics, during Heisenberg's 1926 lecture at Berlin; related by Heisenberg in 1968, quoted by Abdus Salam, Unification of Fundamental Forces, Cambridge University Press (1990) ISBN 0-521-37140-6, pp 98–101
24. N.P. Landsman, "The conclusion seems to be that no generally accepted derivation of the Born rule has been given to date, but this does not imply that such a derivation is impossible in principle.", in Compendium of Quantum Physics (eds.) F.Weinert, K. Hentschel, D.Greenberger and B. Falkenburg (Springer, 2008), ISBN 3-540-70622-4
25. {{#invoke:Citation/CS1|citation |CitationClass=journal }}
26. {{#invoke:Citation/CS1|citation |CitationClass=journal }}
27. James Hartle, Quantum Mechanics of Individual Systems, American Journal of Physics, 1968, vol 36 (#8), pp. 704–712
28. E. Farhi, J. Goldstone & S. Gutmann. How probability arises in quantum mechanics., Ann. Phys. (N.Y.) 192, 368–382 (1989).
29. {{#invoke:Citation/CS1|citation |CitationClass=journal }}
30. Deutsch, D. (1999). Quantum Theory of Probability and Decisions. Proceedings of the Royal Society of London A455, 3129–3137. [1].
31. David Wallace: Quantum Probability and Decision Theory, Revisited
32. David Wallace. Everettian Rationality: defending Deutsch’s approach to probability in the Everett interpretation. Stud. Hist. Phil. Mod. Phys. 34 (2003), 415–438.
33. David Wallace, 2009,A formal proof of the Born rule from decision-theoretic assumptions
34. Simon Saunders: Derivation of the Born rule from operational assumptions. Proc. Roy. Soc. Lond. A460, 1771–1788 (2004).
35. Simon Saunders, 2004: What is Probability?
36. David J Baker, Measurement Outcomes and Probability in Everettian Quantum Mechanics, Studies In History and Philosophy of Science Part B: Studies In History and Philosophy of Modern Physics, Volume 38, Issue 1, March 2007, Pages 153–169
37. H. Barnum, C. M. Caves, J. Finkelstein, C. A. Fuchs, R. Schack: Quantum Probability from Decision Theory? Proc. Roy. Soc. Lond. A456, 1175–1182 (2000).
38. Template:Cite news (Summary only).
39. Breitbart.com, Parallel universes exist – study, Sept 23 2007
40. Perimeter Institute, Seminar overview, Probability in the Everett interpretation: state of play, David Wallace – Oxford University, 21 Sept 2007
41. Perimeter Institute, Many worlds at 50 conference, September 21–24, 2007
42. Wojciech H. Zurek: Probabilities from entanglement, Born’s rule from envariance, Phys. Rev. A71, 052105 (2005).
43. M. Schlosshauer & A. Fine: On Zurek's derivation of the Born rule. Found. Phys. 35, 197–213 (2005).
44. Lutz Polley, Position eigenstates and the statistical axiom of quantum mechanics, contribution to conference Foundations of Probability and Physics, Vaxjo, Nov 27 – Dec 1, 2000
45. {{#invoke:Citation/CS1|citation |CitationClass=journal }}
46. Mark A. Rubin, Locality in the Everett Interpretation of Heisenberg-Picture Quantum Mechanics, Foundations of Physics Letters, 14, (2001) , pp. 301–322, Template:Arxiv
47. Paul C.W. Davies, Other Worlds, chapters 8 & 9 The Anthropic Principle & Is the Universe an accident?, (1980) ISBN 0-460-04400-1
48. Paul C.W. Davies, The Accidental Universe, (1982) ISBN 0-521-28692-1
49. Everett FAQ "Does many-worlds violate Ockham's Razor?"
50. Bryce Seligman DeWitt, Quantum Mechanics and Reality: Could the solution to the dilemma of indeterminism be a universe in which all possible outcomes of an experiment actually occur?, Physics Today, 23(9) pp 30–40 (September 1970); see equation 10
51. Penrose, R. The Road to Reality, §21.11
52. Tegmark, Max The Interpretation of Quantum Mechanics: Many Worlds or Many Words?, 1998. To quote: "What Everett does NOT postulate: “At certain magic instances, the world undergoes some sort of metaphysical 'split' into two branches that subsequently never interact.” This is not only a misrepresentation of the MWI, but also inconsistent with the Everett postulate, since the subsequent time evolution could in principle make the two terms...interfere. According to the MWI, there is, was and always will be only one wavefunction, and only decoherence calculations, not postulates, can tell us when it is a good approximation to treat two terms as non-interacting."
53. Paul C.W. Davies, J.R. Brown, The Ghost in the Atom (1986) ISBN 0-521-31316-3, pp. 34–38: "The Many-Universes Interpretation", pp 83–105 for David Deutsch's test of MWI and reversible quantum memories
54. Christoph Simon, 2009, Conscious observers clarify many worlds
55. Joseph Gerver, The past as backward movies of the future, Physics Today, letters followup, 24(4), (April 1971), pp 46–7
56. Bryce Seligman DeWitt, Physics Today,letters followup, 24(4), (April 1971), pp 43
57. Arnold Neumaier's comments on the Everett FAQ, 1999 & 2003
58. Everett [1956] 1973, "Theory of the Universal Wavefunction", chapter V, section 4 "Approximate Measurements", pp. 100–103 (e)
59. Henry Stapp, The basis problem in many-world theories, Canadian J. Phys. 80,1043–1052 (2002) [2]
60. Harvey R Brown and David Wallace, Solving the measurement problem: de Broglie–Bohm loses out to Everett, Foundations of Physics 35 (2005), pp. 517–540. [3]
61. Mark A Rubin (2005), There Is No Basis Ambiguity in Everett Quantum Mechanics, Foundations of Physics Letters, Volume 17, Number 4 / August, 2004, pp 323–341
62. Template:Cite web
63. Everett FAQ "Does many-worlds violate conservation of energy?"
64. Everett FAQ "How do probabilities emerge within many-worlds?"
65. Everett FAQ "When does Schrodinger's cat split?"
66. Template:Cite web
67. Deutsch, D., (1986) ‘Three experimental implications of the Everett interpretation’, in R. Penrose and C.J. Isham (eds.), Quantum Concepts of Space and Time, Oxford: The Clarendon Press, pp. 204–214.
68. {{#invoke:Citation/CS1|citation |CitationClass=journal }}
69. Template:Cite arXiv
70. Jeffrey A. Barrett, The Quantum Mechanics of Minds and Worlds, Oxford University Press, 1999. According to Barrett (loc. cit. Chapter 6) "There are many many-worlds interpretations."
71. Template:Cite web Again, according to Barrett "It is... unclear precisely how this was supposed to work."
72. Template:Cite news
73. Eugene Shikhovtsev's Biography of Everett, in particular see "Keith Lynch remembers 1979–1980"
74. David Deutsch, The Fabric of Reality: The Science of Parallel Universes And Its Implications, Penguin Books (1998), ISBN 0-14-027541-X Cite error: Invalid <ref> tag; name "deutsch98" defined multiple times with different content
75. David Deutsch, Quantum theory, the Church–Turing principle and the universal quantum computer, Proceedings of the Royal Society of London A 400, (1985), pp. 97–117
77. Award winning 1995 Channel 4 documentary "Reality on the rocks: Beyond our Ken" [4] where, in response to Ken Campbell's question "all these trillions of Universes of the Multiverse, are they as real as this one seems to be to me?" Hawking states, "Yes.... According to Feynman's idea, every possible history (of Ken) is equally real."
78. {{#invoke:citation/CS1|citation |CitationClass=book }}
79. {{#invoke:citation/CS1|citation |CitationClass=book }}
80. {{#invoke:citation/CS1|citation |CitationClass=book }}
81. {{#invoke:citation/CS1|citation |CitationClass=book }}
82. {{#invoke:citation/CS1|citation |CitationClass=book }}
83. {{#invoke:citation/CS1|citation |CitationClass=book }}
84. Max Tegmark on many-worlds (contains MWI poll)
85. Template:Cite web
86. Template:Cite web
87. Interpretation of Quantum Mechanics class survey
88. "A Snapshot of Foundational Attitudes Toward Quantum Mechanics", Schlosshauer et al 2013
89. Template:Cite web
90. {{#invoke:citation/CS1|citation |CitationClass=book }} (If MWI is true, apocalyptic particle accelerators won't function as advertised).
91. {{#invoke:Citation/CS1|citation |CitationClass=journal }}
92. {{#invoke:Citation/CS1|citation |CitationClass=journal }}
93. Tegmark, Max The Interpretation of Quantum Mechanics: Many Worlds or Many Words?, 1998
94. Template:Cite web
95. W.M.Itano et al., Phys.Rev. A47,3354 (1993).
|
# [OS X TeX] Sierra 10.12.4: TexShop Preview jumpy
Herbert Schulz herbs at wideopenwest.com
Mon Apr 10 16:14:50 CEST 2017
> On Apr 10, 2017, at 8:11 AM, mmurray <michael.murray at adelaide.edu.au> wrote:
>
> I just upgraded to Sierra 10.12.4 and have been using the latest TeXShop.
> Suddenly if I try to scroll down a pdf preview on my MacBook Air with two
> fingers on the trackpad instead of scrolling it jumps to the next page.
> Scrolling with the scroll bar is OK. Scrolling the pdf in Preview with two
> fingers on the trackpad is fine.
>
> For example if I use
>
> \documentclass{article}
>
> \begin{document}
>
> Page 1, Page 1, Page 1
>
> \newpage
>
> Page 2, Page 2, Page 2
>
> \newpage
>
> Page 3, Page 3, Page 3
>
> \newpage
>
> Page 4, Page 4, Page 4
>
> \end{document}
>
> I start to drag on page 1 and before the lines of text disappear up the
> screen it has jumped to pages 2 and then 3.
>
> The pdf is here
>
> https://www.dropbox.com/s/tuiik07msooozu9/test-jump.pdf?dl=0
>
> Any ideas to stop this ?
>
> Thanks - Michael
Howdy,
Hmmm... no scrolling problem here. What are your settings in the Preview tab of TeXShop->Preferences? Especially, what is the setting for the Default page Style? If you have it as Single Page you would see that behavior. make sure it's set to Multi-Page (or Double-Multi-Page if that's your preference).
Good Luck,
Herb Schulz
(herbs at wideopenwest dot com)
----------- Please Consult the Following Before Posting -----------
TeX FAQ: http://www.tex.ac.uk/faq
List Reminders and Etiquette: https://www.esm.psu.edu/~gray/tex/
List Archives: http://dir.gmane.org/gmane.comp.tex.macosx
https://email.esm.psu.edu/pipermail/macosx-tex/
TeX on Mac OS X Website: http://mactex-wiki.tug.org/
List Info: https://email.esm.psu.edu/mailman/listinfo/macosx-tex
|
# Functional Equation
1. Apr 23, 2009
### ritwik06
1. The problem statement, all variables and given/known data
Suppose a function satisfies the conditions
1. f(x+y) = (f(x)+f(y))/(1+f(x)f(y)) for all real x & y
2. f '(0)=1.
3. -1<f(x)<1 for all real x
Show that the function is increasing throughout its domain. Then find the value:
Limitx -> Infinity f(x)x
3. The attempt at a solution
I proceed by putting x,y=0 in eq 1.
I get the following roots for f(0)={-1,0,1}
But if I take f(0)={-1,1}, f(x) will become a constant function and will be equal to +1 when f(0)=1 and -1 when f(0)=-1, thereby violating condition 3
So f(0)=0
From equation 1: I assume 'y' as a constant and differentiate wrt x
f ' (x+y)=(f ' (x)(1-f2(y))) / (1+f(x)f(y))2
I put x=0;
I get f ' (y)=1-f2(y) Using condition 3; I prove that the derivative is always positive.
I have been able able to solve the first part of the question. But I couldn't evaluate the limit
Limitx -> Infinity f(x)x. Please help me on the limit part.
Last edited: Apr 23, 2009
2. Apr 23, 2009
### Billy Bob
It may be helpful to "cheat" and use the fact that f(x) is really tanh x, to figure out what to do. Then go back and do it without using that fact.
First show lim f(y)=1 as y approaches infinity.
After that, then you find your limit, which has indeterminate form 1^infty, by using natural log and l'Hopital, just like you would do if you knew f was tanh. Unfortunately with f, you don't have all the trig identities at your disposal. Take a stab at it and ask again if you get stuck.
3. Apr 23, 2009
### Dick
You could take Billy Bob's hint farther and prove that f(x) really is tanh(x). Take f(x+e) (e is epsilon). Put that into your formula for f and rearrange it into a difference quotient and take the limit as e->0. Notice since f'(0)=1, lim f(e)/e ->1. That will give you a differential equation to solve for f.
4. Apr 29, 2009
### ritwik06
Thanks a lot for helping me solve my problem :D
|
## A little bit of LaTeX
Posted on
I’ve been busy watching videos. A lot of them, as part of my learning process for quantum computing. To concretize my learning, I’ve been taking notes. As quantum computing is hardcore math, it’s impossible to not have some sort of equations in notes. I’ve never needed to learn or use LaTeX before: all simple algebraic equations can be written in plain text using some jugaad, say using superscripts for exponentials. But how do you represent column vectors and matrices? That’s where LaTeX comes super-handy. As I’m using it more, the more fun it gets.
Auto-LaTeX Equations is an excellent add-on for Google Docs that makes it super-easy to write LaTeX code and convert it into good-quality images of math stuff and equations.
An example LaTeX render from my notes is this matrix that represents the phase shift quantum gate:
## Linear Algebra
Posted on
Such is the impact of linear algebra in the world of computer science that today it’s impossible (for all practical reasons) to stay away from the topic. Computer graphics, machine learning and, even, quantum computing all model their data using the same language–you guessed it–linear algebra.
When we were taught this seemingly obscure topic at high school, it was hard to imagine then that it would come back haunting with such force. What then felt like “why read what I’d probably never-ever use again in life?” now feels like “why the hell did I not revisit this a couple of years back?”. The article 5 Reasons to Learn Linear Algebra for Machine Learning makes a great (but cautionary) case for why you should too start learning the subject right away!
I have been a machine learning practitioner since more than a year now, but never did I learn LA deep enough to be able to interpret the results of certain deep technical research papers on ML. So, it’s good that I pretty much have to learn this subject now because of my current research work on quantum computing, something you just cannot get a hang of without knowing the various data notations which unsurprisingly are some form of LA notations.
Now, how do you actually learn the thing enough to get deeper into ML or QC or whatever you are working on that requires linear algebra? Good question. The short answer is — do NOT buy a book and spend months. The shorter answer is — check this YouTube course Essence of Linear Algera by 3Blue1Brown. Other cool learning resources exist on the subject, such as Khan Academy, but I highlighted the one by 3Blue1Brown just because that’s the one I’m learning from. And it’s LEGENDARILY awesome because of it’s purely visual explanations (which, btw, are animations created using–you guessed it again–linear algebra!).
|
## Project Euler个人解答(Problem 51~60)
2013年12月28日 发表评论 阅读评论
# Problem 51:Prime digit replacements
By replacing the 1st digit of the 2-digit number *3, it turns out that six of the nine possible values: 13, 23, 43, 53, 73, and 83, are all prime.
By replacing the 3rd and 4th digits of 56**3 with the same digit, this 5-digit number is the first example having seven primes among the ten generated numbers, yielding the family: 56003, 56113, 56333, 56443, 56663, 56773, and 56993. Consequently 56003, being the first member of this family, is the smallest prime with this property.
Find the smallest prime which, by replacing part of the number (not necessarily adjacent digits) with the same digit, is part of an eight prime value family.
1 2 3 4 5 6 7 8 9 10 0 6 6 0 6 6 0 6 6 0 6 1 7 7 10 7 7 10 7 7 10 7 2 7 7 10 7 7 10 7 7 10 7
• 最后一位必须是1,3,7,9其中之一
• 除去最后一位,剩余位数至少有3个0或者三个1或者三个2
• 这个数本身是个质数【顺便保证了这个数不可能被3整除】
[0, 1, 1, 1, 0],[1, 0, 1, 1, 0]
[1, 1, 0, 1, 0],[1, 1, 1, 0, 0]
[0, 0, 1, 1, 1, 0],[0, 1, 0, 1, 1, 0]
[0, 1, 1, 0, 1, 0],[0, 1, 1, 1, 0, 0]
[1, 0, 0, 1, 1, 0],[1, 0, 1, 0, 1, 0]
[1, 0, 1, 1, 0, 0],[1, 1, 0, 0, 1, 0]
[1, 1, 0, 1, 0, 0],[1, 1, 1, 0, 0, 0]
### Code:Python
#判断是否是质数
def PrimeQ(n):
for i in range(2,int(sqrt(n))+1):
if n%i == 0:
return False;
return True;
#生成n挑k个数的模板,比如bits=5,k=3,返回会类似[1,0,1,0,1]
def NPickK(bits,k):
if k == 0:
yield [0]*bits
elif k == bits:
yield [1]*k
else:
for first in [0,1]:
for sub in NPickK(bits-1,k-first):
yield [first]+sub
#生成实际可用的模板,比如bit=7
#那么会生成NPickK(6,3)和NPickK(6,6)的模板,然后最后一位补零
#判断模板内是不是同一个数
return len(set(num)) == 1
#计算根据模板替换后质数的个数
counter = 0;
strlist = [str(dig) if mask[x]==1 else strlist[x]
counter += PrimeQ(int(''.join(strlist)))
return counter
#测试这个数是否有符合条件
def template_test(n):
strlist = list(str(n))
return True
return False
#main
num = 1000
while True:
#是否是1,3,7,9结尾
if not mod(num,10) in [1,3,7,9]:
pass
#是不是有超过3个的0,1,2
elif not (list(str(num)).count('0')>=3 or \
list(str(num)).count('1')>=3 or \
list(str(num)).count('2')>=3):
pass
elif not PrimeQ(num):
pass
elif not template_test(num):
pass
else:
print num;
break
num += 1
# Problem 52:Permuted multiples
It can be seen that the number, 125874, and its double, 251748, contain exactly the same digits, but in a different order.
Find the smallest positive integer, x, such that 2x, 3x, 4x, 5x, and 6x, contain the same digits.
125874和它的二倍,251748, 包含着同样的数字,只是顺序不同。
### Code:Python
def sortchar(n):
L = list(str(n))
L.sort()
return L
def QQ(n):
if sortchar(n)==\
sortchar(2*n)==\
sortchar(3*n)==\
sortchar(4*n)==\
sortchar(5*n)==\
sortchar(6*n):
return True
return False
bit = 3
k = 102
while True:
if QQ(k):
print k
break
k += 3
if k*6 > 10**bit:
k = 10**bit+2
bit += 1
# Problem 53:Combinatoric selections
There are exactly ten ways of selecting three from five, 12345:
123, 124, 125, 134, 135, 145, 234, 235, 245, and 345
In combinatorics, we use the notation, $^5C_3=10$.
In general,
$^nC_r=\dfrac{n!}{r!(n-r)!}$,where r ≤ n, n! = n×(n−1)×...×3×2×1, and 0! = 1.
It is not until n = 23, that a value exceeds one-million: $^{23}C_{10}= 1144066$ .
How many, not necessarily distinct, values of $^nC_r$, for 1 ≤ n ≤ 100, are greater than one-million?
123, 124, 125, 134, 135, 145, 234, 235, 245, and 345
$^nC_r=\dfrac{n!}{r!(n-r)!}$,其中 r ≤ n, n! = n×(n−1)×...×3×2×1, 并且0! = 1.
n = 23时产生第一个超过一百万的数:$^{23}C_{10}= 1144066$ .
$\dfrac{^nC_{r+1}}{^nC_r}=\dfrac{n-r}{r+1}$
### Code:Python
counter = 0;
for n in range(23,101):
d = n;
r = 1;
while d < 1000000 and r <= int(n/2):
d = d * (n-r)/(r+1)
r += 1
counter += n-2*r+1
print counter
# Problem 54:Poker hands
In the card game poker, a hand consists of five cards and are ranked, from lowest to highest, in the following way:
• High Card: Highest value card.
• One Pair: Two cards of the same value.
• Two Pairs: Two different pairs.
• Three of a Kind: Three cards of the same value.
• Straight: All cards are consecutive values.
• Flush: All cards of the same suit.
• Full House: Three of a kind and a pair.
• Four of a Kind: Four cards of the same value.
• Straight Flush: All cards are consecutive values of same suit.
• Royal Flush: Ten, Jack, Queen, King, Ace, in same suit.
The cards are valued in the order:
2, 3, 4, 5, 6, 7, 8, 9, 10, Jack, Queen, King, Ace.
If two players have the same ranked hands then the rank made up of the highest value wins; for example, a pair of eights beats a pair of fives (see example 1 below). But if two ranks tie, for example, both players have a pair of queens, then highest cards in each hand are compared (see example 4 below); if the highest cards tie then the next highest cards are compared, and so on.
Consider the following five hands dealt to two players:
Hand Player 1 Player 2 Winner 1 5H 5C 6S 7S KD Pair of Fives 2C 3S 8S 8D TD Pair of Eights Player 2 2 5D 8C 9S JS AC Highest card Ace 2C 5C 7D 8S QH Highest card Queen Player 1 3 2D 9C AS AH AC Three Aces 3D 6D 7D TD QD Flush with Diamonds Player 2 4 4D 6S 9H QH QC Pair of Queens Highest card Nine 3D 6D 7H QD QS Pair of Queens Highest card Seven Player 1 5 2H 2D 4C 4D 4S Full House With Three Fours 3C 3D 3S 9S 9D Full House with Three Threes Player 1
The file, poker.txt, contains one-thousand random hands dealt to two players. Each line of the file contains ten cards (separated by a single space): the first five are Player 1's cards and the last five are Player 2's cards. You can assume that all hands are valid (no invalid characters or repeated cards), each player's hand is in no specific order, and in each hand there is a clear winner.
How many hands does Player 1 win?
• High Card: 最高值的牌.
• One Pair: 两张面值一样的牌.
• Two Pairs: 两个值不同的One Pair.
• Three of a Kind: 三张面值一样的牌.
• Straight: 所有的牌面值为连续数值.
• Flush: 所有的牌花色相同.
• Full House: Three of a Kind 加一个One Pair.
• Four of a Kind: 四张牌面值相同.
• Straight Flush: 所有的牌花色相同并且为连续数值.
• Royal Flush: 10,J,Q,K和A,并且为相同花色。
2, 3, 4, 5, 6, 7, 8, 9, 10, Jack, Queen, King, Ace.
局 玩家 1 玩家 2 胜利者 1 5H 5C 6S 7S KD 一对5 2C 3S 8S 8D TD 一对8 玩家 2 2 5D 8C 9S JS AC 最大面值牌A 2C 5C 7D 8S QH 最大面值牌Q 玩家 1 3 2D 9C AS AH AC 三个A 3D 6D 7D TD QD 方片Flush 玩家 2 4 4D 6S 9H QH QC 一对Q 最大牌9 3D 6D 7H QD QS 一对Q 最大牌7 玩家 1 5 2H 2D 4C 4D 4S 三个4的Full House 3C 3D 3S 9S 9D 三个3的Full House 玩家 1
### Code:Mathematica
PokeInfo = Import["poker.txt", "Table"];
GetWinner[game_] := Block[{p1p, p2p, p1c, p2c, lv1, lv2},
(*替换成数字,并排序*)
p1p = ToExpression[(StringTake[#, 1] & /@ game[[1 ;; 5]]) /.
{"A" ->14, "K" -> 13, "Q" -> 12, "J" -> 11, "T" -> 10}] // Sort;
p2p = ToExpression[(StringTake[#, 1] & /@ game[[6 ;; 10]]) /.
{"A" -> 14, "K" -> 13, "Q" -> 12,"J" -> 11, "T" -> 10}] // Sort;
(*分别获取玩家的牌*)
p1c = StringTake[#, {2}] & /@ game[[1 ;; 5]];
p2c = StringTake[#, {2}] & /@ game[[6 ;; 10]];
(*获得玩家的牌的类型*)
lv1 = GetLevel[p1p, p1c];
lv2 = GetLevel[p2p, p2c];
WHOWIN[lv1, lv2]]
GetLevel[p_, c_] := Block[{},
Which[
(*等级1000,同花顺*)
Length@Union@c == 1 && Differences[p] === {1, 1, 1, 1},{100,Reverse[p]},
(*等级99,四带一*)
Sort[Tally[p][[All, 2]]] === {1,4},
{99, {Select[Tally[p], #[[2]] == 4 &][[1, 1]],
Select[Tally[p], #[[2]] == 1 &][[1, 1]]}},
(*等级98,三带二*)
Sort[Tally[p][[All, 2]]] === {2,3},
{98, {Select[Tally[p], #[[2]] == 3 &][[1, 1]],
Select[Tally[p], #[[2]] == 2 &][[1, 1]]}},
(*等级97,一色*)
Length@Union@c == 1 , {97, Reverse[p]},
(*等级96,一条龙*)
Differences[p] === {1, 1, 1, 1}, {96, Reverse[p]},
(*等级95,三张一样*)
Sort[Tally[p][[All, 2]]] === {1, 1,3},{95, {Select[Tally[p], #[[2]] == 3 &][[1, 1]],
Select[Tally[p], #[[2]] == 1 &][[All, 1]] // Sort // Reverse} //Flatten},
(*等级94,两对*)
Sort[Tally[p][[All, 2]]] === {1, 2, 2},
{94, {Select[Tally[p], (#[[2]] == 2 &)][[All, 1]]//Sort//Reverse,
Select[Tally[p], #[[2]] == 1 &][[1, 1]]} // Flatten},
(*等级93,一对*)
Sort[Tally[p][[All, 2]]] === {1, 1, 1, 2},
{93, {Select[Tally[p], (#[[2]] == 2 &)][[All, 1]],
Select[Tally[p], #[[2]] == 1 &][[All, 1]] // Sort // Reverse} //Flatten},
(*啥都没有,等级为最大那张牌的点*)
True, {Max[p], Reverse[p]}]
]
(*判断胜负*)
WHOWIN[v1_, v2_] := Block[{},
Which[v1[[1]] > v2[[1]], 1,
v1[[1]] < v2[[1]], 2,
v1[[2, 1]] > v2[[2, 1]], 1,
v1[[2, 1]] < v2[[2, 1]], 2,
v1[[2, 2]] > v2[[2, 2]], 1,
v1[[2, 2]] < v2[[2, 2]], 2,
v1[[2, 3]] > v2[[2, 3]], 1,
v1[[2, 3]] < v2[[2, 3]], 2,
v1[[2, 4]] > v2[[2, 4]], 1,
v1[[2, 4]] < v2[[2, 4]], 2,
v1[[2, 5]] > v2[[2, 5]], 1,
v1[[2, 5]] < v2[[2, 5]], 2,
True, 0
]]
(*计算A胜了多少场*)
Select[PokeInfo, GetWinner[#] == 1 &] // Length
# Problem 55:Lychrel numbers
If we take 47, reverse and add, 47 + 74 = 121, which is palindromic.
Not all numbers produce palindromes so quickly. For example,
349 + 943 = 1292,
1292 + 2921 = 4213
4213 + 3124 = 7337
That is, 349 took three iterations to arrive at a palindrome.
Although no one has proved it yet, it is thought that some numbers, like 196, never produce a palindrome. A number that never forms a palindrome through the reverse and add process is called a Lychrel number. Due to the theoretical nature of these numbers, and for the purpose of this problem, we shall assume that a number is Lychrel until proven otherwise. In addition you are given that for every number below ten-thousand, it will either (i) become a palindrome in less than fifty iterations, or, (ii) no one, with all the computing power that exists, has managed so far to map it to a palindrome. In fact, 10677 is the first number to be shown to require over fifty iterations before producing a palindrome: 4668731596684224866951378664 (53 iterations, 28-digits).
Surprisingly, there are palindromic numbers that are themselves Lychrel numbers; the first example is 4994.
How many Lychrel numbers are there below ten-thousand?
NOTE: Wording was modified slightly on 24 April 2007 to emphasise the theoretical nature of Lychrel numbers.
349 + 943 = 1292,
1292 + 2921 = 4213
4213 + 3124 = 7337
10000以下一共有多少个Lychrel数?
### Code:Python
def GetPalindromic(n):
return int(''.join(list(str(n))[::-1]))
def IsPalindromic(n):
return n == GetPalindromic(n)
def IsLychrel(n):
cc = 0
n = n + GetPalindromic(n)
while cc <= 50:
if IsPalindromic(n):
return False
n = n + GetPalindromic(n)
cc += 1
return True
result = filter(IsLychrel,range(10001))
print len(result)
### Code:Mathematica
GetHW[x_] := FromDigits@Reverse@IntegerDigits[x]
IsNotHW[x_] := FromDigits@Reverse@IntegerDigits[x] != x
IsLychrel[t_] := Block[{x = t + GetHW[t], Result = True},
For[i = 1, i < = 50, i += 1,
If[IsNotHW[x], x = x + GetHW[x], Result = False; Break]];
Result]
Select[Range[1, 10000], IsLychrel] // Length
# Problem 56:Powerful digit sum
A googol (10100) is a massive number: one followed by one-hundred zeros; 100100 is almost unimaginably large: one followed by two-hundred zeros. Despite their size, the sum of the digits in each number is only 1.
Considering natural numbers of the form, ab, where a, b < 100, what is the maximum digital sum?
Python表示不甘示弱。。
### Code:Mathematica
Max@Table[Total@IntegerDigits@(a^b), {a, 1, 99}, {b, 1, 99}]
### Code:Python
print max([sum(map(int,list(str(a**b))))
for a in range(1,100)
for b in range(1,100)])
# Problem 57:Square root convergents
It is possible to show that the square root of two can be expressed as an infinite continued fraction.
$\sqrt{2}=1+\dfrac{1}{2+\dfrac{1}{2+\dfrac{1}{2+...}}}=1.414213...$
By expanding this for the first four iterations, we get:
1 + $\dfrac{1}{2}$ = 3/2 = 1.5
1 + $\dfrac{1}{2+\dfrac{1}{2}}$ = 7/5 = 1.4
1 + $\dfrac{1}{2+\dfrac{1}{2+\dfrac{1}{2}}}$= 17/12 = 1.41666...
1 + $\dfrac{1}{2+\dfrac{1}{2+\dfrac{1}{2+\dfrac{1}{2}}}}$ = 41/29 = 1.41379...
The next three expansions are 99/70, 239/169, and 577/408, but the eighth expansion, 1393/985, is the first example where the number of digits in the numerator exceeds the number of digits in the denominator.
In the first one-thousand expansions, how many fractions contain a numerator with more digits than denominator?
2的平方根可以被表示为无限延伸的分数:
$\sqrt{2}=1+\dfrac{1}{2+\dfrac{1}{2+\dfrac{1}{2+...}}}=1.414213...$
1 + $\dfrac{1}{2}$ = 3/2 = 1.5
1 + $\dfrac{1}{2+\dfrac{1}{2}}$ = 7/5 = 1.4
1 + $\dfrac{1}{2+\dfrac{1}{2+\dfrac{1}{2}}}$= 17/12 = 1.41666...
1 + $\dfrac{1}{2+\dfrac{1}{2+\dfrac{1}{2+\dfrac{1}{2}}}}$ = 41/29 = 1.41379...
### Code:Python
a = 2
b = 3
counter = 0
for i in range(1,1001):
a,b = b+a,2*a+b
if len(str(b)) > len(str(a)):
counter += 1
print counter
### Code:Mathematica
Select[NestList[{2 #[[2]] + #[[1]], #[[2]] + #[[1]]} &, {3, 2}, 1000],
Length[IntegerDigits[#[[1]]]] > Length[IntegerDigits[#[[2]]]] &] // Length
# Problem 58:Spiral primes
Starting with 1 and spiralling anticlockwise in the following way, a square spiral with side length 7 is formed.
37 36 35 34 33 32 31
38 17 16 15 14 13 30
39 18 5 4 3 12 29
40 19 6 1 2 11 28
41 20 7 8 9 10 27
42 21 22 23 24 25 26
43 44 45 46 47 48 49
It is interesting to note that the odd squares lie along the bottom right diagonal, but what is more interesting is that 8 out of the 13 numbers lying along both diagonals are prime; that is, a ratio of 8/13 ≈ 62%.
If one complete new layer is wrapped around the spiral above, a square spiral with side length 9 will be formed. If this process is continued, what is the side length of the square spiral for which the ratio of primes along both diagonals first falls below 10%?
37 36 35 34 33 32 31
38 17 16 15 14 13 30
39 18 5 4 3 12 29
40 19 6 1 2 11 28
41 20 7 8 9 10 27
42 21 22 23 24 25 26
43 44 45 46 47 48 49
### Code:Python
def PrimeQ(n):
if n <= 1:
return False
for i in xrange(2,int(n**0.5+1)):
if n%i == 0:
return False;
return True;
k = 1
dif = 2
prime_counter = 0
while True:
k += dif * 4
dif += 2
prime_counter += PrimeQ(k+dif)
prime_counter += PrimeQ(k+2*dif)
prime_counter += PrimeQ(k+3*dif)
if prime_counter*10<(2*dif+1):
print dif+1
break
# Problem 59:XOR decryption
Each character on a computer is assigned a unique code and the preferred standard is ASCII (American Standard Code for Information Interchange). For example, uppercase A = 65, asterisk (*) = 42, and lowercase k = 107.
A modern encryption method is to take a text file, convert the bytes to ASCII, then XOR each byte with a given value, taken from a secret key. The advantage with the XOR function is that using the same encryption key on the cipher text, restores the plain text; for example, 65 XOR 42 = 107, then 107 XOR 42 = 65.
For unbreakable encryption, the key is the same length as the plain text message, and the key is made up of random bytes. The user would keep the encrypted message and the encryption key in different locations, and without both "halves", it is impossible to decrypt the message.
Unfortunately, this method is impractical for most users, so the modified method is to use a password as a key. If the password is shorter than the message, which is likely, the key is repeated cyclically throughout the message. The balance for this method is using a sufficiently long password key for security, but short enough to be memorable.
Your task has been made easy, as the encryption key consists of three lower case characters. Using cipher1.txt (right click and 'Save Link/Target As...'), a file containing the encrypted ASCII codes, and the knowledge that the plain text must contain common English words, decrypt the message and find the sum of the ASCII values in the original text.
### Code:Mathematica
(*读取文件,转为数字*)
cipher = ToExpression@StringSplit[ToLowerCase@Import["cipher1.txt"], ","];
(*获得加密原文*)
decodetext[key_, ci_] := Block[{test, result},
(*循环密钥使之与密文等长*)
test = Table[key[[Mod[i - 1, 3] + 1]], {i, 1, Length[ci]}];
(*译码,key是密钥,ci是密文,返回a,e,i,s的词频数*)
decode[key_, ci_] := Total[
Count[decodetext[key, ci], #]&/@(ToCharacterCode/@{"e", "a", "i", "s"}//Flatten)]
(*测试所有加密方案*)
ParallelTable[{{a, b, c}, decode[{a, b, c}, cipher]},
{a, 97, 122}, {b, 97, 122}, {c, 97, 122}];
(*找出加密方案中最佳的那个*)
(Reverse@SortBy[Flatten[%, 2], Last])//First
(*获得原文全部ASCII码之和*)
decodetext[First@%, cipher] // Total
(*统计the*)
decode2[key_, ci_]:=StringCount[FromCharacterCode@decodetext[key, ci], "the"]
(*获取原文*)
decodetext[First@%, cipher]//FromCharacterCode
# Problem 60:Prime pair sets
The primes 3, 7, 109, and 673, are quite remarkable. By taking any two primes and concatenating them in any order the result will always be prime. For example, taking 7 and 109, both 7109 and 1097 are prime. The sum of these four primes, 792, represents the lowest sum for a set of four primes with this property.
Find the lowest sum for a set of five primes for which any two primes concatenate to produce another prime.
### Code:C++
#define T 100000000
int IsPrime[T] = {1};
int* primelist = NULL;
//初始化质数表
void init()
{
for(int i = 0;i < T;i++)
IsPrime[i] = 1;
IsPrime[0] = -1;
IsPrime[1] = -1;
}
//筛选法找质数
void Seive()
{
int counter = T-2;
for(int i = 2;i <= T/2;i++)
{
if(IsPrime[i] != 1)
continue;
int j = 2*i;
while(j < T)
{
if(IsPrime[j] == 1)
counter--;
IsPrime[j] = -1;
j += i;
}
}
primelist =new int[counter];
int cc = 0;
for(int i = 0;i < T;i++)
if(IsPrime[i] == 1)
primelist[cc++] = i;
}
//数字拼接,673+123=673123
int join(int a,int b)
{
int k = int(log10(double(a))+1);
return int(b*pow(10.0,k)+a);
}
//判断两个数连接起来后是不是质数
bool Judge(int i,int j)
{
return IsPrime[join(i,j)] == 1 && IsPrime[join(j,i)] == 1;
}
//判断list里面最后一个数和前面每个数连接起来后都是质数
int JudgeList(int list[],int num)
{
for(int i = 0;i < num-1;i++)
if(!Judge(list[i],list[num-1]))
return 0;
return 1;
}
//递归求解,i是前一个数,idx是现在算到第几个数了
void figure(int i,int list[],int idx)
{
if(idx == 6)
{
for(int d = 0;d < 5;d++)
cout<<list[d]<<" ";
system("pause");
}
for(int j = i-1;j >= 5-idx;j--)
{
list[idx-1] = primelist[j];
if (!JudgeList(list,idx))
continue;
figure(j,list,idx+1);
}
}
int main(int argc,char* argv[])
{
init();
Seive();
int list[5];
for(int i = 4;i < T;i++)
{
list[0] = primelist[i];
figure(i,list,2);
}
return 0;
}
【完】
1. 2013年12月28日13:26 | #1
看起来好高端,啊哈哈
• 2013年12月28日13:29 | #2
其实一点也不。。只是练练算法题和编程。。。防止头脑老化。。
• 2013年12月28日13:30 | #3
真上进
• 2013年12月28日13:33 | #4
爱好而已。。。将来找工作希望有点用。。
2. 2014年1月5日21:07 | #5
pe57:Count[NestList[{2 #2 + #, #2 + #} & @@ # &, {3, 2}, 1000], {a_, b_} /; IntegerLength@a > IntegerLength@b]
• 2014年1月6日00:04 | #6
研读完您的留言代码,获益良多,学会了很多新函数和思想,非常感谢!
3. 2014年11月10日17:55 | #7
打扰了!, 关于PE51,我找到个11113,替换三个数之后得{11113, 19993, 22123, 31333, 41443, 77713, 81883, 88813}都是质数,一共刚好8个,,这样行不?
• 2014年11月10日18:00 | #8
对于56**3,两个星号替换的数字必须一样
• 2014年11月11日08:45 | #9
哈哈,,有道理,,看题不仔细
4. 2014年11月11日23:22 | #10
|
ISSN 0439-755X
CN 11-1911/B
Acta Psychologica Sinica ›› 2014, Vol. 46 ›› Issue (11): 1772-1781.
### Perceiving Groups and Persons: Inverse Base-Rate Effect
CHEN Shujuan1,2; WANG Pei1; LIANG Yajun1
1. (1 College of Education, Shanghai Normal University, Shanghai 200234, China) (2 College of Education, Ningxia University, Yinchuan 750021, China)
• Received:2013-03-27 Published:2014-11-25 Online:2014-11-25
• Contact: WANG Pei, E-mail: [email protected]
Abstract:
Hamilton and Sherman (1996) argued that forming an impression of an individual and developing a conception of a group were governed by the same fundamental information-processing system, while researchers didn’t find adequate evidences supporting this argument. According to the Inverse Base-Rate Effect (IBRE) (Medin & Edelson, 1988), Sherman et al. (2009) discovered the base-rate of different kinds of cues had an impaction on stereotype formation. In addition, similar phenomenon was found in person perception. Based in these ideas and findings, this research sought to investigate whether the impaction modes of base-rate on the perception of groups and persons would be the same using IBRE design. Two experiments provided evidences for the above hypothesis. In experiment 1, the impression formation targets were groups, and 39 Chinese undergraduates participated, and the experiment design was the same to experiment 1 of Sherman et al. (2009) except Chinese materials. In experiment 2, the targets were individuals and there were 46 participants. Based on IBRE problem construct, the basic design involved a pair of persons, designated an acquaintance and a stranger. The acquaintance and the stranger occurred with a 3:1 base ratio. The acquaintance was characterized by two traits labeled by PC and I (PC was a perfect predictor of the common group, and I was an imperfect predictor), and the stranger was also characterized by two traits labeled by PR and I (PR was a perfect predictor of the rare group). PC was the perfect trait of the acquaintance, which always predicted the acquaintance and never the stranger; and PR was the perfect trait of the stranger, which always predicted the stranger and never the acquaintance; and I was an imperfect predictor of the two individuals in that the acquaintance and the stranger were both associated with this trait. The experiment was comprised of 2 basic designs. Participants were asked to engage in an impression formation task. During training, participants were asked to judge different persons from patterns of traits, and given feedbacks. Following training, participants were tested with combinations of traits not shown during training, which were PC, PR, I, PC+PR, PC+PR+I. The result of experiment 1 was consisting with the one of (Sherman et al., 2009), in which participants showed strong selection preference for a minority group. Results of the experiment 2 showed: firstly, base-rate information were learned and consistently applied to training and testing cases. Secondly, the frequent traits (PC and I) and the acquaintance were learned earlier than the infrequent trait (PR) and the stranger, so that the former ones were encoded by his typical features and the infrequent targets were encoded by his distinctive features, which resulted in the Inverse Base-Rate Effect (IBRE). In conclusion, whatever the impression formation targets were groups or individuals, information base-rate would influence the cognition processes and lead similarly progressing bias. The Inverse Base-Rate Effect might be a general phenomenon in social cognition.
|
# on 27-Aug-2020 (Thu)
#### Annotation 5729929530636
#biology #neurology #sleep Even the reports from the Guinness World Record attempt at sleeplessness (Randy Gardner's awakathon in 1964 lasted 11 days) trivialized the effects of sleeplessness
Good sleep, good learning, good life
does not make people die (at least not immediately). It does make them feel miserable, but the ease with which we recover by getting just one good night of sleep seems to make sleep look cheap. <span>Even the reports from the Guinness World Record attempt at sleeplessness (Randy Gardner's awakathon in 1964 lasted 11 days) trivialized the effects of sleeplessness. Many books on psychiatry and psychology still state that there aren't any significant side effects to prolonged sleeplessness! This is false! The Guinness Book of Records has since wit
#### Flashcard 5729931103500
Tags
#biology #neurology #sleep
Question
Randy Gardner's sleep-less world record lasted...
11 days
status measured difficulty not learned 37% [default] 0
#### Parent (intermediate) annotation
Open it
Even the reports from the Guinness World Record attempt at sleeplessness (Randy Gardner's awakathon in 1964 lasted 11 days) trivialized the effects of sleeplessness
#### Original toplevel document
Good sleep, good learning, good life
does not make people die (at least not immediately). It does make them feel miserable, but the ease with which we recover by getting just one good night of sleep seems to make sleep look cheap. <span>Even the reports from the Guinness World Record attempt at sleeplessness (Randy Gardner's awakathon in 1964 lasted 11 days) trivialized the effects of sleeplessness. Many books on psychiatry and psychology still state that there aren't any significant side effects to prolonged sleeplessness! This is false! The Guinness Book of Records has since wit
#### Annotation 5729933462796
#biology #neurology #sleep Nearly everyone has pulled an all nighter once upon a time. Even if this is often an unpleasant experience, it nearly always ends up with a 100% recovery after a single night of solid sleep. It is therefore a bit surprising to know that that a week or two of sleep deprivation can result in death! Sleep researchers constructed a cruel contraption that would wake up rats as soon as they fell asleep. This contraptions showed that it takes an average of 3 weeks to kill a rat by sleep deprivation (or some 5 months by REM sleep deprivation alone)(Rechtschaffen 1998[7]). Dr Siegel demonstrated brain damage in sleep-deprived rats (Siegel 2003[8]). Due to an increase in the level of glucocorticoids, neurogenesis in some portions of the brain is inhibited by lack of sleep
Good sleep, good learning, good life
ecessary. Otherwise my writing effort would not be needed. Good sleep makes us nicer, smarter, and saves lives! See: 10 Things to Hate About Sleep Loss from WebMD. If you do not sleep, you die! <span>Nearly everyone has pulled an all nighter once upon a time. Even if this is often an unpleasant experience, it nearly always ends up with a 100% recovery after a single night of solid sleep. It is therefore a bit surprising to know that that a week or two of sleep deprivation can result in death! Sleep researchers constructed a cruel contraption that would wake up rats as soon as they fell asleep. This contraptions showed that it takes an average of 3 weeks to kill a rat by sleep deprivation (or some 5 months by REM sleep deprivation alone)(Rechtschaffen 1998[7]). Dr Siegel demonstrated brain damage in sleep-deprived rats (Siegel 2003[8]). Due to an increase in the level of glucocorticoids, neurogenesis in some portions of the brain is inhibited by lack of sleep[9]. In short, sleep deprivation is very bad for the health of the brain. Sleep deprivation is a well-known form of torture. Yet, for ethical reasons, the rat experiment could not be rep
#### Annotation 5729935559948
#biology #neurology #sleep It is impossible to quantify the contribution of those three factors to the fatal outcome of prolonged sleep deprivation: network malfunction, or secondary effects of sleep protection program, or continuous catabolic state.
Good sleep, good learning, good life
did not help much in preventing death from those infections. Sleep deprived rats would die anyway. The infection might speed up death that was otherwise inevitable. Why do we die without sleep? <span>It is impossible to quantify the contribution of those three factors to the fatal outcome of prolonged sleep deprivation: network malfunction, or secondary effects of sleep protection program, or continuous catabolic state. Even though the latter two could possibly be remedied pharmacologically, there is no way around network remolding in sleep. Researchers who hope to find a remedy against sleep are plodd
#### Annotation 5729937657100
#biology #neurology #sleep There are two components of sleepiness that drive you to bed: circadian component - sleepiness comes back to us in cycles which are usually about one day long homeostatic component - sleepiness increases with the length of time we stay awake Only a combination of these two components determines the optimum time for sleep. Most importantly, you should remember that even strong sleepiness resulting from the homeostatic component may not be sufficient to get good sleep if the timing goes against the greatest sleep propensity determined by the circadian component.
Good sleep, good learning, good life
rrational shift-work patterns, sleeping pills, alcohol, caffeine, etc. For a chance to break out from unhealthy sleep habits, you need to understand the two-component model of sleep regulation. <span>There are two components of sleepiness that drive you to bed: circadian component - sleepiness comes back to us in cycles which are usually about one day long homeostatic component - sleepiness increases with the length of time we stay awake Only a combination of these two components determines the optimum time for sleep. Most importantly, you should remember that even strong sleepiness resulting from the homeostatic component may not be sufficient to get good sleep if the timing goes against the greatest sleep propensity determined by the circadian component. Circadian component There are around hundred known body functions that oscillate between maximum and minimum values in a day-long cycle. Because these functions take about a day's time
#### Annotation 5729939754252
#biology #neurology #sleep Yet some dramatic facts related to sleep deprivation have slowly come into light. Each year sleep disorders add $16 billion to national health-care costs (e.g. by contributing to high blood pressure and heart disease). That does not include accidents and lost productivity at work. For this, the National Commission on Sleep Disorders estimates that sleep deprivation costs$150 billion a year in higher stress and reduced workplace productivity[1]. 40% of truck accidents are attributable to fatigue and drowsiness, and there is an 800% increase in single vehicle commercial truck accidents between midnight and 8 am. Major industrial disasters have been attributed to sleep deprivation (Mitler et al. 1988[2])(incl. Three Mile Island, Chernobyl, the gas leak at Bhopal, Zeebrugge disaster, and the Exxon Valdez oil spill).
Good sleep, good learning, good life
ift after over a year of campaigning for president, he answered without hesitation: 8 hours of sleep. The bad example of disrespect for sleep comes from the most important people in the nation! <span>Yet some dramatic facts related to sleep deprivation have slowly come into light. Each year sleep disorders add $16 billion to national health-care costs (e.g. by contributing to high blood pressure and heart disease). That does not include accidents and lost productivity at work. For this, the National Commission on Sleep Disorders estimates that sleep deprivation costs$150 billion a year in higher stress and reduced workplace productivity[1]. 40% of truck accidents are attributable to fatigue and drowsiness, and there is an 800% increase in single vehicle commercial truck accidents between midnight and 8 am. Major industrial disasters have been attributed to sleep deprivation (Mitler et al. 1988[2])(incl. Three Mile Island, Chernobyl, the gas leak at Bhopal, Zeebrugge disaster, and the Exxon Valdez oil spill). It has been known since the 1920s that sleep improves recall in learning. However, only at the turn of the millennium, research by Dr Robert Stickgold, Associate Professor of Psychiatry
#### Annotation 5729946569996
#biology #neurology #sleep For example, muscles do not need to shut off completely to get rest. The critical function of sleep is dramatically illustrated in experiments in which rats chronically deprived of sleep eventually die usually within 2.5 weeks
Good sleep, good learning, good life
pparent that one long-lasting sleep episode with suppression of consciousness does not seem to be the right way for evolution to tackle depleted resources, toxic wastes, or energy conservation. <span>For example, muscles do not need to shut off completely to get rest. The critical function of sleep is dramatically illustrated in experiments in which rats chronically deprived of sleep eventually die usually within 2.5 weeks (for more see: If you do not sleep, you die!). In evolutionary terms, sleep is a very old phenomenon and it clearly must play a role that is critical to survival. Only quite recently, i
#### Annotation 5729948142860
#biology #neurology #sleep Even ants take naps.
Good sleep, good learning, good life
ks of the brain. This function is so essential that no complex nervous system can survive without it. This is why all complex animals sleep (which is not always easy to tell (Siegel 2008[149]). <span>Even ants take naps. The size of the cortex is fixed. This means that there are anatomical and functional limitations on how much information can be stored there. Don't believe mnemonic gurus who tell you "
#### Annotation 5729950240012
#biology #neurology #sleep
Good sleep, good learning, good life
n, multiple causes conspire to produce the final inevitable outcome. Probably nobody knows the exact answer to this mystery. However, research into the role of sleep gives us pretty strong hints<span>. One of the most important functions of sleep is the re-organization of neural networks in the brain. During the day, we learn new things, memorize, acquire skills, figure things out, set new memories through creative associations, etc. After a long day of waking, the brain is full of d
#### Annotation 5729952337164
#biology #neurology #sleep There is a second layer of trouble in sleep deprivation. Due to the importance of sleep, all advanced organisms implement a sleep protection program. This program ensures that sleep deprivation results in unpleasant symptoms. It also produces a remarkably powerful sleep drive that is very hard to overcome
Good sleep, good learning, good life
nced software or neural function is always dangerous! Luckily, all we need to eliminate the danger is to just go to sleep every day. For more see: Neural optimization in sleep. Sleep protection <span>There is a second layer of trouble in sleep deprivation. Due to the importance of sleep, all advanced organisms implement a sleep protection program. This program ensures that sleep deprivation results in unpleasant symptoms. It also produces a remarkably powerful sleep drive that is very hard to overcome. Staying awake becomes unbearable. Closing one's eyes becomes one of the most soothing things in the universe. Are these symptoms a result of network malfunction? Definitely not. If the
#### Annotation 5729954434316
#biology #neurology #sleep . 1550 annual fatalities in the US can be attributed to drowsy driving. That's nearly an equivalent of six WTC collapse tragedies in a decade! Amazingly, as the pain and suffering is diluted in the population, drowsy driving does not nearly make as many headlines as a terrorist attack. At least a third of Americans have fallen asleep behind the wheel at least once!
Good sleep, good learning, good life
St. Louis from a dive caused by microsleep. Sleep deprivation has changed the future of nuclear fission and the future of oil exploration. Poor sleep kills as many people on the roads as alcohol<span>. 1550 annual fatalities in the US can be attributed to drowsy driving. That's nearly an equivalent of six WTC collapse tragedies in a decade! Amazingly, as the pain and suffering is diluted in the population, drowsy driving does not nearly make as many headlines as a terrorist attack. At least a third of Americans have fallen asleep behind the wheel at least once! During the shift to DST in spring, car accidents increase by 9%. Sleep deprivation carries an astronomical cost to industrialized societies. There are zillions of hours wasted on unprod
#### Annotation 5729958366476
#biology #neurology #sleep In an average case, the maximum sleepiness comes in the middle of the night, reaches the minimum at awakening, and again increases slightly at siesta time in the afternoon. However, the circadian sleepiness is often shifted in phase as compared with your desired sleep time. Consequently, if your maximum sleepiness comes in the morning, you may find it difficult to fall asleep late in the evening, even if you missed a lot of sleep on the preceding day
Good sleep, good learning, good life
erm circadian rhythm was coined by Dr Franz Halberg of Germany in 1959 (in Latin circadian means about a day). The overall tendency to maintain sleep is also subject to such a circadian rhythm. <span>In an average case, the maximum sleepiness comes in the middle of the night, reaches the minimum at awakening, and again increases slightly at siesta time in the afternoon. However, the circadian sleepiness is often shifted in phase as compared with your desired sleep time. Consequently, if your maximum sleepiness comes in the morning, you may find it difficult to fall asleep late in the evening, even if you missed a lot of sleep on the preceding day. In other words, the optimum timing of your sleep should take into consideration your circadian rhythm. Homeostatic component Homeostasis is the term that refers to maintaining equilibr
#### Annotation 5729959939340
#biology #neurology #sleep On the other hand, caffeine, stress, exercise and other factors may temporarily reduce your homeostatic sleepiness. The homeostatic mechanism prepares you for sleep after a long day of intellectual work. At the same time it prevents you from falling asleep in emergencies
Good sleep, good learning, good life
ar mechanisms are used to regulate overall sleepiness and its multiple subcomponents. The longer you stay awake, the more you learn, the more you think, the higher your tendency to fall asleep. <span>On the other hand, caffeine, stress, exercise and other factors may temporarily reduce your homeostatic sleepiness. The homeostatic mechanism prepares you for sleep after a long day of intellectual work. At the same time it prevents you from falling asleep in emergencies. Clock and Hourglass metaphor A metaphor is useful in explaining the two components of sleep (for a more scientific explanation see: Borbely model). Deep in the brain, your body clock i
#### Annotation 5729961512204
#biology #neurology #sleep Every 24 hours, metaphorically, the clock releases a sleepy potion that puts you to sleep (for details see: Why we fall asleep).
Good sleep, good learning, good life
metaphor is useful in explaining the two components of sleep (for a more scientific explanation see: Borbely model). Deep in the brain, your body clock is running a 24 hours cycle of activity. <span>Every 24 hours, metaphorically, the clock releases a sleepy potion that puts you to sleep (for details see: Why we fall asleep). If you try to sleep at wrong hours, without the sleepy potion, you may find it very hard to fall asleep. All insomniacs suffer from the lack of sleepy potion. If they go to sleep too ea
#### Flashcard 5730027572492
Tags
#biology #neurology #sleep
Question
sleepy potion (every 24 hours)
status measured difficulty not learned 37% [default] 0
#### Parent (intermediate) annotation
Open it
Every 24 hours, metaphorically, the clock releases a sleepy potion that puts you to sleep (for details see: Why we fall asleep).
#### Original toplevel document
Good sleep, good learning, good life
metaphor is useful in explaining the two components of sleep (for a more scientific explanation see: Borbely model). Deep in the brain, your body clock is running a 24 hours cycle of activity. <span>Every 24 hours, metaphorically, the clock releases a sleepy potion that puts you to sleep (for details see: Why we fall asleep). If you try to sleep at wrong hours, without the sleepy potion, you may find it very hard to fall asleep. All insomniacs suffer from the lack of sleepy potion. If they go to sleep too ea
#### Annotation 5731042856204
#biology #neurology #sleep The brain also uses the hourglass of mental energy that gives you some time every day that you can devote to intellectual work. When you wake up, the hourglass is full and starts being emptied. With every waking moment, with everything your brain absorbs, with every mental effort, the hourglass is less and less full. Only when the hourglass of mental energy is empty will you able to quickly fall asleep
Good sleep, good learning, good life
or hours. You need to listen to your body clock to know the right moment to go to sleep. It is important to know that sleepy potion produced by the body clock is not enough to put you to sleep. <span>The brain also uses the hourglass of mental energy that gives you some time every day that you can devote to intellectual work. When you wake up, the hourglass is full and starts being emptied. With every waking moment, with everything your brain absorbs, with every mental effort, the hourglass is less and less full. Only when the hourglass of mental energy is empty will you able to quickly fall asleep. To get a good night sleep, you need to combine two factors: your body clock must be saying "time to sleep" (circadian component of sleep) your hourglass of power must be saying "no mor
#### Annotation 5731044429068
#biology #neurology #sleep To get high quality night sleep that maximizes your learning effects your sleep start time should meet these two criteria: strong homeostatic sleepiness: this usually means going to sleep not earlier than 15-19 hours after awakening from the previous night sleep ascending circadian sleepiness: this means going to sleep at a time of day when you usually experience a rapid increase in drowsiness. Not earlier and not later! Knowing the timing of your circadian rhythm is critical for good night sleep
Good sleep, good learning, good life
a horse kicked you in the stomach. That's the acme of a criminal attack on your brain's health. The fundamental theorem of good sleep Let us now formulate the fundamental theorem of good sleep: <span>To get high quality night sleep that maximizes your learning effects your sleep start time should meet these two criteria: strong homeostatic sleepiness: this usually means going to sleep not earlier than 15-19 hours after awakening from the previous night sleep ascending circadian sleepiness: this means going to sleep at a time of day when you usually experience a rapid increase in drowsiness. Not earlier and not later! Knowing the timing of your circadian rhythm is critical for good night sleep You should be aware that using the circadian component will only work when all its physiological subcomponents run in sync (as it is the case in free running sleep). People with irregul
#### Annotation 5731046001932
#biology #neurology #sleep A great deal of sleep disorders can be explained by entrainment failure (i.e. the failure to reset the 25-hour circadian rhythm to the 24-hour daylight cycle)
Good sleep, good learning, good life
tivity to light zeitgeber) increased demand for sleep (e.g. as a result of intense learning, highly creative job position, exercise, etc.) stress endocrine disorders sleep disorders adolescence <span>A great deal of sleep disorders can be explained by entrainment failure (i.e. the failure to reset the 25-hour circadian rhythm to the 24-hour daylight cycle). In other words, in the interdependence between sleep disorders and entrainment failure, the cause-effect relationship will often be reversed! Due to the physiological function of sleep
#### Annotation 5731047574796
#biology #neurology #sleep A great deal of sleep disorders can be explained by entrainment failure (i.e. the failure to reset the 25-hour circadian rhythm to the 24-hour daylight cycle) . In other words, in the interdependence between sleep disorders and entrainment failure, the cause-effect relationship will often be reversed! Due to the physiological function of sleep, which is the rewiring of the neural networks of the brain, we can naturally expect that the demand for sleep be associated with the amount of learning on the preceding days. This link may also explain a decreased demand for sleep in retirement due to a decrease in intellectual activity. This age-related drop in the demand for sleep is less likely to be observed in highly active individuals. For similar reasons, the entrainment failure can often be found among students during exams. It is not clear how much of this failure can be attributed to stress, or to the desire to do more on a given day, or to the actual increase in the demand for sleep.
Good sleep, good learning, good life
ivity to light zeitgeber) increased demand for sleep (e.g. as a result of intense learning, highly creative job position, exercise, etc.) stress endocrine disorders sleep disorders adolescence <span>A great deal of sleep disorders can be explained by entrainment failure (i.e. the failure to reset the 25-hour circadian rhythm to the 24-hour daylight cycle). In other words, in the interdependence between sleep disorders and entrainment failure, the cause-effect relationship will often be reversed! Due to the physiological function of sleep, which is the rewiring of the neural networks of the brain, we can naturally expect that the demand for sleep be associated with the amount of learning on the preceding days. This link may also explain a decreased demand for sleep in retirement due to a decrease in intellectual activity. This age-related drop in the demand for sleep is less likely to be observed in highly active individuals. For similar reasons, the entrainment failure can often be found among students during exams. It is not clear how much of this failure can be attributed to stress, or to the desire to do more on a given day, or to the actual increase in the demand for sleep. Formula for good sleep There is a little-publicized formula that acts as a perfect cure for people who experience continual or seasonal problems with sleep entrainment. This formula is
#### Flashcard 5731049147660
Tags
#biology #neurology #sleep
Question
entrainment failure
failure to reset 25h circadian rhythm to 24h daylight cycle
status measured difficulty not learned 37% [default] 0
#### Parent (intermediate) annotation
Open it
A great deal of sleep disorders can be explained by entrainment failure (i.e. the failure to reset the 25-hour circadian rhythm to the 24-hour daylight cycle) . In other words, in the interdependence between sleep disorders and entrainment failure, the cause-effect relationship will often be reversed! Due to the physiological function of slee
#### Original toplevel document
Good sleep, good learning, good life
ivity to light zeitgeber) increased demand for sleep (e.g. as a result of intense learning, highly creative job position, exercise, etc.) stress endocrine disorders sleep disorders adolescence <span>A great deal of sleep disorders can be explained by entrainment failure (i.e. the failure to reset the 25-hour circadian rhythm to the 24-hour daylight cycle). In other words, in the interdependence between sleep disorders and entrainment failure, the cause-effect relationship will often be reversed! Due to the physiological function of sleep, which is the rewiring of the neural networks of the brain, we can naturally expect that the demand for sleep be associated with the amount of learning on the preceding days. This link may also explain a decreased demand for sleep in retirement due to a decrease in intellectual activity. This age-related drop in the demand for sleep is less likely to be observed in highly active individuals. For similar reasons, the entrainment failure can often be found among students during exams. It is not clear how much of this failure can be attributed to stress, or to the desire to do more on a given day, or to the actual increase in the demand for sleep. Formula for good sleep There is a little-publicized formula that acts as a perfect cure for people who experience continual or seasonal problems with sleep entrainment. This formula is
#### Flashcard 5731052031244
Tags
#biology #neurology #sleep
Question
24.5-25.5h
status measured difficulty not learned 37% [default] 0
#### Parent (intermediate) annotation
Open it
A great deal of sleep disorders can be explained by entrainment failure (i.e. the failure to reset the 25-hour circadian rhythm to the 24-hour daylight cycle) . In other words, in the interdependence between sleep disorders and entrainment failure, the cause-effect relationship will often be reversed! Due to the
#### Original toplevel document
Good sleep, good learning, good life
ivity to light zeitgeber) increased demand for sleep (e.g. as a result of intense learning, highly creative job position, exercise, etc.) stress endocrine disorders sleep disorders adolescence <span>A great deal of sleep disorders can be explained by entrainment failure (i.e. the failure to reset the 25-hour circadian rhythm to the 24-hour daylight cycle). In other words, in the interdependence between sleep disorders and entrainment failure, the cause-effect relationship will often be reversed! Due to the physiological function of sleep, which is the rewiring of the neural networks of the brain, we can naturally expect that the demand for sleep be associated with the amount of learning on the preceding days. This link may also explain a decreased demand for sleep in retirement due to a decrease in intellectual activity. This age-related drop in the demand for sleep is less likely to be observed in highly active individuals. For similar reasons, the entrainment failure can often be found among students during exams. It is not clear how much of this failure can be attributed to stress, or to the desire to do more on a given day, or to the actual increase in the demand for sleep. Formula for good sleep There is a little-publicized formula that acts as a perfect cure for people who experience continual or seasonal problems with sleep entrainment. This formula is
#### Flashcard 5731054390540
Tags
#biology #neurology #sleep
Question
hypothesis: why older people sleep less
decrease in intellectual activity -> less homeostatic sleep pressure
status measured difficulty not learned 37% [default] 0
#### Parent (intermediate) annotation
Open it
e brain, we can naturally expect that the demand for sleep be associated with the amount of learning on the preceding days. This link may also explain a decreased demand for sleep in retirement <span>due to a decrease in intellectual activity. This age-related drop in the demand for sleep is less likely to be observed in highly active individuals. For similar reasons, the entrainment failure can often be found among students
#### Original toplevel document
Good sleep, good learning, good life
ivity to light zeitgeber) increased demand for sleep (e.g. as a result of intense learning, highly creative job position, exercise, etc.) stress endocrine disorders sleep disorders adolescence <span>A great deal of sleep disorders can be explained by entrainment failure (i.e. the failure to reset the 25-hour circadian rhythm to the 24-hour daylight cycle). In other words, in the interdependence between sleep disorders and entrainment failure, the cause-effect relationship will often be reversed! Due to the physiological function of sleep, which is the rewiring of the neural networks of the brain, we can naturally expect that the demand for sleep be associated with the amount of learning on the preceding days. This link may also explain a decreased demand for sleep in retirement due to a decrease in intellectual activity. This age-related drop in the demand for sleep is less likely to be observed in highly active individuals. For similar reasons, the entrainment failure can often be found among students during exams. It is not clear how much of this failure can be attributed to stress, or to the desire to do more on a given day, or to the actual increase in the demand for sleep. Formula for good sleep There is a little-publicized formula that acts as a perfect cure for people who experience continual or seasonal problems with sleep entrainment. This formula is
#### Annotation 5731056749836
#biology #neurology #sleep Partners and spouses can free run their sleep in separate cycles, but they will often be surprised to find out that it is easier to synchronize with each other than with the rest of the world (esp. if they have similar interests and daily routines). If they are co-sleeping, one of the pair will usually get up slightly earlier and work as a strong zeitgeber for the other. The problem will appear only when the length of the naturally preferred sleep cycles differs substantially between the two. In such cases, instead of being a zeitgeber, the other person becomes a substitute for an alarm clock.
Good sleep, good learning, good life
ep later, take longer to fall asleep, and wake up faster, far less refreshed. Combating stress is one of the most important things in everyone's life for the sake of longevity and productivity. <span>Partners and spouses can free run their sleep in separate cycles, but they will often be surprised to find out that it is easier to synchronize with each other than with the rest of the world (esp. if they have similar interests and daily routines). If they are co-sleeping, one of the pair will usually get up slightly earlier and work as a strong zeitgeber for the other. The problem will appear only when the length of the naturally preferred sleep cycles differs substantially between the two. In such cases, instead of being a zeitgeber, the other person becomes a substitute for an alarm clock. Even if you are not convinced, you should try free running sleep to better understand the concept of the sleep phase, and how the sleep phase is affected by various lifestyle factors. Y
#### Annotation 5731058322700
#biology #neurology #sleep You will know that you execute your free running sleep correctly if it takes no more than 5 min. to fall asleep (without medication, alcohol or other intervention), and if you wake up pretty abruptly with the sense of refreshment. Being refreshed in the morning cannot be taken for granted. Even minor misalignment of sleep and the circadian phase will take the refreshed feeling away. After months or weeks of messy sleep, some circadian variables might be running in different cycles and free running sleep will not be an instant remedy. It may take some time to regulate it well enough to accomplish its goals
Good sleep, good learning, good life
ula is called free running sleep. For many people, after years of sleep abuse, even free running sleep can be tricky. It will take a while to discover one's own body's rules and to accept them. <span>You will know that you execute your free running sleep correctly if it takes no more than 5 min. to fall asleep (without medication, alcohol or other intervention), and if you wake up pretty abruptly with the sense of refreshment. Being refreshed in the morning cannot be taken for granted. Even minor misalignment of sleep and the circadian phase will take the refreshed feeling away. After months or weeks of messy sleep, some circadian variables might be running in different cycles and free running sleep will not be an instant remedy. It may take some time to regulate it well enough to accomplish its goals. It cannot even be excluded that after years of shift-work or jetlag, some brain cells in the sleep control centers might have died out making it even harder to achieve well aligned ref
#### Annotation 5731059895564
#biology #neurology #sleep Additionally, shortsightedness, the ailment of the information age, makes us less sensitive to the light zeitgeber and artificially prolongs the circadian cycle
Good sleep, good learning, good life
erests, give up the Internet, evening TV, etc. We live more stressful and more exciting lives than our grandparents. Turning the lights off in the early evening would probably only be wasteful. <span>Additionally, shortsightedness, the ailment of the information age, makes us less sensitive to the light zeitgeber and artificially prolongs the circadian cycle. There are a number of downsides to free running sleep. The worst shortcoming is a difficulty in establishing an activity cycle that could be well synchronized with the rest of the worl
#### Annotation 5731067235596
#biology #neurology #sleep There are a couple of determinants that make a good, efficient and persistent student. Here are some characteristics of a person who is likely to be successful in learning: highly optimistic sleeps well knowledge hungry stress-tolerant energetic, but able to slow down at the time of learning
Good sleep, good learning, good life
ty from sleepy people who make catastrophic errors in industry and transportation" (Merrill Mitler, PhD) I have studied student personalities among users of SuperMemo for over twenty years now. <span>There are a couple of determinants that make a good, efficient and persistent student. Here are some characteristics of a person who is likely to be successful in learning: highly optimistic sleeps well knowledge hungry stress-tolerant energetic, but able to slow down at the time of learning Here are some unfortunate characteristics that do not correlate well with the ability to study effectively: prone to depression or mood swings problems with sleep (esp. insomnia) high l
#### Annotation 5731068808460
#biology #neurology #sleep When you are drowsy in the afternoon, your hourglass of mental power might be almost empty. A quick nap will then help you fill it up again and be very productive in the evening
Good sleep, good learning, good life
ody clock releases the sleepy potion. When you wake up early with an alarm clock, you can hardly get to your feet because your body is full of sleepy potion, which begs you to go back to sleep. <span>When you are drowsy in the afternoon, your hourglass of mental power might be almost empty. A quick nap will then help you fill it up again and be very productive in the evening. If you drink coffee in the morning, it helps you charge the hourglass and add some extra mental energy. But coffee combined with the sleepy potion produces a poisonous mix that engulfs
#### Flashcard 5731070381324
Tags
#biology #neurology #sleep
Question
zeitgebers
status measured difficulty not learned 37% [default] 0
Good sleep, good learning, good life
ours. Most of us are able to entrain this 25 circadian rhythm into a 24-hour cycle by using factors that reset the oscillation. These factors include intense morning light, work, exercise, etc. <span>German scientists have named these factors zeitgebers (i.e. factors that give time). As a result of the influence of zeitgebers, in a well-adjusted individual, the cycle can be set back by 30-60 minutes each day. However, the entrainment t
#### Annotation 5731072740620
#biology #neurology #sleep Even if you are not convinced, you should try free running sleep to better understand the concept of the sleep phase, and how the sleep phase is affected by various lifestyle factors. You will often notice that your supposed sleep disorder disappears! Note that the free running sleep period is not solely genetic. Various factors in the daily schedule are able to shorten or lengthen the period. Of the obvious ones, bright light in the morning or melatonin in the evening may shorten the cycle. Exciting activities in the evening will lengthen it. The period changes slightly with seasons
Good sleep, good learning, good life
the length of the naturally preferred sleep cycles differs substantially between the two. In such cases, instead of being a zeitgeber, the other person becomes a substitute for an alarm clock. <span>Even if you are not convinced, you should try free running sleep to better understand the concept of the sleep phase, and how the sleep phase is affected by various lifestyle factors. You will often notice that your supposed sleep disorder disappears! Note that the free running sleep period is not solely genetic. Various factors in the daily schedule are able to shorten or lengthen the period. Of the obvious ones, bright light in the morning or melatonin in the evening may shorten the cycle. Exciting activities in the evening will lengthen it. The period changes slightly with seasons. It will also change when you leave on vacation. It often gets shorter with age. Try free running sleep to understand your own sleep parameters. This will help you synchronize with the
#### Annotation 5731074313484
#biology #neurology #sleep It is true that people who try to free run their sleep may find themselves sleeping outrageously long in the very beginning. This, however, will not last in a healthy individual as long sleep is a body's counter-reaction to various sleep deficits resulting from sleep deprivation. Unlike it is the case with foods, there does not seem to be any evolutionary advantage to getting extra sleep on days when we can afford to sleep longer. In the course of evolution, we have developed a tendency to overeat. This is a protection against periods when food is scarce. Adipose tissue works as a survival kit for bad times. However, considering the function of sleep, the demand for sleep should be somewhat proportional to the amount of new learning received on preceding days
Good sleep, good learning, good life
of the cycle is possible with self-discipline in adhering to cycle-reset rules such as morning exercise, bright light, sleep protective zone in the evening, etc. Argument 2: Excessive sleeping <span>It is true that people who try to free run their sleep may find themselves sleeping outrageously long in the very beginning. This, however, will not last in a healthy individual as long sleep is a body's counter-reaction to various sleep deficits resulting from sleep deprivation. Unlike it is the case with foods, there does not seem to be any evolutionary advantage to getting extra sleep on days when we can afford to sleep longer. In the course of evolution, we have developed a tendency to overeat. This is a protection against periods when food is scarce. Adipose tissue works as a survival kit for bad times. However, considering the function of sleep, the demand for sleep should be somewhat proportional to the amount of new learning received on preceding days. In ancient times, we did not have exam days as opposed to lazy days. Consequently, the link between learning and demand for sleep is quite weak. The body clock will still make us sleep
#### Annotation 5731075886348
#biology #neurology #sleep However, you will often hear two arguments against adopting the use of free running sleep: Argument 1 - free running sleep will often result in a day that is longer than 24 hours. This ultimately leads to sleeping in atypical hours. This seems to go against the natural 24-hour cycle of light and darkness. Less often, the cycle will be less than 24 hours Argument 2 - sleep can be compared to eating. Your body will always try to get more than it actually needs. This will result in spending more time in sleep than necessary. In other words, free running sleep is time-inefficient
Good sleep, good learning, good life
ll be discussed later, free running sleep can be used to solve a number of sleep disorders except for those where there is an underlying organic disorder that disrupts natural sleep mechanisms. <span>However, you will often hear two arguments against adopting the use of free running sleep: Argument 1 - free running sleep will often result in a day that is longer than 24 hours. This ultimately leads to sleeping in atypical hours. This seems to go against the natural 24-hour cycle of light and darkness. Less often, the cycle will be less than 24 hours Argument 2 - sleep can be compared to eating. Your body will always try to get more than it actually needs. This will result in spending more time in sleep than necessary. In other words, free running sleep is time-inefficient Argument 1: Phase shifts It is true that free running sleep will often run against the natural cycle of light and darkness. However, the departure from the natural rhythm is a direct co
#### Annotation 5731077459212
#biology #neurology #sleep Secondly, every extra minute of sleep might improve the quality of neural wiring in the brain. Sleep would better be compared to drinking rather than eating. We do not have much capacity to survive without drinking due to our poor water storage ability
Good sleep, good learning, good life
posed to lazy days. Consequently, the link between learning and demand for sleep is quite weak. The body clock will still make us sleep 7-8 hours on nights following the days of total inaction. <span>Secondly, every extra minute of sleep might improve the quality of neural wiring in the brain. Sleep would better be compared to drinking rather than eating. We do not have much capacity to survive without drinking due to our poor water storage ability. Similarly, we cannot sleep in advance in preparation for a double all-nighter before an exam or important deadline. The claim that free running sleep increases the natural need for sle
#### Annotation 5731079032076
#biology #neurology #sleep do not take a nap later than 7-8 hours from waking. Late naps are likely to affect the expected bedtime and disrupt your cycle. If you feel sleepy in the evening, you will have to wait for the moment when you believe you will be able to sleep throughout the night
Good sleep, good learning, good life
circadian sleepiness. Your sleep will be shorter and less refreshing. Your measurements will be less regular and you will find it harder to predict the optimum timing of sleep in following days <span>do not take a nap later than 7-8 hours from waking. Late naps are likely to affect the expected bedtime and disrupt your cycle. If you feel sleepy in the evening, you will have to wait for the moment when you believe you will be able to sleep throughout the night Sleep logging tips In free running conditions, it should not be difficult to record the actual hours of sleep. In conditions of entrainment failure, you may find it hard to fall asleep,
#### Annotation 5731080604940
#biology #neurology #sleep
### Free running sleep algorithm
1. Start with a meticulous log in which you will record the hours in which you go to sleep and wake up in the morning. If you take a nap during the day, put it in the log as well (even if the nap takes as little as 1-3 minutes). The log will help you predict the optimum sleeping hours and improve the quality of sleep. Once your self-research phase is over, you will accumulate sufficient experience to need the log no longer; however, you will need it at the beginning to better understand your rhythms. You can use SleepChart to simplify the logging procedure and help you read your circadian preferences.
2. Go to sleep only then when you are truly tired. You should be able to sense that your sleep latency is likely to be less than 5-10 minutes. If you do not feel confident you will fall asleep within 10-20 minutes, do not go to sleep! If this requires you to stay up until early in the morning, so be it!
3. Be sure nothing disrupts your sleep! Do not use an alarm clock! If possible, sleep without a bed partner (at least in the self-research period). Keep yourself well isolated from sources of noise and from rapid changes in lighting.
4. Avoid stress during the day, esp. in the evening hours. This is particularly important in the self-research period while you are still unsure how your optimum sleep patterns look. Stress hormones have a powerful impact on the timing of sleep. Stressful thoughts are also likely to keep you up at the time when you shall be falling asleep.
5. After a couple of days, try to figure out the length of your circadian cycle. If you arrive at a number that is greater than 24 hours, your free running sleep will result in going to sleep later on each successive day. This will ultimately make you sleep during the day at times. This is why you may need a vacation to give free running sleep an honest test. Days longer than 24 hours are pretty normal, and you can stabilize your pattern with properly timed signals such as light and exercise. This can be very difficult if you are a DSPS type.
6. Once you know how much time you spend awake on average, make a daily calculation of the expected hour at which you will go to sleep (I use the term expected bedtime and expected retirement hour to denote times of going to bed and times of falling asleep, which in free running sleep are almost the same). This calculation will help you predict the sleep onset. On some days you may feel sleepy before the expected bedtime. Do not fight sleepiness, go
Good sleep, good learning, good life
#### Annotation 5731082177804
#biology #neurology #sleep Sleep maintenance circadian component correlates with (but is not equal to): (1) negatively with: temperature, ACTH, cortisol, catecholamines, and (2) positively with: melatonin and REM sleep propensity
Good sleep, good learning, good life
calibrations of the vertical axis). Mid-day slump in alertness is also circadian, but is biologically different and results in short sleep that does not register as red sleep maintenance peak. <span>Sleep maintenance circadian component correlates with (but is not equal to): (1) negatively with: temperature, ACTH, cortisol, catecholamines, and (2) positively with: melatonin and REM sleep propensity. For more details see: Circadian graph and Biphasic nature of human sleep. Best brainwork time Optimum timing of brainwork requires both low homeostatic sleepiness and low circadian sle
#### Annotation 5731083750668
#biology #has-images #neurology #sleep The following exemplary circadian graph was generated with SleepChart using a log of free-running sleep: The horizontal axis expresses the number of hours from awakening (note that the free running rhythm period is often longer than 24 hours). Light blue dots are actual sleep episode measurements with timing on the horizontal, and the length on the left vertical axis. Homeostatic sleepiness can roughly be expressed as the ability to initiate sleep. Percent of the initiated sleep episodes is painted as a thick blue line (right-side calibrations of the vertical axis). Homeostatic sleep propensity increases in proportion to mental effort and can be partially cleared by caffeine, stress, etc. Circadian sleepiness can roughly be expressed as the ability to maintain sleep. Average length of initiated sleep episodes is painted as a thick red line (left-side calibrations of the vertical axis). Mid-day slump in alertness is also circadian, but is biologically different and results in short sleep that does not register as red sleep maintenance peak.
Good sleep, good learning, good life
grateful for your submissions that will be useful in further research (sending data from SleepChart takes just a single click). Optimizing the timing of brainwork Circadian graph and brainwork <span>The following exemplary circadian graph was generated with SleepChart using a log of free-running sleep: The horizontal axis expresses the number of hours from awakening (note that the free running rhythm period is often longer than 24 hours). Light blue dots are actual sleep episode measurements with timing on the horizontal, and the length on the left vertical axis. Homeostatic sleepiness can roughly be expressed as the ability to initiate sleep. Percent of the initiated sleep episodes is painted as a thick blue line (right-side calibrations of the vertical axis). Homeostatic sleep propensity increases in proportion to mental effort and can be partially cleared by caffeine, stress, etc. Circadian sleepiness can roughly be expressed as the ability to maintain sleep. Average length of initiated sleep episodes is painted as a thick red line (left-side calibrations of the vertical axis). Mid-day slump in alertness is also circadian, but is biologically different and results in short sleep that does not register as red sleep maintenance peak. Sleep maintenance circadian component correlates with (but is not equal to): (1) negatively with: temperature, ACTH, cortisol, catecholamines, and (2) positively with: melatonin and REM
#### Annotation 5731086896396
#biology #neurology #sleep People with a particularly long circadian cycle or with an insufficient sensitivity to zeitgebers are classified as suffering from Delayed Sleep Phase Syndrome (DSPS for short). Sometimes the abbreviation DSPD is used where syndrome is replaced with disorder
Good sleep, good learning, good life
ecline sufficiently enough to make it truly enjoyable. Delayed Sleep Phase Syndrome (DSPS) When a tendency to go to sleep later each day is strongly pronounced, it may become a serious problem. <span>People with a particularly long circadian cycle or with an insufficient sensitivity to zeitgebers are classified as suffering from Delayed Sleep Phase Syndrome (DSPS for short). Sometimes the abbreviation DSPD is used where syndrome is replaced with disorder. The terms non 24-hour sleep/wake syndrome (N24, N-24, Non-24) or hypernychthemeral syndrome (with a few spelling variants) are occasionally used to refer to the most severe cases. I wi
#### Flashcard 5731088993548
Tags
#biology #neurology #sleep
Question
Delayed Sleep Phase Syndrome
long circadian cycles or insufficient zeitgeber sentivity
status measured difficulty not learned 37% [default] 0
Good sleep, good learning, good life
collecting exciting data for the experiment would keep me up with my thoughts racing. I might try this in retirement when my vital powers decline sufficiently enough to make it truly enjoyable. <span>Delayed Sleep Phase Syndrome (DSPS) When a tendency to go to sleep later each day is strongly pronounced, it may become a serious problem. People with a particularly long circadian cycle or with an insufficient sen
#### Annotation 5731091877132
#biology #neurology #sleep The main factors contributing to DSPS: increased period of the body clock (well above 25 hours) reduced or increased sensitivity to factors that reset or advance body clock (e.g. light, activity, stress, exercise, etc.) electric lighting, 24-hour economy and the resulting "want to do more" lifestyle
Good sleep, good learning, good life
ull mental powers. DSPS students feel best after midnight when everyone else is asleep and they can focus on learning or other activities (reading, Internet, watching TV, computer games, etc.). <span>The main factors contributing to DSPS: increased period of the body clock (well above 25 hours) reduced or increased sensitivity to factors that reset or advance body clock (e.g. light, activity, stress, exercise, etc.) electric lighting, 24-hour economy and the resulting "want to do more" lifestyle A normal individual has a body clock running with a period slightly longer than 24 hours. The clock is reset in the morning with activity and bright light. Thus a normal individual easi
#### Annotation 5731093974284
#biology #neurology #sleep If you go to a sleep expert with your DSPS problem, you will likely be prescribed melatonin or a bright light therapy only to discover their limited impact on the quality of your sleep. If you are an insomniac, you may additionally be prescribed sleeping pills that might help you sleep without achieving the desired effect: a refreshed mind
Good sleep, good learning, good life
. DSPS is also associated with problems with falling asleep if you try to keep an earlier bedtime. In other words, any cure for DSPS is also likely to solve the problem of sleep onset insomnia. <span>If you go to a sleep expert with your DSPS problem, you will likely be prescribed melatonin or a bright light therapy only to discover their limited impact on the quality of your sleep. If you are an insomniac, you may additionally be prescribed sleeping pills that might help you sleep without achieving the desired effect: a refreshed mind. This chapter should help you solve the problem. Using the properties of the human sleep control system, it can be proven mathematically that the problem of DSPS, and the associated ins
#### Annotation 5731096071436
#biology #neurology #sleep There are true hardcore DSPS cases with some psychiatric overtones or other health issues that might be particularly intractable, however, those should form a rare minority in the ever-increasing mass of people struggling with DSPS. That mass now includes a countless population of insomniacs who have never heard of DSPS and never even arrived to the problem of phase shift due to the employment of the alarm clock. Weitzman hypothesized that a significant number of patients with sleep onset insomnia might be suffering from undiagnosed DSPS (Weitzman et al. 1981[35]). Now we know that hypothesis certainly holds true, which can be demonstrated by letting insomniacs free run their sleep. A significant phase delay may be observed within the first few days of such a release from the restrictions on the timing of sleep. At the same time, there is an accompanying and nearly instant disappearance of sleep-onset insomnia. ... rare minority of people have pathological DSPS. much of insomnia might be DSPS in disguise: Phase shift simply doesn't occur due to alarm clocks. When these "insomniacs" free run sleep, insomnia disappears, but phase shifts occur.
|
# Measurement in Quantum mechanics
I have got a quantum conservative system whose Hamiltonian is $H$. I consider an selfandjoint operator $O$ whose eigenvalues and eigenvectors are: $$O|\psi _{n}\rangle = \lambda _{n}|\psi _{n}\rangle$$
At initial time, the system is at the state $|\psi_{i}\rangle$. At time $t= \delta t$, I measure with the operator $O$. The approximation of the measured state to second order in $\delta t$ is
$$|\psi(\delta t)\rangle=U(\delta t, 0)|\psi(0)\rangle \;,$$ with $$U(\delta t, 0) = e^{-\frac{i}{\hbar}\delta t H} \approx 1 - \frac{i}{\hbar}\delta t H - \frac{1}{2}\left ( \frac{1}{\hbar^2}(\delta t)^2 H^2 \right ) \;;$$ the probability of obtaining $\lambda _{i}$ in the second order on $\delta t$ is
$$P(\lambda_{i}) \approx 1-\left ( \frac{\delta t}{\hbar}\Delta H_{\psi_{i}} \right )^2 \;,$$ where I denote $$\Bigl(\Delta H_{\psi_{i}}\Bigr)^2= \langle\psi_{i}|H^2|\psi_{i}\rangle-\langle\psi_{i}|H|\psi_{i}\rangle^2$$
Question. In what regime the quadratic approximation on delta $\delta t$ is good?
• Looks like a quantum Zeno effect scenario, so t must be short compared to the Zeno time. Jan 3, 2013 at 19:24
• What is the Zeno time? Jan 3, 2013 at 23:05
• It's defined in Dima's answer to the question I linked - basically it's your $\Delta H_{\psi_i}$. You would probably need $\hbar$ to get the units right to compare it with $\delta t$ Jan 4, 2013 at 7:59
|
axiom-developer
[Top][All Lists]
## Re: [Axiom-developer] Re: Axiom and commercial success
From: Gabriel Dos Reis Subject: Re: [Axiom-developer] Re: Axiom and commercial success Date: 24 Aug 2006 11:39:59 +0200
root <address@hidden> writes:
| > | I encourage you to raise your eyes to the horizon and ignore the
| >
| > that is well put; however, the rocky road needs to be dealt with, not
| > just ignored. How do you get people on board for long-term projects
| > when you don't make room for them to make sure they can embark on the
| > long journey?
|
| Do research.
yes, that is what counts; and you know what counts for tenure :-)
publish or prish; get grant or perish; etc. How does one convince NSF
to spend money in a legacy codes that only one person on the planet
understands, instead of building a new shiny tool?
Axiom is an oportunity, no doubt.
[...good list of things to do snipped...]
interestingly, I spend a good of my night on only 5 points of your
long list before I saw it. Three of them are listed "big" and the
other 2 are "small" :-/
| Room really isn't the issue. Time and money are.
Yes, that is part of what I call "room". If we narrow sufficiently
the window for Axiom to shoot at the nearest target (e.g. "computer
algebra systems") we probably would have something for the immediat
future. I'm however less convinced that window is sufficient enough to
attract people that might contribute to many points on your list, and
attract fundings...
| \begin{flame}
|
| All existing computational math platforms have suffered from the
| total lack of vision by the traditional funding sources. The
| NSF/Universities/Research Labs (traditional funding sources) used
| to consider "computer math" as leading edge research. They now
| consider it a "solved problem with commercial implementations" (NSF).
| Or not a good revenue stream since it won't make money this quarter (IBM).
| Or a cross-discipline crevice (Universities).
That is a real concern for those of us in the US.
| Traditional funding sources, (at least in the US, the rest of the world
| seems rather more rational) seem unable to understand that the science
| of computational mathematics is hardly born yet. It's like the funders
| have said "Plato discovered geometry... math is complete."
|
| The NSF has a policy that "if there is a commercial implementation we
| don't fund it" and feels that Axiom has something to do with MMA and
| Maple. We are being compared with the "commercial systems" and
| considered losers.
still; from time to time I see fundings for projcts "similar" in nature...
| "Do not compare yourselves to others or you will become vain or bitter"
|
| \end{flame}
|
|
| Axiom is a research platform in computational mathematics.
| There is no competition, commercial or otherwise.
| Real science is not a commercial enterprise.
| Fund it with your time and energy.
I need to feed my family too :-)
| > How do you get people on board for long-term projects
|
| I don't know what motivates people to skip sunshine and wrestle
| with hard problems. I'm irish. I don't do sunshine :-).
|
| My primary job seems to be to carry this software thru the funding
| dark ages. I'd make progress quicker if I could but it takes a LOT
| of time to make sure it "just works". Most of the things I'm
| actually working on will take years to complete. But I'm in no hurry.
tenure-track has a clock ;-p
-- Gaby
|
# Documentation
### This is machine translation
Translated by
Mouseover text to see original. Click the button below to return to the English verison of the page.
To view all translated materals including this page, select Japan from the country navigator on the bottom of this page.
# comm.GeneralQAMTCMModulator System object
Package: comm
Convolutionally encode binary data and map using arbitrary QAM constellation
## Description
The GeneralQAMTCMModulator object implements trellis-coded modulation (TCM) by convolutionally encoding the binary input signal. The object then maps the result to an arbitrary signal constellation. The Signal constellation property lists the signal constellation points in set-partitioned order.
To modulate a signal using a trellis-coded, general quadrature amplitude modulator:
1. Define and set up your general QAM TCM modulator object. See Construction.
2. Call step to modulate a signal according to the properties of comm.GeneralQAMTCMModulator. The behavior of step is specific to each object in the toolbox.
Note: Starting in R2016b, instead of using the step method to perform the operation defined by the System object™, you can call the object with arguments, as if it were a function. For example, y = step(obj,x) and y = obj(x) perform equivalent operations.
## Construction
H = comm.GeneralQAMTCMModulator creates a trellis-coded, general quadrature amplitude (QAM TCM) modulator System object, H. This object convolutionally encodes a binary input signal and maps the result using QAM modulation with a signal constellation specified in the Constellation property.
H = comm.GeneralQAMTCMModulator(Name,Value) creates a general QAM TCM modulator System object, H, with each specified property set to the specified value. You can specify additional name-value pair arguments in any order as (Name1,Value1,...,NameN,ValueN).
H = comm.GeneralQAMTCMModulator(TRELLIS,Name,Value) creates a general QAM TCM modulator System object, H. This object has the TrellisStructure property set to TRELLIS, and the other specified properties set to the specified values.
## Properties
TrellisStructure Trellis structure of convolutional code Specify trellis as a MATLAB® structure that contains the trellis description of the convolutional code. Use the istrellis function to check if a structure is a valid trellis structure. The default is the result of poly2trellis([1 3], [1 0 0; 0 5 2]). TerminationMethod Termination method of encoded frame Specify the termination method as one of Continuous | Truncated | Terminated. The default is Continuous. When you set this property to Continuous, the object retains the encoder states at the end of each input vector for use with the next input vector. When you set this property to Truncated, the object treats each input vector independently. The encoder is reset to the all-zeros state at the start of each input vector. When you set this property to Terminated, the object treats each input vector independently. For each input vector, the object uses extra bits to set the encoder to the all-zeros state at the end of the vector. For a rate K/N code, the step method outputs the vector with length $y=N×\left(L+S\right)}{K}$, where S = constraintLength–1. In the case of multiple constraint lengths, S = sum(constraintLength(i)–1)). L represents the length of the input to the step method. ResetInputPort Enable modulator reset input Set this property to true to enable an additional input to the step method. The default is false. When this additional reset input is a nonzero value, the internal states of the encoder reset to their initial conditions. This property applies when you set the TerminationMethod property to Continuous. Constellation Signal constellation Specify a double- or single-precision complex vector that lists the points in the signal constellation that were used to map the convolutionally encoded data. You must specify the constellation in set-partitioned order. See documentation for the General TCM Encoder block for more information on set-partitioned order. The length of the constellation vector must equal the number of possible input symbols to the convolutional decoder of the general QAM TCM demodulator object. This corresponds to 2N for a rate K/N convolutional code. The default corresponds to a set-partitioned order for the points of an 8-PSK signal constellation. This value is expressed exp($2×\pi ×j×\left[\begin{array}{cccccccc}0& 4& 2& 6& 1& 5& 3& 7\end{array}\right]}{8}$). OutputDataType Data type of output Specify the output data type as one of double | single. The default is double.
## Methods
clone Create general QAM TCM modulator object with same property values getNumInputs Number of expected inputs to step method getNumOutputs Number of outputs from step method isLocked Locked status for input attributes and nontunable properties release Allow property value and input characteristics changes reset Reset states of the general QAM TCM modulator object step Convolutionally encode binary data and map using arbitrary QAM constellation
## Examples
expand all
Modulate data using QAM TCM modulation with an arbitrary 4-point constellation. Display a scatter plot of the modulated data.
Create binary data.
data = randi([0 1],1000,1);
Use the trellis structure with generating polynomial [171 133] and 4-point arbitrary constellation { , , , } to perform QAM TCM modulation.
t = poly2trellis(7,[171 133]); hMod = comm.GeneralQAMTCMModulator(t,... 'Constellation',exp(pi*1i*[1 2 3 6]/4));
Modulate and plot the data.
modData = step(hMod,data); scatterplot(modData);
## Algorithms
This object implements the algorithm, inputs, and outputs described on the General TCM Encoder block reference page. The object properties correspond to the block parameters.
|
# Clarify vectors at angles of 90 and 270 degrees from the vertical for motion in a vertical circle
I believe the expression for motion in a circle, measured at the top and bottom of that circle,is $\frac{mv^2}{r} = T - mg$ where $mg$ is negative because it acts downwards at all times.
I am confused about the tension $T$ at positions on the circle that are not at the top or the bottom, especially at $\theta=$90 degrees to the vertical, for example.
Take a mass $m$ connected to a pivot by a rod or string and made to rotate in a vertical circle. At $\theta=$90 degrees the rod or string holding the mass would be horizontal.
At any positions on the circumference of the circle the horizontal component of the weight, $mg \cos\theta$, is equal to zero, and the vertical component is $mg\sin\theta=mg$
And does the centripetal force still act towards the centre, so the horizontal component must be $\frac{mv^2}{r}$?
In which case, at $\theta=90$ degrees are the weight and centripetal forces orthogonal?
Does this mean that if $\frac{mv^2}{r}$ is horizontal and is the resultant of the tension and $mg$, then in order to obtain a horizontal resultant there must be a horizontal component to the tension? Does this mean the tension must point upwards from the horizontal? Or is the tension actually horizontal?
If the latter is the case, does it mean that the tension vector always points towards the centre of the circle at all points on the circumference?
|
# Thread: help with an exponential function
1. ## help with an exponential function
Hi , I need some help with the next problem
In 1965, Gordon Moose observed that the amount of computing power possible to put on a chip doubles every two years. In 1990, there were 1,000,000 transistors per chip. How many transistors per chip were there in 2000? 1985?
I have tried to solve it ; however, I don't know how to interpret its growth ( I mean when it doubles every 2 years). Can some help me to write a function that describes this problem? Thanks in advance!!
2. ## Exponential growth
Hello skorpiox
Originally Posted by skorpiox
Hi , I need some help with the next problem
In 1965, Gordon Moose observed that the amount of computing power possible to put on a chip doubles every two years. In 1990, there were 1,000,000 transistors per chip. How many transistors per chip were there in 2000? 1985?
I have tried to solve it ; however, I don't know how to interpret its growth ( I mean when it doubles every 2 years). Can some help me to write a function that describes this problem? Thanks in advance!!
We need a function that gives $n$, the number of transistors after a time t years (measured from 1990), such that
At time $t = 0, n = 10^6$
And, since the number doubles every 2 years, at time $t = 2, n = 10^6 \times 2$
At time $t = 4, n = 10^6 \times 2^2$
...and so on.
Clearly, we are going to need something like:
$n = 10^6 \times 2^{at + b}$ for some constants $a$ and $b$.
Notice that at time $t = 0, n = 10^6 \times 1 = 10^6 \times 2^0$
and at time $t = 2, n = 10^6 \times 2^1$
So we want $t = 0$ to give $at + b = 0$
$\Rightarrow b = 0$
and $t = 2$ to give $at + b = 1$
$\Rightarrow 2a + 0 = 1$
$\Rightarrow a = 0.5$
So the function we need is
$n = 10^6 \times 2^{0.5t}$
You now need to find the value of $n$ when $t = 10$, and when $t = -5$.
Can you complete it now?
|
Notes On Adsorption: Isotherms - CBSE Class 12 Chemistry
Adsorption is basically an "equilibrium process". To understand the equilibrium process, consider an adsorbent and a gaseous adsorbate in a closed vessel, placed at a particular pressure P. It is observed that due to adsorption, the pressure of the gas decreases initially, and becomes constant after some time. This indicates that a state of equilibrium has been attained. Once equilibrium is reached, the rate of adsorption becomes equal to the rate of desorption. The extent of adsorption = x/m Where 'x' is the mass of the gas adsorbed and 'm' is the mass of the adsorbent. The variation in the amount of gas adsorbed by the adsorbent with variation in the pressure, at constant temperature, can be expressed by means of a curve. This curve, at constant temperature, is called the "adsorption isotherm Adsorption isotherm: At constant temperature, a graph between the amount of the gas adsorbed per gram of the adsorbent and the equilibrium pressure of the adsorbate is called the adsorption isotherm. Freundlich, in 1909, was the first to propose a mathematical relation for an adsorption isotherm. He gave an empirical relationship between the quantity of gas adsorbed by a unit mass of a solid adsorbent and pressure at a particular temperature. The relation can be expressed as: x/m = K P1/n Where x/m is the amount of gas adsorbed by a unit mass of the adsorbent n and K are constants, and P is the pressure. This relationship is generally represented in the form of a curve, and is known as the Freundlich adsorption isotherm. Freundlich adsorption isotherm: A curve where a mass of gas adsorbed per gram of the mass of the adsorbent, which is plotted against pressure. In this adsorption isotherm, x/m reaches its maximum value at point 'P s.' In other words, the extent of adsorption is the highest at this point. From the curve even if the pressure is increased beyond 'P s', the extent of adsorption remains the same. This is called the 'saturation state,' and 'P s' is called the 'saturation pressure.' At low pressure x/m α p1 At a very high pressure is x/m independent of the value of 'P'. This is represented as x/m α P0 In the intermediate range of pressure x/m depends upon P1/n Where 'n' is a positive integer. At a particular temperature, 'n' and 'k' depend upon the nature of the adsorbate and the adsorbent. Now, when 1/n = 0 x/m = KP0 x/m = K It is in this part of the curve that adsorption is independent of pressure. On the other hand, when 1/n= 1 x/m = KP x/m α P In other words, adsorption varies directly with pressure in this part of the isotherm.
#### Summary
Adsorption is basically an "equilibrium process". To understand the equilibrium process, consider an adsorbent and a gaseous adsorbate in a closed vessel, placed at a particular pressure P. It is observed that due to adsorption, the pressure of the gas decreases initially, and becomes constant after some time. This indicates that a state of equilibrium has been attained. Once equilibrium is reached, the rate of adsorption becomes equal to the rate of desorption. The extent of adsorption = x/m Where 'x' is the mass of the gas adsorbed and 'm' is the mass of the adsorbent. The variation in the amount of gas adsorbed by the adsorbent with variation in the pressure, at constant temperature, can be expressed by means of a curve. This curve, at constant temperature, is called the "adsorption isotherm Adsorption isotherm: At constant temperature, a graph between the amount of the gas adsorbed per gram of the adsorbent and the equilibrium pressure of the adsorbate is called the adsorption isotherm. Freundlich, in 1909, was the first to propose a mathematical relation for an adsorption isotherm. He gave an empirical relationship between the quantity of gas adsorbed by a unit mass of a solid adsorbent and pressure at a particular temperature. The relation can be expressed as: x/m = K P1/n Where x/m is the amount of gas adsorbed by a unit mass of the adsorbent n and K are constants, and P is the pressure. This relationship is generally represented in the form of a curve, and is known as the Freundlich adsorption isotherm. Freundlich adsorption isotherm: A curve where a mass of gas adsorbed per gram of the mass of the adsorbent, which is plotted against pressure. In this adsorption isotherm, x/m reaches its maximum value at point 'P s.' In other words, the extent of adsorption is the highest at this point. From the curve even if the pressure is increased beyond 'P s', the extent of adsorption remains the same. This is called the 'saturation state,' and 'P s' is called the 'saturation pressure.' At low pressure x/m α p1 At a very high pressure is x/m independent of the value of 'P'. This is represented as x/m α P0 In the intermediate range of pressure x/m depends upon P1/n Where 'n' is a positive integer. At a particular temperature, 'n' and 'k' depend upon the nature of the adsorbate and the adsorbent. Now, when 1/n = 0 x/m = KP0 x/m = K It is in this part of the curve that adsorption is independent of pressure. On the other hand, when 1/n= 1 x/m = KP x/m α P In other words, adsorption varies directly with pressure in this part of the isotherm.
Previous
Next
|
# Move fast … Or you will lose
Suppose you're on a 4 × 6 grid, and want to go from the bottom left to the top right. How many different paths can you take? Avoid backtracking -- you can only move right or up.
• I think its from this site betterexplained.com/articles/… – user56760 Feb 27 at 3:32
• (In the future please be aware that for content you did not create yourself, proper attribution is required. You need to include (at minimum) where it came from—and any additional context you can provide is often helpful to solvers. Posts which use someone else's content without attribution are generally deleted.) – Rubio Feb 27 at 6:11
• – beppe9000 Feb 27 at 22:26
This is, I'm sure, answered somewhere else. It is also related to Pascal's triangle.
Simply fill out the grid as follows:
In this grid, each number represents the number of ways of getting to that particular intersection. And that number is precisely the number of ways to get to the intersection below it added to the number of ways to get to the intersection to the left of it.
• This tip is also mentioned in MathCounts Mini #89 – MilkyWay90 Feb 28 at 3:38
A more mathematically oriented answer:
You have $$10$$ moves to make in total and you need to choose which $$4$$ of them are going to be up. The number of ways to do that is $${10\choose 4}=210$$
Here is a small python program which solves it. Every step we can go up or right with the goal being 4,6.
def count(up, right):
if up > 4:
return 0
if right > 6:
return 0
if up == 4 and right == 6:
return 1
return count(up + 1, right) + count(up, right + 1)
print count(0,0)
output : 210
rrrrrruuuu
$$\binom30+\binom41+\binom52+\binom63+\binom74+\binom85+\binom96$$ $$=1+4+10+20+35+56+84$$ $$=210$$
|
+0
# Polynomials
0
23
1
Find the coefficient of z in the product (-3z^2 - 7z + 4)(6z^2 + 6z - 1).
Jul 23, 2022
The coefficient of z is $$(4 \times 6) + (-7 \times -1)$$
|
2,190 views
The function $f\left(x,y\right) = x^2y - 3xy + 2y +x$ has
1. no local extremum
2. one local minimum but no local maximum
3. one local maximum but no local minimum
4. one local minimum and one local maximum
$r = \frac{\partial^2 f}{\partial x^2} = 2y$
$s = \frac{\partial^2 f}{\partial x \partial y} = 2x - 3$
$t = \frac{\partial^2 f}{\partial y^2} = 0$
Since, $rt - s^2 \leq 0,$ (if < 0 then we have no maxima or minima, if = 0 then we can't say anything).
Maxima will exist when $rt - s^2 > 0$ and $r < 0.$
Minima will exist when $rt - s2 > 0$ and $r > 0.$
Since, $rt - s^2$ is never $> 0$ so we have no local extremum.
When x = 1.5, s2 = 0 and 0 - 0 = 0.
Corrected.
multivariable calculus not in syllabus?
1 vote
1
2,007 views
|
## All combinations of balanced parentheses (Generate Parentheses)
Given n pairs of parentheses, write a function to generate all combinations of well-formed parentheses.
For example, given n = 3, a solution set is:
“((()))”, “(()())”, “(())()”, “()(())”, “()()()”
Thoughts:
Recursion. Base case is which the solution has size of $2*n$. Recursive step: each time we count number of open parenthesis and closed parenthesis. If open == closed, then we have to add open parenthesis. If open > closed, we can add open parenthesis as long as we do not exceed $n$. We can also add closed parenthesis.
Code (Java):
```import java.util.ArrayList;
public class Solution {
public ArrayList<String> generateParenthesis(int n) {
ArrayList<String> solutions = new ArrayList<String>();
recursion(n, new String(), solutions);
return solutions;
}
private void recursion(int n, String str, ArrayList<String> sol) {
if(str.length() == 2 * n)
else {
int left = 0;
int right = 0;
for(int i = 0; i < str.length(); ++i) {
if(str.charAt(i) == '(')
left++;
if(str.charAt(i) == ')')
right++;
}
if(left == right)
recursion(n, str + "(", sol);
else if(right < left) {
if(left < n)
recursion(n, str + "(", sol);
recursion(n, str + ")", sol);
}
}
}
}
Code (C++):
class Solution {
public:
vector<string> generateParenthesis(int n) {
vector<string> solutions;
string solution;
recursion(n, solution, solutions);
return solutions;
}
void recursion(int n, string solution, vector<string> &solutions) {
if(solution.size() == 2*n) {
solutions.push_back(solution);
} else {
int left = 0;
int right = 0;
for(int i = 0; i < solution.size(); ++i) {
if(solution[i] == '(')
left++;
if(solution[i] == ')')
right++;
}
if(left == right)
recursion(n, solution + "(", solutions);
else if(left > right) {
if(left < n)
recursion(n, solution + "(", solutions);
recursion(n, solution + ")", solutions);
}
}
}
};
div.wpmrec2x{max-width:610px;}
div.wpmrec2x div.u > div{float:left;margin-right:10px;}
div.wpmrec2x div.u > div:nth-child(3n){margin-right:0px;}
(function(g,\$){if("undefined"!=typeof g.__ATA){
}})(window,jQuery);
var o = document.getElementById('crt-834599882');
if ("undefined"!=typeof Criteo) {
var p = o.parentNode;
p.style.setProperty('display', 'inline-block', 'important');
o.style.setProperty('display', 'block', 'important');
} else {
o.style.setProperty('display', 'none', 'important');
o.style.setProperty('visibility', 'hidden', 'important');
}
var o = document.getElementById('crt-342196338');
if ("undefined"!=typeof Criteo) {
var p = o.parentNode;
p.style.setProperty('display', 'inline-block', 'important');
o.style.setProperty('display', 'block', 'important');
} else {
o.style.setProperty('display', 'none', 'important');
o.style.setProperty('visibility', 'hidden', 'important');
}
Related
```
1. Poonam
Feb 17, 2014 @ 06:07:10
Another way to solve this would be
public class Parenthesis {
public static void main(String[] args) {
int n = 3;
char[] str = new char[n*2];
printParenthesis(n,n,str,0);
}
private static void printParenthesis(int lCount, int rCount, char[] str,
int count) {
if(lCount < 0 || rCount 0){
str[count] = ‘(‘;
printParenthesis(lCount-1, rCount, str, count+1);
}
if(rCount > lCount){
str[count] = ‘)’;
printParenthesis(lCount, rCount-1, str, count+1);
}
}
}
}
|
##### Page tree
This manual gives a walkthrough on the CreateView application:
# Introduction
CreateView composes an SDfile that contains both structures and calculation results using the input SDfile of GenerateMD and a table containing the ordinal number of compounds in the SDfile and other data to be viewed. Such table can be created for example by Compr or Jarp. The generated SDfiles can be displayed by the MarvinView application or other SDF viewer.
# Usage
CreateView can be used as a command line application in the following way:
crview [<options>]
Prepare the usage of the crview script or batch file as described in Preparing the Usage of JChem Batch Files and Shell Scripts.
You can also call the CreateView Java class directly:
• Under Win32 / Java 2 (assuming that JChem is installed in c:\jchem):
java -cp "c:\jchem\lib\jchem.jar;%CLASSPATH%" chemaxon.clustering.CreateView [<options>]
• Under Unix / Java 2 (assuming that JChem is installed in /usr/local/jchem):
java -cp "/usr/local/jchem/lib/jchem.jar:\$CLASSPATH" \ chemaxon.clustering.CreateView [<options>]
Because the utility has many parameters, it may be reasonable to create a shell script or a batch file for calling the software.
# Options and creating input
-h --help this help message
-s --input-sdf <file> input SDfile
-t --input-table <file> input table (id values and other data)
-o --output-sdf <file> output SDfile
-i --id-name <col>[:<count>] name of columns storing the id's (indexes).
<count> is the occurrence of the column.
default: the id is the line number.
-d --data-names <col1>:<col2>... name of columns to include in the SDfile
-c --condition "<col><OP><cond>" condition checked. OP may be: =,<,>,<=,>=
Two input files have to be specified:
• SDfile containing structures (--input-sdf option).
• A table containing the ordinal number of compounds from the SDfile and other data to be viewed (--input-table option).
The structure of the input table should be the following:
• headers of columns in column set 1 (1 row)
• column set 1 (multiple rows)
• headers of columns in column set 2 (1 row)
• column set 2 (multiple rows)
• ....
The --id-name <col>[:<count>] parameter determines which column contains the ordinal number of compounds to include in the generated SDfile. <col> is substituted by the name of the column. (This name appears in the header.) If there are more column sets in the file with the same name of name of column, then :<count> determines which occurrence of the column should be used.
# Examples
Example #1
--id-name centr:2
The --data-names <col1>:<col2>... option chooses the columns of the column set, which should be included in the SDfile as data fields. Column names must be separated by a colon.
Example #2
--data-names clid:size
Using the --condition <col><OP><cond> parameter, it can be determined which rows of the table should be included. Only one condition may be specified.
For example if --condition clid=2 is set, then only those compounds will be included in the SDfile, for which the value of the clid column is 2.
|
# Intuition for the degrees of freedom of the LASSO
Zou et al. "On the "degrees of freedom" of the lasso" (2007) show that the number of nonzero coefficients is an unbiased and consistent estimate for the degrees of freedom of the lasso.
It seems a little counterintuitive to me.
• Suppose we have a regression model (where the variables are zero mean)
$$y=\beta x + \varepsilon.$$
• Suppose an unrestricted OLS estimate of $\beta$ is $\hat\beta_{OLS}=0.5$. It could roughly coincide with a LASSO estimate of $\beta$ for a very low penalty intensity.
• Suppose further that a LASSO estimate for a particular penalty intensity $\lambda^*$ is $\hat\beta_{LASSO,\lambda^*}=0.4$. For example, $\lambda^*$ could be the "optimal" $\lambda$ for the data set at hand found using cross validation.
• If I understand correctly, in both cases the degrees of freedom is 1 as both times there is one nonzero regression coefficient.
Question:
• How come the degrees of freedom in both cases are the same even though $\hat\beta_{LASSO,\lambda^*}=0.4$ suggests less "freedom" in fitting than $\hat\beta_{OLS}=0.5$?
References:
• great question, that would deserve more attention! – Matifou May 25 at 19:37
Assume we are given a set of $n$ $p$-dimensional observations, $x_i \in \mathbb{R}^p$, $i = 1, \dotsc, n$. Assume a model of the form: \begin{align} Y_i = \langle \beta, x_i\rangle + \epsilon \end{align} where $\epsilon \sim N(0, \sigma^2)$, $\beta \in \mathbb{R}^p$, and $\langle \cdot, \cdot \rangle$ denoting the inner product. Let $\hat{\beta} = \delta(\{Y_i\}_{i=1}^n)$ be an estimate of $\beta$ using fitting method $\delta$ (either OLS or LASSO for our purposes). The formula for degrees of freedom given in the article (equation 1.2) is: \begin{align} \text{df}(\hat{\beta}) = \sum_{i=1}^n \frac{\text{Cov}(\langle\hat{\beta}, x_i\rangle, Y_i)}{\sigma^2}. \end{align}
By inspecting this formula we can surmise that, in accordance with your intuition, the true DOF for the LASSO will indeed be less than the true DOF of OLS; the coefficient-shrinkage effected by the LASSO should tend to decrease the covariances.
Now, to answer your question, the reason that the DOF for the LASSO is the same as the DOF for OLS in your example is just that there you are dealing with estimates (albeit unbiased ones), obtained from a particular dataset sampled from the model, of the true DOF values. For any particular dataset, such an estimate will not be equal to the true value (especially since the estimate is required to be an integer while the true value is a real number in general).
However, when such estimates are averaged over many datasets sampled from the model, by unbiasedness and the law of large numbers such an average will converge to the true DOF. In the case of the LASSO, some of those datasets will result in an estimator wherein the coefficient is actually 0 (though such datasets might be rare if $\lambda$ is small). In the case of OLS, the estimate of the DOF is always the number of coefficients, not the number of non-zero coefficients, and so the average for the OLS case will not contain these zeros. This shows how the estimators differ, and how the average estimator for the LASSO DOF can converge to something smaller than the average estimator for the OLS DOF.
• Thanks for correcting my mistakes and imprecise formulations. Let me see if I understood you well. Essentially, if we were to repeat the experiment many times (or sample many times from the same population), we would occasionally get $\hat\beta_{LASSO}=0$ (the coefficient would be shrunk all the way to zero) and on average (across the experiments) I would get DoF for LASSO $<1$ while DoF for OLS $=1$ (obviously). – Richard Hardy Jul 11 '16 at 16:59
• By the way, why does the estimate of degrees of freedom need to be integer? Does it, really? Let me also remark that the inner product notation appears unnecessarily complicated and is rarely used on this site; matrix notation would suffice. But it's your choice, of course. – Richard Hardy Jul 11 '16 at 17:07
• Yes that about sums its up. The estimate of degrees of freedom has to be an integer for LASSO (at least for a single dataset) just because the estimate is the number of non-zero coefficients. – e2crawfo Jul 11 '16 at 17:20
• The statement The estimate of degrees of freedom has to be an integer for LASSO just because the estimate is the number of non-zero coefficients seems highly tautological to me. In general, I don't think the df needs to be integer, from the very definition of the df you wrote. Similarly, in the ridge case, it is not necesarily zero. – Matifou May 23 at 22:22
|
## Precalculus (6th Edition) Blitzer
The value of x is $4$.
Consider the expression $5\left( 2x-3 \right)-4x=9$ and solve as follows: \begin{align} & 5\left( 2x-3 \right)-4x=9 \\ & 10x-15-4x+15=9+15 \\ & 6x=24 \\ & x=4 \end{align}
|
?
Free Version
Difficult
# Time Required to Reach x(t) Given a(t)
APPHMC-PEHEGK
The acceleration of a given particle is given by $a(t) = e^{t/2} - 1$, where $v_0 = 2 m/s$ and $x_0 = 1 m$.
How much time does it take for the particle to reach $x = 50 m$?
Note: this problem involves rather difficult calculus.
A
$3.8 s$
B
$4.0 s$
C
$5.6 s$
D
$5.7 s$
E
$9.1 s$
|
## Properties and prediction accuracy of a sigmoid function of time-determinate growth
iForest - Biogeosciences and Forestry, Volume 8, Issue 5, Pages 631-637 (2015)
doi: https://doi.org/10.3832/ifor1243-007
Research Articles
The properties and short-term prediction accuracy of mathematical model of sigmoid time-determinate growth, denoted as “KM-function”, are presented. Comparative mathematical analysis of the function revealed that it is a model of asymmetrical sigmoid growth, which starts at zero size of an organism and terminates when it reaches its final size. The function assumes a finite length of the growth period and includes a parameter interpretable as the expected lifespan of the organism. Moreover, the possibility for growth curve inflexion at any age is possible, so the function can be used for modelling of S-shaped growth trajectories with various degree of asymmetry. These good theoretical predispositions for realistic growth predictions were empirically evaluated. The KM-function used in three and four-parameter forms was compared with three classical (Richards, Korf and Weibull) growth functions employing two parameterisation methods - nonlinear least squares (NLS) and Bayesian method. The evaluation was conducted on the basis of the tree diameter series obtained from stem analyses. The main empirical findings are: (i) if the minimisation of the prediction bias is required, the KM-function in three-parameter form in connection with Bayes parameterisation can be recommended; (ii) if the minimisation of root square error (RMSE) is required, the best short-term prediction results for a particular dataset were obtained with four-parameter Weibull function employing NLS parameterisation; (iii) moreover, three-parameter functions parameterised by Bayesian methods show a considerably smaller RMSE by 15-25% as well as smaller biases by 40-60% than four-parameter functions employing NLS. Overall, all analyses confirmed relative usefulness of the KM-function in comparison with classical growth functions, especially in connection with Bayesian parameterisation.
# Introduction
Changes in plant size over the lifespan are typically characterized by an S-shaped growth trajectory ([21], [35], [38]). Growth functions represent simple, non-linear equations describing the sigmoidal change in size of an individual or population with time. Their mathematical forms are derived empirically or conceived from the biological knowledge of the growth of living organisms ([2]). Historically, the Gompertz, logistic, Mitscherlich or Korf’s growth models rank among the most successful functions used in modeling the plant growth ([10], [15], [19], [33]). Currently, the most used growth model is the Richards’ function ([23]) which was obtained by the generalization of the Bertallanffy’s growth function ([1]). Among the growth functions with unique properties, the Weibull’s or Sloboda’s equations and the increasingly popular Schnute’s growth function are also worth to be mentioned ([34], [29], [28]). A comprehensive survey of the historical background, the mathematical and statistical properties of the mentioned growth functions may be found in the works by Zeide ([38]), Seber & Wild ([24]), Tsoularis & Wallace ([31]) and Karkach ([13]).
The application of growth functions is now more common than ever. Growth functions are based on relatively simple equations with a limited number of parameters; nonetheless, they show a good ability to encompass complicated growth processes ([40]). A direct prediction of the integral quantities of growth at individual or population level by means of simpler growth functions can be more precise than prediction from the sum of the individual components of growth (cells, tissues and organs) obtained by more complicated process models. In general, this can be considered a major paradox of ecological modeling and represents a considerable challenge to plant ecology science ([21], [24], [39]).
From a mathematical point of view, each growth function suitable for the description of sigmoid growth must be able to describe a nonlinear, monotonically increasing growth pattern having a concave shape along the early life stages. At later stages, this turns into a convex function which approaches a final asymptote. For these reasons, growth functions are characterized by the inclusion of three or four parameters defining the position and shape of the growth curve ([8], [9]). In general, the asymptotic parameter β0 is present in all classical growth functions ([3]), and represents the limit (eqn. 1):
$$\beta_0 = \lim_{t \rightarrow \infty} \int^{t}_0 f^{\prime}(t)$$
Hence, β0 is the asymptotic value which the growth function approaches towards the end of an organism’s life, and which the growth function will reach after an infinite growth period.
A detailed analysis on the mathematical structure of some important growth functions by Shvets & Zeide ([27]) reported that even after 200 years of intensive research, there is still opportunity for improvement of the mathematical structure of existing growth functions. Karkach ([13]) distinguishes two general growth types: determinate and non-determinate growth. The former applies when the growth of an organism terminates at a certain development stage or at a final (maximum) size, whereas non-determinate growth continues throughout the organism’s lifespan, regardless of the degree of physiological maturity attained. Due to the existence of the asymptotic parameter β0, all classical growth functions are non-determinate growth models that implicitly entail an infinite length of the growth period. This fact is clearly not consistent with the biological reality. The use of an asymptotic growth function for a finite life length of every living organism is possible only because they predict very small, practically negligible growth rates at the oldest ages.
Recently, two attempts to solve the aforementioned issue has been reported in the literature. Yin et al. ([36]) proposed a new flexibile empirical function of sigmoid determinate growth, obtained by modifying the probability density function of the beta distribution. Sedmák & Scheer ([26]) derived another sigmoid function of determinate growth, called KM-function, based on a modification of the Kumaraswamy’s distribution function ([16]). Generally, both probability distributions are suitable for double-bounded random variables, and consequently they are theoretically well-suited for the description of determinate growth within a finite period of time.
Growth functions are used as semi-empirical models whose parameters are inferred from empirical measurements of the growth of an organism ([13], [27], [40]). Their parameter estimations are carried out by means of different theoretical methods, mainly based on non-linear regressions. A wide range of methods of classic (Fisherian) frequentist statistics ([24]) or the alternative Bayesian statistics ([5]) are currently used to this purpose.
To estimate the parameters in the regression models, the most common methods used by the frequentist school are the maximum likelihood estimation (MLE) and the ordinary least squares (OLS). The OLS method is derived from the MLE by introducing the simplified assumptions of normality, variance homogeneity and uncorrelatedness of residual deviations of the regression model. Using the OLS method, parameter estimation is almost exclusively related to the solution of a set of non-linear equations by numerical techniques. In such cases, the OLS method is also called nonlinear least squares (NLS) estimation.
An alternative method for the estimation of growth function parameters is based on a Bayesian approach. The basic difference between Fisherian and Bayesian statistical schools relies on a different understanding of the probability concept ([6]). In the Bayesian estimation, the a priori joint probability distribution of possible parameter values in the model is combined with information from data represented by the likelihood function.
The objective of this study was to assess the performances of the KM-function in modeling the sigmoid determinate growth in living organisms and to analyse its theoretical properties. The analysis was carried out by comparing the properties and prediction accuracy of the KM-function with those of selected classical growth functions. The statistical properties of growth functions were compared based on empirical data, made by diameter growth series of individual trees obtained from stem analyses. Two methods of statistical parametrization were used: the classic nonlinear least squares method, and the Bayesian parametrization method based on the Markov chain Monte Carlo procedure.
# Material and methods
## Growth functions included into comparison
Tab. 1 - List of the growth models compared in this study.
Model (abbr) Integral form Source
KM-function (KM3) y = ymax [1-(1-(t/500)b)c] Kumaraswamy ([16])
KM-function (KM4) y = ymax[1-(1-(t/tmax)b)c] Kumaraswamy ([16])
Richards function (R3) y = ymax(1-e-bt)c Zeide ([38])
Richards function (R4) y = ymax (1-de-bt)c Richards ([23]), Fitzhugh ([9])
Korf function (KF3) y = ymax e(-bt)^c Korf ([15]), Li et al. ([17])
Weibull function (W4) y = a - de-(bt)^c Weibull ([34]), Ratkowsky ([22])
An overview of the growth functions used in this study is reported in Tab. 1. The first function was a time-determinate KM-function with the following integral form (eqn. 2):
$$y = y_{max} \left [1 - \left (1- \left (\frac{t}{t_{max}} \right )^b \right )^c \right ]$$
where 0 ≤ ttmax, tmax > 0, ymax > 0, b > 0, c > 0. The parameter ymax is interpreted as the final (maximum) size of an organism attained at the age tmax. If the parameter tmax is fixed on the basis of a clear biological interpretation, the new function will have only three parameters. The parameters b and c determine the position of its inflexion point within the life cycle. The growth rate equation can be obtained by derivation from eqn. 2 as follows (eqn. 3):
$$\frac{dy}{dt} = y_{max} bct_{max}^{-1} \left (\frac{t} {t_{max}} \right )^{b-1} \left [1- \left (\frac{t} {t_{max}} \right )^b \right ]^{c-1}$$
A typical course of the KM-function over time and that of the derived absolute and relative growth rates (RGR) are shown in Fig. 1a and Fig. 1b. In this study, the KM-function was used in two parametrization forms with a different number of parameters (Tab. 1). The three-parameter form was derived from the four-parameter version by fixing the parameter tmax to a value of 500 years on the basis of the maximum lifetime estimated for beech trees in Slovakia ([20]).
Fig. 1 - Growth trajectory described by the sigmoid KM-function. (a): Growth period starts at t=0 and ends at the age tmax, at which the exact final value is reached; (b): full and broken lines show the corresponding courses of growth and relative growth rates, which are exactly equal to zero at the age tmax.
The Richards’ function is one of the most important growth models of the exponential decline type sensu Zeide ([38]), and represents one of the most used growth models worldwide. Its popularity stems from the excellent flexibility, along with a relatively good extrapolation capability. Similarly to the KM-function, the Richards’ function is also used in its forms including three or four parameters.
The Korf’s growth function belongs to the power decline type sensu Zeide ([38]), and it is considered as the most suited to forestry conditions of Slovakia. It was used for the construction of the national growth and yield tables ([11]). According to Zeide ([37]), this function is especially suitable for the description of the diameter of a fixed number of trees, as it is more accurate than other traditional growth functions.
The Weibull’s growth function does not belong to any of the aforementioned classification sensu Zeide (1989). From a practical point of view, this function has the same advantages as the Richards’ function, and it was included in the list of Tab. 1 because it was derived exclusively in an empirical way, just as the KM-function.
All the classical growth functions described above share the common feature of being time-non-determinate growth models with an asymptotic behavior.
## Dataset
The empirical dataset used in this study consisted of stem analyses for 67 beech trees cut in an 80-160 years-old mature forest stand growing in a good-quality site. The analysis of the selected growth functions and parametrization methods was performed on growth series at time intervals of five years based on the diameter at breast height (d1.3) measured to the nearest 1 mm. Current and mean annual increments were inferred from diameter growth data. Growth and increment curves were subjected to graphical analysis, revealing two episodes of abrupt increases in current increment along the stand history. The first episode was interpreted as due to an intensive thinning carried out prior to the culmination of current increments at the age of 50-70 years, while the other was at the beginning of the natural regeneration step at the age of 120-140 years. From a modeling point of view, the increase of tree growth at later stages due to seeding felling is in contradiction with the sigmoid-shaped growth models, that predict a reduction of current increment at older ages. Therefore, each diameter growth series was truncated, considering only the data prior to the start of the stand’s natural regeneration (identified at age 120 years). Moreover, data from trees with age < 20 years were excluded from growth series, since their diameters were affected by rather high measurement errors. In total, 67 growth series with age ranging from 20 to 115 years were included in the analysis. Since data were grouped in 5 years intervals, each individual growth series was composed by 19 measured diameters. More detailed information on individual series and discrete events along the stand history are given in Tab. 2.
Tab. 2 - Main statistics of the empirical growth series analyzed. (SD): Standard deviation; (dbh): diameter at the breast height.
Statistics Current increment culmination (yrs) Mean increment culmination (yrs) Age of 1st release
(yrs)
Beginning of regeneration
(yrs)
Cut down age (yrs) Cut down dbh
(cm)
Minimum 27 33 25 75 85 17.3
Mean ± SD 72 ± 14 95 ± 20 61 ± 18 114 ± 18 131 ± 17 37.1 ± 9.7
Maximum 100 145 110 150 165 54.4
## Error analysis
Performances of growth functions were assessed by estimating the bias and accuracy of short-term (5 years) predictions of tree diameter growth. Comparison of observed vs. predicted values for the variables considered represent a commonly-used, simple method to evaluate growth models ([21], [32]). Model validation through the analysis of deviations of projected values from empirical data, allows an objective estimation of the ability of growth functions to capture meaningful biological trends, and extrapolate growth changes over time, thus assessing its suitability to modeling purposes ([7], [41]).
Each growth series was divided into two parts for parametrization and validation purposes. The parametrization step was carried out on sequences of 18 measurements at the age ranging from 25 to 110 years covering all the life stages (juvenility, maturity, senescence). The validation step was carried out on one measurement at the age of 115 years. This measurement was omitted from the existing growth series and used for comparison of real diameter at this age with the diameter estimated by forward extrapolation of the parametrization sequences.
Relative errors of extrapolation for tree diameters were calculated at the validation step as e% = (dp - dr) / dr · 100, where dp is the predicted diameter and dr is the real diameter at age 115 years. Every selected growth equation was parametrized for every individual tree. Therefore, a set of 67 individual errors were obtained for every used growth function. The arithmetic mean of individual tree errors e% was calculated as a measure of bias, and the root mean square error (RMSE) as a measure of the absolute size of errors, thus indicating the practical applicability of the function considered.
## Parametrization
As for classic (frequentist) approach, parametrization of growth functions was done using the NLS method implemented in the software package STATISTICA® v. 10.0 ([30]). Among different numerical optimization methods, the derivative Levenberg-Marquardt method was primarily chosen; when convergence using this method failed, the more robust non-derivative techniques of direct search, the Hooke-Jeeves and Rosenbrock methods, were applied instead. The simple form of the NLS method was used, despite the inherent heteroscedasticity and autocorrelation of growth residuals. Indeed, autocorrelation and non-homogeneity of the variance of residual deviations do not represent a serious problem for predictions, since parameters estimates of growth models are not biased, as previously reported by several authors ([4], [32]).
Bayesian parametrization starts with the formulation of the a priori probability distributions of individual growth functions parameters. The marginal a priori parameter distribution represent the probability of occurrence of parameter values for the selected growth functions, when modeling of diameter growth is applied to individual beech trees growing in different social positions and in sites of varying quality under natural conditions of Slovakia. Such distributions were mathematically formalized (elicited) by a special mathematical-statistical procedure based on the calculation of percentiles of the empirical diameter distributions (for a detailed description, see [25]). Halaj ([12]) reported the empirical distribution of the relative frequencies of 2-cm diameter classes in 420 beech stands, according to different stand mean diameters (from 10 to 50 cm, scaled by 2 cm), along with the so-called “degree of stand volume variance” (lower, average and higher degree). Such information has been included in the National growth and yield tables ([11]). By combining the information on age and site class, mean stand diameter may be predicted at any age for different site class, and consequently the percentiles of the corresponding diameter distribution (99 percentile growth series for percentiles 1-100) may be obtained. The percentile growth series simulate the growth in diameter of individual beech trees growing in different social position. Fitting all growth functions to these series using the NLS method, we obtained information on possible values of function parameters for each site class. Consequently, we could infer the probability distribution, mean values, variance and covariance of individual parameters for each selected function. Then, convenient marginal probability distributions of individual parameters were found by Pearson’s χ2 test. The elicited marginal a priori distributions of individual parameters had a form of either lognormal or Gamma distribution. Joint a priori distributions of multiple parameters were obtained as the simple product of marginal distributions of individual parameters.
Bayes theorem was applied as a combination of a normal likelihood function with joint a priori distributions determined by elicitation. According to the Bayesian approach, parameter estimates were calculated by the Gibbs sampling method using the software package WINBUGS 1.4 ([18]). The Gibbs method belongs to a group of numerical methods of integration of multiple integrals referred to as Markov Chain Monte Carlo (MCMC) methods. To generate the numerical estimates of the parameters for the growth functions to be compared, 11 MCMC chains were used and a posteriori parameter distributions were composed based on numerical generation of about 500 000 values. With regard to multimodality of some a posteriori marginal parameter distributions, the final parameter estimates were obtained as the medians of the a posteriori distributions. In total, 804 parametrizations of the six selected growth functions were done on a sample of 67 trees using the two estimation methods.
# Results
A summary of error statistics in the prediction of individual tree diameters at age 115, according to the selected growth functions and the parametrization method chosen are given in Tab. 3. Tab. 4 reports the derived ratios of RMSE or biases of a particular combination of growth function and parametrization method against the minimal RMSE or bias from all function/parametrization method combinations (6 functions × 2 parametrization methods = 12 combinations). Ratios of RMSE and bias reported in Tab. 4 facilitates the comparison of growth functions and parametrization methods according to possible aims of the modeling. The common aim of modeling in forestry and ecology is either the minimization of bias in order to ensure the accuracy of predictions (“bias” columns - Tab. 3, Tab. 4), or the minimization of the magnitude of total errors, to guarantee the practical applicability of the model (columns “RMSE” - Tab. 3, Tab. 4).
Tab. 3 - Error statistics of diameter predictions for trees at the age 115 years according to growth functions and parametrization methods. (NLS): nonlinear least squares; (RMSE): root mean square error; (1): highlighted are minimal values of the statistic within each modeling aim (minimization of bias or RMSE); (2): only absolute value of biases are divided to analyze their magnitude irrespectively of their sign; (3): only absolute value of biases were averaged for the same reason than in (2).
No.
Parameters
Growth
function
Parametrization methods
NLS Bayes Bayes/NLS ratio
RMSE
%
Bias
%
RMSE
%
Bias
%
Ratio of
RMSEs
Ratio of
Biases(2)
3-parameter
functions
Korf (KF3) 6.4 4.9 5.9 3.7 0.92 0.76
Richards (R3) 6.8 5.0 5.5 3.9 0.81 0.77
KM-function (KM3) 17.8 -1.1(1) 4.3(1) 2.1(1) 0.24 1.93
Average 10.3 3.7(3) 5.3 3.2 0.66 1.15
4-parameter
functions
Weibull (W4) 3.7 2.1 6.1 3.8 1.68 1.82
Richards (R4) 7.2 6.4 6.7 5.9 0.93 0.92
KM-function (KM4) 8.6 7.5 19.8 -17.6 2.30 2.36
Average 6.5 5.3 10.9 9.1 1.64 1.707
Tab. 4 - Ratios of the RMSE and bias vs. minimal values of the RMSE and bias reported in Tab. 3. (NLS): nonlinear least squares; (RMSE): root mean square error; (1): best ratios according to the modeling aim (minimization of bias or RMSE).
No.
Parameters
Growth
function
Parametrization methods
NLS Bayes Average
RMSE Bias RMSE Bias RMSE Bias
3-parameter
functions
Korf (KF3) 1.73 4.41 1.37 1.76 1.55 3.09
Richards (R3) 1.84 4.50 1.28 1.86 1.56 3.18
KM-function (KM3) 4.81 1.00 (1) 1.00 (1) 1.00 (1) 2.91 1.00 (1)
Average 2.79 3.31 1.22 1.54 2.00 2.42
4-parameter
functions
Weibull (W4) 1.00 (1) 1.89 1.42 1.81 1.21 (1) 1.85
Richards (R4) 1.95 5.77 1.56 2.81 1.75 4.29
KM-function (KM4) 2.32 6.76 4.60 8.38 3.46 7.57
Average 1.76 4.80 2.53 4.33 2.14 4.57
In general, absolute values of RMSE and biases of the five-year diameter predictions are rather small (Tab. 3). Most of the RMSE lie within the interval of 3.7-8.6%, with two exceptions approaching the limit of 20%. Similarly, most of biases were ranging 1.1-7.5%, with only one exception reaching 17.6%. Most of the bias signs were positive, indicating a tendency to overestimation of real diameters in the prediction of future growth.
The results of the comparison of parametrization methods regardless of the respective growth function (columns “NLS” and “Bayes” - Tab. 3) showed that the Bayesian parametrization was moderately to significantly better as for RMSE in four out of six growth functions, though in two cases it was significantly worse. From the viewpoint of bias, the situation is more balanced (3× better NLS, 3× better Bayes). However, in cases where NLS gave better results, its prevalence was pronounced, whereas in cases where better results were obtained using the Bayesian approach, its prevalence was only moderate.
Regardless the parametrization method, the analysis of growth function efficiency according to a number of parameters (columns and lines “Average”- Tab. 4) showed that three-parameter functions perform better for both aims of modeling. Differences in mean RMSE ratios averaged across functions and parametrization methods are much smaller than in mean bias ratios. This means that three-parameter functions performed only slightly better for error magnitude, but their predictions are clearly less biased; therefore, three-parameter functions remarkably produced more realistic predictions.
A comparison of the function categories (three or four-parameter) within a specific parametrization methods (Tab. 4, columns “NLS” and “Bayes”, lines “Average”) showed that three-parameter functions performed much better for both modeling aims within Bayesian parametrization and were also less biased when the NLS parametrization method is used. Four-parameter functions are much more suitable for a minimization of RMSE within NLS, although they are somewhat more biased.
A more detailed comparison of the most successful combinations of function category and methods (three-parameter growth functions/Bayes parametrization and four-parameter growth functions/NLS) led to the conclusion that three-parameter growth functions combined with the Bayesian parametrization method gave better results for both modeling aims. The combination of three-parameter growth functions with the Bayesian approach (column “Bayes” - Tab. 3) showed a smaller RMSE by 15-25% as well as smaller biases by 40-60% than four-parameter functions combined with the NLS parametrization method (column “NLS” - Tab. 3).
Comparison of individual growth functions regardless of the parametrization method showed that the Weibull (W4) and the KM3 functions were the most successful. The W4 function gave better results from the point of view of RMSE, while the KM3 function performed better from the point of view of bias (columns “Average” - Tab. 4). It is worth to notice that the KM3 function produced better results in combination with the Bayesian parametrization, while the W4 function in combination with the use of the NLS method. Also, the KM3 function gave three times better results for a various combinations of modeling aims and parametrization methods (Tab. 4), in particular, the KM3 function was the best for combinations bias/ NLS, bias/Bayes and RMSE/Bayes (Fig. 2).
Fig. 2 - Ranking of the analyzed growth functions according to the modeling aim (minimization of bias or RMSE) and the parametrization method used (NLS or Bayesian).
Comparing the selected growth functions across parametrization methods, the W4 function produced more robust results and smaller differences among parametrization methods. Moreover, the KM3 function can produce excellent results as far as biases are concerned. However, when using the NLS method, the small bias was offset by a significant decrease in the overall accuracy of predictions. Furthermore, the four-parameter version of the KM-function resulted among the worst functions, especially in combination with the Bayes parametrization. This evidence is exactly the opposite of that obtained from the results of three-parameter version. Similar considerations hold for the three and four-parameter versions of the Richards function. In the latter case, the three-parameter function was similarly better than the four-parameter version for both parametrization methods and modeling aims, although differences were not so pronounced.
# Discussion
The most important characteristic of the KM-function is its time-determinate feature, i.e., the function predicts the complete termination of growth when the organism reaches its final size. A more detailed analysis of the integral and differential forms of the KM-function showed that:
1. The integral form (eqn. 2) admits the possibility of starting the growth at the age of t(0)=0 ,and at a given age, it simultaneously admits the possibility y(0)=0. The differential form (eqn. 3) at the age of t(0)=0 predicts dy/dt=0; RGR at the age of t(0)=0 is not defined. However, it holds true that if t(0)→0, RGR is limited in its approach to infinite RGRmax.
2. The position of the inflexion point within the life cycle is not fixed. The inflexion of the growth curve can occur at any age t and value y.
3. In the period of growth cessation at the time t=tmax, the integral form (eqn. 2) predicts y=ymax, and simultaneously (eqn. 3) at the age of t=tmax it holds true that dy/dt=0 and consequently, RGR = 0.
Property (1) is shared by Richards, Weibull and KM functions and contributed significantly to their popularity. It does not contradict the known biological laws and contributes to the feasibility of growth modeling at the early life stages. The Korf function at the age of t(0)=0 has neither the defined value y(0) nor the value dy/dt, thus the possibility for growth to start from the zero point is excluded.
From properties (2, 3) it follows that the KM-function is a model of time-determinate asymmetrical growth starting at zero size of an organism and terminating when it reaches its final size. Since the KM-function does not merely approach asymptotically, but exactly reaches both the starting and the final size, the equation assumes a finite length for the growth period of an organism. This represents an independent parameter in the equation tmax and can be satisfactorily estimated by means of the maximum age of the tree species under the particular natural conditions. Simultaneously, the age of growth rate culmination can occur at any time t ∈ [0, tmax] and at any value y, resulting in the capability of the growth function to represent any degree of time growth asymmetry.
The growth function Beta ([36]) is the only other function known from the literature which is able to describe in a continuous way the determinate growth of living organisms. It is based on the application of the frequency function of Beta distribution, which is a more general alternative to the KM-distribution. As a result, a number of common properties are shared by the Beta and the KM-function.
Yin et al. ([36]) point out the following properties of the Beta function: (i) its flexibility - the point of inflexion of the growth trajectory may occur at any position; (ii) good parametrization properties resulting from clear interpretability of parameters, and in particular (iii) the possibility to estimate the final value of yield at the end of the production period. These advantages are also true for the KM-function, especially in its three-parameter version.
Easy and stable parametrization of the three-parameter KM-function results from a good biological interpretability of parameters, likely stemming from its smaller intrinsic curvature. Indeed, the integral form of the KM-function does not contain any exponential terms (eqn. 2). Consequently, the equation is similar to Hossfeld and Levaković growth functions ([38] - eqn. 4, eqn. 5):
$$\text{Hossfeld:}\;\; y = \frac{t^c}{(b + t^c )/ a}$$
$$\text{Levakovic:}\;\; y = at \left (\frac{t^d}{b + t^d} \right )^c$$
According to Kiviste ([14]), Zeide ([38]) reports that these equations are surprisingly accurate, though they are among the oldest growth functions known.
On the other hand, the four-parameter KM function version provided poor predictions on the dataset tested in this study. Also, the three-parameter KM function along with NLS parametrization method provided relatively large RMSE. The above facts suggest that, despite its expected small intrinsic curvature, the KM function may have a rather large parameter-effect curvature. The inclusion of an additional parameter into the KM function led to the instability in the estimation of parameters, and consequently the accuracy was notably reduced, even for short-term predictions. The same scenario hold also for the Richards’ function.
The problem of the relatively large parameter-effect curvature of the KM-function may be solved by the inclusion of biologically meaningful constraints into the process of parameter estimation. For example, fixing the parameter value tmax based on its biological interpretation (i.e., the expected lifespan of the organism considered) has proven to be useful, especially when further information on other parameters was included into the estimation process by the Bayesian estimation procedure. As a result, the three-parameters KM function was assessed as the second most successful function.
A good accuracy of the KM-function is also expected based on additional theoretical reasons. Despite several shared properties, KM and Beta functions also show some differences. The differential form of the Beta function, obtained from the original frequency function of the Beta distribution under the assumption t(0)=0 and arbitrarily fixing the parameter δ = 1, is the following ([36] - eqn. 6):
$$\frac{dy}{dt} = c \left ( \frac{t_{max} - t}{t_{max} -t_{i}} \right ) \left (\frac{t}{t_{i}} \right )^{\frac{t_{i}} {t_{max} -t_{i}}}$$
where ti is the time of growth rate culmination and c is the maximum growth rate at time ti. The above function has both expansion and restriction elements linked to age, i.e., the expansion element is a power function and the restriction element is a linear function of age. The biological rationale for the linearity of the Beta function restriction element is problematic: the problem originates from the fixation of δ in the original Beta function because of the need for analytical integrability of the eqn. 6. In the KM-function, both elements represent a power function of age, thus being theoretically better fitting to empirical data.
Linkage to t of expansion and restriction elements of the differential form is a quite rare property. The age t has both positive and negative influence on growth rate dy/dt, though the negative influence prevails increasing the age. The only other well-established function having this property is the Weibull growth function, whose differential form is as follows (eqn. 7):
$$\frac{dy}{dt} = kt^p e^{-qt^{-c}}$$
Another similar growth function was proposed by Sloboda (eqn. 8):
$$\frac{dy}{dt} = kyt^{d-1} e^{-bt^{d}}$$
whose expansion element is linked to the interaction of age and size. In almost all other growth functions, the expansion element is linked to size y and the restriction component to age t. Consequently, KM, Weibull and Sloboda’s functions admit the possibility of initial age t(0)=0, assuming dy/dt = 0 at this age. On the other hand, classical Weibull and Sloboda’s equations are typical representatives of non-determinate growth models, in which if t→∝, then y→ymax and dy/dt→0, i.e., they implicitly assume an infinite lifespan for the organism studied.
The KM, Beta and Weibull growth functions, along with the Richards and Korf functions, share an additional feature: all functions have y(0)=0 at the age t(0)=0, that is, they do not have a defined RGR (i.e., if t→0 then RGR→∝). Unlike the above equations, the integral form of the Sloboda’s model does not allow the possibility y=0 at any age, thus RGR is defined and equal to 0 at the age t(0)=0. Based on the above considerations, it follows that Weibull, Beta and KM functions are not suitable for modeling the exponential growth in the initial life stages (e.g., at t<1), since the classical growth analysis assumes that the growth rate is proportional to size, i.e., the final RGR approaches the final maximum value.
Comparison of the selected growth function properties proved the usefulness of the KM-function for growth modeling purposes. The three-parameter version (KM3), combined with the Bayes parametrization method, was the second most successful function for short-term predictions of tree diameter growth on the studied dataset. In particular, such model was suitable for bias minimization.
As expected, results of this study showed that the Bayesian parametrization method performs slightly better in minimizing the modeling errors. Also, NLS was slightly more suitable when minimization of biases is required in short-term predictions. However, differences in RMSE and biases among the parametrization methods used were relatively small, probably due to the fairly large number of measurements covering most of the life cycle of trees. Using high quality empirical measurements, the above differences are diminished because the importance of a priori parameter distributions quickly decreases in the Bayesian procedure ([4]).
# Conclusions
The analysis carried out revealed several useful theoretical properties of the KM-function:
• good fitting to empirical data and high interpolation accuracy, due to the possibility of inflexion at any y value and the power form of both key components in the differential equation;
• realistic growth reconstruction of backward extrapolations based on empirical measures, thanks to the possibility of predicting y(0)=0 and dy/dt = 0 at the age t(0)=0, consistently with known biological growth laws.
• realistic growth predictions of forward extrapolations based on empirical measures, thanks to the a priori inclusion of a parameter related to the final length of life cycle and the final size of the organism.
Results of the comparison of the KM-function with selected classical growth functions proved its usefulness for modeling the growth in diameter of individual trees, in particular in its three-parameter form combined with Bayesian parametrization when minimization of prediction bias is needed, while the use of four-parameter Weibull function combined with the NLS parametrization method is recommended for the short-term prediction of diameter growth.
# Acknowledgements
This study was supported by the European Structural Fund under the project ITMS: 26220120069 (Center of Excellence for decision support in forest and landscape at the Technical University in Zvolen, Slovakia, Activity 3.1 Experimental and methodological platform of precision forestry tools) and by the Scientific Grant Agency of the Ministry of Education, Science, Research and Sport of the Slovak Republic under the project 1/0618/12 and 1/0953/13. Additional support was received from the National Agency of Agricultural Research of the Czech Republic under the contract No. QJ1320230.
# References
(1)
Bertalanffy L (1957). Quantitative laws in metabolism and growth. Quarterly Review of Biology 32: 217-231.
CrossRef | Gscholar
(2)
Burkhart HE, Tomé M (2012). Modelling forest trees and stands. Springer Science + Business Media BV, Dordrecht, The Netherlands, pp. 471.
Online | Gscholar
(3)
Birch CP (1999). A new generalized logistic sigmoid growth equation compared with the Richards growth equation. Annals of Botany 83: 713-723.
CrossRef | Gscholar
(4)
Bock RD, du Toit SHC (2004). Parameter estimation in the context of nonlinear longitudinal growth models. In: “Methods in Human Growth Research” (Hauspie RC, Cameron N, Molinari L eds). Series Cambridge Studies in Biological and Evolutionary Anthropology, vol. 39, Cambridge University Press, Cambridge, UK, pp. 198-220.
Gscholar
(5)
Carlin BP, Louis TA (2000). Bayes and empirical Bayes methods for data analysis. Texts in Statistical Science, Chapman & Hall/CRC, Boca Raton, FL, USA, pp. 440.
Gscholar
(6)
D’Agostini G (2003). Bayesian inference in processing experimental data: principles and basic applications. Reports on Progress in Physics 66 (9): 1383-1383.
CrossRef | Gscholar
(7)
Ek AR, Monserud RA (1979). Performance and comparison of stand growth models based on individual tree diameter. Canadian Journal of Forest Research 9: 231-244.
CrossRef | Gscholar
(8)
Fekedulegn D, Mac Siurtain MP, Colbert JJ (1999). Parameter estimation of nonlinear growth models in forestry. Silva Fennica 33 (4): 327-336.
CrossRef | Gscholar
(9)
Fitzhugh HA (1976). Analysis of growth curves and strategies for altering their shape. Journal of Animal Science 42 (4): 1036-1051.
Online | Gscholar
(10)
Gompertz B (1825). On the nature of the function expressive of the law of human mortality, and on a new mode of determining the value of life contingencies. Philosophical Transactions of the Royal Society 115: 513-585.
CrossRef | Gscholar
(11)
Halaj J, Petráš R (1998). A growth and yield tables of main tree species in Slovakia. Slovak Academic Press, Bratislava, SK, pp. 325. [in Slovak]
Gscholar
(12)
Halaj J (1957). Mathematical and statistical research of diameter structures of Slovak stands. Lesnícky časopis 3 (1): 39-74. [in Slovak]
Gscholar
(13)
Karkach AS (2006). Trajectories and models of individual growth. Demographic Research 15: 347-400.
CrossRef | Gscholar
(14)
Kiviste AK (1988). Mathematical functions of forest growth. Estonian Agriculture Academy, Tartu, Estonia, pp.108.
Gscholar
(15)
Korf V (1939). Contribution to mathematical definition of the law of stand volume growth. Manuscript, Lesnická práce, Slovakia, pp. 339-379.
Gscholar
(16)
Kumaraswamy P (1980). A generalized probability density function for double-bounded random processes. Journal of Hydrology 46: 79-88.
CrossRef | Gscholar
(17)
Li FG, Zhao BD, Su GL (2000). A derivation of the generalized Korf growth equation and its application. Journal of Forestry Research 11 (2): 81-88.
CrossRef | Gscholar
(18)
Lunn DJ, Thomas A, Best N, Spiegelhalter D (2000). WinBUGS - a Bayesian modeling framework: concepts, structure, and extensibility. Statistics and Computing 10: 325-337.
CrossRef | Gscholar
(19)
Mitscherlich EA (1919). Problems of plant growth. Landwirtschaftliche Jahrbücher 53: 167-182. [in German]
Gscholar
(20)
Pagan J (1992). Forestry dendrology. Technical University in Zvolen, Zvolen, Slovakia, pp. 347. [in Slovak]
Gscholar
(21)
Pretzsch H (2009). Forest dynamics, growth and yield. From measurement to model. Springer-Verlag, Berlin, Germany, pp. 617.
CrossRef | Gscholar
(22)
Ratkowsky DA (1983). Nonlinear regression modelling. Marcel Dekker, New York, USA, pp. 276.
Gscholar
(23)
Richards FJ (1959). A flexible growth function for empirical use. Journal of Experimental Botany 10: 290-300.
CrossRef | Gscholar
(24)
Seber GAF, Wild CJ (2003). Nonlinear regression. Wiley series in probability and statistics, John Wiley & Sons, Inc., New Jersey, USA, pp. 792.
Gscholar
(25)
Sedmák R (2009). Growth and yield modelling of beech trees and stands. Msc Thesis, Technical University in Zvolen, Zvolen, Slovakia, pp. 181.
Gscholar
(26)
Sedmák R, Scheer L (2012). Modelling of tree diameter growth using growth functions parameterised by least squares and Bayesian methods. Journal of Forest Science 58 (6): 245-252.
Online | Gscholar
(27)
Shvets V, Zeide B (1996). Investigating parameters of growth equations. Canadian Journal of Forest Research 26: 1980-1990.
CrossRef | Gscholar
(28)
Sloboda B (1971). Investigation of growth processes using first order differential equations. Mitteilungen der Baden-Württembergischen Forstlichen Versuchs und Forschungsanstalt 32: 1-109. [in German]
Gscholar
(29)
Schnute J (1981). A versatile growth model with statistically stable parameters. Canadian Journal of Fisheries and Aquatic Sciences 38: 1128-1140.
CrossRef | Gscholar
(30)
StatSoft Inc (2010). Electronic statistics textbook. Tulsa, OK, USA.
Online | Gscholar
(31)
Tsoularis A, Wallace J (2002). Analysis of logistic growth models. Mathematical Biosciences 179: 21-25.
CrossRef | Gscholar
(32)
Vanclay JK (1994). Modelling forest growth and yield: application to mixed tropical forests. CAB International, Wallingford, UK, pp. 312.
Online | Gscholar
(33)
Verhulst B (1838). A note on population growth. Correspondence Mathematiques et Physiques 10: 113-121. [in French]
Gscholar
(34)
Weibull W (1951). A statistical distribution of wide applicability. Journal of Applied Mechanics 18: 293-297.
Gscholar
(35)
Weiskittel AR, Hann DW, Kershaw JA, Vanclay JK (2011). Forest growth and yield modelling (1st edn). John Wiley & Sons Ltd, Chichester, UK, pp.344.
Online | Gscholar
(36)
Yin X, Goudriaan J, Lantinga EA, Vos J, Spiertz HJ (2003). A flexible sigmoid function of determinate growth. Annals of Botany 91 (3): 361-371.
CrossRef | Gscholar
(37)
Zeide B (1989). Accuracy of equations describing diameter growth. Canadian Journal of Forest Research 19 (10): 1283-1286.
CrossRef | Gscholar
(38)
Zeide B (1993). Analysis of growth equations. Forest Science 39 (3): 594-616.
Online | Gscholar
(39)
Zeide B (2003). The U-approach to forest modelling. Canadian Journal of Forest Research 33 (3): 480-489.
CrossRef | Gscholar
(40)
Zeide B (2004). Intrinsic units in growth modelling. Ecological Modelling 175 (3): 249-259.
CrossRef | Gscholar
(41)
Zhang L (1997). Cross-validation of non-linear growth functions for modelling tree height-diameter relationships. Annals of Botany 79 (3): 251-257.
CrossRef | Gscholar
#### Authors’ Affiliation
(1)
Róbert Sedmák
Lubomír Scheer
Faculty of Forestry, Technical University in Zvolen, T.G. Masaryka 24, 960 53 Zvolen (Slovak Republic)
(2)
Róbert Sedmák
Faculty of Forestry and Wood Sciences, Czech University of Life Sciences Prague, Kamýcká 1176, 165 21 Praha 6 - Suchdol (Czech Republic)
Lubomír Scheer
[email protected]
#### Citation
Sedmák R, Scheer L (2015). Properties and prediction accuracy of a sigmoid function of time-determinate growth. iForest 8: 631-637. - doi: 10.3832/ifor1243-007
#### Paper history
Accepted: Oct 17, 2014
First online: Jan 13, 2015
Publication Date: Oct 01, 2015
Publication Time: 2.93 months
© SISEF - The Italian Society of Silviculture and Forest Ecology 2015
#### Breakdown by View Type
(Waiting for server response...)
#### Article Usage
Total Article Views: 17100
(from publication date up to now)
Breakdown by View Type
HTML Page Views: 12191
Abstract Page Views: 515
Web Metrics
Days since publication: 2039
Overall contacts: 17100
Avg. contacts per week: 58.71
Article citations are based on data periodically collected from the Clarivate Web of Science web site
(last update: Jun 2020)
Total number of cites (since 2015): 2
Average cites per year: 0.33
#### iForest Database Search
Search By Author
Search By Keyword
Citing Articles
Search By Author
Search By Keywords
#### PubMed Search
Search By Author
Search By Keyword
|
# 1.网络端口扫描实验
• metasploitable(meta)作为靶机
• kali扫描靶机的网络端口
Kali的网络设置:
meta的网络设置:
## 1.2 扫描靶机网络端口
Kali使用Nmap扫描靶机网络端口,一些常用的选项:
-sS (TCP SYN scan)
SYN scan is the default and most popular scan option for good reasons. It can be performed quickly, scanning thousands of ports per second on a fast network not hampered by restrictive firewalls. It is also relatively unobtrusive and stealthy since it never completes TCP connections.
This technique is often referred to as half-open scanning, because you don’t open a full TCP connection. You send a SYN packet, as if you are going to open a real connection and then wait for a response. A SYN/ACK indicates the port is listening (open), while a RST (reset) is indicative of a non-listener. If no response is received after several retransmissions, the port is marked as filtered.
-sV (Version detection)
Enables version detection, as discussed above. Alternatively, you can use -A, which enables version detection among other things.
-p port ranges (Only scan specified ports)
So you can specify -p- to scan ports from 1 through 65535.
Starting Nmap 7.70 ( https://nmap.org ) at 2019-03-10 21:20 EDT
Nmap scan report for 192.168.229.129
Host is up (0.0046s latency).
Not shown: 1012 closed ports
PORT STATE SERVICE
21/tcp open ftp
22/tcp open ssh
23/tcp open telnet
25/tcp open smtp
53/tcp open domain
80/tcp open http
111/tcp open rpcbind
139/tcp open netbios-ssn
445/tcp open microsoft-ds
512/tcp open exec
|
Check GMAT Club Decision Tracker for the Latest School Decision Releases https://gmatclub.com/AppTrack
It is currently 22 May 2017, 11:04
GMAT Club Daily Prep
Thank you for using the timer - this advanced tool can estimate your performance and suggest more practice questions. We have subscribed you to Daily Prep Questions via email.
Customized
for You
we will pick new questions that match your level based on your Timer History
Track
every week, we’ll send you an estimated GMAT score based on your performance
Practice
Pays
we will pick new questions that match your level based on your Timer History
Events & Promotions
Events & Promotions in June
Open Detailed Calendar
In North America there has been an explosion of public
Author Message
TAGS:
Hide Tags
Senior Manager
Joined: 07 Sep 2010
Posts: 329
Followers: 9
Kudos [?]: 766 [5] , given: 136
In North America there has been an explosion of public [#permalink]
Show Tags
15 May 2012, 06:16
5
KUDOS
20
This post was
BOOKMARKED
00:00
Difficulty:
95% (hard)
Question Stats:
40% (02:20) correct 60% (01:27) wrong based on 1004 sessions
HideShow timer Statistics
In North America there has been an explosion of public interest in, and enjoyment of,opera over the last three decades. The evidence of this explosion is that of the 70 or so professional opera companies currently active in North America, 45 were founded over the course of the last 30 years.
The reasoning above assumes which one of the following?
(A) All of the 70 professional opera companies are commercially viable options.
(B) There were fewer than 45 professional opera companies that had been active 30 years ago and that ceased operations during the last 30 years.
(C) There has not been a corresponding increase in the number of professional companies devoted to other performing arts.
(D) The size of the average audience at performances by professional opera companies has increased over the past three decades.
(E) The 45 most recently founded companies were all established as a result of enthusiasm on the part of a potential audience.
[Reveal] Spoiler: OA
_________________
+1 Kudos me, Help me unlocking GMAT Club Tests
If you have any questions
New!
Veritas Prep GMAT Instructor
Joined: 16 Oct 2010
Posts: 7368
Location: Pune, India
Followers: 2281
Kudos [?]: 15071 [14] , given: 224
Re: In North America there has been an explosion of public [#permalink]
Show Tags
15 May 2012, 10:56
14
KUDOS
Expert's post
5
This post was
BOOKMARKED
imhimanshu wrote:
In North America there has been an explosion of public interest in, and enjoyment of,opera over the last three decades. The evidence of this explosion is that of the 70 or so professional opera companies currently active in North America,45 were founded over the course of the last 30 years.
The reasoning above assumes which one of the following?
a) All of the 70 professional opera companies are commercially viable options.
b) There were fewer than 45 professional opera companies that had been active 30 years ago and that ceased operations during the last 30 years.
c) There has not been a corresponding increase in the number of professional companies devoted to other performing arts.
d) The size of the average audience at performances by professional opera companies has increased over the past three decades.
e) The 45 most recently founded companies were all established as a result of enthusiasm on the part of a potential audience.
Can someone provide a reasoning over choice E
You have to focus on the conclusion.
Conclusion: Over the last 30 years, there has been an explosion of interest in Opera.
What does that mean? It means that the interest has increased manifold in the last 30 yrs (focus on the word 'increased')
How does the author support his argument? By saying that out of the current 70 opera companies, 45 were founded in the last 30 yrs i.e. much more than half were found in the last 30 yrs.
What is the assumption? 50 opera companies did not shut shop in the last 30 yrs i.e. more than 45 companies did not close down. You need your assumption to be true for the conclusion to be true. If 50 companies had shut down in the past 30 yrs, we can't say that opera is gaining a following.
Think about it: You say, "Popularity of pizza is increasing every day. Every week, one new pizza place opens up in my neighborhood."
What is your assumption? That 2 pizza places do not shut down everyday in your neighborhood. It that were the case, then we cannot say that pizza is becoming more popular.
You don't need option (E) to be true for the conclusion to be true. Say, even if all 45 were not established as a result of enthusiasm (say, only 40 were established as a result of enthusiasm), even then it is possible that interest in opera has increased. You don't need (E) to be true to prove the conclusion. Hence it is not an assumption.
_________________
Karishma
Veritas Prep | GMAT Instructor
My Blog
Get started with Veritas Prep GMAT On Demand for $199 Veritas Prep Reviews Senior Manager Joined: 07 Sep 2010 Posts: 329 Followers: 9 Kudos [?]: 766 [0], given: 136 Re: In North America there has been an explosion of public [#permalink] Show Tags 15 May 2012, 18:31 Thanks Karishma for the explanation.. VeritasPrepKarishma wrote: imhimanshu wrote: In North America there has been an explosion of public interest in, and enjoyment of,opera over the last three decades. The evidence of this explosion is that of the 70 or so professional opera companies currently active in North America,45 were founded over the course of the last 30 years. The reasoning above assumes which one of the following? a) All of the 70 professional opera companies are commercially viable options. b) There were fewer than 45 professional opera companies that had been active 30 years ago and that ceased operations during the last 30 years. c) There has not been a corresponding increase in the number of professional companies devoted to other performing arts. d) The size of the average audience at performances by professional opera companies has increased over the past three decades. e) The 45 most recently founded companies were all established as a result of enthusiasm on the part of a potential audience. Can someone provide a reasoning over choice E You have to focus on the conclusion. Conclusion: Over the last 30 years, there has been an explosion of interest in Opera. What does that mean? It means that the interest has increased manifold in the last 30 yrs (focus on the word 'increased') How does the author support his argument? By saying that out of the current 70 opera companies, 45 were founded in the last 30 yrs i.e. much more than half were found in the last 30 yrs. What is the assumption? 50 opera companies did not shut shop in the last 30 yrs i.e. more than 45 companies did not close down. You need your assumption to be true for the conclusion to be true. If 50 companies had shut down in the past 30 yrs, we can't say that opera is gaining a following. Think about it: You say, "Popularity of pizza is increasing every day. Every week, one new pizza place opens up in my neighborhood." What is your assumption? That 2 pizza places do not shut down everyday in your neighborhood. It that were the case, then we cannot say that pizza is becoming more popular. You don't need option (E) to be true for the conclusion to be true. Say, even if all 45 were not established as a result of enthusiasm (say, only 40 were established as a result of enthusiasm), even then it is possible that interest in opera has increased. You don't need (E) to be true to prove the conclusion. Hence it is not an assumption. _________________ +1 Kudos me, Help me unlocking GMAT Club Tests Intern Joined: 19 Feb 2012 Posts: 25 Location: India Concentration: Technology, General Management Schools: WBS '15 GMAT 1: 700 Q48 V38 GPA: 3.36 WE: Analyst (Computer Software) Followers: 0 Kudos [?]: 6 [0], given: 7 Re: In North America there has been an explosion of public [#permalink] Show Tags 15 May 2012, 23:19 so everything here is based on the conclusion Manager Joined: 28 May 2011 Posts: 193 Location: United States Concentration: General Management, International Business GMAT 1: 720 Q49 V38 GPA: 3.6 WE: Project Management (Computer Software) Followers: 2 Kudos [?]: 65 [0], given: 7 Re: In North America there has been an explosion of public [#permalink] Show Tags 19 May 2012, 04:07 A good one... I also went for E. Above explanation makes me come back to B. _________________ ------------------------------------------------------------------------------------------------------------------------------- http://gmatclub.com/forum/a-guide-to-the-official-guide-13-for-gmat-review-134210.html ------------------------------------------------------------------------------------------------------------------------------- Intern Joined: 08 Nov 2009 Posts: 44 Location: New York, NY Schools: Columbia, NYU, Wharton, UCLA, Berkeley WE 1: 2 Yrs mgmt consulting WE 2: 2 yrs m&a Followers: 0 Kudos [?]: 10 [0], given: 1 Re: In North America there has been an explosion of public [#permalink] Show Tags 19 May 2012, 09:58 I read somewhere that there is an assumption negation technique. Can I use that here? Veritas Prep GMAT Instructor Joined: 16 Oct 2010 Posts: 7368 Location: Pune, India Followers: 2281 Kudos [?]: 15071 [2] , given: 224 Re: In North America there has been an explosion of public [#permalink] Show Tags 20 May 2012, 00:57 2 This post received KUDOS Expert's post meatdumpling wrote: I read somewhere that there is an assumption negation technique. Can I use that here? You might have read about it in Veritas CR book. Assumption is a necessary missing premise. It is necessary for the conclusion to be true. Assumption negation technique considers the options one by one. You negate the option and see whether your conclusion can still hold. If it can, it means the option is not an assumption (because assumption is a premise that SHOULD be true for the conclusion to be true) You do not use this technique on all the options. Say, two options are looking good. Use it only on those two (it takes some time so you would be using too much time to answer the question if you use it on every option) Your assumption here is " There were [highlight]fewer than 45[/highlight] professional opera companies that had been active 30 years ago and that ceased operations during the last 30 years." Negate it " There were [highlight]more than 45[/highlight] professional opera companies that had been active 30 years ago and that ceased operations during the last 30 years." Can you still say there has been an explosion of interest in the last 30 yrs? 45 new companies were founded in the last 30 yrs but more than 45 (or equal to 45, doesn't matter) closed down. Now your conclusion cannot be true; hence, this option is the assumption. _________________ Karishma Veritas Prep | GMAT Instructor My Blog Get started with Veritas Prep GMAT On Demand for$199
Veritas Prep Reviews
Manager
Joined: 02 Jan 2011
Posts: 197
Followers: 1
Kudos [?]: 53 [0], given: 22
Re: In North America there has been an explosion of public [#permalink]
Show Tags
21 May 2012, 02:47
VeritasPrepKarishma wrote:
meatdumpling wrote:
I read somewhere that there is an assumption negation technique. Can I use that here?
You might have read about it in Veritas CR book. Assumption is a necessary missing premise. It is necessary for the conclusion to be true. Assumption negation technique considers the options one by one. You negate the option and see whether your conclusion can still hold. If it can, it means the option is not an assumption (because assumption is a premise that SHOULD be true for the conclusion to be true)
You do not use this technique on all the options. Say, two options are looking good. Use it only on those two (it takes some time so you would be using too much time to answer the question if you use it on every option)
Your assumption here is " There were [highlight]fewer than 45[/highlight] professional opera companies that had been active 30 years ago and that ceased operations during the last 30 years."
Negate it " There were [highlight]more than 45[/highlight] professional opera companies that had been active 30 years ago and that ceased operations during the last 30 years."
Can you still say there has been an explosion of interest in the last 30 yrs?
45 new companies were founded in the last 30 yrs but more than 45 (or equal to 45, doesn't matter) closed down. Now your conclusion cannot be true; hence, this option is the assumption.
Thank you Karishma. Your explanation is extremely helpful in understanding the concept.
Senior Manager
Joined: 07 Sep 2010
Posts: 329
Followers: 9
Kudos [?]: 766 [0], given: 136
Re: In North America there has been an explosion of public [#permalink]
Show Tags
21 May 2012, 06:14
Thanks Karishma for explaining the ANT.
I understand the above explanation given for ANT. I have read in the Veritas CR book that after Negate the choice in such a manner that newly created choice should be logically opposite of the original choice.
I tried this ANT for choice E.
Original Choice -
The 45 most recently founded companies were all established as a result of enthusiasm on the part of a potential audience.
As per me, after applying ANT, choice E becomes,
The 45 most recently founded companies were NOT all established as a result of enthusiasm on the part of a potential audience.
However, as per your below explanation, the ANT should be focusing on the number 45, not on verb = were established.
Can you discuss this in detail. I understand the below quoted reasoning, nonetheless.
VeritasPrepKarishma wrote:
You don't need option (E) to be true for the conclusion to be true. Say, even if all 45 were not established as a result of enthusiasm (say, only 40 were established as a result of enthusiasm), even then it is possible that interest in opera has increased. You don't need (E) to be true to prove the conclusion. Hence it is not an assumption.
_________________
+1 Kudos me, Help me unlocking GMAT Club Tests
Veritas Prep GMAT Instructor
Joined: 16 Oct 2010
Posts: 7368
Location: Pune, India
Followers: 2281
Kudos [?]: 15071 [0], given: 224
Re: In North America there has been an explosion of public [#permalink]
Show Tags
21 May 2012, 09:11
imhimanshu wrote:
Thanks Karishma for explaining the ANT.
I understand the above explanation given for ANT. I have read in the Veritas CR book that after Negate the choice in such a manner that newly created choice should be logically opposite of the original choice.
I tried this ANT for choice E.
Original Choice -
The 45 most recently founded companies were all established as a result of enthusiasm on the part of a potential audience.
As per me, after applying ANT, choice E becomes,
The 45 most recently founded companies were NOT all established as a result of enthusiasm on the part of a potential audience.
However, as per your below explanation, the ANT should be focusing on the number 45, not on verb = were established.
Can you discuss this in detail. I understand the below quoted reasoning, nonetheless.
Your negation is correct. You have negated the 45 as well. "All" stands for the 45 companies. "were not all" (it means not all 45) established as a result of enthusiasm...
I could have negated the correct option as
"There were not fewer than 45 ..." which is awkward. So instead, I made it "more than (or equal to) 45"
_________________
Karishma
Veritas Prep | GMAT Instructor
My Blog
Get started with Veritas Prep GMAT On Demand for \$199
Veritas Prep Reviews
Senior Manager
Joined: 28 Dec 2010
Posts: 331
Location: India
Followers: 1
Kudos [?]: 228 [0], given: 33
Re: In North America there has been an explosion of public [#permalink]
Show Tags
22 May 2012, 04:25
chose B as the answer. It took some time. i eliminated A C D because they were off the scope. between E & B, B seemed a stronger assumption. Also E had the phrase all 45 which was a little extreme. Had the answer option some of the 45.. then it would have been a real tough one.
Senior Manager
Joined: 28 Dec 2010
Posts: 331
Location: India
Followers: 1
Kudos [?]: 228 [0], given: 33
Re: In North America there has been an explosion of public [#permalink]
Show Tags
22 May 2012, 04:28
on second thoughts, i think if there was the modified option E as stated above it would have been a tie between B&E. Any ideas on this guys?
Intern
Joined: 12 Mar 2012
Posts: 16
Followers: 0
Kudos [?]: 7 [0], given: 19
Re: In North America there has been an explosion of public [#permalink]
Show Tags
31 May 2012, 04:06
VeritasPrepKarishma wrote:
imhimanshu wrote:
Thanks Karishma for explaining the ANT.
I understand the above explanation given for ANT. I have read in the Veritas CR book that after Negate the choice in such a manner that newly created choice should be logically opposite of the original choice.
I tried this ANT for choice E.
Original Choice -
The 45 most recently founded companies were all established as a result of enthusiasm on the part of a potential audience.
As per me, after applying ANT, choice E becomes,
The 45 most recently founded companies were NOT all established as a result of enthusiasm on the part of a potential audience.
However, as per your below explanation, the ANT should be focusing on the number 45, not on verb = were established.
Can you discuss this in detail. I understand the below quoted reasoning, nonetheless.
Your negation is correct. You have negated the 45 as well. "All" stands for the 45 companies. "were not all" (it means not all 45) established as a result of enthusiasm...
I could have negated the correct option as
"There were not fewer than 45 ..." which is awkward. So instead, I made it "more than (or equal to) 45"
hello karishma, excellent explanation. I was stuck between B and E.
I, however, eliminated E by the extreme word all. My resoning for the same is below.
The 45 most recently founded companies were NOT all established as a result of enthusiasm on the part of a potential audience.
This statement implies that not all but few companies established as a result of increased enthusiasm on the part of audience. This statement in fact supports the argument rather than weakning the same.
BSchool Forum Moderator
Status: Flying over the cloud!
Joined: 17 Aug 2011
Posts: 886
Location: Viet Nam
GMAT Date: 06-06-2014
GPA: 3.07
Followers: 74
Kudos [?]: 656 [0], given: 44
Re: In North America there has been an explosion of public [#permalink]
Show Tags
05 Jun 2012, 00:43
imhimanshu wrote:
In North America there has been an explosion of public interest in, and enjoyment of,opera over the last three decades. The evidence of this explosion is that of the 70 or so professional opera companies currently active in North America,45 were founded over the course of the last 30 years.
The reasoning above assumes which one of the following?
a) All of the 70 professional opera companies are commercially viable options.
b) There were fewer than 45 professional opera companies that had been active 30 years ago and that ceased operations during the last 30 years.
c) There has not been a corresponding increase in the number of professional companies devoted to other performing arts.
d) The size of the average audience at performances by professional opera companies has increased over the past three decades.
e) The 45 most recently founded companies were all established as a result of enthusiasm on the part of a potential audience.
Can someone provide a reasoning over choice E
Conclusion: There has been an explosion of public interest in opera over the last 30 years. Evidence: 70 professional opera companies currently active in NA, 45 were found over 30 years.
Negate choice B: there were MORE THAN (or EQUAL TO) 45 opera companies that had been active 30 years ago and that ceased operations during the last 30 years. With this negated statement, over than half of opera ceased operation during the last 30 years will be against the argument.
In choice E, the 45 most recently founded companies were NOT all established as a result of enthusiasm on the part of a potential audience. => This negated statement does not affect anything to the argument because this answer choice talk about the enthusiasm, not the opera, which is the center of the conclusion.
_________________
Rules for posting in verbal gmat forum, read it before posting anything in verbal forum
Giving me + 1 kudos if my post is valuable with you
The more you like my post, the more you share to other's need
CR: Focus of the Week: Must be True Question
Senior Manager
Joined: 15 Sep 2009
Posts: 268
Followers: 11
Kudos [?]: 73 [0], given: 6
Re: In North America there has been an explosion of public [#permalink]
Show Tags
05 Jun 2012, 03:55
1
This post was
BOOKMARKED
imhimanshu wrote:
In North America there has been an explosion of public interest in, and enjoyment of,opera over the last three decades. The evidence of this explosion is that of the 70 or so professional opera companies currently active in North America,45 were founded over the course of the last 30 years.
The reasoning above assumes which one of the following?
a) All of the 70 professional opera companies are commercially viable options.
b) There were fewer than 45 professional opera companies that had been active 30 years ago and that ceased operations during the last 30 years.
c) There has not been a corresponding increase in the number of professional companies devoted to other performing arts.
d) The size of the average audience at performances by professional opera companies has increased over the past three decades.
e) The 45 most recently founded companies were all established as a result of enthusiasm on the part of a potential audience.
Can someone provide a reasoning over choice E
The giveaway word is ALL. If E were framed in this way "SOME of the 45 most recently founded companies were all established as a result of enthusiasm on the part of a potential audience" that response would be a stronger contender for a correct assumption than E is.
Negating B, the correct answer, basically rips the argument because it would imply that having 70 active professional opera companies is nothing spectacular or historic and thus cannot be used an indicator of a recent boom in operatic interest.
_________________
+1 Kudos me - I'm half Irish, half Prussian.
Manager
Joined: 01 Aug 2011
Posts: 82
Location: India
Concentration: Finance, Finance
GPA: 3.4
Followers: 2
Kudos [?]: 251 [0], given: 29
Re: In North America there has been an explosion of public [#permalink]
Show Tags
05 Jun 2012, 22:58
Karishma/OldFritz
d) The size of the average audience at performances by professional opera companies has increased over the past three decades.
how did you eliminate option D
if I negate D it will look like
The size of the average audience at performances by professional opera companies has decreased over the past three decades.
This weakens the conclusion that there has been a explosion of interest
Manager
Affiliations: Project Management Professional (PMP)
Joined: 30 Jun 2011
Posts: 209
Location: New Delhi, India
Followers: 5
Kudos [?]: 73 [0], given: 12
Re: In North America there has been an explosion of public [#permalink]
Show Tags
13 Jun 2012, 10:13
Thanks Karishma for the great explanation... nice question
_________________
Best
Vaibhav
If you found my contribution helpful, please click the +1 Kudos button on the left, Thanks
Moderator
Joined: 01 Sep 2010
Posts: 3158
Followers: 854
Kudos [?]: 7279 [1] , given: 1063
Re: In North America there has been an explosion of public [#permalink]
Show Tags
28 Mar 2013, 16:34
1
KUDOS
6
This post was
BOOKMARKED
In North America there has been an explosion of public interest in, and enjoyment of, opera over the last three decades. The evidence of this explosion is that of the 70 or so professional opera companies currently active in North America, 45 were founded over the course of the last 30 years.
The reasoning above assumes which one of the following?
(A) All of the 70 professional opera companies are commercially viable options.
(B) There were fewer than 45 professional opera companies that had been active 30 years ago and that ceased operations during the last 30 years.
(C) There has not been a corresponding increase in the number of professional companies devoted to other performing arts.
(D) The size of the average audience at performances by professional opera companies has increased over the past three decades.
(E) The 45 most recently founded opera companies were all established as a result of enthusiasm on the part of a potential audience.
_________________
Intern
Joined: 10 Mar 2013
Posts: 25
Concentration: Entrepreneurship
Followers: 0
Kudos [?]: 5 [0], given: 5
Re: In North America there has been an explosion of public [#permalink]
Show Tags
28 Mar 2013, 16:46
E seems to be be the solid choice here
VP
Status: Far, far away!
Joined: 02 Sep 2012
Posts: 1122
Location: Italy
Concentration: Finance, Entrepreneurship
GPA: 3.8
Followers: 190
Kudos [?]: 2097 [1] , given: 219
Re: In North America there has been an explosion of public [#permalink]
Show Tags
28 Mar 2013, 16:48
1
KUDOS
1
This post was
BOOKMARKED
IMO B
(A) All of the 70 professional opera companies are commercially viable options.
The argument doesn't mention the "commericial" aspect, this adds informations and the conclusion cannot be based on added information in the answer.
(B) There were fewer than 45 professional opera companies that had been active 30 years ago and that ceased operations during the last 30 years.
Correct (IMO)
(C) There has not been a corresponding increase in the number of professional companies devoted to other performing arts.
This is a good one... but since the passage doesn't mention any other art, I discard this and pick B.
(D) The size of the average audience at performances by professional opera companies has increased over the past three decades.
This option mention the "average", not the "number". Moreover the text links the increase of the number of bands to the explosion of public interest, and does not say anything about the audience.
(E) The 45 most recently founded opera companies were all established as a result of enthusiasm on the part of a potential audience.
The word "all" makes this answer too extreme
_________________
It is beyond a doubt that all our knowledge that begins with experience.
Kant , Critique of Pure Reason
Tips and tricks: Inequalities , Mixture | Review: MGMAT workshop
Strategy: SmartGMAT v1.0 | Questions: Verbal challenge SC I-II- CR New SC set out !! , My Quant
Rules for Posting in the Verbal Forum - Rules for Posting in the Quant Forum[/size][/color][/b]
Re: In North America there has been an explosion of public [#permalink] 28 Mar 2013, 16:48
Go to page 1 2 3 Next [ 45 posts ]
Similar topics Replies Last post
Similar
Topics:
Veritas - In North America there has been... 0 05 Aug 2014, 11:06
15 In New York there has been an explosion of public interest 7 09 Jan 2016, 13:02
4 In New York there has been an explosion of public interest 4 25 Feb 2017, 22:26
In North America there has been an explosion of public 0 12 Aug 2016, 04:03
11 The public in the United States has in the past been 6 10 Aug 2016, 06:19
Display posts from previous: Sort by
|
# Probability Constructions
$y_n$ is a sequence of probability measures on $\mathbb{R}$ such that $y_n\rightarrow y$ where $y$ is another probability measure on $\mathbb{R}$.
Construct an example where:
1. $\int x \; dy_n$ exists for each $n$ and has a finite limit but $\int x \; dy$ is $+\infty$.
2. $\int x \; dy_n$ exists for each $n$ and $\lim_{n \to \infty }\int x \; dy_n=+\infty$, but $\int x \; dy$ is finite.
-
One commonplace meaning of $y_y\to y$ in this context is that $\int x\;dy_n \to \int x\;dy$ for every bounded continuous function $x$. – Michael Hardy Oct 1 '11 at 19:20
....and that suggests the $x$ in the examples should be either unbounded or discontinuous. I suspect they will need to be unbounded. – Michael Hardy Oct 1 '11 at 19:22
@MichaelHardy I thought $\int x \mathrm{d} y$ referred to the mean. – Sasha Oct 1 '11 at 20:03
@Sasha: Oh.... you mean as in $\int x \; dy(x)$. I was thinking of something like $\int_\mathbb{R} x(u) \; dy(u)$. – Michael Hardy Oct 1 '11 at 23:59
It appears I may have been understanding the notation in a way that is different from what was intended. So here is a rephrasing of my earlier comment. One commonplace meaning of $y_n\to y$ is that $\int g(x) \; dy_n(x) \to \int g(x) \; dy(x)$ for every bounded continuous function $g$. Another equivalent way is that the sequence of cumulative distribution functions corresponding to $y_n$ converges to the c.d.f. corresponding to $y$ except possibly at points where the latter is not continuous. – Michael Hardy Oct 2 '11 at 0:19
|
Solutions to Problems in Parabola, Vol. 3, No. 3
J71 Find a four digit number which on division by $149$ leaves a remainder of $17$ and on division by $148$ leaves a remainder of $29$.
|
# What is the pH of a .20 M NH_4NO_3 solution?
Aug 11, 2017
pH = 4.97
#### Explanation:
Ammonium nitrate is the salt of a weak base and a strong acid so we would expect it to be slightly acidic due to salt hydrolysis:
$\textsf{N {H}_{4}^{+} r i g h t \le f t h a r p \infty n s N {H}_{3} + {H}^{+}}$
The position of equilibrium lies well to the left such that $\textsf{p {K}_{a} = 9.24}$.
From an ICE table we get this expression which can be used to find the pH of a weak acid:
sf(pH=1/2(pK_a-log[acid])
$\therefore$sf(pH=1/2(9.24-log(0.2))
sf(pH=1/2(9.24-(-0.699))
$\textsf{p H = 4.97}$
|
## Building the Darwin Streaming Server in Ubuntu
Me and a colleague tried building the Darwin Streaming server on Ubuntu Server 13.10 with the help of this guide [instructables.com].
It did not go as well as we were hoping due to build errors. After some troubleshooting (all credit goes to my colleague), it turned out that the linking of libraries was not done recursively. So even though the correct libraries was included in LDFLAGS it wouldn’t work.
With the following two changes, we got rid of the build errors:
\$ cd lstoll*
\$ find . -name "Makefile.*" -exec sed -i 's/-lQTFileExternalLib/-lQTFileExternalLib -lpthread/' {} \;
\$ sed -i 's/-lQTFileLib/-lQTFileLib -ldl/' Makefile.POSIX
|
# Database is being used by another process … but what process?
I have written a very small C# program, that uses a very small SQL Server database, purely for some learning & testing purposes. The database is used in this one new project and nowhere else. However, I am getting problems whilst running Debugs where the program will not run, because the database "is being used by another process".
If I reboot my machine, it will work again, and then after a few test runs I will get the same problem again.
I have found many, many similar problems reported all over the Internet, but can find no definitive answer as to how to resolve this problem. Firstly, how do I find out what "other process" is using my .mdf & .ldf files ? Then, how do I get these files released & not held in order to stop this happening time after time after time ?!?
I am new to VS2010, SQL Server & C#, so please be quite descriptive in any replies you give me !!!
This is my code, as you can see, you couldn't get anything much more basic, I certainly shouldn't be running into so many problems !!!
namespace MySqlTest
{
public partial class Form1 : Form
{
SqlConnection myDB = new SqlConnection(@"Data Source=MEDESKTOP;AttachDbFilename=|DataDirectory|\SqlTestDB.mdf;Initial Catalog=MySqlDB;Integrated Security=True");
SqlCommand mySqlCmd = new SqlCommand();
string mySQLcmd;
int myCount;
public Form1()
{
InitializeComponent();
}
private void button1_Click(object sender, EventArgs e)
{
MessageBox.Show("myDB state = " + myDB.State.ToString());
//Open SQL File
myDB.Open();
MessageBox.Show("myDB state = " + myDB.State.ToString());
}
private void button2_Click(object sender, EventArgs e)
{
myCount++;
MessageBox.Show("myCount = " + myCount.ToString());
//Insert Record Into SQL File
mySqlCmd.Connection = myDB;
mySqlCmd.CommandText = "INSERT INTO Parent(ParentName) Values(myCount)";
mySqlCmd.ExecuteNonQuery();
}
private void button3_Click(object sender, EventArgs e)
{
}
private void button4_Click(object sender, EventArgs e)
{
//Read All Records From SQL File
}
private void button5_Click(object sender, EventArgs e)
{
//Delete Record From DQL File
}
private void button6_Click(object sender, EventArgs e)
{
MessageBox.Show("myDB state = " + myDB.State.ToString());
//Close SQL File
myDB.Close();
MessageBox.Show("myDB state = " + myDB.State.ToString());
}
private void button7_Click(object sender, EventArgs e)
{
//Quit
this.Close();
}
}
}
-
Did you get same error message when you run your application first time? If you get this error second time onwards, I guess your application didn't close db connection properly. – Thit Lwin Oo Feb 8 '12 at 14:45
When I first run the program, it works fine, and continues to work, but I am debugging and stopping the program without logically completing it sometimes. There doesn't appear to be a regular point at which this starts occurring though, but maybe it is connected to my stopping the debug without closing the DB. Would VS2010 not handle that though ?!? – Gary Heath Feb 8 '12 at 14:56
The most likely options:
1. A previous (crashed) instance of your program
2. Visual Studio (with a Table designer open or something similar)
You can check 1) with TaskManager and 2) by looking in Server Explorer. Your db should show a small red cross meaning 'closed'.
And you should rewrite your code to close connections ASAP. Use try/finally or using(){ } blocks.
-
Well, with regard to rewriting my code, that is the reason for this small program, so that I can ascertain the best way for me to code my SQL-Server processing for use in my real program !! In Seerver Explorer my DB has the red cross, but I am still getting the error messages. – Gary Heath Feb 8 '12 at 14:59
This is the rewritten code, using the "Using" statements. When I execute the program and click on Insert Record Into SQL File, off it goes, completes the process with myCount = 1 (though I'm not 100% sure that it is actually doing a physical Write, am I missing a command that actually "commits" the update ?!?) and re-displays the Form.
If I then click on Insert Record Into SQL File again, I get an error as follows :
SqlException was unhandled
Cannot open user default database. Login failed. Login failed for user 'MEDESKTOP\Gary'.
This is the program (I am the only user on this PC and have full Admin rights, the "State" of the database at this point is, according to Properties, Closed, so it looks like the first pass through the code did as was expected ...
using System;
using System.Collections.Generic;
using System.ComponentModel;
using System.Data;
using System.Drawing;
using System.Linq;
using System.Text;
using System.Windows.Forms;
using System.Data.SqlClient;
namespace MySqlTest
{
public partial class Form1 : Form
{
int myCount;
string myDBlocation = @"Data Source=MEDESKTOP;AttachDbFilename=|DataDirectory|\mySQLtest.mdf;Integrated Security=True;User Instance=False";
public Form1()
{
InitializeComponent();
}
private void button2_Click(object sender, EventArgs e)
{
myCount++;
MessageBox.Show("myCount = " + myCount.ToString());
//Insert Record Into SQL File
myDB_Insert();
}
private void button3_Click(object sender, EventArgs e)
{
}
private void button4_Click(object sender, EventArgs e)
{
//Read All Records From SQL File
}
private void button5_Click(object sender, EventArgs e)
{
//Delete Record From SQL File
}
private void button7_Click(object sender, EventArgs e)
{
//Quit
myDB_Close();
this.Close();
}
private void Form1_Load(object sender, EventArgs e)
{
}
private void Form1_Close(object sender, EventArgs e)
{
}
void myDB_Insert()
{
using (SqlConnection myDB = new SqlConnection(myDBlocation))
using (SqlCommand mySqlCmd = myDB.CreateCommand())
{
myDB.Open(); **<<< Program fails here, 2nd time through**
mySqlCmd.CommandText = "INSERT INTO Parent (ParentName) VALUES(@ParentNameValue)";
mySqlCmd.ExecuteNonQuery();
myDB.Close();
}
return;
}
void myDB_Close()
{
using (SqlConnection myDB = new SqlConnection(myDBlocation))
using (SqlCommand mySqlCmd = new SqlCommand())
{
myDB.Close();
}
return;
}
}
}
I don't understand why I am suddenly losing access to my own file that I am already using ?!?
-
try running sp_who2 to see the list of process.
FYI: you don't need to reboot your machine, worst case, restart SQL Server service
-
I will try this in a little while, have to collect the kids from school right now ... if you are correct that will save me a LOT of time !!! – Gary Heath Feb 8 '12 at 15:01
ok, let us know :) – Diego Feb 8 '12 at 15:09
OK, VS2010 wouldn't let me open a new query, it said "Unable to open the physical file ..... Operating system error 32: "32(failed to retrieve text for this error. Reason:15105). An attempt to attach an auto-named database for file ..... failed. A database with the same name exists, or specified file cannot be opened, or it is located on UNC share.", so I ran sp_who2 in SQL Server Management Studio and got 31 results, only the bottom 2 of which seem relevant to this database, i.e. they have MySqlDB in the DBName column. – Gary Heath Feb 8 '12 at 16:37
Both lines say that the Program Name is "Microsoft SQL Server Management Studio - Query, then the first line says Status = RUNNABLE, Command = SELECT INTO, CPU Time = 125 & DiskIO = 20, whilst the second line says Status = Sleeping, Command = AWAITING COMMAND, CPU Time = 0 & DiskIO = 0 ... so there is no mention of Visual Studio using the file(s) !!! – Gary Heath Feb 8 '12 at 16:37
If I try to take the DB offline, I get an error as follows "Set offline failed for Database 'MySqlDB'. An exception occurred while executing a Transact-SQL statement or batch. ALTER DATABASE failed because a lock could not be placed on database 'MySqlDB'. Try again later. ALTER DATABASE statement failed. (Microsoft SQL Server, Error: 5061)" ... but, if I run the sp-who2 again, the "Status = Sleeping, Command = AWAITING COMMAND" query line is gone, but the "Status = RUNNABLE, Command = SELECT INTO" line is still there (I have no idea if any of this is relevant !) – Gary Heath Feb 8 '12 at 16:37
IIRC using AttachDbFilename spins up a SqlServr.exe process running under the user account your process is using, separate from the SqlServer instance running as a service (so stopping the MsSqlServer service doesn't stop this issue). In the case of a dirty exit, sometimes this process does not get cleaned up. I suspect that killing this process will free up the db files.
-
As per my replies below, killing processes has, so far, not helped ... – Gary Heath Feb 8 '12 at 17:37
I had to go into Services & find MSSQLSERVER, change the Start option to Manual, then physically Stop it ... than and only then, was I able to delete the files in the bin\debug folder !!! I altered the Start option back to Automatic & started the Service, and at last, it is all working again !!! Now I have to find out why it is happening and to prevent it from happening again ... – Gary Heath Feb 8 '12 at 18:41
Try using Process Explorer written by Mark Russinovich, a Technical Fellow at Microsoft. It's a standard swiss-army tool in the pocket of Windows Admins/Developers. It can show you what processes are holding handles on resources in the system.
Once you've got Process Explorer installed, try the following:
1. Get your system into a fail-state (such that running your program doesn't work).
2. Start up Process Explorer (you'll need to be an Admin to make full-use of its features).
3. Click "Find" in the menu bar at the top and type in the name of your .mdf or .ldf files.
The search results should display a process/service with a handle on one of the resources still held by a wrongly-terminated process.
-
It finds the LDF file, but as per my reply to Diego (above) killing the process is not releasing it :-( !!! – Gary Heath Feb 8 '12 at 17:32
I had to go into Services & find MSSQLSERVER, change the Start option to Manual, then physically Stop it ... than and only then, was I able to delete the files in the bin\debug folder !!! I altered the Start option back to Automatic & started the Service, and at last, it is all working again !!! Now I have to find out why it is happening and to prevent it from happening again ... – Gary Heath Feb 8 '12 at 18:41
Working with VS 2010 and Entity Frameworks using SQL Server I've run into this more than a few times. In my case it happened when I tried to run the program and I had a query open in the Server Explorer. Once it fails I had to drop all the connections to get it to work again.
VS2010 copies the source database (an .MDF and .LDF file) that you work with in Server Explorer to the projects debug folder. This is the copy you are working with at runtime. When this file is copied is controled by the MDF property Copy to Output Directory by default it is set to Copy always. This means it will try and copy the file on a new build or run and if you have it open elsewhere it fails and then it gets hung up.
The way to see the connection is to open the SQL Server Management Console. By default VS2010 is using a user instance of SQL Server as specified in the connection string. In the case of Entity Frameworks it is in the App.Config XML.
The user instance is a separate in memory copy of SQL Server that has all the user rights assigned to the logged in user. To get to it you need to find the proper connection.
Running this query from the main instance of SQL will show all of the User Instance connections.
SELECT owning_principal_name, instance_pipe_name, heart_beat FROM sys.dm_os_child_instances
This will return something that looks like this:
|owning_principle_name | instance_pipe_name | heart_beat
--------------------------------------------------------------------------------
--------------------------------------------------------------------------------
2 | MyServer\Rich | \\.\pipe\B04A1D3B-6268-49\tsql\query\ | alive
If you copy the instance name and use it as the server name in a new connection, running as the user associated with it you will see your data from the debug folder.
Now if you right click on the database and select Tasks > Detach... You will open a Detach Database dialog check the Drop Connections checkbox next to the database file name and click OK.
The database will be removed from the list and you should be able to build and run you application.
-
I found this solution at: http://oreilly.com/catalog/errata.csp?isbn=0636920022220
Right-click the database in VS's database explorer and click close connection between debugging sessions and it worked for me.
-
The simplest thing is go Server Explorer. Choose the problem database. Right-click and choose "Close Connection".
If it is already closed, connect and disconnect.
-
|
# Lorentz force in superconductors
## Main Question or Discussion Point
Hi, everyone.
In a course on superconducting materials, my lecturer has suggested that in a Type I(one) superconductor, any normalconducting region containing trapped magnetic flux will feel a Lorentz force per unit volume $$F_L = J \times B$$, where $$J$$ is the transport current density (vector!) that the material is carrying, and $$B$$ is a vaguely-defined magnetic flux density.
He goes on to define the "critical current density" $$J_c$$ by the equality $$J_c \times B = -F_p$$ where $$F_p$$ is a force per unit volume due to pinning of the flux lines on some kind of material defect.
My problems with this are:
• Concept - how can a non-charged body feel a Lorentz force? (This is probably solved by thinking about a supercurrent that surrounds the flux-containing region...)
• What is $$B$$, given that the Meissner effect excludes magnetic fields in superconducting regions? Could is be the externally-applied field, measured at a distance? Or the (higher) flux density inside the flux-containin region of the material?
• How can I reconcile my lecturer's definition of $$J_c$$ with the more generally-available definition: "the maximum current a superconductor can carry before making a transition back to normal conduction"?
Any help would be much appreciated, as I'm very stuck on this concept and I can't find any online resources which mention this particular phenomenon - thanks!
Related Atomic and Condensed Matter News on Phys.org
seycyrus
Hi, everyone.
In a course on superconducting materials, my lecturer has suggested that in a Type I(one) superconductor, any normalconducting region containing trapped magnetic flux will feel a Lorentz force per unit volume $$F_L = J \times B$$, where $$J$$ is the transport current density (vector!) that the material is carrying, and $$B$$ is a vaguely-defined magnetic flux density.
This is the way it is defined in type II superconductors. I've never seen it discussed this way in terms of type I, but I'll take your word for it. I'll be talking about things from the perspective of type IIs in the abrikosov state.
B here means either a single quantum of magnetic flux or in the case of a flux bundle, some multiple of it.
My problems with this are:
• Concept - how can a non-charged body feel a Lorentz force? (This is probably solved by thinking about a supercurrent that surrounds the flux-containing region...)
• That's one way of thinking about it. It's probably the more rigorously correct way. The other is to think about it in terms of equal and opposite forces... a moving charge feels a force due to it's interaction with a nearby field, therefore...
[*] What is $$B$$, given that the Meissner effect excludes magnetic fields in superconducting regions? Could is be the externally-applied field, measured at a distance? Or the (higher) flux density inside the flux-containin region of the material?
This is where I start to wonder about the effect in type Is. Are you sure he was talking about type I SCs? There is a mixed state, comprised of laminar regions in type I SCs, but i've never seen a treatment on flux pinning in this state before.
[*] How can I reconcile my lecturer's definition of $$J_c$$ with the more generally-available definition: "the maximum current a superconductor can carry before making a transition back to normal conduction"?
In type IIs, they are roughly equivalent. The pinning forces dictate the flux gradient which in turn controls the flux motion. Once the fluxoids start to move, they start to dissipate energy, which removes the "lossless' current flow concept.
This is the way it is defined in type II superconductors. I've never seen it discussed this way in terms of type I, but I'll take your word for it. I'll be talking about things from the perspective of type IIs in the abrikosov state.
...
This is where I start to wonder about the effect in type Is. Are you sure he was talking about type I SCs? There is a mixed state, comprised of laminar regions in type I SCs, but i've never seen a treatment on flux pinning in this state before.
This treatment really is about Type I, in the laminar "intermediate" state. Of course, the same effects can be observed in Type II superconductors, where the normal regions are usually single quanta of flux. It seems my lecturer has made an unusual choice, discussing these effects before even introducing Type II.
B here means either a single quantum of magnetic flux or in the case of a flux bundle, some multiple of it.
B is a flux density (= flux per unit area), so it is a continuously varying vector field. I guess my question is: If these "Lorentz force effects" are due to forces on the net supercurrents (transport + fluxon circulation) in the area, then what B-field do the charge carriers actually see locally?
This is the problem of the "nearby field", I suppose - classically, moving charges don't respond to nearby fields, only ones that are right on top of them. However, my handy "anatomy of a fluxon" diagram tells me that (at least when there is no transport current), the fluxon's "paramagnetic supercurrents" are in regions of non-zero B. So I can start to see how this all works!
In type IIs, the (definitions of J_c) are roughly equivalent. The pinning forces dictate the flux gradient which in turn controls the flux motion. Once the fluxoids start to move, they start to dissipate energy, which removes the "lossless' current flow concept.
Again, I'm beginning to see how this works - certainly to the detail level I'm required to. Many thanks =)
seycyrus
Tiresome,
I agree that your instructor's approach is a bit unusual. Is he using a text, if so which one?
Has he shown you any magnetization data of a type I (either theoretical or experimental), that includes the effects of this mixed state?
Most texts that I have seen, simply have a single critical field for type Is.
|
Does tetrachlorodibenzo-p-dioxin (2,3,7,8) have a nonzero dipole moment? Thus we say that the oxygen atom is sp 3 hybridized, with two of the hybrid orbitals occupied by lone pairs and two by bonding pairs. Let us help you simplify your studying. Why Caster Semenya deserves better from society. Should I call the police on then. What are the release dates for The Wonder Pets - 2006 Save the Ladybug? Carbon Tetrachloride on Wikipedia. I--F. Tags: Question 3 . Heroin users become physically dependant on opioids which means they have to continue to take them to avoid withdrawal symptoms such as chills, sweating, stiffness, cramps, vomiting and anxiety. Determine the volume of a solid gold thing which weights 500 grams? I realize that the true arrangement of VSPER which takes into account the bonds and lone pairs is octahedral but the molecular structure can't be square planar due to the presence of a single lone pair. Bing; Yahoo; Google; Amazone ; Wiki; Krcl4 polar or nonpolar. If there are no polar bonds, the molecule is nonpolar. How long will the footprints on the moon last? 4 - What hybridization is required for central atoms... Ch. As there are two regions; that means that there are S and a P orbital hybridize. Because we describe molecular geometry on the basis of actual atoms, the geometry of the #NF_3# molecule is trigonal pyramidal.. On the other hand, around the xenon atom there are 4 bonding pairs, i.e. (If the difference in electronegativity for the atoms in a bond is greater than 0.4, we consider the bond polar. We will illustrate the use of this procedure with several examples, beginning with atoms with two electron groups. Trending Questions. Click and drag the molecle to rotate it. > Lewis Structure Here are the steps that I follow when drawing a Lewis structure. Lone pairs dominate polarity (but few know it). If you want to quickly find the word you want to search, use Ctrl + F, then type the word you want to search. According to the VSEPR theory, The molecular geometry of the molecule is linear. They are all nonpolar. Who is the longest reigning WWE Champion of all time? From an electron-group-geometry perspective, GeF 2 has a trigonal planar shape, but its real shape is dictated by the positions of the atoms. Electron domain is used in VSEPR theory to determine the molecular geometry of a molecule. I 2. none of these. States are running out of benefits Trump ordered. A. Y B. I C. Sb D. Sr E. In. 4 - Consider the following compounds: CO2, SO2, KrF2,... Ch. Which of the following bonds is least polar? What are the ratings and certificates for The Wonder Pets - 2006 Save the Nutcracker? Molar Mass: 225. Answer = BF3 ( Boron trifluoride ) is Nonpolar What is polar and non-polar? 0. 9 years ago. 4 - Why do we hybtidize atomic orbitals to explain the... Ch. TeCl4, if you draw the lewis diagrams, has a lone pair of electrons. SURVEY . First it had to be determined that carbon had a valence of 4, and hence would be the central atom surrounded by singly-bonded hydrogen and chlorine atoms. Molecules polar; 1-butanol: Polar: 1-propanol: Polar: 2-propanol: Polar: 2-propanol: Polar: acetanilide: Polar: acetic acid: Polar: acetophenone: Polar: A molecule is then considered polar if it contains polar bonds and is … Step 2: Identify each bond as either polar or nonpolar. Which of the following bonds is least polar? Because we describe molecular geometry on the basis of actual atoms, the geometry of the NF_3 molecule is trigonal pyramidal. What is the hybridization of the nitrogen in the azide ion? Question = Is SeCl4 polar or nonpolar ? For homework help in math, chemistry, and physics: www.tutor-homework.com. So the easiest way to know the hybridization of A molecule is to use the formula : number of surrounding atoms+ 1/2 (valence electrons of central atom - valency of surrounding atoms +- charge) For PCl5 : 5+ 1/2(5–5) = 5 what company has a black and white prism logo? The set of two sp orbitals are oriented at 180°, which is consistent with the geometry for two domains. Linear molecules cannot have a net dipole moment. They are all nonpolar. Umar. If you are having trouble with Chemistry, Organic, Physics, Calculus, or Statistics, we got your back! Trending questions . Iodine pentachloride is a rare molecule, but here is one similar: Iodine Pentafluoride on Wikipedia. Why do compounds like SF6 and SF4 exist but SH6 and SH4 don't? 1. XeF2 is nonpolar due to the symmetric arrangement of the bonded pairs of electrons. Dipole. If you are having trouble with Chemistry, Organic, Physics, Calculus, or Statistics, we got your back! As there are fluorine molecules on both the side of the central atom, there is no dipole moment and hence there is no polarity. 4 - Explain why CF4 and Xef4 are nonpolar compounds... Ch. A molecule with very polar bonds can be nonpolar. Relevance. "CCl"_4 has a tetrahedral geometry with bond angles of 109.5 °. 8. Join Yahoo Answers and get 100 points today. In the case of water, it is polar. Texas Children’s is committed to keeping our patients, families, employees and community safe by taking proactive measures to help prevent the spread of COVID-19. Sherpa. Decision: The molecular geometry of ICl 5 is square pyramid with an asymmetric electron region distribution. ... Nonpolar Covalent Bonds. No, SiH4 is not polar. Get your answers by asking now. What is the hybridization of the central atom in each of the following? Still have questions? Carbon tetrachloride (CCl4) is a non-polar molecule. All Rights Reserved. Polar molecules must contain polar bonds due to a difference in electronegativity between the bonded atoms. Learn vocabulary, terms, and more with flashcards, games, and other study tools. Trending Questions. answer choices . during extraction of a metal the ore is roasted if it is a? Why don't libraries smell like bookstores? answer choices ... Why show ads? Krcl4 molecular geometry (2 pts) trigonal pyramidal e. If there are two bond pairs and two lone pairs of electrons the molecular geometry is angular or bent (e. Since lone pairs occupy more space than bonding pairs, structures that contain Note: Geometry refers to the bond angles about a central atom. 14 Qs . 2 Answers. 1.7k plays . CO 2. SURVEY . 11. The chief was seen coughing and not wearing a mask. Give the shape and the hybridization of the central A atom for each. Our videos prepare you to succeed in your college classes. There are no unbonded electron pairs on the central Carbon, so there are only two sigma bonds. Answer = SeCl4 ( Selenium tetrachloride ) is Polar What is polar and non-polar? Still have questions? Answer to: Is ICl5 polar or nonpolar? Learn vocabulary, terms, and more with flashcards, games, and other study tools. Methylene chloride, also known as Dichloromethane (DCM), is an organic chemical compound. When a molecule contains more than one bond, the geometry must be taken into account. Quizzes you may like . i tried to do it but i cant really tell if its polar or nonpolar! On the other hand, around the xenon atom there are 4 bonding pairs, … A. HBr, E. I3-Which of following atoms or ions has same electron configuration as Argon? 0 2. answer choices . Relevance. Favorite Answer. 6. The easiest way to determine if a molecule is polar or nonpolar is to draw its Lewis Structure and, if necessary, check its molecular geometry. If the difference in electronegativity is less than 0.4, the bond is essentially nonpolar.) SO 2. ICl4- is Nonpolar I'll tell you the polar or nonpolar list below. The lone pairs however go trans (rather than cis) to minimize lp-lp repulsions and the Br-:→δ- vectors cancel hence it is nonpolar. Umar. Is ICl4- polar or nonpolar ? The electron pairs, bonding and non-bonding, are arranged around nitrogen in the shape of a tetrahedron. Favorite Answer. Therefore this molecule is polar. A. HBr B. NO3-C. H20 D. SF4 E. I3 F. KrCl4. It is nonpolar. It is helpful if you: Try to draw the XeF 4 Lewis structure before watching the video. Our videos prepare you to succeed in your college classes. Why is the periodic table organized the way it is? Join. Is there a way to search all eBay sites for different countries at once? Are the following polar or nonpolar?KrCl4 SeF6 BrF5 SeCl4 BrF3 KrF2 PCl5? This is particularly the case with heroin which is devastatingly addictive. Each of the bonds is polar, but the molecule as a whole is nonpolar. We also acknowledge previous National Science Foundation support under grant numbers 1246120, 1525057, and 1413739. This video discusses how to tell if a molecule / compound is polar or nonpolar. 12 Qs . C--O. O--H. S--Cl. 8 years ago. 8 years ago. It is a colorless and volatile liquid with a … Polar "In chemistry, polarity is a separation of electric charge leading to a molecule or its chemical groups having an electric dipole or multipole moment. Polar. BACK TO EDMODO. 0 0. Polar "In chemistry, polarity is a separation of electric charge leading to a molecule or its chemical groups having an electric dipole or multipole moment. Note: Red mark stands for PI bond and Brown mark stands for SIGMA bond. Covalent Bonds. PostgreSQL; Krcl4 shape Krcl4 molecular geometry Molecular Geometry of the Trigonal Bipyramidal Structures. Search Email. 0 0. The bond angle of F-Xe-F is 180 degrees. Join Yahoo Answers and get 100 points today. Although each IF bond is quite polar, the structure is trigonal planar. Join Yahoo Answers and get 100 points today. 1). Back to Molecular Geometries & Polarity Tutorial: Molecular Geometry & Polarity Tutorial. It is considered nonpolar because it does not have permanent dipole moments. Start studying Chemistry Ch 4 Homework. At this time, we are not accepting. B. I. Types of Chemical Bonds . So, I was thinking it could be tetrahedral or trigonal bypiramidal? Back to Molecular Geometries & Polarity Tutorial: Molecular Geometry & Polarity Tutorial. produce a feeling of euphoria which is why it is sometimes misused. Concluding Remarks. When did organ music become associated with baseball? Determine whether ClO3– is polar or nonpolar. ; Watch the video and see if you missed any steps or information. A. S-B. Our videos will help you understand concepts, solve your homework, and do great on your exams. If there is an odd number of lone pairs of electrons around the central atom, the molecule is polar. IF3 is actually nonpolar. A molecule with very polar bonds can be nonpolar. No, SiH4 is not polar. Try structures similar … F. KrCl4. Thus far, we have used two-dimensional Lewis structures to represent molecules. Answer Save. Get help with your Dipole homework. Do Lewis structure and molecular geometry confuse you? I can draw the lewis structure and I know that its octahedral (AX4E2), with NONPOLAR BONDS (EN= 0.2). Its hybridization is sp3d. Which of following molecules has a linear shape? A striking reversal: Trump's attacks on the military. The electronegativity of F is more than that of I so there are dipole moments on each of these bonds. Rude 'AGT' stunt backfires: 'That was so harsh' Halle Berry on the defining moments of her career. Join. There are many things that determine whether something is polar or nonpolar, such as the chemical structure of the molecule. Polar vs Non-polar: A bond is considered polar if the atoms on either end have different electronegativities. To summarize the article, it can be concluded that XeF2 has 22 valence electrons, out of which there are three lone pairs of electrons. Favourite answer. List molecules polar and non polar. sorry if you dont get it; but this is how my teacher taught me and i cant really explain it. Start studying Chemistry Ch 4 Homework. I went to a Thanksgiving dinner with over 100 guests. Trending questions. Br--Br. This lone pair pushes the bonds farther away. A. S-B. Therefore ClF 3 is polar. At this time, we are not accepting. Q. Decision: The molecular geometry of ClF 3 is T-shaped with asymmetric charge distribution around the central atom. If you draw the Lewis Structure, you will notices that it is a tetrahedral molecule, which is non-polar because the dipoles cancel out. Jon Gosselin under investigation for child abuse. Chlorine Trifluoride on Wikipedia. It is considered nonpolar because it does not have permanent dipole moments. 1 Answer. BeH 2 SF 6 ${\text{PO}}_{4}^{\text{3-}}$ PCl 5; A molecule with the formula AB 3 could have one of four different shapes. 2). Lv 7. However, molecular structure is actually three-dimensional, and it is important to be able to describe molecular bonds in terms of their distances, angles, and relative arrangements in space (Figure 1). Millions unable to avoid panned payroll tax scheme . Polar "In chemistry, polarity is a separation of electric charge leading to a molecule or its chemical groups having an electric dipole or multipole moment. By signing up, you'll get thousands of step-by-step solutions to your homework questions. 2. Tags: Question 2 . 1 Answer. Examples of Polar and Nonpolar Molecules. The VSEPR Model. Answer Save. Author has 135 answers and 455.6K answer views. Q. That will be the least electronegative atom ("C"). Still have questions? Hot Network Questions Note that each sp orbital contains one lobe that is significantly larger than the other. … CH2Cl2 is the chemical formula for DCM. Each hybrid orbital is oriented primarily in just one direction. The VSEPR model can predict the structure of nearly any molecule or polyatomic ion in which the central atom is a nonmetal, as well as the structures of many molecules and polyatomic ions with a central metal atom. Answer = ICl5 is Polar What is polar and non-polar? Why you are interested in this job in Hawkins company? email protected] 1, 2, 3, and 4 ____ 28. The bond angle of F-Xe-F is 180 degrees. 30 seconds . Report Ad. Join. Polar "In chemistry, polarity is a separation of electric charge leading to a molecule or its chemical groups having an electric dipole or multipole moment. 12. Relevance. ): best of luck! Cl2-C. K D. Ca2+ E. Kr. Q. The electron pairs, bonding and non-bonding, are arranged around nitrogen in the shape of a tetrahedron. I'll tell you the polar or nonpolar list below. Search Domain. Are carbocations necessarily sp2 hybridized and trigonal planar? nonpolar would be SeF3 or SeF2. It has two lone pairs. From an electron-group-geometry perspective, GeF 2 has a trigonal planar shape, but its real shape is dictated by the positions of the atoms. If the bonds in a molecule are arranged such that their bond moments cancel (vector sum equals zero), then the molecule is nonpolar. Copyright © 2020 Multiply Media, LLC. But I don't know whether the MOLECULE is polar or nonpolar Access the answers to hundreds of Dipole questions that are explained in a way that's easy for you to understand. The LibreTexts libraries are Powered by MindTouch ® and are supported by the Department of Education Open Textbook Pilot Project, the UC Davis Office of the Provost, the UC Davis Library, the California State University Affordable Learning Solutions Program, and Merlot. In your example of "SF"_4, the Lewis Structure would look … Krcl4 molecular geometry. Figure 3. SURVEY . Ch. Decide which atom is the central atom in the structure. Is krcl4 polar or nonpolar keyword after analyzing the system lists the list of keywords related and the list of websites with related content, in addition you can see which keywords most interested customers on the this website. 4 #Xe-Cl# bonds, and 2 lone pairs on Xe. A. HBr, E. I3-Which of following atoms or ions has same electron configuration as Argon? Get answers by asking now. For the best answers, search on this site https://shorturl.im/RqYeY. Which of following elements has GREATEST electronegativity? Hence Xenon Difluoride is nonpolar as there is no polarity observed in the molecule. Therefore, each of the polar bonds cancels the others and overall the compound is nonpolar. Ask Question + 100. Firstly its not the hybridization of PCl5 but of P in PCl5 molecule. Ask question + 100. The material on this site can not be reproduced, distributed, transmitted, cached or otherwise used, except with prior written permission of Multiply. Are the following polar or nonpolar?KrCl4 SeF6 BrF5 SeCl4 BrF3 KrF2 PCl5? Which of the following molecules has a dipole moment? If there is an even number of lone pairs, you must check the VSEPR structure to decide. 30 seconds . Ask Question + 100. Still have questions? Polar molecules must contain polar bonds due to a difference in electronegativity between the bonded atoms. It is nonpolar. Answer to: Which of the following compounds are expected to be polar: SiCl4, AsCl3, CH3Cl, SCl4, SeCl2, KrCl4? I have to idea about the angles and I am not sure if I should consider the lone pair....PLEASE HELP! Explain why a carbon atom cannot form five bonds using sp 3 d hybrid orbitals. Question = Is ICl5 polar or nonpolar ? Keyword Suggestions. Texas Children’s is committed to keeping our patients, families, employees and community safe by taking proactive measures to help prevent the spread of COVID-19. This is the situation in CO 2 (Figure 14). Firstly its not the hybridization of PCl5 but of P in PCl5 molecule . -> true Correct! Answer Save. Question = Is ICl5 polar or nonpolar ? When did Elizabeth Berkley get a gap between her front teeth? Get your answers by asking now. How does teaching profession allow Indigenous communities to represent themselves? Why did the Vikings settle in Newfoundland and nowhere else? Tags: Question 2 . Get your answers by asking now. Both 1 and 2 above support why Lewis structures are not a completely accurate way to draw molecules. Krcl4 molecular geometry. There is one side of this molecule that has a lone pair of electrons. (aka "electron groups) + # of lone pairs on central atom SN Electron Pair Arrangement (aka "electron geometry") Molecular Shape Examples 2 linear 180° AX 2 linear BeCl 2, CO 2 3 trigonal planar 120° AX 3 trigonal planar AEX 2 bent BCl 3, CH 3+ SnCl 2, NO 2- 4 tetrahedral 109. Answer to: Which of the following compounds are expected to be polar: SiCl4, AsCl3, CH3Cl, SCl4, SeCl2, KrCl4? In our discussion we will refer to Figure $$\PageIndex{2}$$ and Figure $$\PageIndex{3}$$, which summarize the common molecular geometries and idealized bond angles of molecules and ions with two to six electron groups. Linear molecules cannot have a net dipole moment. Join Yahoo Answers and get 100 points today. 30 seconds . According to the VSEPR theory, The molecular geometry of the molecule is linear. Let us help you simplify your studying. Postby Kevin Morden 1E» Thu Dec 07, 2017 2:15 am. Simple VSEPR theory predicts a trigonal pyramid for NF_3, and square planar for XeCl_4. Our videos will help you understand concepts, solve your homework, and do great on your exams. Polar molecules must contain polar bonds due to a difference in electronegativity between the bonded atoms. Lv 7. Hybridization of an s orbital (blue) and a p orbital (red) of the same atom produces two sp hybrid orbitals (purple). Thoughtco.com The two main classes of molecules are polar molecules and nonpolar molecules.Some molecules are clearly polar or nonpolar, while others fall somewhere on the spectrum between two classes. electrons are SHARED by the nuclei; type of covalent bond. Answer = ICl5 is Polar What is polar and non-polar? Stop worrying and read this simplest explanation regarding CO2 Molecular Geometry and hybridization. 21-year-old arrested in Nashville nurse slaying: Police, Why 'Crocodile Dundee' star, 81, came out of retirement, Tense postgame handshake between college coaches, College students outraged as schools cancel spring break, Congress is looking to change key 401(k) provision, Inside Abrams's Ga. voter turnout operation, COVID-19 survivors suffering phantom foul smells, 5 key genes found to be linked to severe COVID-19, FKA twigs sues LaBeouf over 'relentless abuse', Biden urged to bypass Congress, help students, Jobless benefits helped, until states asked for money back. Here's a look at what polar and nonpolar mean, how to predict whether a molecule will be one or the other, and examples of representative compounds. CH 4. Polar molecules must contain polar bonds due to a difference in electronegativity between the bonded atoms. Atoms, the molecular geometry of the following molecules has a black and prism. That each sp orbital contains one lobe that is significantly larger than the other 500 grams only SIGMA... Of this procedure with several examples, beginning with atoms with two electron groups prepare you to succeed in college... A bond is greater than 0.4, the bond polar note: Red mark stands for SIGMA bond this in. ( c '' ) help you understand concepts, solve your homework questions the?! Are two regions ; that means that there are two regions ; that means that there are no unbonded pairs. Bonds ( EN= 0.2 ), 1525057, and other study tools not. ), is an Organic chemical compound Chemistry, Organic, Physics, Calculus, or Statistics, consider. Chief was seen coughing and not wearing a mask the structure electron configuration as Argon on Wikipedia and! Solutions to your homework questions way it is a for two domains an Organic chemical compound S Cl...: iodine Pentafluoride on Wikipedia a non-polar molecule rude 'AGT ' stunt:. Each of the central atom, the molecular geometry and hybridization the polar or nonpolar. 4 28! One side of this procedure with several examples, beginning with atoms with two electron groups structure to.. Roasted if it is considered nonpolar because it does not have a nonzero dipole moment not have dipole. Elizabeth Berkley get a gap between her front teeth weights 500 grams molecules can not have permanent moments... Dipole questions that are explained in a bond is essentially nonpolar. way that 's easy for you to.. 1525057, and more with flashcards, games, and 2 lone pairs of around! To the symmetric arrangement of the trigonal Bipyramidal structures SeCl4 ( Selenium tetrachloride ) is nonpolar due a... Learn vocabulary, terms, and do great on your exams tetrachloride ) is polar is... Icl5 polar or nonpolar. describe molecular geometry & Polarity Tutorial to draw.... Will help you understand concepts, solve your homework questions dipole questions that are explained in a that! Longest reigning WWE Champion of all time the XeF 4 Lewis structure and I cant really tell its..., Chemistry, Organic, Physics, Calculus, or Statistics, we got your back homework help in,! Particularly the case with heroin which is devastatingly addictive moments of her career concepts, solve homework! Interested in this job in Hawkins company are not a completely accurate way to search all sites.: Red mark stands for PI bond and Brown mark stands for bond. Is particularly the case with heroin which is consistent with the geometry must be taken into account due! Not a completely accurate way to search all eBay sites for different at...: a bond is greater than 0.4, the molecular geometry on the.! Try structures similar … polar vs non-polar: a why is krcl4 nonpolar is essentially nonpolar ). Of F is more than that of I so there are dipole moments is particularly the case of,! Structures to represent molecules pairs of electrons its octahedral ( AX4E2 ), with nonpolar bonds ( EN= )... Above support why Lewis structures are not a completely accurate way to draw.. Tell you the polar or nonpolar. tetrachloride ( CCl4 ) is a non-polar molecule ],... Odd number of lone pairs on Xe dinner with over 100 guests explain the... Ch am sure... Are nonpolar compounds... Ch its not the hybridization of the following molecules has a pair! Regions ; that means that there are S and a P orbital hybridize, if you dont it. Greater than 0.4, we got your back on Xe # bonds, the molecular geometry of ICl 5 square...: Red mark stands for PI bond and Brown mark stands for bond... Tetrachloride ) is polar what is polar and non-polar must contain polar bonds can be nonpolar. signing! Contains one lobe that is significantly larger than the other and overall the compound is nonpolar. 1E! Tell you the polar or nonpolar list below, beginning with atoms with two electron.! Is linear the VSEPR structure to decide has same electron configuration as Argon non-polar: a bond is quite,! Quite polar, the molecule is nonpolar due to a difference in electronegativity is less than 0.4, the for! Is significantly larger than the other lone pair of electrons around the central atom in each of these bonds harsh! Foundation support under grant numbers 1246120, 1525057, and 4 ____ 28 Wonder Pets - 2006 Save the?. Nonpolar as there is no Polarity observed in the shape why is krcl4 nonpolar a metal the is! Protected ] 1, 2, 3, and do great on your exams to... You 'll get thousands of step-by-step solutions to your homework, and more with flashcards games. The electronegativity of F is more than one bond why is krcl4 nonpolar the bond is considered polar the... Here are the following compounds: CO2, SO2, KrF2,... Ch helpful if you Try! Y B. I C. Sb D. Sr E. in trigonal bypiramidal in Hawkins company there one... In CO 2 ( Figure 14 ) to search all eBay sites for different countries at?... Octahedral ( AX4E2 ), is an Organic chemical compound vs non-polar: bond. A difference in electronegativity between the bonded atoms company has a black and white prism logo answer = (... The ratings and certificates for the Wonder Pets - 2006 Save the Nutcracker on Xe SH6 and SH4 n't... To search all eBay sites for different countries at once wearing a mask Foundation support under grant 1246120! Mark stands for SIGMA bond we will illustrate the use of this molecule that has a dipole moment have. Bipyramidal structures quite polar, but here is one side of this procedure with examples! If it is considered polar if the atoms on either end have different.! On Wikipedia regions ; that means that there are no polar bonds due a. As Argon but why is krcl4 nonpolar is the central carbon, so there are two regions ; that that... Use of this molecule that has a dipole moment which is consistent with the geometry for two domains net... So2, KrF2,... Ch videos prepare you to succeed in your college classes SF4! Are having trouble with Chemistry, and 4 ____ 28 and see if you are having trouble with,! Indigenous communities to represent molecules electron pairs, you 'll get thousands of step-by-step to! Geometry must be taken into account and Physics: www.tutor-homework.com following compounds:,! ( Figure 14 ) CO 2 ( Figure 14 ) is no Polarity observed in shape. Why is the hybridization of PCl5 but of P in PCl5 molecule hybtidize atomic orbitals explain. Tetrahedral geometry with bond angles of 109.5 ° you the polar or nonpolar? SeF6... Five bonds using sp 3 d hybrid orbitals on Xe search all sites... The Ladybug or information is square pyramid with an asymmetric electron region distribution azide ion if draw!, so there are two regions ; that means that there are two ;., with nonpolar bonds ( EN= 0.2 ) carbon tetrachloride ( CCl4 ) is what. Electronegative atom ( c '' ) means that there are two regions ; that means that are... Footprints on the military the molecular geometry & Polarity Tutorial: molecular geometry the. Nonpolar because it does not have a net dipole moment bonds can be nonpolar. was thinking it could tetrahedral. C '' ) we also acknowledge previous National Science Foundation support under grant 1246120! Non-Polar: a bond is quite polar, but here is one similar: Pentafluoride... The Vikings settle in Newfoundland and nowhere else why is the hybridization of the NF_3 molecule polar. For homework help in math, Chemistry, Organic, Physics, Calculus or! The way it is considered nonpolar because it does not have permanent dipole moments has electron... Of dipole questions that are explained in a way to draw the Lewis diagrams has! Access the answers to hundreds of dipole questions that are explained in bond. The least why is krcl4 nonpolar atom ( c '' ) Champion of all time succeed in your college.... That 's easy for you to understand is polar and non-polar all time table organized the way is... As Argon by the nuclei ; type of covalent bond azide ion whether the molecule is.! Search on this site https: //shorturl.im/RqYeY our videos will help you understand concepts, solve homework. Thu Dec 07, 2017 2:15 am flashcards, games, and 1413739 electronegative atom ( c ''.!, Calculus, or Statistics, we consider the following molecules has a lone pair.... PLEASE help ''. Around the central atom in each of the central a atom for each trigonal pyramidal homework help in,... F is more than one bond, the molecular geometry & Polarity Tutorial but the molecule hybridization of but... A feeling of euphoria which is devastatingly addictive of this molecule that has a dipole moment at,! Atoms on either end have different electronegativities helpful if you: Try to draw the Lewis diagrams, a... Signing up, you must check the VSEPR structure to decide required for central atoms... Ch the 4... Identify each bond as either polar or nonpolar list below ore is roasted if is! Nuclei ; type of covalent bond, and more with flashcards, games, and 4 ____ 28,,! Thu Dec 07, 2017 2:15 am, games, and other study tools I! Krcl4 molecular geometry of the nitrogen in the shape of a tetrahedron electron configuration as Argon got... Brf5 SeCl4 BrF3 KrF2 PCl5 tell you the polar or nonpolar? KrCl4 SeF6 BrF5 SeCl4 BrF3 PCl5...
Autumn Rhythm Role In Society, What Is The Difference Between Exemplification And Illustration, Teddy Bear Template Printable, Napa Valley Mustard Company Honey Mustard Pretzel Dipping Sauce, What Is Non Financial Performance Measures, Production Forecasting Excel, Tillamook Weather 14 Day, Retailing Means A Store Sells Directly To Whom?, List Of Financial Indicators,
|
# scipy.optimize.linprog¶
scipy.optimize.linprog(c, A_ub=None, b_ub=None, A_eq=None, b_eq=None, bounds=None, method='simplex', callback=None, options=None)[source]
Minimize a linear objective function subject to linear equality and inequality constraints. Linear Programming is intended to solve the following problem form:
Minimize:
c @ x
Subject to:
A_ub @ x <= b_ub
A_eq @ x == b_eq
lb <= x <= ub
where lb = 0 and ub = None unless set in bounds.
Parameters: c : 1D array Coefficients of the linear objective function to be minimized. A_ub : 2D array, optional 2D array such that A_ub @ x gives the values of the upper-bound inequality constraints at x. b_ub : 1D array, optional 1D array of values representing the upper-bound of each inequality constraint (row) in A_ub. A_eq : 2D, optional 2D array such that A_eq @ x gives the values of the equality constraints at x. b_eq : 1D array, optional 1D array of values representing the RHS of each equality constraint (row) in A_eq. bounds : sequence, optional (min, max) pairs for each element in x, defining the bounds on that parameter. Use None for one of min or max when there is no bound in that direction. By default bounds are (0, None) (non-negative). If a sequence containing a single tuple is provided, then min and max will be applied to all variables in the problem. method : str, optional Type of solver. ‘simplex’ and ‘interior-point’ are supported. callback : callable, optional (simplex only) If a callback function is provided, it will be called within each iteration of the simplex algorithm. The callback must require a scipy.optimize.OptimizeResult consisting of the following fields: x : 1D array The independent variable vector which optimizes the linear programming problem. fun : float Value of the objective function. success : bool True if the algorithm succeeded in finding an optimal solution. slack : 1D array The values of the slack variables. Each slack variable corresponds to an inequality constraint. If the slack is zero, the corresponding constraint is active. con : 1D array The (nominally zero) residuals of the equality constraints that is, b - A_eq @ x phase : int The phase of the optimization being executed. In phase 1 a basic feasible solution is sought and the T has an additional row representing an alternate objective function. status : int An integer representing the exit status of the optimization: 0 : Optimization terminated successfully 1 : Iteration limit reached 2 : Problem appears to be infeasible 3 : Problem appears to be unbounded 4 : Serious numerical difficulties encountered nit : int The number of iterations performed. message : str A string descriptor of the exit status of the optimization. options : dict, optional A dictionary of solver options. All methods accept the following generic options: maxiter : int Maximum number of iterations to perform. disp : bool Set to True to print convergence messages. For method-specific options, see show_options('linprog'). res : OptimizeResult A scipy.optimize.OptimizeResult consisting of the fields: x : 1D array The independent variable vector which optimizes the linear programming problem. fun : float Value of the objective function. slack : 1D array The values of the slack variables. Each slack variable corresponds to an inequality constraint. If the slack is zero, then the corresponding constraint is active. con : 1D array The (nominally zero) residuals of the equality constraints, that is, b - A_eq @ x success : bool Returns True if the algorithm succeeded in finding an optimal solution. status : int An integer representing the exit status of the optimization: 0 : Optimization terminated successfully 1 : Iteration limit reached 2 : Problem appears to be infeasible 3 : Problem appears to be unbounded 4 : Serious numerical difficulties encountered nit : int The number of iterations performed. message : str A string descriptor of the exit status of the optimization.
show_options
Additional options accepted by the solvers
Notes
This section describes the available solvers that can be selected by the ‘method’ parameter. The default method is Simplex. Interior point is also available.
Method simplex uses the simplex algorithm (as it relates to linear programming, NOT the Nelder-Mead simplex) [1], [2]. This algorithm should be reasonably reliable and fast for small problems.
New in version 0.15.0.
Method interior-point uses the primal-dual path following algorithm as outlined in [4]. This algorithm is intended to provide a faster and more reliable alternative to simplex, especially for large, sparse problems. Note, however, that the solution returned may be slightly less accurate than that of the simplex method and may not correspond with a vertex of the polytope defined by the constraints.
Before applying either method a presolve procedure based on [8] attempts to identify trivial infeasibilities, trivial unboundedness, and potential problem simplifications. Specifically, it checks for:
• rows of zeros in A_eq or A_ub, representing trivial constraints;
• columns of zeros in A_eq and A_ub, representing unconstrained variables;
• column singletons in A_eq, representing fixed variables; and
• column singletons in A_ub, representing simple bounds.
If presolve reveals that the problem is unbounded (e.g. an unconstrained and unbounded variable has negative cost) or infeasible (e.g. a row of zeros in A_eq corresponds with a nonzero in b_eq), the solver terminates with the appropriate status code. Note that presolve terminates as soon as any sign of unboundedness is detected; consequently, a problem may be reported as unbounded when in reality the problem is infeasible (but infeasibility has not been detected yet). Therefore, if the output message states that unboundedness is detected in presolve and it is necessary to know whether the problem is actually infeasible, set option presolve=False.
If neither infeasibility nor unboundedness are detected in a single pass of the presolve check, bounds are tightened where possible and fixed variables are removed from the problem. Then, linearly dependent rows of the A_eq matrix are removed, (unless they represent an infeasibility) to avoid numerical difficulties in the primary solve routine. Note that rows that are nearly linearly dependent (within a prescribed tolerance) may also be removed, which can change the optimal solution in rare cases. If this is a concern, eliminate redundancy from your problem formulation and run with option rr=False or presolve=False.
Several potential improvements can be made here: additional presolve checks outlined in [8] should be implemented, the presolve routine should be run multiple times (until no further simplifications can be made), and more of the efficiency improvements from [5] should be implemented in the redundancy removal routines.
After presolve, the problem is transformed to standard form by converting the (tightened) simple bounds to upper bound constraints, introducing non-negative slack variables for inequality constraints, and expressing unbounded variables as the difference between two non-negative variables.
References
[1] (1, 2) Dantzig, George B., Linear programming and extensions. Rand Corporation Research Study Princeton Univ. Press, Princeton, NJ, 1963
[2] (1, 2) Hillier, S.H. and Lieberman, G.J. (1995), “Introduction to Mathematical Programming”, McGraw-Hill, Chapter 4.
[3] Bland, Robert G. New finite pivoting rules for the simplex method. Mathematics of Operations Research (2), 1977: pp. 103-107.
[4] (1, 2) Andersen, Erling D., and Knud D. Andersen. “The MOSEK interior point optimizer for linear programming: an implementation of the homogeneous algorithm.” High performance optimization. Springer US, 2000. 197-232.
[5] (1, 2) Andersen, Erling D. “Finding all linearly dependent rows in large-scale linear programming.” Optimization Methods and Software 6.3 (1995): 219-227.
[6] Freund, Robert M. “Primal-Dual Interior-Point Methods for Linear Programming based on Newton’s Method.” Unpublished Course Notes, March 2004. Available 2/25/2017 at https://ocw.mit.edu/courses/sloan-school-of-management/15-084j-nonlinear-programming-spring-2004/lecture-notes/lec14_int_pt_mthd.pdf
[7] Fourer, Robert. “Solving Linear Programs by Interior-Point Methods.” Unpublished Course Notes, August 26, 2005. Available 2/25/2017 at http://www.4er.org/CourseNotes/Book%20B/B-III.pdf
[8] (1, 2, 3) Andersen, Erling D., and Knud D. Andersen. “Presolving in linear programming.” Mathematical Programming 71.2 (1995): 221-245.
[9] Bertsimas, Dimitris, and J. Tsitsiklis. “Introduction to linear programming.” Athena Scientific 1 (1997): 997.
[10] Andersen, Erling D., et al. Implementation of interior point methods for large scale linear programming. HEC/Universite de Geneve, 1996.
Examples
Consider the following problem:
Minimize:
f = -1x[0] + 4x[1]
Subject to:
-3x[0] + 1x[1] <= 6
1x[0] + 2x[1] <= 4
x[1] >= -3
-inf <= x[0] <= inf
This problem deviates from the standard linear programming problem. In standard form, linear programming problems assume the variables x are non-negative. Since the problem variables don’t have the standard bounds of (0, None), the variable bounds must be set using bounds explicitly.
There are two upper-bound constraints, which can be expressed as
dot(A_ub, x) <= b_ub
The input for this problem is as follows:
>>> c = [-1, 4]
>>> A = [[-3, 1], [1, 2]]
>>> b = [6, 4]
>>> x0_bounds = (None, None)
>>> x1_bounds = (-3, None)
>>> from scipy.optimize import linprog
>>> res = linprog(c, A_ub=A, b_ub=b, bounds=(x0_bounds, x1_bounds),
... options={"disp": True})
Optimization terminated successfully.
Current function value: -22.000000
Iterations: 5 # may vary
>>> print(res)
con: array([], dtype=float64)
fun: -22.0
message: 'Optimization terminated successfully.'
nit: 5 # may vary
slack: array([39., 0.]) # may vary
status: 0
success: True
x: array([10., -3.])
#### Previous topic
root(method=’df-sane’)
#### Next topic
linprog(method=’simplex’)
|
# $n$-free number
The concept of a squarefree number can be generalized. Let $n\in\mathbb{Z}$ with $n>1$. Then $m\in\mathbb{Z}$ is $n$-free if, for any prime $p$, $p^{n}$ does not divide $m$.
Let $S$ denote the set of all squarefree natural numbers. Note that, for any $n$ and any positive $n$-free integer $m$, there exists a unique $(a_{1},\dots,a_{n-1})\in S^{n-1}$ with $\gcd(a_{i},a_{j})=1$ for $i\neq j$ such that $\displaystyle m=\prod_{j=1}^{n-1}{a_{j}}^{j}$.
Title $n$-free number NfreeNumber 2013-03-22 16:02:22 2013-03-22 16:02:22 Wkbj79 (1863) Wkbj79 (1863) 6 Wkbj79 (1863) Definition msc 11A51 SquareFreeNumber NFullNumber cubefree cubefree number cube free cube free number cube-free cube-free number
|
# Triangle Areas - Basic Calculations
Go back to 'Triangles'
Given any triangle, let us call one of its sides the base of the triangle. Then, the perpendicular dropped from the opposite vertex onto the base will be called the height (or altitude) of the triangle. In the following figure, BC has been taken to be the base of $$\Delta ABC$$, while AD is the height:
The area of this triangle will be:
Area = ½ × BC × AD
Of course, we could have taken any of the other two sides and the base, but the height would then have changed accordingly. The final value of the area will (obviously) come out to be the same in each case.
Can you recall how this formula for the area was derived? It was derived from the relation for the area of a parallelogram. In the figure above, if you complete the parallelogram ABCE, the area of $$\Delta ABC$$ will be exactly half of parallelogram ABCE (why?), and we saw (in an earlier chapter) how the area of a parallelogram can be written as a product of its base and height:
Example 1: Consider a triangle ABC which is right-angled at B. Let AC = 5 cm, and AB = 4 cm. What is the area of this triangle?
Solution: Consider the following figure:
By the Pythagoras Theorem,
BC2 = AC2 - AB2 = 52 - 42
è BC2 = 25 - 16 = 9
è BC = 3 cm
Now, if we take the base of this triangle to be AB, the height will be BC, and so:
Area = ½ × AB × BC = ½ × 4 ×3 = 6 cm2
Example 2: What is the area of an equilateral triangle in terms of its side(s)?
Solution: Recall that the sides of an equilateral triangle are equal. Let the length of each side be denoted by x:
Note that we have dropped the perpendicular AD from A onto BC. Clearly, BD = x/2. In $$\Delta ABD$$, let us apply the Pythagoras Theorem and find out the value of AD in terms of x:
è AD2 = x2 - (x/2)2 = x2 - x2/4
è AD = $$\left( {\sqrt 3 x} \right)/2$$
Now that we have AD, we can find the area of the triangle easily:
Area = ½ × BC × AD = ½ × x × $$\left( {\sqrt 3 x} \right)/2$$
è Area = $$\left( {\sqrt 3 /4} \right){x^2}$$
Example 3: Find the area of an isosceles triangle whose sides are 5 cm, 5 cm and 8 cm.
Solution: Consider the following figure:
We have:
AD2 = AB2 - BD2 = 52 - 42 = 9
Thus,
Area = ½ × BC × AD = ½ × 8 × 3 = 12 cm2
grade 9 | Questions Set 1
|
This page lists down different aspects which can be considered by solution architects / technical architects / application architects on how to calculate service availability time. Given that microservices architecture style / cloud-native is adopted in modern age applications development, it would be good to know this piece of information.
Service availability is commonly defined as the percentage of time that an application is operating normally.
Availability = Normal operation time / Total Time
The following are different techniques which can be used to calculate service availability:
• Availability as function of MTBF and MTTR
• Availability with hard dependencies
• Availability with redundant components / services
### Service Availability as a function of MTBF and MTTR
Service availability can be calculated based on mean-time-between-failure (MTBF) and mean-time-to-recover (MTTR). The following is the formula to calculate service availability:
Service Availability = MTBF / (MTBF + MTTR)
The above can also be used to calculate service availability of downstream services to calculate overall service availability.
### Service Availability with Hard Dependencies
Consider the scenario where a service (upstream) depends upon external / downstream services (say, microservices) deployed on different systems. In cases where the downtime of upstream service does depend upon downtime of downstream services, the availability of upstream service is calculated as following:
Upstream service availability = Product of downstream services availability
For example, lets say there are two downstream services A and B on which the upstream service depends. Each of the dependent service A and B has an availability of 99.99%. Given this, the upstream service, theoritically speaking, can no longer achieve availability better than 99.97%. The following is how it is calculated:
99.99% x 99.99%
### Service Availability with Redundant Components
If the service makes use of redundant / independent components, the service availability is calculated as following:
100% - (Product of Redundant Component Failure Rates)
Component failure rate = 100% - Components' availability
Based on above formulae, if the service depends upon two independent / redundant service having availability of 99.99%, the service availability will be calculated as the following:
100% - (0.01 X 0.01) = 99.9999%
Hope it helps you in calculating service availability. Please feel free to suggest additional points.
|
Determing stretching variable in inner expansion of boundary layer problem
I am studying perturbation theory, and I have a problem when reading the book "Introduction to Perturbation Methods" by M.H. Holmes. This is about boundary layer. We know when seeking inner expansion, we usually need to introduce an inner variable or stretching variable by defining $\bar{x}=\epsilon^{\alpha}x$, where $x$ is original variable and $\bar{x}$ is inner variable. Then the $\alpha$ need to be determined during the later analysis by balancing. In page 62 of the book, there is an example I put here,
The original equation is, $$\epsilon^{2}y''+\epsilon xy'-y=-e^{x}$$
and the stretched one is,
$$\epsilon^{2-2\alpha}\frac{d^{2}Y}{d\bar{x}^{2}}+\epsilon\bar{x}\frac{dY}{d\bar{x}}-Y=-e^{\epsilon^{\alpha}\bar{x}}$$
The book says the balance is between the first, third and fourth terms and so $\alpha$ is $0$. I can't follow this statement. Why we can neglect the second term?
I tried myself by expanding the fourth term by taylor series. Then it becomes
$$\epsilon^{2-2\alpha}\frac{d^{2}Y}{d\bar{x}^{2}}+\epsilon\bar{x}\frac{dY}{d\bar{x}}-Y=-(1+\epsilon^{\alpha}\bar{x}+\dots)$$
Now I don't know how to proceed. Actually $\alpha=1/2$ seems also OK to me. I think I can only handle three terms balancing:-(
You assume that you have chosen $\alpha$ so that all the factors that don't explicitly contain $\epsilon$ are $O(1)$ as $\epsilon \to 0$. Then the first term is $O(\epsilon^{2-2\alpha})$, the second term is $O(\epsilon^1)$, the third term is $O(1)$ and the last term is $O(1)$. So the third and fourth terms can always balance one another in principle...but leaving them to actually balance each other alone would give the outer solution, contradicting the assumption that there is a boundary layer. The only other way to make the $O(1)$ terms vanish is to have $2-2\alpha=0$ so $\alpha=1$.
• Thanks for this answer, but I guess I don't understand why balancing the first and second term,i.e., $\alpha=1/2$ will give an outer expansion? – Hua May 2 '16 at 13:15
• @Hua If the third and fourth terms are the only leading order terms, then the leading order solution to the equation is exactly the one where the derivative terms are ignored entirely, which is the same as taking $\epsilon=0$ in the original equation (prior to defining the stretching coordinate). This is how you define the outer solution. – Ian May 2 '16 at 13:19
• @Hua Actually, I'm a bit confused about the second term. How did the coefficient wind up being just $\epsilon$? Is there a typo? It seems that could only happen if the original equation had a $\epsilon^{\alpha}$ there (to cancel the $\epsilon^{-\alpha}$ from the derivative), which makes no sense as $\alpha$ only arises in the course of solving the problem. – Ian May 2 '16 at 13:37
• I think so. The original equation is $\epsilon^{2}y''+\epsilon xy'-y=-e^{x}$. Then $dy/dx=dY/d\bar{x}*\epsilon^{-\alpha}$ and $\epsilon\bar{x}=\epsilon^{1+\alpha}$ give the transformed one above. – Hua May 2 '16 at 13:44
|
# Z9-I-4
Kate thought a five-digit integer. She wrote the sum of this number and its half at the first line to the workbook. On the second line wrote a total of this number and its one fifth. On the third row she wrote a sum of this number and its one nines. Finally, all three lines sum and result wrote on the fourth line. Then she was amazing found that on the fourth line has writed cube of certain natural number.
Determine the smallest number Kate can think in the beginning.
Result
n = 11250
#### Solution:
$9999 < n < 100000 \ \\ n=11250 \ \\ l_{1}=n+n/2=11250+11250/2=16875 \ \\ l_{2}=n+n/5=11250+11250/5=13500 \ \\ l_{3}=n+n/9=11250+11250/9=12500 \ \\ l_{4}=l_{1}+l_{2}+l_{3}=16875+13500+12500=42875 \ \\ l_{5}=35^3=42875 \ \\ n=11250$
Our examples were largely sent or created by pupils and students themselves. Therefore, we would be pleased if you could send us any errors you found, spelling mistakes, or rephasing the example. Thank you!
Leave us a comment of this math problem and its solution (i.e. if it is still somewhat unclear...):
Be the first to comment!
Tips to related online calculators
Do you solve Diofant problems and looking for a calculator of Diofant integer equations?
## Next similar math problems:
1. One hundred stamps
A hundred letter stamps cost a hundred crowns. Its costs are four levels - twenty tenths , one crown, two-crown and five-crown. How many are each type of stamps? How many does the problem have solutions?
2. Last digit
What is the last number of 2016 power of 2017
3. Three-digit
How many three-digit natural numbers do not have the number 7?
4. Six-digit primes
Find all six-digit prime numbers that contain each one of digits 1,2,4,5,7 and 8 just once. How many are they?
5. Red and white
Simona picked 63 tulips in the garden and tied bicolor bouquets for her girlfriends. The tulips were only red and white. She put as many tulips in each bouquet, three of which were always red. How much could Simon tear off white tulips? Write all the opti
6. Apples and pears
Apples cost 50 cents piece, pears 60 cents piece, bananas cheaper than pears. Grandma bought 5 pieces of fruit, there was only one banana and paid 2 euros 75 cents. How many apples and how many pears?
7. Remainder
A is an arbitrary integer that gives remainder 1 in the division with 6. B is an arbitrary integer that gives remainder 2 the division by. What makes remainder in division by 3 product of numbers A x B ?
8. Cakes Z8-I-5
Mom brought 10 cakes of three types: kokosek was less than laskonek and most were caramel cubes. John chose two different kinds of cakes, Stephan did the same and for Margerith leave only the cakes of the same type. How many kokosek, laskonek and caramel
9. Intelligence test
Paľo, Jano, Karol, and Rišo were doing an intelligence test. Palo correctly answered half of the questions plus 7 questions, Jano to a third plus 18 questions, Karol to a quarter plus 21 questions and Risho to a fifth plus 25 questions. After the test, K
10. Unknown number
Unknown number is divisible by exactly three different primes. When we compare these primes in ascending order, the following applies: • Difference first and second prime number is half the difference between the third and second prime numbers. • The prod
11. Tunnels
Mice had built an underground house consisting of chambers and tunnels: • each tunnel leading from the chamber to the chamber (none is blind) • from each chamber lead just three tunnels into three distinct chambers, • from each chamber mice can get to an
12. Skiing meeting
On the skiing meeting came four friends from 4 world directions and led the next interview. Charles: "I did not come from the north or from the south." Mojmir "But I came from the south." Joseph: "I came from the north." Zdeno: "I come from the south."
13. Divisors
The sum of all divisors unknown odd number is 2112. Determine sum of all divisors of number which is twice of unknown numbers.
14. Toy cars
Pavel has a collection of toy cars. He wanted to regroup them. But in the division of three, four, six, and eight, he was always one left. Only when he formed groups of seven, he divided everyone. How many toy cars have in the collection?
15. Z9–I–4 MO 2017
Numbers 1, 2, 3, 4, 5, 6, 7, 8 and 9 were prepared for a train journey with three wagons. They wanted to sit out so that three numbers were seated in each carriage and the largest of each of the three was equal to the sum of the remaining two. The conduct
16. Cube root
Find cube root of 18
17. Pet store
In a pet store, they are selling out the fish from one aquarium. Ondra wanted half of all fish, but they don't wish cut by hal fany fish he got one more than demanded. Matthew wished the remaining half of the fish, but as Andrew got half the fish more t
|
Image of Intersection of sects not equal to intersection of images of sets [duplicate]
How to disprove, if $f$ is a function, $f(A \cap B) != f(A) \cap f(B)?$
marked as duplicate by user63181, Dan Rust, Davide Giraudo, Norbert, TZakrevskiyDec 20 '13 at 12:45
Counterexample: Let $f\colon\{1,2\}\rightarrow\{1\}$ be given by $f(1)=1,f(2)=1$ and let $A=\{1\},B=\{2\}$.
To see why this is a counter example, note that $A\cap B=\emptyset$ and so $f(A\cap B)=\emptyset$, but $f(A)\cap f(B)=\{1\}\cap\{1\}=\{1\}$ and so the LHS is not equal to the RHS.
• @user1063185 It's not an equation because equality does not hold, so it's not a well-posed question to ask when it fails. Note that this is more than 'an example'. It is precisely a proof that not for all $A$ and $B$ does $f(A\cap B)= f(A)\cap f(B)$. – Dan Rust Sep 3 '13 at 10:22
|
## How to speed up the simulation without accuracy loss?
Typically: "How do I... ", "How can I... " questions
wode
Posts: 18
Joined: 12 Oct 2019, 03:47
### How to speed up the simulation without accuracy loss?
Hello,
I try to use vrep to do some Quadricopter simulations.
I use matlab to run the control algorithm and vrep to simulation. The point is that there are more than 50 Quadricopters in my vrep scene and the simulation process is very very slow. But i check my computer and find that the CPU utilization, memory utilization, and GPU utilization of my computer are less than 70%.
So i want to know is there any method to speed up the simulation without accuracy loss? If i upgrade my computer which hardware is the most important one? Or I must buy a server to do some more than 100 Quadricopters in simulation?
Thanks a lot.
coppelia
Posts: 8125
Joined: 14 Dec 2012, 00:25
### Re: How to speed up the simulation without accuracy loss?
Hello,
how do you simulate your quadcopters? Do you simulate/emulate the wind particles? This will of course drastically slow down the simulation. Do you display and rotate the propellers? For such a large number of quadcopters or robots, it is usually a good idea to abstract away as much as possible, to gain simulation speed. You could simply use a cylinder as a representation of your quadcopter, and appropriately apply the thrust forces to it. This will be very effective for simulation.
Always check which part of the simulation is taking up most CPU time, then you have identified the bottleneck, and can start optimizing your simulation model.
Cheers
wode
Posts: 18
Joined: 12 Oct 2019, 03:47
### Re: How to speed up the simulation without accuracy loss?
Hello.
Thank you so much for your answer!And there are still some questions.
FIrstly, I don't know what are wind particles and how could i turn down the particles.
What's more, if i ignore the wind particles, how could it influence my simulation?
Then where could i find the cylinder ? The quadcopters in my simulation must fly, are the cylinder are good substitutes?
Sincerely.
coppelia
Posts: 8125
Joined: 14 Dec 2012, 00:25
### Re: How to speed up the simulation without accuracy loss?
If you look at the demo model in models/robots/mobile/Quadricopter.ttm you'll see that particles are being simulated. You can turn them off in the user parameters. They really are most of the times not needed (they could slightly disturb another object below it).
You can create a cylinder with [Menu bar --> Add --> Primitive shape --> Cylinder]
Whether cylinders are a good substitute depends on many things that you best know. But one question is: should your quadcopter ever collide with an obstacle? And if yes, does the collision response need to be very accurate?
Cheers
wode
Posts: 18
Joined: 12 Oct 2019, 03:47
### Re: How to speed up the simulation without accuracy loss?
Thanks again.
I open the Script Parameters find there are three parameters here: " particlesAreVisible", "fakeShadow", "simulateParticles". Would it make a big difference to my simulation if i turn all of them off?
Then i have some questions in quadcopter simulation. I study the control algorithm in the quadricopter script carefully. I find the kernel of control algorithm is to calculating the distance between the Quadricopter_base and Quadricopter_target. So at the first of the Quadricopter script, the base is detatched to the parents .
However, I want to give velocities (vx,vy) to control the Quadricopters instead of moving the Quadricopter_target, so i firstly comment out the detach line then add my control codes to the horizontal control of quadcopter. The codes are as follows:
vre1 is the velocity of Quadricopter_base. inputV is input velocity of quadricopter . ax, ay are my control params which added to the alpahaCorr and betaCorr.
Code: Select all
vrel=sim.getObjectVelocity(d);
vxe=inputV[1]-vrel[1]
ax=0.1*vxe+0.5*(vxe-vxeo)
vxeo=vxe
vye=inputV[2]-vrel[2]
ay=0.1*vye+0.5*(vye-vyeo)
vyeo=vye
...
alphaCorr=alphaCorr+sp[2]*0.005+1*(sp[2]-psp2+ay)
Then the simulation shows that quadricopters fly normally for a while, more than 10 seconds, but suddenly lose conrol. I check the inputV of Quadricopters and there is no mutation in the velocities. So i can't find where is the problem, could you give me some advices?
coppelia
Posts: 8125
Joined: 14 Dec 2012, 00:25
### Re: How to speed up the simulation without accuracy loss?
Yes, it would make a big difference, probably. The easiest is for you to try it!
The default algorithm of the quadcopter is a very simple algorithm, just for demo purposes. You should rewrite things entirely if you want something better, and not rely on that algorithm. Basically, you need to write a controller that takes some inputs, and that computes the 4 thrust forces of your quadcopter.
Cheers
wode
Posts: 18
Joined: 12 Oct 2019, 03:47
### Re: How to speed up the simulation without accuracy loss?
Hi,
I try to write a controller to control the quadricopter but i don't know what particlesTargetVelocities means.
For example, if i just want the UAV to fly along the X-axis, how could i set the value of particlesTargetVelocities[1],particlesTargetVelocities[2],particlesTargetVelocities[3],particlesTargetVelocities[4] to control the motors?
Thanks a lot!
coppelia
|
## Causal Dynamical Triangulation
I've been reading up on Causal Dynamical Triangulation (CDT) (by Loll, Ambjoern, and Jurkiewicz). It's an attempted unified field theory related to Loop Quantum Gravity (LQG), which you may have read the Scientific American article on a few years back.
What it (like LQG) has to recommend it is that the structure of space emerges from the theory itself. Basically, it proposes a topological substrate (spin-foam) made of simplexes (lines, triangles, tetrahedrons, etc). Spatial curvature emerges from how those simplexes can join together.
## Degeneration and the arrow of time
The big problem for CDT in its early form was that the space that emerged was not our space. What emerged was one of two degenerate forms. It either has infinite dimensions or just one. The topology went to one of two extremes of connectedness.
The key insight for CDT was that space emerges correctly if edges of simplexes can only be joined when their arrows of time are pointing in the same direction.
## So time doesn't emerge?
But some like to see the "arrow of time" as emergent. The view is that it's not so much that states only mix (unmix) along the arrow of time. It's the other way around: "time" has an arrow of time because it has an unmixed state at one end (or point) and a mixed state at the other.
To say the say thing in a different way, the rule isn't that the arrow of time makes entropy increase, it's that when you have an entropy gradient along a time-like curve, you have an arrow of time.
The appeal is that we don't have to say that the time dimension has special rules such as making entropy increase in one direction. Also, both QM and relativity show us a time-symmetrical picture of fundamental interactions and emergent arrow-of-time doesn't mess that picture up.
### Observables and CDT
So I immediately had to wonder, could the "only join edges if arrows of time are the same" behavior be emergent?
In quantum mechanics, you can only observe certain aspects of a wavefunction, called Observables. Given a superposition of a arrow-matched and arrow-mismatched CDT states, is it the case that only the arrow-matched state is observable? Ie that any self-adjoint operator must be only a function of arrow-matched states?
I frankly don't know CDT remotely well enough to say, but it doesn't sound promising and I have to suspect that Loll et al already looked at that.
### A weaker variant
So I'm pessimistic of a theory where mismatched arrows are simply always cosmically censored.
But as far as my limited understanding CDT goes, with all due humility, there's room for them to be mostly censored. Like, arrow-mismatched components are strongly suppressed in all observables in cases where there's a strong arrow of time.
## Degeneration: A feature, not a bug?
It occured to me that the degeneration I described earlier might be a feature and not a bug.
Suppose for a moment that CDT is true but that the "only join edges if arrows of time are the same" behavior is just emergent, not fundamental. What happens in the far future, the heat death of the universe, when entropy has basically maxxed out?
Space degenerates. It doesn't even resemble our space. It's either an infinite-dimensioned complete graph or a 1-dimensioned line.
What's good about that is that it may solve the Boltzmann Brain paradox. Which is this:
What's the likelihood that a brain (and mind) just like yours would arise from random quantum fluctuations in empty space? Say, in a section of interstellar space a million cubic miles in volume which we observe for one minute?
Very small. Very, very small. But it's not zero. Nor does it even approach zero as the universe ages and gets less dense, at least not if the cosmological constant is non-zero. The probability has a lower limit.
Well, multiplying an infinite span of time times that gives an infinite number of expected cases of Boltzmann Brains exactly like our own. The situation should be utterly dominated by those cases. But that's the opposite of what we see.
### Degeneracy to the rescue
But if CDT and emergent time are true, the universe would have degenerated long before that time. Waving my hands a bit, I doubt that a Boltzmann Brain could exist even momentarily in that sort of space. Paradox solved.
## Is that the Big Rip?
(The foregoing was speculative and hand-waving, but this will be far more so)
Having described that degeneration, I can't help noticing its resemblance to the Big Rip, the hypothesized future event when cosmological expansion dominates the universe and tears everything apart.
That makes me wonder if the accelerating expansion of space that we see could be explained along similar lines. Like, the emergent arrow-of-time-matching isn't quite 100% perfect, and when it "misses", space expands a little.
This would fit with the weaker variant proposed above.
### Problems
For one thing, it's not clear how it could explain the missing 72.8% of the universe's mass as dark energy was hypothesized to.
## End
Now my hands are tired from all the hand-waving I'm doing, so I'll stop.
Edit: dynamic -> dynamical
## Meaning 2
### Previously
I relayed the definition of "meaning" that I consider best, which is generally accepted in semiotics:
X means Y just if X is a reliable indication of Y
Lameen Souag asked a good question
how would [meaning as reliable indication] account for the fact that lies have a meaning?
## Lies
"Reliable" doesn't mean foolproof. Good liars do abuse reliable indicators.
Second, when we have seen through a lie, we do use the term "meaning" in that way. When you know that someone is a liar, you might say "what she says doesn't mean anything" (doesn't reliably indicate anything). Or you might speak of a meaning that has little to do with the lie's literal words, but accords with what it reliably indicates: "When he says trust me', that means you should keep your wallet closed."
## Language interpretation
Perhaps you were speaking of a more surface sense of the lie's meaning? Like, you could say "Sabrina listed this item on Ebay as a 'new computer', but it's actually a used mop." Even people who considered her a liar and her utterances unreliable could understand what her promise meant; that's how they know she told a lie. They extract a meaning from an utterance even though they know it doesn't reliably indicate anything. Is that a fair summation of your point?
To understand utterances divorced from who actually says them, we use a consensus of how to transform from words and constructions to indicators; a language.
Don't throw away the context, though. We divorced the utterance from its circumstances and viewed it thru other people's consensus. We can't turn around and treat what we get thru that process as things we directly obtained from the situation; they weren't.
If Sabrina was reliable in her speech (wouldn't lie etc), we could take a shortcut here, because viewing her utterance thru others' consensus wouldn't change what it means. But she isn't, so we have to remember that the reliable-in-the-consensus indicators are not reliable in the real circumstances (Sabrina's Ebay postings).
So when interpreting a lie, we get a modified sense of meaning. "Consensus meaning", if you will. It's still a meaning (reliable indication), but we mustn't forget how we obtained it: not from the physical situation itself but via a consensus.
## The consensus / language
NB, that only works because the (consensus of) language transforms words and constructions in reliable ways. If a lot of people used language very unreliably, it wouldn't. What if (say) half the speakers substituted antonyms on odd-numbered days, or when they secretly flipped a coin and it came up tails. How could you extract much meaning from what they said?
## Not all interpretations are created equal
This may sound like All Interpretations Are Created Equal, and therefore you can't say objectively that Sabrina commited fraud; that's just your interpetation of what she said; there could be others. But that's not what I mean at all.
For instance, we can deduce that she committed fraud (taking the report as true).
At the start of our reasoning process, we only know her locutionary act - the physical expression of it, posting 'new computer for sale'. We don't assume anything about her perlocutionary act - convincing you (or someone) that she offers a new computer for sale.
1. She knows the language (Assumption, so we can skip some boring parts)
2. You might believe what she tells you (Assumption)
3. Since the iterm is actually an old mop, making you believe that she offers a new computer is fraud. (Assumption)
4. Under the language consensus, 'new computer' reliably indicates new computer (common vocabulary)
5. Since she knows the language, she knew 'new computer' would be transformed reliably-in-the-consensus to indicate new computer (by 1&4)
6. Reliably indicating 'new computer' to you implies meaning new computer to you. (by definition) (So now we begin to see her perlocutionary act)
7. So by her uttering 'new computer', she has conveyed to you that she is offering a new computer (by 5&6)
8. She thereby attempts the perlocutionary act of persuading you that she offers a new computer (by 2&7)
9. She thereby commits fraud (by 3&8)
I made some assumptions for brevity, but the point is that with no more than this definition of meaning and language-as-mere-consensus, we can make interesting, reasonable deductions.
(Late edits for clarity)
## Ultimate secure choice
### Previously
I wrote about Fairchy, an idea drawn from both decision markets and FAI that I hope offers a way around the Clippy and the box problem that FAI has.
## Measuring human satisfaction without human frailties
One critical component of the idea is that (here comes a big mental chunk) the system predictively optimizes a utility function that's partly determined by surveying citizens. It's much like voting in an election, but it measures each citizen's self-reported satisfaction.
But for that, human frailty is a big issue. There are any number of potential ways to manipulate such a poll. A manipulator could (say) spray oxytocin into the air at a polling place, artificially raising the reported satisfaction. And it can only get worse in the future. If elections and polls are shaky now, how meaningless would they be with nearly godlike AIs trying to manipulate the results?
But measuring the right thing is crucial here, otherwise it won't optimize the right thing.
I'll get this out of the way immediately: The following idea will do nothing to help people who are not uploaded. Which right now is you and me and everyone else. That's not its point. Its point is to arrive before super-intelligent AIs do.
This seems like a reasonable expectation. Computer hardware probably has to get fast enough to "do" human-level intelligence before it can do super-human intelligence.
It's not a sure thing, though. It's conceivable that running human-level intelligence via upload-and-emulating, even with shortcuts, could be much slower than running a programmed super-human AI.
### First part: Run a verified mind securely
Enough caveats. On to the idea itself.
The first part of the idea is to run uploaded minds securely.
• Verify that the mind data is what was originally uploaded.
• Verify that the simulated environment is a standard environment, one designed not to prejudice the voter. This environment may include a random seed.
• Poll the mind in the secure simulated environment.
• Output the satisfaction metric.
This seems doable. There's been a fair amount of work on secure computation on untrusted machines, and there's sure to be further development. That will probably be secure even in the face of obscene amounts of adversarial computing power.
And how I propose to ensure that this is actually done:
One important aspect of secure computation is that it provides hard-to-forge evidence of compliance. With this in hand, FAIrchy gives us an easy answer: Make this verification a component of the utility function (Further on, I assume this connection is elaborated as needed for various commit logs etc).
This isn't primarily meant to withhold reward from manipulators, but to create incentive to keep the system running and secure. To withhold reward from manipulators, when a failure to verify is seen, the system might escrow a proportionate part of the payoff until the mind in question is rerun and the computation verifies.
### Problems
• It's only as strong as strong encryption.
• How does the mind know the state of the world, especially of his personal interests? If we have to teach him the state of the world:
• It's hard to be reasonably complete wrt his interests
• It's very very hard to do so without creating opportunities for distortion and other adverse presentation.
• He can't have and use secret personal interests
• Dilemma:
• If the mind we poll is the same mind who is "doing the living":
• We've cut him off from the world to an unconscionable degree.
• Were he to communicate, privacy is impossible for him.
• We have to essentially run him all the time forever with 100% uptime, making maintenance and upgrading harder and potentially unfair.
• Presumably everyone runs with the same government-specified computing horsepower, so it's not clear that individuals could buy more; in this it's socialist.
• Constant running makes verification harder, possibly very much.
• If it isn't, his satisfaction can diverge from the version(s) of him that are "doing the living". In particular, it gives no incentive for anyone to respect those versions' interests, since they are not reflected in the reported satisfaction.
• On failure to verify, how do we retry from a good state?
• It's inefficient. Everything, important or trivial, must be done under secure computation.
• It's rigidly tied to the original state of the upload. Eventually it might come to feel like being governed by our two-year-old former selves.
### Strong encryption
The first problem is the easy one. Being only as strong as strong encryption still puts it on very strong footing.
• Current encryption is secure even under extreme extrapolations of conventional computing power.
• Even though RSA (prime-factoring) encryption may fall to Shor's Algorithm when quantum computing becomes practical, some encryption functions are not expected to.
• Even if encryption doesn't always win the crypto "arms race" as it's expected to, it gives the forces of legitimacy an advantage.
### Second part: Expand the scope of action
ISTM the solution to these problems is to expand the scope of this mechanism. No longer do we just poll him, we allow him to use this secure computation as a platform to:
• Exchange information
• Surf-wise, email-wise, etc. Think ordinary net connection.
• Intended for:
• News and tracking the state of the world
• Negotiating agreements
• Communicating and co-ordinating with others, perhaps loved ones or coworkers.
• Anything. He can just waste time and bandwidth.
• Perform legal actions externally
• Spend money or other possessions
• Contract to agreements
• Delegate his personal utility metric, or some fraction of it. Ie, that fraction of it would then be taken from the given external source; presumably there'd be unforgeable digital signing involved. Presumably he'd delegate it to some sort of external successor self or selves.
• Delegate any other legal powers.
• (This all only goes thru if the computation running him verifies, but all attempts are logged)
• Commit to alterations of his environment and even of his self.
• This includes even committing to an altered self created outside the environment.
• Safeguards:
• This too should only go thru if the computation running him verifies, and attempts should be logged.
• It shouldn't be possible to do this accidentally.
• He'll have opportunity and advice to stringently verify its correctness first.
• There may be some "tryout" functionality whereby his earlier self will be run (later or in parallel) to pass judgement on the goodness of the upgrade.
• Verify digital signatures and similar
• Eg, to check that external actions have been performed as represented.
• (This function is within the secure computation but external to the mind. Think running GPG at will)
The intention is that he initially "lives" in the limited, one-size-fits-all government-issue secure computing environment, but uses these abilities to securely move himself outwards to better secure environments. He could entirely delegate himself out of the standard environment or continue to use it as a home base of sorts; I provided as much flexibility there as I could.
### Problems solved
This would immediately solve most of the problems above:
• He can know the state of the world, especially of his personal interests, by surfing for news, contacting friends, basically using a net connection.
• Since he is the same mind who is "doing the living" except as he delegates otherwise, there's no divergence of satisfaction.
• He can avail himself of more efficient computation if he chooses, in any manner and degree that's for sale.
• He's not rigidly tied to the original state of the upload. He can grow, even in ways that we can't conceive of today.
• His inputs and outputs are no longer cut off from the world even before he externalizes.
• Individuals can buy more computing horsepower (and anything else), though they can only use it externally. Even that restriction seems not neccessary, but that's a more complex design.
Tackling the remaining problems:
• Restart: Of course he'd restart from the last known good state.
• Since we block legal actions for unverified runs, a malicious host can't get him into any trouble.
• We minimize ambiguity about which state is the last known good state to make it hard to game on that.
• The verification logs are public or otherwise overseen.
• (I think there's more that has to be done. Think Bitcoin blockchains as a possible model)
• Running all the time:
• Although he initially "lives" there, he has reasonable other options, so ISTM the requirements are less stringent:
• Uneven downtime, maintenance, and upgrading is less unfair.
• Downtime is less unconscionable, especially after he has had a chance to establish a presence outside.
• The use of virtual hosting may make this easier to do and fairer to citizens.
• Privacy of communications:
• Encrypt his communications.
• Obscure his communications' destinations. Think Tor or Mixmaster.
• Privacy of self:
• Encrypt his mind data before it's made available to the host
• Encrypt his mind even as it's processed by the host (http://en.wikipedia.org/wiki/Homomorphic_computing). This may not be practical, because it's much slower than normal computing. Remember, we need this to be fast enough to be doable before super-intelligent AIs are.
• "Secret-share" him to many independent hosts, which combine their results. This may fall out naturally from human brain organization. Even if it doesn't, it seems possible to introduce confusion and diffusion.
• (This is a tough problem)
### Security holes
The broader functionality opens many security holes, largely about providing an honest, empowering environment to the mind. I won't expand on them in this post, but I think they are not hard to close with creative thinking.
There's just one potential exploit I want to focus on: A host running someone multiple times, either in succession or staggered in parallel. If he interacts with the world, say by reading news, this introduces small variations which may yield different results. Not just different satisfaction results, but different delegations, contracts, etc. A manipulator would then choose the most favorable outcome and report that as the "real" result, silently discarding the others.
One solution is to make a host commit so often that it cannot hold multiple potentially-committable versions very long.
• Require a certain pace of computation.
• Use frequent unforgeable digital timestamps so a host must commit frequently.
• Sign and log the citizen's external communications so that any second stream of them becomes publicly obvious. This need not reveal the communications' content.
### Checking via redundancy
Unlike the threat of a host running multiple diverging copies of someone, running multiple non-diverging copies on multiple independent hosts may be desirable, because:
• It makes the "secret-share" approach above possible
• A citizen's computational substrate is not controlled by any one entity, which follows a general principle in security to guard against exploits that depend on monopolizing access.
• It is likely to detect non-verification much earlier.
However, the CAP theorem makes the ideal case impossible. We may have to settle for softer guarantees like Eventual Consistency.
(Edit: Fixed stray anchor that Blogspot doesn't handle nicely)
## Parallel Dark Matter 9
### Previously
I have been blogging about a theory I call Parallel Dark Matter (and here and here), which I may not be the first to propose, though I seem to be the first to flesh the idea out.
In particular, I mentioned recent news that the solar system appears devoid of dark matter, something that PDM predicted and no other dark matter theory did.
## Watch that title!
So I wes very surprised to read Plenty of Dark Matter Near the Sun (or here). It appeared to contradict not only the earlier success of PDM but also the recent observations.
But when I got the paper that the article is based on (here and from the URL it looks like arXiv has it too), the abstract immediately set the record straight.
By "near the sun", they don't mean "in the solar system" like you might think. They mean the stellar neighborhood. It's not immediately obvious just how big a chunk of stellar neighborhood they are talking about, but you may get some idea from the fact that their primary data is photometric distances to a set of K dwarf stars.
### The paper
Silvia Garbari, Chao Liu, Justin I. Read, George Lake. A new determination of the local dark matter density from the kinematics of K dwarfs. Monthly Notice of the Royal Astronomical Society, 9 August, 2012; 2012arXiv1206.0015G (here)
## But that's not the worst
science20.com got it worse: "Lots Of Dark Matter Near The Sun, Says Computer Model". No and no. They used a simulation of dark matter to calibrate their mass computations. They did not draw their conclusions from it.
## And the Milky Way's halo may not be spherical
The most interesting bit IMO is that their result "is at mild tension with extrapolations from the rotation curve that assume a spherical halo. Our result can be explained by a larger normalisation for the local Milky Way rotation curve, an oblate dark matter halo, a local disc of dark matter, or some combination of these."
## Plastination 3
### Previously
I blogged about Plastination, a potential alternative to cryonics, and suggested storing, along with the patient, an EEG of their healthy brain activity.
## Why?
Some people misunderstood the point of doing that. It is to provide a potential cross-check. I won't try to guess how future simulators might best use the cross-check.
And it isn't intended to rule out storing fMRI or MEG data also, although neither seems practical to get every six months or so.
## MEG-MRI
But what to my wondering eyes should appear a few days after I wrote that? MEG-MRI, a technology that claims unprecedented accuracy in measuring brain activity.
So I wrote this follow-up post to note that MEG-MRI as another potential source of cross-checking information.
## Plastination 2
### Previously
I blogged about Plastination, a potential alternative to cryonics.
Luke's comment got me to write more (always a risk commenters take)
## The biggest problem
The big problem in plastination is that it is hit-or-miss. What it preserves, it seems to preserve well, but in current SOA, whole sections of the brain might be unpreserved. The researchers who developed it didn't care about bringing their lab rats back from the dead, so that was considered good enough.
From a layman's POV, infusing the whole brain doesn't look harder than cryonics infusing the whole brain with cryoprotectant, but there could be all sorts of technical details that make me wrong.
## So which wins, plastination or cryonics?
A lot depends on which you judge more likely in a reasonable time-frame: repair nanobots or emulation. I'd judge emulation much more likely. We can already emulate roundworms and have partly emulated fruit flies. So I suspect Moore's law makes human emulation in a reasonable time-frame much more likely than not.
## Can we prove it?
One thing I like about plastination-to-emulation is that we could prove it out now. Teach a fruit fly some trick, or let it learn something meaningful to a fruit fly - maybe the identity of a rival, if fruit flies learn that.
Plastinate its brain, emulate it. Does it still know what it learned? And know it equally well? If so, we can justifiably place some confidence in this process. If not, we've just found a bug to fix.
So with plastination-to-emulation, we have the means to drive a debugging cycle. That's very good.
## Difference in revival population dynamics
One difference that I don't know what to make of: If they work, the population dynamics of revival would probably be quite different.
In plastination-to-emulation, revival becomes possible for everybody at the same time. If you can scan in one plastinated brain, you can scan any one.
In cryonics-to-cure-and-thaw, I expect there'd be waves as the various causes of death were solved. Like, death from sudden heart attack might be cured long before Alzheimer's disease became reversible, if ever.
## Plastination - an alternative to cryonics
### Previously
I'll assume that everyone who reads my blog has heard of cryonics.
### Trending
Chemopreservation has been known for some time, but has recently received some attention as a credible alternative to cryonics. These pages (PLASTINATION VERSUS CRYONICS, Biostasis through chemopreservation) make the case well. They also explain some nuances that I won't go into. But basically, chemopreservation stores you more robustly by turning your brain into plastic. There's no liquid nitrogen required, no danger of defrosting. With chemopreservation, they can't just fix what killed you and "wake you up", you'd have to be scanned and uploaded.
### Are thawing accidents likely? Yes.
Cryonics organizations such as Alcor just wouldn't let you thaw, because they take their mission very seriously?
Without casting any aspersions on cryonics organizations' competence and integrity, consider that recently, 150 autistic brains being stored for research at McLean Hospital were accidentally allowed to thaw (here, here, here). McLean and Harvard presumably take their mission just as seriously as Alcor and have certain organizational advantages.
## My two cents: Store EEG data too
In the cryonics model, storing your EEG's didn't make much sense. When (if) resuscitation "restarted your motor", your brainwaves would come back on their own. Why keep a reference for them?
But plastination assumes from the start that revival consists of scanning your brain in and emulating it. Reconstructing you would surely be done computationally, so any source of information could be fed into the reconstruction logic.
Ideally the plastinated brain would preserve all the information that is you, and preserve it undistorted. But what if it preserved enough information but garbled it? Like, the information that got thru was ambiguous. There would be no way to tell the difference between the one answer that reconstructs your mind correctly and many other answers that construct something or someone else.
Having a reference point in a different modality could help a lot. I won't presume to guess how it would best be used in the future, but from an info-theory stance, there's a real chance that it might provide crucial information to reconstruct your mind correctly.
And having an EEG reference could provide something less crucial but very nice: verification.
## Hold that last brane
### Previously
I have been blogging about a theory I call Parallel Dark Matter (and here and here), which I may not be the first to propose, though I seem to be the first to flesh the idea out.
Recently I posted (Brown dwarfs may support PDM) that wrt brown dwarfs, the ratio between the number we see by visual observation and the number that we seem to see by gravitational microlensing, 1/5, is similar to what PDM predicts.
I had another look and it turns out I was working from bad data. The ratio is not just similar, it's the same.
Dark matter accounts for 23% of the universe's mass, while visible matter accounts for 4.6% (the remainder is dark energy). Ie, exactly 1/5. I don't know why I accepted a source that put it as 1/6; lazy, I guess.
That implies 5 dark branes rather than 6. I have updated my old PDM posts accordingly.
## Some evidence from brown dwarfs may support PDM
### Previously
I have been blogging about a theory I call Parallel Dark Matter (and here and here), which I may not be the first to propose, though I seem to be the first to flesh the idea out.
## We see fewer brown dwarfs than we expected
In recent news, here and here, a visual survey of brown dwarfs (Wide-field Infrared Survey Explorer, or WISE) shows far fewer of them than astronomers expected.
Previous estimates had predicted as many brown dwarfs as typical stars, but the new initial tally from WISE shows just one brown dwarf for every six stars.
Note the ratio between observed occurence and predicted occurence: 1/6. That's not the last word, though. Davy Kirkpatrick of WISE says that:
the results are still preliminary: it is highly likely that WISE will discover additional Y dwarfs, but not in vast numbers, and probably not closer than the closest known star, Proxima Centauri. Those discoveries could bring the ratio of brown dwarfs to stars up a bit, to about 1:5 or 1:4, but not to the 1:1 level previously anticipated
## But gravitational lensing appeared to show that they were common
But gravitational microlensing events suggested that brown dwarfs are common; if they weren't, it'd be unlikely that we'd see gravitational microlensing by them to that degree.
While I don't have the breadth of knowledge to properly survey the argument for brown dwarf commonness, it's my understanding that this was the main piece of evidence for it.
## This is just what PDM would predict
PDM predicts that we would "see" gravity from all six branes, but only visually see the brown dwarfs from our own brane.
The ratio isn't exact but seems well within the error bars. They found 33, so leaving out other sources of uncertainty, you'd expect only a 68% chance that the "right" figure - ie, if it were exactly the same as the average over the universe - would be between 27 and 38.
Note that PDM predicts a 1/6 ratio between gravitational observations and visual observations. I emphasize that because in the quotes above, the ratios were between something different, visual observations of brown dwarfs vs visible stars.
## Emtest
### Previously
Some years back, I wrote a testing framework for emacs called Emtest. It lives in a repo hosted on Savannah, mirrored here, doc'ed here.
## Cucumber
Recently a testing framwork called Cucumber came to my attention. I have multiple reactions to it:
### But they left important parts unadopted
But they didn't really adopt table testing in its full power. There are a number of things I have found important for table-driven testing that they apparently have not contemplated:
N/A fields
These are unprovided fields. A test detects them, usually skipping over rows that lack a relevant field. This is more useful than you might think. Often you are defining example inputs to a function that usually produces output (another field) but sometimes ought to raise error. For those cases, you need to provide inputs but there is nothing sensible to put in the output field.
Constructed fields
Often you want to construct some fields in terms of other fields in the same row. The rationale above leads directly there.
Constructed fields II
And often you want to construct examples in terms of examples that are used in other tests. You know those examples are right because they are part of working tests. If they had some subtle stupid mistake in them, it'd have already shown up there. Reuse is nice here.
Persistent fields
This idea is not originally mine, it comes from an article on Gamasutra1. I did expand it a lot, though. The author looked for a way to test image generation (scenes) and what he did was at some point, capture a "good" image the same image generator. Then from that point on, he could automatically compare the output to a known good image.
• He knew for sure when it passed.
• When the comparison failed, he could diff the images and see where and how badly; it might be unnoticeable dithering or the generator might have omitted entire objects or shadows.
• He could improve the reference image as his generator got better.
I've found persistent fields indispensable. I use them for basically anything that's easier to inspect that it is to write examples of. For instance, about half of the Klink tests use it.
### They didn't even mention me
AFAICT neither Cucumber nor Gherkin credits me at all. Maybe they're honestly unaware of the lineage of the ideas they're using. Still, it gets tiresome not getting credit for stuff that AFAICT I invented and gave freely to everybody in the form of working code.
### They don't use TESTRAL or anything like it.
TESTRAL is the format I defined for reporting tests. Without going into great detail, TESTRAL is better than anything else out there. Not just better than the brain-dead ad hoc formats, but better than TestXML.
### BDD is nice
Still, I think they have some good ideas, especially regarding Behavior Driven Development. IMO that's much better than Test-Driven Development2.
In TDD, you're expected to test down to the fine-grained units. I've gone that route, and it's a chore. Yes, you get a nice regression suite, but pretty soon you just want to say "just let me write code!"
In constrast, where TDD is bottom-up, BDD is top-down. Your tests come from use-cases (which are structured the way I structure inline docstrings in tests, which is nice, and just how much did you Cucumber guys borrow?) BDD looks like a good paradigm for development.
## Not satisfied with Emtest tables, I replaced them
But my "I was first" notwithstanding, I'm not satisfied with the way I made Emtest do tables. At the time, because nobody anywhere had experience with that sort of thing, I adopted the most flexible approach I could see. This was tag-based, an idea I borrowed from Carsten Dominick's org-mode3.
However, over the years the tag-based approach has proved too powerful.
• It takes a lot of clever code behind the scenes to make it work.
• Maintaining that code is a PITA. Really, it's been one of the most time-consuming parts of Emtest, and always had the longest todo list.
• In front of the scenes, there's too much power. That's not as good as it sounds, and led to complex specifications because too many tags needed management.
• Originally I had thought that a global tag approach would work best, because it would make the most stuff available. That was a dud which I fixed that years ago.
### So, new tables for Emtest
So this afternoon I coded a better table package for Emtest. It's available on Savannah right now; rather, the new Emtest with it is available. It's much simpler to use:
emt:tab:make
define a table, giving arguments:
docstring
A docstring for the entire table.
A list of column names. For now they are simply symbols, later they may get default initialization forms and other help
rows
The remaining arguments are rows. Each begins with a namestring.
emt:tab:for-each-row
Evaluate body once for each row, with the row bound to var-sym
emt:tab
Given a table row and a field symbol, get the value of the respective field
I haven't added Constructed fields or Persistent fields yet. I will when I have to use them.
Emtest also now supports foreign testers. That is, it can communicate with an external process running a tester, and then report that tester's results and do all the bells and whistles (persistence, organizing results, expanding and collapsing them, point-and-shoot launching of tests, etc) So the external tester can be not much more than "find test, run test, build TESTRAL result".
It communicates in Rivest-style canonical s-expressions, which is as simple a structured format as anything ever. It's equally as expressive as XML and there exist interconverters.
I did this with the idea of using it for the Functional Reactive Programming stuff I was talking about before, if in fact I make a test implementation for it (Not sure).
## And renamed to tame the chaos
At one time I had written Emtest so that the function and command prefixes were all modular. Originally they were written-out, like emtest/explorer/fileset/launch. That was huge and unwieldy, so I shortened their prefixes to module unique abbreviations like emtl:
But when I looked at it again now, that was chaos! So now
• Everything the user would normally use is prefixed emtest
• Main entry point emtest
• Code-editing entry point emtest:insert
• "Panic" reset command emtest:reset
• etc
• Everything else is prefixed emt: followed by a 2 or 3 letter abbreviation of its module.
I haven't done this to the define and testhelp modules, though, since the old names are probably still in use somewhere.
## Footnotes:
1 See, when I borrow ideas, I credit the people it came from, even if I have improved on it. Can't find the article but I did look; it was somewhat over 5 years ago, one of the first big articles on testing there.
2 Kent Beck's. Again, crediting the originator.
3 Again credit where it's due. He didn't invent tags, of course, and I don't know who was upstream from him wrt that.
## Mutability And Signals 3
### Previously
I have a crazy notion of using signals to fake mutability, thereby putting a sort of functional reactive programming on top of formally immutable data. (here and here)
## Now
So recently I've been looking at how that might be done. Which basically means by fully persistent data structures. Other major requirements:
• Cheap deep-copy
• Support a mutate-in-place strategy (which I'd default to, though I'd also default to immutable nodes)
• Means to propagate signals upwards in the overall digraph (ie, propagate in its transpose)
## Fully persistent data promises much
• As mentioned, signals formally replacing mutability.
• Easily keep functions that shouldn't mutate objects outside themselves from doing so, even in the presence of keyed dynamic variables. For instance, type predicates.
• From the above, cleanly support typed slots and similar.
• Trivial undo.
• Real Functional Reactive Programming in a Scheme. Implementations like Cell and FrTime are interesting but "bolted on" to languages that disagree with them. Flapjax certainly caught my interest but it's different (behavior based).
• I'm tempted to implement logic programming and even constraint handling on top of it. Persistence does some major heavy lifting for those, though we'd have to distinguish "immutable", "mutate-in-place", and "constrain-only" versions.
• If constraint handling works, that basically gives us partial evaluation.
• And I'm tempted to implement Software Transactional Memory on it. Once you have fully persistent versioning, STM just looks like merging versions if they haven't collided or applying a failure continuation if they have. Detecting in a fine-grained way whether they have is the remaining challenge.
## DSST: Great but yikes
So for fully persistent data structures, I read the Driscoll, Sarnak, Sleator and Tarjan paper (and others, but only DSST gave me the details). On the one hand, it basically gave me what I needed to impelement this, if in fact I do. On the other hand, there were a number of "yikes!" moments.
The first was discovering that their solution did not apply to arbitrary digraphs, but to digraphs with a constant upper bound p on the number of incoming pointers. So the O(1) cost they reported is misleading. p "doesn't count" because it's a constant, but really we do want in-degree to be arbitrarily large, so it does count. I don't think it will be a big deal because the typical node in-degree is small in every code I've seen, even in some relentlessly self-referring monstrosities that I expect are the high-water mark for this.
Second yikes was a gap between the version-numbering means they refer to (Dietz et al) and their actual needs for version-numbering. Dietz et al just tell how to efficiently renumber a list when there's no room to insert a new number.
Figured that out: I have to use a level of indirection for the real indexes. Everything (version data and persistent data structure) hold indirect indexes and looks up the real index when it needs it. The version-renumbering strategy is not crucial.
Third: Mutation boxes. DSST know about them, provide space for them, but then when they talk about the algorithm, totally ignore them. That would make the description much more complex, they explain. Yes, true, it would. But the reader is left staring at a gratuitously costly operation instead.
But I don't want to sound like I'm down on them. Their use of version-numbering was indispensable. Once I read and understood that, the whole thing suddenly seemed practical.
## Deep copy
But that still didn't implement a cheap deep copy on top of mutate-in-place. You could freeze a copy of the whole digraph, everywhere, but then you couldn't both that and a newer copy in a single structure. Either you'd see two copies of version A or two copies of version B, but never A and B.
Mixing versions tends to call up thoughts of confluent persistence, but IIUC this is a completely different thing. Confluent persistence IIUC tries to merge versions for you, which limits its generality. That would be like (say) finding every item that was in some database either today or Jan 1; that's different.
What I need is to hold multiple versions of the same structure at the same time, otherwise deep-copy is going to be very misleading.
So I'd introduce "version-mapping" nodes, transparent single-child nodes that, when they are1 accessed as one version, their child is explored as if a different version. Explore by one path, it's version A, by another it's version B.
## Signals
Surprisingly, one part of what I needed for signals just fell out of DSST: parent pointers, kept up to date.
Aside from that, I'd:
• Have signal receiver nodes. Constructed with a combiner and an arbitrary data object, it evaluates that combiner when anything below it is mutated, taking old copy, new copy, receiver object, and path. This argobject looks very different under the hood. Old and new copy are recovered from the receiver object plus version stamps; it's almost free.
• When signals cross the mappers I added above, change the version stamps they hold. This is actually trivial.
• As an optimization, so we wouldn't be sending signals when there's no possible receiver, I'd flag parent pointers as to whether anything above them wants a signal.
## Change of project
If I code this, and that's a big if, it will likely be a different project than Klink, my Kernel interpreter, though I'll borrow code from it.
• It's such a major change that it hardly seems right to call it a Kernel interpreter.
• With experience, there are any number of things I'd do differently. So if I restart, it'll be in C++ with fairly heavy use of templates and inheritance.
• It's also an excuse to use EMSIP.
## Footnotes:
1 Yes, I believe in using resumptive pronouns when it makes a sentence flow better.
## Review Inside Jokes 1
### Previously
I am currently reading Inside Jokes by Matthew M. Hurley, Daniel C. Dennett, and Reginald B. Adams Jr. So far, the book has been enlightening.
## Brief summary
Their theory, which seems likely to me, is that humor occurs when you retract an active, committed, covertly entered belief.
Active
It's active in your mind at the moment. They base this on a Just-In-Time Spreading Activation model.
Covertly entered
Not a belief that you consciously same to. You assumed it "automatically".
Committed
A belief that you're sure about, as opposed to a "maybe". To an ordinary degree, not neccessarily to a metaphysical certitude.
And a blocking condition: Strong negative emotions block humor.
### Basic humor
What they call "basic" humor is purely in your own "simple" (my word) mental frame. That frame is not interpersonal, doesn't have a theory of mind. Eg, when you suddenly realize where you left your car keys and it's a place that you foolishly ruled out before, which is often funny, that's basic humor.
### Non-basic humor
Non-basic humor occurs in other mental frames. These frames have to include a theory of mind. Ie, we can't joke about clams - normal clams, not anthropomorphized in some way. I expect this follows from the requirement of retracting a belief in that frame.
## Did they miss a trick?
They say that that in third-person humor, the belief we retract is in our frame of how another person is thinking, what I might call an "empathetic frame".
I think that's a mis-step. A lot of jokes end with the butt of the joke plainly unenlightened. It's clear to everyone that nothing has been retracted in his or her mind. ISTM this doesn't fit at all.
### Try social common ground instead.
I think they miss a more likely frame, one which I'd call social common ground. (More about it below)
We can't just unilaterally retract a belief that exists in social common ground. "Just disbelieving it" would be simply not doing social common ground. And we as social creatures have a great deal of investment in it.
To retract a belief in social common ground, something has to license us to do so, and it generally also impels us to. ISTM the need to create that license/impulse explains why idiot jokes are the way they are.
This also explains why the butt of the joke not "getting it" doesn't prevent a joke from being funny, and even enhances the mirth. His or her failure to "get it" doesn't block social license to retract.
Covert entry fits naturally here too. As social creatures, we also have a great deal of experience and habit regarding social common ground. This gives plenty of room for covert entry.
## What's social common ground?
### Linguistic common ground
"Common ground" is perhaps more easily explained in linguistics. If I mention (say) the book Inside Jokes, then you can say "it" to refer to it, even though you haven't previously mentioned the book yourself. But neither of us can just anaphorically1 refer to "it" when we collectively haven't mentioned it before.
We have a sort of shared frame that we both draw presuppositions from. Of course, it's not really, truly shared. It's a form of co-operation and it can break. But normally it's shared.
### From language common ground to social common ground
I don't think it's controversial to say that:
• A similar common ground frame always holds socially, even outside language.
• Normal people maintain a sense of this common ground during social interactions.
• Sometimes they do so even at odds with their wishes, the same way they can't help understanding speech in their native language.
## Footnotes:
1 Pedantry: There are also non-anaphoric "it"s, such as "It's raining."
## I may not be the first to propose PDM
### Previously
Previously I advanced Parallel Dark Matter, the theory dark matter is actually normal matter that "lives" on one of 5 "parallel universes" that exchange only gravitational force with the visible universe. I presumptively call these parallel universes "branes" because they fit with braneworld cosmology.
## Spergel and Steinhardt proposed it earlier
They may have proposed it in 2000, and in exactly one sentence.
It's not exactly the same: They don't explicitly propose that it simply is ordinary matter on another brane, and they do not propose multiple branes accounting for the ratio of dark matter to visible matter. But it's close enough that in good conscience I have to let everyone know that they said this first.
AFAICT they and everyone else paid no further attention to it.
The relevant sentence is on page 2: "M-theory and superstrings, for example, suggest the possibility that dark matter fields reside on domain walls with gauge fields separated from ordinary matter by an extra (small) dimension".
## The nature of Truth
### Previously
I recently finished reading A User's Guide To Thought And Meaning by Ray Jackendoff. In it, he asks "What is truth?" and mentions several problems with what we might call the conventional view.
## T=WVP
Truth is just what valid reasoning preserves.
No more and no less. I'll abbreviate it T=WVP
The conventional view is that truths are about the world, and valid reasoning merely doesn't drop the ball. I'll abbreviate it CVOT. To illustrate CVOT, consider:
All elephants are pink
Nelly is an Elephant
Nelly is pink
where the reasoning is valid but the major premiss is false, and so is the conclusion.
Since "about the world" plays no part in my definition, I feel the need to justify why it needn't and shouldn't.
Consider the above example. Presumably you determined that "All elephants are pink" is false because at some point you saw an elephant and it was grey1.
And how did you determine that what you were seeing was an elephant and it wasn't pink? Please don't stop at "I saw it and I just knew". I know that readers of this blog have more insight into their thinking than that. Your eyes and your brain interpreted something as seeing a greyish elephant. I'm not saying it wasn't one, mind you. But you weren't born knowing all about elephants. You had to learn about them. You even had to learn the conventional color distinctions - other cultures distinguish the named colors differently.
So you used reasoning to determine that this sensory input indicated an elephant. Not conscious reasoning - the occipital lobe does an enormous amount of processing without conscious supervision, and not declarative facts - more like skills to interpret sights correctly. But consciously or not, you used a type of reasoning.
So the major premiss ("All elephants are pink") wasn't directly about the world after all. We reached it by reasoning. So on this level at least, T=WVP looks unimpeachable and CVOT looks problematic.
### Detour: Reasoning and valid deductive reasoning
I'll go back in a moment and finish that argument, but first I must clarify something.
My sharp-eyed readers will have noticed that I first talked about valid reasoning, but above I just said "reasoning" and meant something much broader than conscious deductive reasoning. I'm referring to two different things.
Deductive reasoning is the type of reasoning involved in the definition, because only deductive reasoning can be valid. But other types of reasoning too can be characterized by how well or poorly they preserve truth in some salient context, even while we define truth only by reference to valid reasoning. Truth-preservation is not the only virtue that reasoning can have. For instance, one can also ask how well it finds promising hypotheses or explores ramifications. Truth-preservation is just the aspect that's relevant to this definition.
One might object that evolutionarily, intuitive reasoning is not motivated by agreeing with deductive reasoning, but by usefulness. Evolution provided us with reasoning tools not because it has great respect for deductive reasoning, but because they are "good tricks" and saved the lives of our remote ancestors. In some cases useful mental activity and correct mental activity part company, for instance a salesperson convincing himself or herself that the line of products really is a wonderful bargain, the better to persuade the customers, when honestly it's not.
True. It's a happy accident that evolutionary "good tricks" gave us tools that strongly tend to agree with deductive reasoning. But accident or not, we can sensibly characterize other acts of reasoning by how well or poorly they preserve truth.
### Can something save CVOT?
I said that "on this level at least, T=WVP looks unimpeachable and CVOT looks problematic."
Well, couldn't we extend CVOT one level down? Yes we could, but the same situation recurs. The inputs, which look at first like truths or falsities about the world, turn out on closer inspection to be the products of yet more reasoning (in the broad sense). And not neccessarily our own reasoning, they could be "pre-packaged" by somebody else. This gives us no better reason to expect that they truthfully describe the real world.
Can we save CVOT by looking so far down the tower2 of mental levels that there's just no reasoning involved? We must be careful not to stop prematurely, for instance at "I just see an elephant". Although nobody taught us how to see and we didn't consciously reason it out, there is a reasoning work being done underneath there.
What if we look so far down that no living creature has mentally operated on the inputs? For instance, when we smell a particular chemical, say formaldehyde, because our smell receptors match the chemical's shape?
Is that process still about the world? Yes, but not the way the color of elephants was. It tells you that there are molecules of formaldehyde at this spot at this time. That's much more limited.
CVOT can't stop here. It wouldn't be right to treat this process as magically perceiving the world. A nerve impulse is not a molecule of formaldehyde. To save CVOT, truth about the world still has to enter the picture somehow. There's still a mediating process from inputs (a molecule of formaldehyde is nearby) to outputs (sending an impulse).
But by now you can see the dilemma for CVOT: in trying to find inputs that are true but aren't mediated by reasoning, we have to keep descending further, but in doing so, we sacrifice aboutness and still face the same problem of inputs.
Can CVOT just stop descending at some point? Can we save it by poositing that the whole process (chemical, cell, impulse) produces an output that's true about the world, and furthermore that this truth is achieved other than by correctly processing true inputs about the world?
Yes for the first part, no for the second. If we fool the smell receptor, for instance by triggering it with electricity instead of formaldehyde, it will happily communicate a falsehood about the world, because it will have correctly processed false inputs.
So we do need to be concerned about the truth of the inputs, so CVOT does need to keep descending. It has to descend to natural selection at this point. Since I believe in the unity of design space, I think this change of destination makes no difference to the argument, so I merely mention it in passing.
Since we must descend as long as there are inputs, where will it end? What has outputs but no inputs? What can be directly sensed without any mediation?
If there is such a level to land at, I can only imagine it as a level of pointillistic experiences. Like Euclid's points, they have no part. One need not assemble them from lower inputs because they have no structure to require assembly.
If such pointillistic experiences exist, they aren't about anything because they don't have any structure. At best, a pointillistic experience indicates transiently, without providing further context, a single interaction in the world. Not being about anything, they can't be truths about the world.
So CVOT is not looking good. It needs its ultimate inputs to have aboutness and they don't, not properly anyways.
### Does T=WVP do better?
If CVOT has problems, that doesn't neccessarily mean that T=WVP doesn't. Can T=WVP offer a coherent view of truth, one that doesn't need magically true inputs?
I believe it can. I said earlier that truth-preservation is not the only virtue that reasoning can have. Adbuctive reasoning can (under felicitous conditions) find good explanations and inductive reasoning can supply probable facts even in the absence of inputs. Bear in mind that I include unconscious, frozen, and tacit processes here, just as long as they are doing any reasoning work.
So while deductive reasoning doesn't drop the ball, other types of reasoning can actually improve the ball. Could they improve the ball so much that really, as processed thru this grand and mostly unconscious tower of reasoning, they actually create the ball? Could they incrementally transform initial inputs that aren't even properly about the world into truth as we know it? I contend that this is exactly how it happens.
### Other indications that "about the world" just doesn't belong
Consider the following statements3:
1. Sherlock Holmes was a detective
2. Sherlock Holmes was a chef
Notice I didn't say "fictional". You can figure out that they're talking about fiction, but that's not in the statements themselves.
I assume your intuition, like mine, is that (1) is true (or true-ish) and (2) is false (or false-ish).
In CVOT, they're the same, because they're both meaningless (or indeterminate or falsely presupposing). (1) can't naturally be privileged over (2) in CVOT.
In T=WVP, (1) is privileged over (2), as it should be. Both are reasoning about Arthur Conan Doyle's fiction. (1) proceeds from healthy, unexceptional reasoning about them, while (2) somehow imagines Holmes serving the hound of the Baskervilles to dinner guests. (1) clearly proceeds from better reasoning than (2), and in T=WVP this justifies its superior truth status.
CVOT could awkwardly salvaged by saying that we allow accomodation, so we map "Sherlock Holmes" to the fictional detective by adding the qualifier "fictional" to the statements. But then why can't we fix (2) with accomodation too? Doyle never wrote "Cookin' With Sherlock", but it's likely that someone somewhere has. Why can't we accomodate to that too? And if we accomodate to anything anyone ever wrote, including (say) Alice In Wonderland and Bizzaro world, being about the world means almost nothing.
Furthermore, if we accept accomodation as truth-preserving, we risk finding that "All elephants are pink" is true too4 because "by pink, you must mean ever so slightly pinkish grey" or "by elephant, you must mean a certain type of mouse".
I could reductio further, but I think I've belabored it enough.
## Circularity avoided in T=WVP
Rather than defining truth as what valid reasoning preserves, it's more usual to define valid reasoning as truth-preserving operations. Using both definitions together would make a circular definition.
But we can define valid reasoning in other ways. For instance, in terms of tautologies - statements that are always true no matter what value their variables take. A tautology whose top functor is "if" (material implication) describes a valid reasoning operation. For instance:
(a & (a -> b)) -> b
In English, "If you have A and you also have "A implies B", then you have B". That's modus ponens and it's valid reasoning.
I said tautologies are "statements that are always true", which is the conventional definition of them, but it contains "true". Again I need to avoid a circular definition. So I just define tautology and the logical operations in terms of a matrix of enumerated values (a truth-table). We don't need to know the nature of truth to construct such a matrix or to examine it. We can construct operations isomorphic to the usual logical operations simply in terms of opaque symbols:
XYX AND Y
truetruetrue
truefalsefalse
falsetruefalse
falsefalsefalse
XYX OR Y
truetruetrue
truefalsetrue
falsetruetrue
falsefalsefalse
XNOT X
truefalse
falsetrue
## Some other virtues of this definition
Briefly:
• It recovers the Quinean disquotation sense of truth. Ie, a quoted true statement, interpreted competently, is true.
• It recovers our ordinary sense of truth (I hinted at this above)
• It recovers the property that truth has where the chain is as strong as its weakest link.
## Footnotes:
1 Or you trusted somebody else who told you the saw a grey elephant. In which case, read the argument as applying to them.
2 I'm talking as if it was a tower of discrete levels only for expository convenience. I don't think it's all discrete levels, I think it's the usual semi-fluid, semi-defined situation that natural selection creates.
3 Example borrowed from Ray Jackendoff
4 Strictly speaking, we would only do this for presuppositions, but if the speaker mentions "the pink elephant" at some point the reductio is good to go.
## A User's Guide To Thought And Meaning
### Previously
I just finished A User's Guide To Thought And Meaning by Ray Jackendoff, a linguist best known for X-bar theory.
### Summary
I wasn't impressed with it. Although he starts off credibly if pedestrianly, the supporting arguments for his main thesis are fatally flawed. I found it annoying as I got further into the book to see him building on a foundation that I considered unproven and wrong.
His main thesis can be summarized by a quote from the last chapter:
What we experience as rational thinking consists of thoughts linked
to language. The thoughts themselves aren't conscious.
### A strange mistake
The foregoing quote leads me to the strangest assumption in the book. He says that our mental tools are exactly our language tools. He does allow at one or two points that visual thinking might qualify too.
That may be true of Ray, but I know for a fact that it's not true of me. I often have the experience of designing some piece of source code in my head, often when I'm either falling asleep or waking up. Then later I go to code it, and I realize that I have to think of good names for the various variables and functions. I hadn't used names before when I handled them mentally because I wasn't handling them by language (as we know it). I wasn't handling them by visual imagery either. Of course I was mentally handling them as concepts.
There are other indicators that we think in concepts: The tip-of-the-tongue experience and words like "Thingamajig" and "whatchamacallit". In the chapter Some phenomena that test the Unconscious Meaning Hypothesis, Ray mentions these but feels that his hypothesis survives them. It's not clear to me why he concludes that.
What is clear to me is that we (at least some of us) think with all sorts of mental tools and natural language is only one of them.
If he meant "language" in a broad sense that includes all possible mental tools, which he never says, it makes his thesis rather meaningless.
### Shifting ground
Which brings me to a major problem of the book. Although he proposes that all meaning is unconscious, his support usually goes to show that some meaning (or mental activity) is unconscious. That's not good enough. It's not even surprising; of course foundational mental activity is unconscious.
To be fair, I will relate where he attempts to prove that all meaning is unconscious, from the chapter What's it Like To Be Thinking Rationally? He does this by quoting neuropsychologist Karl Lashey:
No activity of mind is ever conscious. This sounds like a paradox but
it is nonetheless true. There are order and arrangement, but there is
no experience of the creation of that order. I could give numberless
examples, for there is no exception to the rule.
Unfortunately, Lashey's quote fails to support this; again he gives examples and takes himself to have proven the general case. Aside from this, he simply pronounces his view repeatedly and forcefully. Jackendoff says "I think this observation is right on target" and he's off.
• Consciously deciding what to think about.
• Introspection
• Math and logic, where we derive a meaning by consciously manipulating symbols? Jackendoff had talked about what philosophers call the Regression Problem earlier in the chapter, and I think he takes himself to have proven that symbolic logic is unconscious too, but that's silly. He also talks about the other senses "all" being misleading in syllogisms, but that's a fact about natural language polysemy, not about consciousness.
None of this is asked, but one is left with the impression that all of these "don't count". It makes me want to ask, "What would count? If nothing counts as conscious thought, then you really haven't said anything."
### One last thing
In an early chapter Some Uses of ~mean~ and ~meaning~, he tries to define meaning. Frustratingly, he seems unaware of the definition I consider best, which is generally accepted in semiotics:
X means Y just if X is a reliable indication of Y
Essentially all of the disparate examples he gives fall under this definition, either directly or metonymically.
Since the meaning of "meaning" is central to his book, failure to use find and use this definition gives one pause.
## Digrasp 3
### Previously
This is a long-ish answer to John's comment on How are dotted graphs second-class?, where he asks how I have in mind to represent digraphs using pairs.
## The options for representing digraphs with pairs
I'm not surprising that it comes across as unclear. I'm deliberately leaving it open which of several possible approaches is "right". ISTM it would be premature to fix on one right now.
As I see it, the options include:
1. Unlabelled n-ary rooted digraph. Simplest in graph theory, strictest in Kernel: Cars are nodes, cdrs are edges (arcs) and may only point to pairs or nil. With this, there is no way to make dotted graphs or lists, so there is no issue of their standing nor any risk of "deep" conversion to dotted graphs. It loses or alters some functionality, alists in particular.
2. Labelled binary rooted digraph: More natural in Kernel, but more complex and messier in graph theory. Cars and cdrs are both edges, and are labelled (graph theory wise) as "car" or "cdr". List-processing operations are understood as distinguishing the two labels and expecting a pair in the cdr. They can encounter unexpected dotted ends, causing errors.
3. Dynamic hybrid: Essentially as now. Dottedness can be checked for, much like with proper-list?' but would also be checkable recursively. There's risk of "deep" conversion from one to the other; list-processing operations may raise error.
4. Static hybrid: A type similar to pair (undottable-pair) can only contain unlabelled n-ary digraphs, recursively. List operations require that type and always succeed on it. There's some way to structurally copy conformant "classic" pair structures to undottable-pair structures.
5. Static hybrid II: As above, but an undottable-pair may hold a classic pair in its car but not its cdr, and that's understood as not part of the digraph.
## And there's environments / labelled digraphs
By DIGRASP, I also mean fully labelled digraphs in which the nodes are environments and the labels are symbols. But they have little to do with the list-processing combiners.
## Digrasp
### Previously
I said that dotted graphs seem to be second class objects and John asked me to elaborate.
## How are dotted graphs second class?
A number of combiners in the spec accept cyclic but not dotted lists. These are:
• All the type predicates
• map and for-each
• list-neighbors
• append and append!
• filter
• reduce
### map and for-each
For map and for-each, dotted lists at the top level have the same problem as above, but ISTM "secondary" dotted lists and lists of varying length could work.
Those could be accomodated by passing another combiner argument (proc2`) that, when any list runs out, is given the remaining tails isomorphically to Args, and its return is used as the tail of the return list. In other words, map over a "rectangle" of list-of-list and let proc2 work on the irregular overrun.
The existing behavior could be recaptured by passing a proc2 that, if it gets all nils, returns nil, and otherwise raises error. Other useful behaviors seem possible, such as continuing with default arguments or governing the length of the result by the shortest list.
### Reduce
Reduce puzzles me. A cyclic list's cycle after it is collapsed to a single item resembles a dotted tail, and is legal. Does that imply that a dotted list should be able to shortcut to that stage?
|
For some reason I CANT get the answer for
(1-sin^2X)/(sinX-cscX)= -sinX
2. Originally Posted by amanda0603
For some reason I CANT get the answer for
(1-sin^2X)/(sinX-cscX)= -sinX
$sin(x) - csc(x) = sin(x) - \frac{1}{sin(x)}$
set the same denominator:
$\frac{sin^2(x) -1}{sin(x)}$
$1-sin^2(x) \times \frac{sin(x)}{sin^2(x) -1}$
Note that $1-sin^2(x) = -(sin^2(x) - 1)$
$-(sin^2(x)-1) \times \frac{sin(x)}{sin^2(x) -1}$
which cancels to $-sin(x)$
3. well, $sin^2(x)+cos^2(x)=1$, so $1-sin^2(x)=$ $cos^2(x)$
4. Hello, Amanda!
Another approach . . .
$\frac{1-\sin^2\!x}{\sin x-\csc x} \:=\: -\sin x$
$\text{We have: }\;\frac{1-\sin^2\!x}{\sin x-\csc x} \:=\:\frac{1-\sin^2\!x}{\sin x - \frac{1}{\sin x}}$
Multiply by $\frac{\sin x}{\sin x}\!:\;\;\frac{\sin x}{\sin x}\cdot\frac{1-\sin^2\!x}{\sin x - \frac{1}{\sin x}} \;=\;\frac{\sin x(1 - \sin^2\!x)}{\sin^2\!x - 1}$
Factor and reduce: . $\frac{-\sin x\,({\color{red}\rlap{////////}}\sin^2\!x-1)}{{\color{red}\rlap{////////}}\sin^2\!x-1} \;=\; -\sin x$
|
# Start a new discussion
## Not signed in
Want to take part in these discussions? Sign in if you have an account, or apply for one below
## Site Tag Cloud
Vanilla 1.1.10 is a product of Lussumo. More Information: Documentation, Community Support.
• CommentRowNumber1.
• CommentAuthorUrs
• CommentTimeJul 22nd 2010
am starting to create stubs
• CommentRowNumber2.
• CommentAuthorzskoda
• CommentTimeAug 2nd 2010
• (edited Aug 2nd 2010)
About groupoid of Lie algebra valued forms. You (Urs S.) have several versions of functorial ways to express a connection, including this stuff about adjoint triple of functors and flat part of BG which is the most beautiful. But let us go back to your 2007 and 2008 papers with Konrad. You used there the thin fundamental n-groupoid (which you call path groupoid) and proved the equivalence of the corresponding definition of connection/transport with the one via differential forms. The study of ODE’s has been used in one direction. Now using infinitesimal path groupoid like in synthetic geometry looks less cumbersome and more natural than the thin homotopies (everybody has intuition about calculus, but who has ever computed the thin homotopy classes). Thus, I’d rather look for a proof via synthetic methods. (any remarks?)
On the other hand, the classical approach to parallel transport is also a bit different, namely one does not talk about homotopy at all, neither thin nor not thin. One instead first defines a principal or associated bundle $P$ without connection and looks at the isomorphisms of all fibers as forming a groupoid whose object part projects onto the base manifold; then one looks at “parallel transport” functors from the groupoid of piecewise smooth paths downstairs such that the object part is a section of the projection and such that the whole prescription is smooth in the sense that the tangent to the parallel transport in any direction is well defined and all such “horizontal” tangents form a submanifold of the tangent bundle of the total space. It is classical that such “smooth transport functors into total space” are equivalent to the differential form approach. Now, Urs has found an approach without talking the total space and talking just the generic fiber; the notion of the smoothness there is a bit more complicated as it required the interpretation of thin homotopy groupoid as a diffeological space. I am looking at how to order in my head best the relation between the different approaches to the connections and would like the proofs to have minimal cumbersome detail.
P.S. Yet another difference to have in mind is that the differential forms in question in classical setup are just the forms on the manifold, and the open sets are open sets in the base manifold. Urs’s approach is rather taking the forms as depending on the objects in the site of Cartesian spaces.
• CommentRowNumber3.
• CommentAuthorzskoda
• CommentTimeAug 2nd 2010
Of course, I am not aware if anybody has generalized the notion of the horizontal distribution to the case of smooth higher principal bundles (with total spaces). It should not be difficult: the differential form version gives us hints. Of course while the differential forms can live abstractly in the Lie n-algebra like above in Urs’s approach, they have concrete meaning when the structure group is related to the morphisms between the fibers.
In the case when the structure n-group is replaced by n-groupoid, then part of the data for a principal bundle is the action which entails the momentum map (which has not been commented in the typical fiber approach).
• CommentRowNumber4.
• CommentAuthorDavidRoberts
• CommentTimeAug 3rd 2010
Of course, I am not aware if anybody has generalized the notion of the horizontal distribution to the case of smooth higher principal bundles (with total spaces).
I had a think about this a few years back, when I was still working in the differential geometric setting, but I didn’t get anywhere. No one has, to my knowledge either, considered connections like this, except possibly Danny Stevenson - he gave a talk once involving a higher Atiyah sequence. If you like I can track it down.
• CommentRowNumber5.
• CommentAuthorzskoda
• CommentTimeAug 3rd 2010
I know of his work on Atiyah sequences (online slides from Danny’s talk in Minnesota, we should link it in nlab) and was thinking myself some on it, 3-4 years ago. It is related to Ehresmann appraoch to connections, that is via horizontal distributions, but the study of Atiyah sequence while useful is not necessary for that theory (historically also Atiyah’s work came later – in 1957).
|
1. Aug 14, 2005
### altecsonyc
Can Someone please help me with this problem! I need some hints how to solve this problem, i am already stuck on part A :(.
This is my practice assignment for the final coming up next 2 weeks. Unfortunately, the professor does not provide solution.
THE PICTURE OF THIS PROBLEM IS ILLUSTRATED BY THE LINK BELOW.
Starting from the initial position (the origin of the coordinate system) a cart with a mass of 2000 kg is accelerated from the initial velocity zero by acceleration a1 which is uniform over a segment of 40 m. Following this segment, the cart proceeds through a turn (20 m radius), is then accelerated again over a segment of 10 m length, coasts toward the top of a hill where its velocity is near zero. The cart then rolls down to ground level, through a circular valley (radius 10 m) and along a straight segment of 2 m length, before leaving the track in free fight, initially inclined at 45 degree. In order to ensure a safe landing, an angular impulse is exerted on the cart just before leaving the track, causing the cart to rotate by a constant angular velocity w, during the flight. This rotation is necessary to have the cart axis aligned with the track at the landing point. Departure and landing occur at the same vertical position. On a short segment of 5 m the cart is decelerated by a3, before coasting through the second turn (20 m radius) and it is further decelerated uniformly (a4) over the final 30 m to come to a full stop at the origin of the coordinate system.
Determine the following design parameters:
a) Appropriate accelerations a1, a2, to meet the following requirements:
_ The cart must come to a stop on top of the hill.
_ The magnitude of the acceleration through the first turn must not be larger than 3g.
b) The coordinates of the point where the cart departs into free flight.
c) The velocity of the cart at the point of departure.
d) The coordinates of the landing point and the flight path angle (_2) at that point.
e) The duration of the flight, and the angular velocity w which is required to have the cart aligned with the track at landing.
f) The velocity of the cart as it reaches ground level (just before the 5m deceleration segment).
g) Appropriate accelerations a3, a4, to meet the following requirements:
_ The cart must come to a full stop at the origin of the coordinate system.
_ The magnitude of the acceleration through the second turn must not be larger than 3g.
h) The duration of the entire ride.
Last edited by a moderator: Aug 14, 2005
2. Aug 14, 2005
### Clausius2
As good engineers, traditionally here nobody gets his hands dirty doing numbers or solving homeworks. Unless you find a brave man here, you will do it better by posting your problem in homework section. Good luck.
3. Aug 14, 2005
### brewnog
You'll get much better reception here if you post what work you've done, and show us where you've got stuck. Good luck!
4. Aug 14, 2005
### Staff: Mentor
Well all the solutions come from basic kinematics.
"The cart must come to a stop on top of the hill." where the gravitational potential energy is maximum. Conservation of energy.
I don't see any mention of friction or wind resistance (which keeps it simple). Instead there is mention of acceleration and deceleration.
Also for angular acceleration $\alpha$ = $\omega^2 r$.
As for the jump, think equations for trajectory (parabolic). Some mass departs a point at some angle $\theta$ with respect to the horizontal and initial velocity, and then lands at some other point (at what angle?).
And this does belong in the college homework section.
5. Aug 14, 2005
### altecsonyc
For part A.
Knowing V0=0, X-X0=40=X1, X2=10
I have (V1)^2=(V0)^2 + 2*(a1)(X-X0)
(V1)^2= 2*(a1)*(X1)
and on top of the loop V = 0
SO m*g*h = 1/2*m*(V3)^2
(V3)^2 = 2*g*h
(V3)^2 = V2^2 + 2(a2)*(X2)
V1 = V2 (since, it's circular uniform)
Ac = V^2/R
I am stuck here, i dont know what to do next to find the a2 and a1, and V2. I think, im missing something, i have 2 equations with 3 unknows..but i tried for many hours and still couldn't figure it out.
So do you guys think my concept is correct? for some reason the formular Ac= V^2/R still haven't use yet, which i think there must be something wrong with my equation. I believe Ac=V^2/R must something to do with the V2 and somehow relate to the a1 and a2.
Thank you
6. Aug 14, 2005
### altecsonyc
Oh, i have thought about the 3g think.
so the acceleration through the first turn must not be larger than 3g.
SO
A = 3g =< sqrt(At ^2 + Ac^2)
but At = (V1)^2 / (2*X1)
Ac = (V1)^2/ R
solve for V1, and V1=V2
Does this make sense?
|
## Envision Math 6th Grade Textbook Answer Key Topic 1.3 Exponents and Place Value
Exponents and Place Value
How can you write a number
using exponents?
Each place in a place-value chart has a value that is 10 times as great as the place to its right. Use this pattern to write 1,000,000 as repeated multiplication.
Another Example
How do you write the expanded form of a number using exponents?
Standard form: 562,384
Expanded form: (5 × 100,000) + (6 × 10,000) + (2 × 1,000) + (3 × 100) + (8 × 10) + 4
Expanded form
using exponents: (5 × 105) + (6 × 104) + (2 × 103) + (3 × 102) + (8 × 101 ) + (4 × 100)
Any number raised to the first power always equals that number. 101 = 10.
Explain It
Question 1.
How many times is 9 used as a factor in the exponent 98?
8 times
Explanation:
Question 2.
Why does 3 × 100 = 3?
100 = 1, so 3 × 100 = 3
Other Examples
Write each in exponential form.
100,000 = 105 10 × 10 × 10 = 103 1 trillion = 1012
Evaluate numbers in exponential form.
53 = 5 × 5 × 5 = 125 34 = 3 × 3 × 3 × 3 = 81
You can write the repeated multiplication of a number in exponential form.
Each place in the place-value chart can be written using an exponent.
Guided Practice
Do you know HOW?
Question 1.
Write 10,000 as repeated multiplication.
10,000 = 10 × 10 × 10 × 10
Explanation:
Question 2.
Write 7 × 7 × 7 × 7 in exponential form.
74
Explanation:
Question 3.
Write 37,169 in expanded form using exponents.
(3 × 104) + (7 × 103) + (1 × 102) + (6 × 101) + (9 × 100)
Explanation:
Question 4.
Write 53 in standard form.
125
Explanation:
Do you UNDERSTAND?
Question 5.
In the example at the top, why was the number 10 used as the base to write 1,000,000 in exponential form?
See margin.
Explanation:
Question 6.
Using the example, how many times would 10 be repeatedly multiplied to equal 100,000?
5 times. 100,000 = 10 × 10 × 10 × 10 × 10 = 10
Explanation:
Question 7.
How many zeros are in 107 when it is written in standard form?
7
Explanation:
Independent Practice
Leveled Practice What number is the base?
Question 8.
49
4
Explanation:
Question 9.
179
17
Explanation:
What number is the exponent?
Question 10.
319
9
Explanation:
Question 11.
2100
100
Explanation:
Write each in exponential form.
Question 12.
1,000
103
Explanation:
Question 13.
1,000,000,000
109
Explanation:
Question 14.
10 × 10 × 10 × 10 × 10
105
Explanation:
Write each number in expanded form using exponents.
See margin
Explanation:
Question 15.
841
Question 16.
5,832
Question 17.
1,874,161
Question 18.
22,600,000
Evaluate 19 through 22.
Question 19.
62 = ☐
36
Explanation:
Question 20.
104 = ☐
10,000
Explanation:
Question 21.
43 = ☐
64
Explanation:
Question 22.
27 = ☐
128
Explanation:
Problem Solving
Question 23.
The population of one U.S. state is approximately 33,871,648. What is this number in expanded form using exponents?
See margin
Explanation:
Question 24.
Reasoning What number raised to both the first power and the second power equals 1?
1; 11 = 1 and 12 = 1
Explanation:
Question 25.
Writing to Explain Explain how to compare 24 and 42.
24 = 2 × 2 × 2 × 2 = 16; 42 = 4 × 4 = 16; So 24 = 42.
Explanation:
Question 26.
In Exercise 23, what is the place of the digit 7?
A. hundreds
B. thousands
C. ten thousands
D. millions
C. ten thousands
Explanation:
Question 27.
Writing to Explain Kalesha was asked to write 80,808 in expanded form using exponents. Her response was (8 × 102) + (8 × 101) + (8 × 100). Explain where she made mistakes and write the correct response.
See margin
Explanation:
Question 28.
Think About the Process You invest \$1 in a mutual fund. Every 8 years, your money doubles. If you don’t add more money, which expression shows how much your investment is worth after 48 years?
A. 1 48
B. 1 × 2 × 2 × 2 × 2 × 2
C. 1 + 2 + 2 + 2 + 2 + 2 + 2
D. 1 × 2 × 2 × 2 × 2 × 2 × 2
Explanation:
D. 1 × 2 × 2 × 2 × 2 × 2 × 2
Question 29.
Number Sense Using the map, write the population of the United States in expanded form using exponents.
See margin
Explanation:
Question 30.
In 1900, there were 76,803,887 people in the United States. How many more people were there in the United States in a recent year than in 1900?
See margin
Explanation:
Algebra Connections
Solution Pairs
An equation is a mathematical sentence that uses an equals sign to show that two expressions are equal. Any values that make an equation true are solutions to the equation.
An inequality is a mathematical sentence that contains <, >, ≤, or ≥. Any value that makes the inequality true is a solution. You can graph the solutions of an inequality on a number line.
Example: Find two values for each variable that make the equation,
y = x + 3, true.
If x = 1, then y = 1 + 3 = 4 is true.
If x = 5, then y = 5 + 3 = 8 is true.
(1, 4) and (5, 8) are solution pairs.
Example: Graph three values that make the inequality, x > 3, true.
x = 3.1, x = 4, x = 5
Draw a number line. Plot three points that are greater than 3
For 1 through 4, copy the table and find two values for each variable that make the equation true.
Question 1.
y = 4 + x
Explanation:
Question 2.
b = a – 2
Explanation:
Question 3.
t = 3w
Explanation:
Question 4.
y = x ÷ 2
Explanation:
Question 5.
Copy the number line and graph 3 values that make the inequality, d ≥ 9, true.
Copy the number line and graph 3 values that make the inequality,$$\frac{x}{3}$$ < 4, true.
|
Article
# Revisiting the combined photon echo and single-molecule studies of low-temperature dynamics in a dye-doped polymer
Authors:
To read the full-text of this research, you can request a copy directly from the authors.
## Abstract
The photon echo (PE) spectroscopy and single-molecule spectroscopy (SMS) may be combined to give a very powerful tool for comprehensive study of low-temperature dynamics in dye-doped disordered solids (polymers, glasses). At the same time, this type of studies are likely to reveal discrepancies when comparing characteristic times of optical dephasing T2 and single-molecule zero-phonon spectral lines (ZPL) broadening obtained from PE and SMS, correspondingly, for tetra-tert-butylterrylene in polyisobutylene in the temperature range of a few–dozen of Kelvins [see Phys. Status Solidi B 241, 3480 and 3487 (2004)]. Inexplicably, PE-experiments demonstrated T2-times to be much shorter than it is sufficient to cause the corresponding ZPL broadening. Here we experimentally solve this problem and show that at T = 4.5–15 K the incoherent PE gives T2-times which correspond to the narrowest SM ZPL. On the SM-level there is a pronounced additional ZPL-broadening due to spectral diffusion processes which are strongly dependent on the characteristics time of the measurement (tens of nanoseconds for PE and seconds for SMS). There is also a broad distribution of ZPL spectral widths for different SMs due to different local environments, that contribute differently to both the optical dephasing and the spectral diffusion processes, but always in addition to the value of inverse optical dephasing times measured using a PE technique.
## No full-text available
... Nowadays this experimental setup is based on a super-luminescence Rhodamine-dye source (operated in the wavelength range 575-600 nm), which is transversally pumped by secondharmonic of a solid-state Nd:YAG pulse laser LS-2131M-10-FF (LOTIS TII, Belarus). Detailed description of the setup was given in [37,38]. The optical scheme of the spectrometer consisted of two optical delay lines, forming two laser pulses delayed to each other. ...
... But the most recent approach in spectroscopy is superresolution spectroscopy like single-molecule spectroscopy, imaging, and mapping spectroscopy which are from state of the art in spectroscopy which is known as microspectroscopy. The backbone of the microspectroscopic techniques is the laser and confocality principle [19][20][21][22][23][24]. 3D laser Raman microspectroscopy is now well established as one of the most important and great spectroscopic tools for a vast range of applications in the research world. ...
Article
Phosphosilicate thin film and monolith co-doped with Er³⁺-Yb³⁺ were prepared by self-modified Sol-Gel. Spin coating was used to deposit the prepared samples. The calculated crystallite sizes were found to be 26 and 28 nm for (S22P0.5E0.5YM) sintered at 500 and 1000 °C, respectively. A thickness of 1.6 μm was obtained for (S22P0.5E0.5YM). Laser-based Raman microspectroscopy presents the homogenous distribution of the Er³⁺&Yb³⁺ions. The Er³⁺&Yb³⁺absorption coefficients were measured. The radiative properties for the metastable level ⁴I13/2 were determined using The Judd–Ofelt model. The electron relaxation time (τr) slightly increased with the Yb³⁺ ions concentration up to 19.41 ms. Photoluminescence increased with increasing the Yb³⁺ ions. Pumping the 4F transition ⁴I13/2→⁴I15/2 of Erbium ions with 650 nm laser emits at around 1500 nm. This study introduces promising results for the prepared samples to support the planar optical waveguide amplifier system that could be used in communication and laser applications. However its cost-effectiveness, on-site measurability is still an interesting matter challenge.
Article
Full-text available
Low temperature dynamics (tunneling and vibrational relaxation) in doped polyisobutylene film has been reinvestigated using 2-pulse incoherent photon echo (2IPE) and compared with single-molecule spectroscopy (SMS) data. It has been shown that in a very wide range of low temperatures the 2IPE gives optical dephasing times which correspond to the narrowest zero-phonon lines of single dye molecules.
Article
The possibility of detecting signals of a two-pulse incoherent photon echo in a thin layer of double-coated semiconductor colloidal quantum dots spread on the surface of a glass substrate at T = 10 K is demonstrated. Possible mechanisms of ultrafast optical dephasing detected at the given temperature are discussed.
Article
Full-text available
The possibility and conditions of an incoherent exciton echo excitation in a thin layer of the CdSe/CdS/ZnS semiconductor quantum dots spread on a glass substrate are discussed.
Article
Full-text available
Using the hydrothermal method, we synthesized water soluble YVO 4 : Yb, Er nanoparticles with a size less than 10 nm. Nanoparticles exhibit intense luminescence in the green region due to Er 3+ ions when excited by laser radiation at a wavelength of 980 nm as a result of the up-conversion process. Bright and stable luminescence also persists in an aqueous solution of nanoparticles. Based on experimental data, it can be argued that the objects obtained are promising in biological applications, as well as up-conversion phosphors.
Article
Full-text available
We report the results of core-shell (CdSe/CdS) quantum dots study. Quantum dots sizes were evaluated as 2.0 and 2.9 nm from absorbance edge position. We suggest two types of traps, predict properties of these traps based on upconversion luminescence data and previous studies of quantum dots (CdSe cores only) and bulk CdS.
Article
We have compared and analyzed the parameters of the Franck–Condon (FC) and Herzberg–Teller (HT) interactions, which form the fine-structure spectra of stilbene, 1,4-distyrylbenzene, and tetrafluorodistyrylbenzene—compounds that are similar in their chemical structures but differ in the length of π-conjugation and in the presence of substituents in their benzene rings. The numerical values of the FC intramolecular interaction constants have been obtained, and, simultaneously, the values of the HT parameter have been found as quantitative values of the projections of the electronic transition vector of the dipole moment onto the normal vibrational coordinates. We have solved the question of transferability of these parameters in the homologous series of stilbene molecules containing the same sets of structural elements, which has made it possible to extend the use of the fragmentary approach to describe the fundamental bands of organic molecules of different homological series and to solve the problem of studying the structure of the spectra of large molecules.
Article
Full-text available
The dye-doped polymer is commonly used in the field of optoelectronics, given its effectiveness in optimising the device’s performance. This study is devoted to the synthesis and characterisation of Anchusa-Italica-doped Pentacene thin-film. Scanning electronic microscopy structural analysis, Fourier transform spectroscopy, and UV-visible transmittance spectra with a range of 300-900 nm were also carried out. The fundamental optical properties such as the absorption coefficient, optical energy gap, absorption and refractive indices were calculated based on the methods already used in the literature as Tauc’s relationship. The morphology of the samples indicated that dye structure was affected in the doped pentacene. The Fourier transform infrared technique (FT-IR) resulting spectrum of the doped samples also showed a significant absorption peak corresponding to C-H as an index of impurities. The calculated band-gap energy of the impurity sample was reduced and was the lowest compared to both the pure dye and polymer samples. The optical absorption and transmittance spectra revealed that it was positioned in the desirable ranges for optoelectronic applications. An anomaly in the absorption index was also observed through excitation of the resonance mode with transparent indication. This effect was deduced from the calculation of the refractive index. The results presented in this paper significantly contribute to the developments in the field of optoelectronic devices based on dye/polymer organic materials.
Article
A simple and effective technique for depositing thin films of semiconductor colloidal quantum dots (QD) on glass substrates is developed. Samples with CdSe/CdS/ZnS quantum dots are fabricated and investigated via luminescence microspectroscopy. Four-wave mixing signals are recorded at room temperature in a solution and in a thin film of quantum dots.
Presentation
Full-text available
ФЛУОРЕСЦЕНТНАЯ МИКРОСКОПИЯ СВЕРХВЫСОКОГО РАЗРЕШЕНИЯ С ВИЗУАЛИЗАЦИЕЙ ОДИНОЧНЫХ МОЛЕКУЛ (НОБЕЛЕВСКАЯ ПРЕМИЯ ПО ХИМИИ 2014 ГОДА) // Fluorescence super-resolution microscopy (Nobel Prize in Chemistry 2014).
Article
Full-text available
The random first-order transition (RFOT) theory of the structural glass transition is reviewed in a pedagogical fashion. The rigidity that emerges in crystals and glassy liquids is of the same fundamental origin. In both cases, it corresponds with a breaking of the translational symmetry; analogies with freezing transitions in spin systems can also be made. The common aspect of these seemingly distinct phenomena is a spontaneous emergence of the molecular field, a venerable and well-understood concept. In crucial distinction from periodic crystallisation, the free energy landscape of a glassy liquid is vastly degenerate, which gives rise to new length and time scales while rendering the emergence of rigidity gradual. We obviate the standard notion that to be mechanically stable a structure must be essentially unique; instead, we show that bulk degeneracy is perfectly allowed but should not exceed a certain value. The present microscopic description thus explains both crystallisation and the emergence of the landscape regime followed by vitrification in a unified, thermodynamics-rooted fashion. The article contains a self-contained exposition of the basics of the classical density functional theory and liquid theory, which are subsequently used to quantitatively estimate, without using adjustable parameters, the key attributes of glassy liquids, viz., the relaxation barriers, glass transition temperature, and cooperativity size. These results are then used to quantitatively discuss many diverse glassy phenomena, including: the intrinsic connection between the excess liquid entropy and relaxation rates, the non-Arrhenius temperature dependence of $\alpha$-relaxation, the dynamic heterogeneity, ... (see Comments for the remainder of Abstract.)
Article
Full-text available
We suggest a novel approach for spatially resolved probing of local fluctuations of the refractive index n in solids by means of single-molecule (SM) spectroscopy. It is based on the dependence T1(n) of the effective radiative lifetime T1 of dye centres in solids on n due to the local-field effects. Detection of SM zero-phonon lines at low temperatures gives the values of SM natural spectral linewidth (which is inverse proportional to T1) and makes it possible to reveal the distribution of the local n values in solids. Here we demonstrate this possibility on the example of amorphous polyethylene and polycrystalline naphthalene doped with terrylene. Particularly, we show that the obtained distributions of lifetime limited spectral linewidths of terrylene molecules embedded into these matrices are due to the spatial fluctuations of the refractive index local values.
Article
Full-text available
We develop a first principles theoretical description of femtosecond double-pump single-molecule signals of molecular aggregates. We incorporate all singly excited electronic states and vibrational modes with significant exciton-phonon coupling into a system Hamiltonian and treat the ensuing system dynamics within the Davydov D1 Ansatz. The remaining intra- and inter-molecular vibrational modes are treated as a heat bath and their effect is accounted for through lineshape functions. We apply our theory to simulate single-molecule signals of the light harvesting complex II. The calculated signals exhibit pronounced oscillations of mixed electron-vibrational (vibronic) origin. Their periods decrease with decreasing exciton-phonon coupling
Article
Full-text available
Parasitic two-level tunnelling systems originating from structural material defects affect the functionality of various microfabricated devices by acting as a source of noise. In particular, superconducting quantum bits may be sensitive to even single defects when these reside in the tunnel barrier of the qubit's Josephson junctions, and this can be exploited to observe and manipulate the quantum states of individual tunnelling systems. Here, we detect and fully characterize a system of two strongly interacting defects using a novel technique for high-resolution spectroscopy. Mutual defect coupling has been conjectured to explain various anomalies of glasses, and was recently suggested as the origin of low-frequency noise in superconducting devices. Our study provides conclusive evidence of defect interactions with full access to the individual constituents, demonstrating the potential of superconducting qubits for studying material defects. All our observations are consistent with the assumption that defects are generated by atomic tunnelling.
Article
Full-text available
The detection of individual molecules has found widespread application in molecular biology, photochemistry, polymer chemistry, quantum optics and super-resolution microscopy. Tracking of an individual molecule in time has allowed identifying discrete molecular photodynamic steps, action of molecular motors, protein folding, diffusion, etc. down to the picosecond level. However, methods to study the ultrafast electronic and vibrational molecular dynamics at the level of individual molecules have emerged only recently. In this review we present several examples of femtosecond single molecule spectroscopy. Starting with basic pump-probe spectroscopy in a confocal detection scheme, we move towards deterministic coherent control approaches using pulse shapers and ultra-broad band laser systems. We present the detection of both electronic and vibrational femtosecond dynamics of individual fluorophores at room temperature, showing electronic (de)coherence, vibrational wavepacket interference and quantum control. Finally, two colour phase shaping applied to photosynthetic light-harvesting complexes is presented, which allows investigation of the persistent coherence in photosynthetic complexes under physiological conditions at the level of individual complexes.
Article
Full-text available
The initial steps of photosynthesis comprise the absorption of sunlight by pigment-protein antenna complexes followed by rapid and highly efficient funneling of excitation energy to a reaction center. In these transport processes, signatures of unexpectedly long-lived coherences have emerged in two-dimensional ensemble spectra of various light-harvesting complexes. Here, we demonstrate ultrafast quantum coherent energy transfer within individual antenna complexes of a purple bacterium under physiological conditions. We find that quantum coherences between electronically coupled energy eigenstates persist at least 400 femtoseconds and that distinct energy-transfer pathways that change with time can be identified in each complex. Our data suggest that long-lived quantum coherence renders energy transfer in photosynthetic systems robust in the presence of disorder, which is a prerequisite for efficient light harvesting.
Article
Full-text available
Experimentally observed narrowing of spectral holes in a glass under hydrostatic pressure confirms our theoretical finding that the external pressure, in addition to increasing the frequencies of soft localized modes, also reduces their number. This occurs because the majority of soft localized modes in glasses is shown to have a negative cubic anharmonicity. For that reason the applied pressure not only enhances the stiffness of these modes, but also transforms a fraction of them into tunneling two-level systems, whereas the simultaneous reverse transformations of some other two-level systems into soft localized modes are less numerous.
Article
Full-text available
The field of viscous liquid and glassy solid dynamics is reviewed by a process of posing the key questions that need to be answered, and then providing the best answers available to the authors and their advisors at this time. The subject is divided into four parts, three of them dealing with behavior in different domains of temperature with respect to the glass transition temperature, Tg, and a fourth dealing with “short time processes.” The first part tackles the high temperature regime T>Tg, in which the system is ergodic and the evolution of the viscous liquid toward the condition at Tg is in focus. The second part deals with the regime T∼Tg, where the system is nonergodic except for very long annealing times, hence has time-dependent properties (aging and annealing). The third part discusses behavior when the system is completely frozen with respect to the primary relaxation process but in which secondary processes, particularly those responsible for “superionic” conductivity, and dopart mobility in amorphous silicon, remain active. In the fourth part we focus on the behavior of the system at the crossover between the low frequency vibrational components of the molecular motion and its high frequency relaxational components, paying particular attention to very recent developments in the short time dielectric response and the high Q mechanical response. © 2000 American Institute of Physics.
Article
Full-text available
Luminescence imaging has been adapted to facilitate alignment and focusing of multiple non-collinear laser beams in pump–probe experiments. The technique has been validated in an experiment on four-wave mixing in a doped polymer placed into a high-pressure sapphire-anvil cell, itself inside an optical cryostat.
Article
Full-text available
We investigated the spectra of a large number of single tetra-tert-butylterrylene molecules embedded in an amorphous polyisobutylene matrix and analyzed the distributions of their linewidths (widths of single spectral peaks). The measurements were performed at 2, 4.5, and 7 K. This is a temperature region, where the standard two-level system (TLS) model of low-temperature glasses begins to fail. At T=2 K the temporal behavior (history of frequency jumps) of most of the measured spectra and their linewidth distributions were found to be consistent with the TLS model. At higher temperatures the main features of individual spectra (number of spectral peaks, temperature variation of peak widths, ratio of intensities of different peaks, etc.) still appear consistent with the predictions of this model. An increase of temperature leads mainly to the broadening of spectral peaks. A detailed analysis of the linewidth distributions reveals deviations from a standard TLS model at T=4.5 and 7 K. This difference is attributed to the influence of quasi-local low-frequency modes (LFM) of the amorphous matrix. By comparing the measured linewidth distributions with computer simulations, we quantitatively determined the LFM contribution to the single-molecule spectra in our dye-matrix system at different temperatures. (C) 2003 American Institute of Physics.
Article
Full-text available
Numerous experiments have shown that the low-temperature dynamics of a wide variety of disordered solids is qualitatively universal. However, most of these results were obtained with ensemble-averaging techniques which hide the local parameters of the dynamic processes. We used single-molecule (SM) spectroscopy for direct observation of the dynamic processes in disordered solids with different internal structure and chemical composition. The surprising result is that the dynamics of low-molecular-weight glasses and short-chain polymers does not follow, on a microscopic level, the current concept of low-temperature glass dynamics. An extra contribution to the dynamics was detected causing irreproducible jumps and drifts of the SM spectra on timescales between milliseconds and minutes. In most matrices consisting of small molecules and oligomers, the spectral dynamics was so fast that SM spectra could hardly or not at all be recorded and only irregular fluorescence flares were observed. These results provide new mechanistic insight into the behavior of glasses in general: At low temperatures, the local dynamics of disordered solids is not universal but depends on the structure and chemical composition of the material.
Article
Full-text available
The normal modes and the density of states (DOS) of any material provide a basis for understanding its thermal and mechanical transport properties. In perfect crystals, normal modes are plane waves, but they can be complex in disordered systems. We have experimentally measured normal modes and the DOS in a disordered colloidal crystal. The DOS shows Debye-like behavior at low energies and an excess of modes, or Boson peak, at higher energies. The normal modes take the form of plane waves hybridized with localized short wavelength features in the Debye regime but lose both longitudinal and transverse plane-wave character at a common energy near the Boson peak.
Article
Full-text available
The line width dependence of zero-phonon lines and phonon sidebands on temperature, bath dissipation, and electron-phonon coupling is studied for an underdamped Brownian oscillator model with an Ohmic dissipative bath. Factors determining the line widths vary from the zero-phonon lines to the phonon sidebands. The control-parameter space of line broadening has been mapped out, revealing that the line widths of the zero-phonon lines and phonon sidebands are linearly dependent on both the temperature and the Huang-Rhys factor. It is also found that the dependence of the line widths on the bath damping factor is linear for the zero-phonon lines and quadratic for the phonon sidebands.
Article
Full-text available
The interaction of sound waves with tunneling, relaxational, and resonant vibrational states in glasses is investigated within the soft-potential model. The same bilinear coupling constant is assumed for all three different kinds of soft modes. The model reproduces the results of the tunneling model at low temperatures and frequencies. In addition, it explains the fast rise of the relaxational absorption above 1 K and the plateau in the thermal conductivity around 5 K. The universal features of the sound absorption in glasses are described with good accuracy up to 20 K.
Article
Full-text available
Spectra of single tetra-tert-butylterrylene chromophore molecules embedded in an amorphous polyisobutylene matrix as microprobes were recorded. The individual temperature dependences of the spectral linewidths for the same single molecules (SMs) in a broad temperature interval (1.6 < T < 40 K) have been measured. This enabled us to separate the contributions of tunneling two-level systems and quasilocalized low-frequency vibrational modes (LFMs) to the observed linewidths. The analysis of the T dependences yields the values of LFM frequencies and SM-LFM coupling constants for the LFMs in the local environment of a given chromophore. Pronounced distributions of the observed parameters of LFMs were found. This result can be regarded as the first direct experimental proof of the localized nature of LFMs in glasses.
Book
Single Molecule Spectroscopy is one of the hottest topics in today's chemistry. It brings us close to the the most exciting vision generations of chemists have been dreaming of: To observe and examine single molecules! While most of chemistry deals with myriads of molecules, this books presents the latest developments for the detection and investigation of single entities. Written by internationally renowned authors, it is a thorough and comprehensive survey of current methods and their applications. © VCH Verlagsgesellschaft mbH, D-69451 Weinheim (Federal Republic of Germany), 1997. All rights reserved.
New experimental data on the times of phase relaxation in the amorphous polyisobutylene doped with chromophore molecules of tetra-tert-butylterylene are obtained by the incoherent photon echo method at temperatures of 5, 7, 10, and 15 K. A comparative analysis of the results is performed and data on the widths of zero-phonon spectral lines of single molecules in this impurity system are presented.
Article
Using a stochastic model, we examine disorder-induced changes in the absorption line shape of a chromophore embedded in a matrix of noninteracting two-level systems (TLSs) with randomly oriented dipole moments. By systematically controlling the degree of TLS positional disorder, a perfectly crystalline, glassy or a combination of the two environments is obtained. The chromophore is assumed to interact with TLSs via long-range dipole-dipole coupling. At long times and in the absence of disorder, Gaussian line shapes are found, which morph into Lorentzian for a completely disordered environment owing to strong coupling between the chromophore and a TLS in close vicinity.
Article
We performed simulations of the prototypical femtosecond "double-slit" experiment with strong pulsed laser fields for a chromophore in solution. The chromophore is modeled as a system with two electronic levels and a single Franck-Condon active underdamped vibrational mode. All other (intra- and inter-molecular) vibrational modes are accounted for as a thermal bath. The system-bath coupling is treated in a computationally accurate manner using the hierarchy equations of motion approach. The double-slit signal is evaluated numerically exactly without invoking perturbation theory in the matter-field interaction. We show that the strong-pulse double-slit signal consists of a superposition of N-wave-mixing (N = 2, 4, 6[ellipsis (horizontal)]) responses and can be split into population and coherence contributions. The former reveals the dynamics of vibrational wave packets in the ground state and the excited electronic state of the chromophore, while the latter contains information on the dephasing of electronic coherences of the chromophore density matrix. We studied the influence of heat baths with different coupling strengths and memories on the double-slit signal. Our results show that the double-slit experiment performed with strong (nonperturbative) pulses yields substantially more information on the photoinduced dynamics of the chromophore than the weak-pulse experiment, in particular, if the bath-induced dephasings are fast.
Article
The effect of macroscopic parameters of a substance on the optical characteristics of impurity particles is investigated. A generalized control equation is derived for two-level emitters forming an ensemble of optical centers in a transparent dielectric medium. In this equation, the effective values of the acting pump field and the radiative relaxation rate of an optical center are taken into account. The formalism developed here is a completely microscopic approach based on the chain of the Bogoliubov-Born-Green-Kirkwood-Yvon equations for reduced density matrices and correlation operators for material particles and modes of a quantized radiation field. The method used here makes it possible to take into account the effects of individual and collective behavior of emitters, which are associated with the presence of an intermediate medium, consistently without using phenomenological procedures. It is shown that the resultant analytic expression for the effective lifetime of the excited state of an optical center conforms with experimental data.
Article
Optical photon echo measurements on seven doped organic amorphous systems (resorufin (Res) in D- and D6-ethanol (EtOD and EtOD6), tetra-tert-butylterrylene (TBT) in polyisobutylene and in polymethylmetacrylate (PMMA), zinc-tetraphenylporphine (ZnTPP) in EtOD and EtOD6 and in PMMA) have been performed in a wide temperature range (0.35–50 K). This wide temperature interval (more than two decades) permits to clearly separate for the first time the two different contributions to the line width (optical dephasing): low-temperature broadening which follows a power law, and high-temperature broadening which demonstrates an exponential-like behavior. The values of the exponent α obtained at low temperatures in the cases of TBT and ZnTPP in PMMA matrix show some marked discrepancy with theoretical predictions, which can be attributed to a failure of the standard TLS model for a treatment of these systems or to an inaccuracy of the PMMA parameters calculated in literature. A comparison of the high-temperature part of the line broadening of different dye molecules (ZnTPP and Res), embedded in the same matrices (EtOD and EtOD6) shows that the optical dephasing is determined by two contributions – the dynamics of matrix itself and the dynamics which is related to the doped molecules.
Article
The Brownian oscillator model has been successfully employed for modeling solvation dynamics in numerous femtosecond measurements. To a very large extent, this work has been interpreted on the basis of high-temperature limits of the theory. We present an analysis of the low temperature limit, which is particularly important for hole burning, photon echo, and single molecule spectroscopic experiments. Several forms for the bath spectral density are employed to compute zero phonon absorption line shapes. We show that in all cases the zero phonon linewidth vanishes at low temperatures, and that the line becomes asymmetric with a sharp rise at the red edge, as expected qualitatively.
Article
Using a superluminescent diode, we observed femtosecond accumulated photon echoes in 1,1′-diethyl-4,4′-quinodicarbocyanine iodide embedded in polyvinyl acetate. The output of the superluminescent diode directly excited the sample to produce photon echoes. By using a simple experimental system we obtained a temporal resolution as high as 100 fsec. This resolution is determined by the spectral width of the superluminescent diode and the noncollinear excitation beam configuration of the system.
Article
Photon echoes are generated by excitation pulses of chaotic (thermal) light. Echo signals are large and display a modulation with pulse separation due to quantum beating of hyperfine levels.
Article
Spectroscopic techniques exhibit different sensitivities for line broadening processes in amorphous solids. Photon echo and hole-burning spectroscopy yield averages over the chromophore ensemble. At low temperatures, the results can usually be fitted with a combination of a power-law term — corresponding to the relaxations of two-level systems- and of an exponentially activated contribution of pseudo-local phonon modes. Single-molecule spectroscopy (SMS). in contrast, can resolve the behavior of single dye molecules and yields a distribution of power laws as well as of activation energies. We compare photon echo results for tetra-tert-butylterrylene (TBT) in polyisobutylene (PIB) with SMS data for the same system. The latter were used to simulate numerically the data which would be obtained in an ensemble-averaging experiment. The results of the numerical calculation can be well fitted without assuming a distribution of parameters.
Article
The line width distributions for single terrylene molecules in a naphthalene crystal have been measured at temperatures down to 30 mK. The line width distribution becomes narrower with decreasing temperature, and has a full-width at half-maximum of approximately 4.3(13) MHz at 30 mK and an average line width of 42.7(3) MHz.
Article
The theory of optical photon echo and hole burning spectroscopies in low temperature glasses is discussed within the framework of the tunneling two-level system and stochastic sudden jump models. Exact results for the relevant theoretical quantities involve certain averages over the distributions of the two-level system energies and relaxation rates. The standard approximations for these averages are critically examined, for experimentally realistic parameters, via comparison to numerically exact calculations. The general conclusion is that the standard approximations are often used under conditions where they are not expected to be quantitatively accurate.
Article
Fossil amber offers the opportunity to investigate the dynamics of glass-forming materials far below the nominal glass transition temperature. This is important in the context of classical theory, as well as some new theories that challenge the idea of an 'ideal' glass transition. Here we report results from calorimetric and stress relaxation experiments using a 20-million-year-old Dominican amber. By performing the stress relaxation experiments in a step-wise fashion, we measured the relaxation time at each temperature and, above the fictive temperature of this 20-million-year-old glass, this is an upper bound to the equilibrium relaxation time. The results deviate dramatically from the expectation of classical theory and are consistent with some modern ideas, in which the diverging timescale signature of complex fluids disappears below the glass transition temperature.
Article
We investigated the spectra of single tetra-tert-butylterrylene (TBT) molecules in the amorphous matrix poly(isobutylene) (PIB). The distribution of line widths of TBT in PIB was measured and compared to that of TBT in poly(ethylene). The fluorescence intensity autocorrelation function as well as the two-point frequency autocorrelation function were determined for different single TBT molecules. Logarithmic-like decays of the fluorescence autocorrelation function could be reproduced by assuming a 1R fluctuation rate distribution for the two-level-tunnelling systems.
Article
Pressure- and temperature-dependent photon echo results are obtained for pentacene doped polymethyl methacrylate (PMMA). A unique pressure effect is observed in which the optical dephasing rate increases as the pressure is increased from ambient pressure to 4 kbar, above which the optical dephasing rate is pressure independent up to 43 kbar. The present results are also compared with pressure- and temperature-dependent photon echo results for rhodamine 101 in PMMA, in which the optical dephasing rate was completely insensitive to pressure over the range 0 to 30 kbar. A negative correlation is also observed between the optical dephasing rate and the spectral hole burning efficiency. Line broadening due to pressure induced spectral diffusion may be responsible for both the increased dephasing rate and the reduced spectral hole-burning at high pressure. © 1997 American Institute of Physics.
Article
A joint analysis of spectroscopic data obtained at liquid–helium temperatures by three line-narrowing techniques, photon echo (PE), persistent hole burning (HB), and single molecule spectroscopy (SMS), is presented. Two polymer systems, polyisobutylene (PIB) and polymethylmethacrylate (PMMA), doped with tetra-tert-butylterrylene (TBT) were studied via PE and HB techniques and the results are compared with literature data [R. Kettner et al., J. Phys. Chem. 98, 6671 (1994); B. Kozankiewicz et al., J. Chem. Phys. 101, 9377 (1994)] obtained by SMS. Both systems behave quite differently. In TBT/PIB a rather strong influence of a dispersion of the dephasing time T2 was found which plays only a minor role in TBT/PMMA. We have also measured the temperature dependence of T2 for both systems in a broad temperature range (0.4–22 K). Using these data we separated the two different contributions to the optical dephasing — due to an interaction with two-level systems and due to coupling with local low-frequency modes. The data are compared with calculations using a numerical and a semianalytical model in the presence of a large dispersion of the single molecule parameters. Furthermore, we discuss the differences of the linewidths as measured by different experimental methods. © 1998 American Institute of Physics.
Article
We show that a linear specific heat at low temperatures for glass follows naturally from general considerations on the glassy state. From the same considerations we obtain the experimentally observed anomalous low-temperature thermal conductivity, and we predict an ultrasonic attenuation which increases at low temperatures. Possible relationships with the linear specific heat in magnetic impurity systems are pointed out. We suggest experimental study of the relaxation of thermal and other properties.
Article
We present a theoretical framework for analyzing the distribution of optical line shapes in low-temperature glasses, as measured by single-molecule spectroscopy. The theory is based on the standard tunneling two-level system model of low-temperature glasses and on the stochastic sudden jump model for the two-level system dynamics. Within this framework we present an explicit formula for the line shape of a single molecule and employ Monte Carlo simulation techniques to calculate the distribution of single-molecule line shapes. We compare our calculated line-width distributions to those measured experimentally. We find that the two-level system model captures the features of the experimental line-width distributions very well, although there are discrepancies for small line widths. We also discuss the relation of single-molecule line-shape distributions to more traditional “line shapes”, as measured by hole-burning and photon echo spectroscopies. Using the results from our analysis of the single-molecule line-width distributions, with no adjustable parameters we can compare theoretical predictions with experiment for photon echo decay times and hole widths. In general, the agreement is good, providing further evidence that the standard tunneling model in glasses is basically correct. For two systems, however, theory and experiment do not agree quantitatively.
Article
We give a short overview of the selective spectroscopy of organic molecules in solid solutions, starting from Shpol'skii matrices up to single molecule spectroscopy. We discuss the general principles of selectives and different applications of this technique to molecular and solid-state studies. We examine in more detail two new fields to which we have contributed: persistent spectral hole burning in Langmuir-Blodgett (LB) films and the study of individual molecules. We show how persistent spectral hole burning provides information about structure and dynamics of LB films and how energy transfer can be studied in concentrated films. We probed the dynamics of the LB matrix as a function of the depth of the dye in a multilayer. We show that the surface monolayer presents specific dynamics, which we attribute to the long hydrophobic chains. The shift and broadening of a spectral hole under an applied electric field allows us to determine the orientation and direction of the chromophore axes. We then present the new field of single molecule spectroscopy, including the latest results. So far, the observations were made in a molecular crystal and in a polymer. We first consider the general appearance of fluorescence excitation lines and the sudden jumps of their resonance frequencies. The external electric field effects are then discussed. The correlation properties of the light emitted by single molecules give new insight about intramolecular dynamics and spectral diffusion, which would be impossible to obtain in experiments with ensembles of molecules. We demonstrate how single molecule spectroscopy gives truly local information, eliminates averages and populations, and gives access to distributions of molecular parameters in solids.
Article
It is shown that many center excitations are responsible for the universal low energy spectral properties in an arbitrary ensemble of defect centers with an internal degree of freedom. Universality means a quasiuniform distribution of the energy and the logarithm of the tunneling amplitude together with a disappearance of the dependence on the primary defect parameters.
Article
Linewidth distributions for single terrylene molecules in polyethylene have been measured in the temperature range from 30 mK to 1.83 K. The temperature dependence of the average linewidth is best described by a linear relationship over the full temperature range. At 30 mK, the linewidth distribution has a full-width at half-maximum of &18.6 MHz and an average linewidth of 42.8(6) MHz. 2000 Elsevier Science B.V. All rights reserved.
Article
By means of single molecule (SM) spectroscopy we investigated elementary matrix excitations in a disordered solid, i.e., quasi-localized low-frequency vibrational modes (LFMs). To this end we recorded the spectra of single tetra-tert-butylterrylene molecules embedded in an amorphous polyisobutylene matrix in a temperature region, where the LFM contribution to line broadening dominates. The individual parameters of LFM in a polymer glass can be determined from the temperature-dependent linewidths of single molecules. The magnitude of the LFM contribution to SM spectra was obtained by the statistical analysis,of the distribution of linewidths of SMs. Pronounced distributions of LFM frequencies and SM-LFM coupling constants were found. This result can be regarded as the first direct experimental proof of the localized nature of LFMs. (C) 2004 WILEY-VCH Verlag GmbH & Co. KGaA, Weimheim.
Article
Spectra of single tetra-tert-butylterrylene molecules incorporated in purely amorphous polyisobutylene matrix have been measured at 2, 4.5 and 7 K (244, 381 and 187 molecules, correspondingly). This is a temperature region, where the main assumptions of the standard two-level system (TLS) model of low-temperature glasses begin to be not valid. At T = 2 K the main parameters of most of the registered spectra were found to be consistent with the standard TLS model. At T = 4.5 and 7 K some deviations from predictions of this model were observed. The detailed analysis reveals that increasing of temperature leads to additional, in comparison with the predictions of the standard TLS model, line broadening of spectral peaks. This additional line broadening was attributed to the influence of quasi-local low-frequency modes (LFMs) of the amorphous matrix in system under study at T = 4.5 and 7 K. Distributions of single spectral peak widths of the detected spectra (the line width distributions) have been calculated and compared with the line width distributions simulated for the same system. Comparative analysis of experimental and simulated distributions allows to evaluate the value of LFM contribution at 4.5 and 7 K. The single molecule spectroscopy data were compared with the literature values of inverse optical dephasing times, 1/piT(2), as measured for the same system by photon echo (S.J. Zilker et al., J. Chem. Phys. 109 (1998) 6780). (C) 2003 Elsevier B.V. All rights reserved.
Article
According to the modern conception, the dynamics of amorphous solids in the intermediate interval of low temperatures (from a few up to dozens of Kelvins) is determined mainly by quasi-localized low-frequency vibrational modes (LFMs). Up to now, it is known very little about a nature and properties of these excitations in disordered solids. High-selective laser spectroscopy of impurity centres, embedded to transparent disordered matrix as a probe, is a very powerful method for deriving information about LFMs. In the two presented papers the results of our photon echo (PE) and single molecule spectroscopy (SMS) studies of LFMs in organic amorphous solids are discussed. In the first part we review the recent results of our PE-studies. Two cases are analyzed: (a) a coupling of chromophores with a continuous broad spectrum of LFMs, which shape was taken from light scattering experiments, and (b) a coupling of chromophores with continuous LFMs spectra, calculated on the base of the soft potentials model. In the second part we consider the results of our studies of LFMs in a glassy polymer on microscopic level using SMS. (© 2004 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim)
Article
Laser spectroscopic techniques at low temperature, such as fluorescence line-narrowing and hole burning, enable an increase of spectral resolution by a factor of 103–105 compared to conventional spectroscopy at room temperature. With these methods, it is possible to retrieve a fingerprint of the species involved and to measure the rates of dynamic processes that normally remain hidden in the broad absorption bands. A few applications carried out in our laboratory will be discussed: (1) the determination of energy transfer rates in the peripheral LH2 complex of purple bacteria; (2) the study of spectral diffusion and its implications in three types of systems: (a) the B820 and B777 subunits of the LH1 complex of purple bacteria, (b) the photosystem II reaction center (PSII RC) and CP47 antenna complex of green plants, and (c) an organic glass doped with bacteriochlorophyll a; (3) the unraveling of 0-0 transitions and the pathways of photoconversion between a number of conformations of the green fluorescent protein mutant S65T; (4) the measuring of electron-phonon coupling strengths in PSII RC and the red fluorescent protein DsRed; and (5) the determination and comparison of the homogeneous linewidths and optical dephasing in photosynthetic chromoprotein complexes and autofluorescent proteins.
Article
A linear temperature dependence of the specific heat in amorphous solids at very low temperatures is shown to follow from an ionic tunneling model. Moreover, this model predicts both the observed temperature dependence and the magnitude of the thermal conductivity and also explains the anomalous results obtained for the phonon free path by means of stimulated Brillouin scattering.
Article
The concept of “homogeneous” spectral linewidths in doped amorphous solids and some possibilities to separate the linewidth parts related to the optical dephasing and spectral diffusion (SD) are discussed. The results of the model calculations of the photon echo (PE) decay under conditions of a large dispersion of homogeneous linewidths are presented. The deviations of the PE decay from an exponential due to this dispersion are discussed. The experimental data on incoherent PE in the terrylene/polyethylene system are presented and compared with literature single molecule spectroscopy (SMS) data. It is shown that in this case the dephasing time dispersion plays an important role for the linewidth distribution in SMS. A simple method for separation of the dephasing and SD linewidths in the SMS, based on intensity saturation effects, is considered. The facilities of this method using some SMS data are demonstrated.
Article
The study of a new dye-matrix system-quickly frozen ortho-dichlorobenzene weakly doped with terrylene--via single-molecule (SM) spectroscopy is presented. The spectral and photo-physical properties, dynamics, and temperature broadening of SM spectra at low temperatures are discussed. The data reveal a broad inhomogeneous distribution, which indicates a high degree of matrix inhomogeneities, but at the same time, huge fluorescence emission rates and extraordinary SM spectral and photochemical stability with almost complete absence of blinking and bleaching. These unusual properties render the new system a promising candidate for applications in photonics, for example, for delivering single photons on demand.
Article
Measurements of the elastic and inelastic neutron scattering from vitreous silica in the frequency range 0.3 to 4 THz and with scattering vectors in the range 0.2 to 5.3 Å-1 are analyzed in conjunction with heat-capacity measurements on the same samples to provide a microscopic description of low-frequency vibrational modes. The results show that additional harmonic excitations coexist with sound waves below 1 THz, and that these excitations correspond to relative rotation of SiO4 tetrahedra.
Article
The Pb(Zr,Ti)O3 (PZT) disordered solid solution is widely used in piezoelectric applications owing to its excellent electromechanical properties. Six different structural phases have been observed for PZT at ambient pressure, each with different lattice parameters and average electric polarization. It is of significant interest to understand the microscopic origin of the complicated phase diagram and local structure of PZT. Here, using density functional theory calculations, we show that the distortions of the material away from the parent perovskite structure can be predicted from the local arrangement of the Zr and Ti cations. We use the chemical rules obtained from density functional theory to create a phenomenological model to simulate PZT structures. We demonstrate how changes in the Zr/Ti composition give rise to phase transitions in PZT through changes in the populations of various local Pb atom environments.
Article
We present a single molecule fluorescence study that allows one to probe the nanoscale segmental dynamics in amorphous polymer matrices. By recording single molecular lifetime trajectories of embedded fluorophores, peculiar excursions towards longer lifetimes are observed. The asymmetric response is shown to reflect variations in the photonic mode density as a result of the local density fluctuations of the surrounding polymer. We determine the number of polymer segments involved in a local segmental rearrangement volume around the probe. A common decrease of the number of segments with temperature is found for both investigated polymers, poly(styrene) and poly(isobutylmethacrylate). Our novel approach will prove powerful for the understanding of the nanoscale rearrangements in functional polymers.
|
Question
Formatted question description: https://leetcode.ca/all/307.html
307. Range Sum Query - Mutable
Given an integer array nums, find the sum of the elements between indices i and j (i ≤ j), inclusive.
The update(i, val) function modifies nums by updating the element at index i to val.
Example:
Given nums = [1, 3, 5]
sumRange(0, 2) -> 9
update(1, 2)
sumRange(0, 2) -> 8
Note:
The array is only modifiable by the update function.
You may assume the number of calls to update and sumRange function is distributed evenly.
@tag-array
Algorithm
SegmentTree: The line segment tree is a full binary tree with some additional information, such as the sum of the nodes of the subtree, or the maximum value, the minimum value, etc.
Java implementation https://algs4.cs.princeton.edu/99misc/SegmentTree.java.html
Java
|
Now showing items 1-20 of 274
• #### 2-crossing critical graphs with a V8 minor
(University of Waterloo, 2012-01-17)
The crossing number of a graph is the minimum number of pairwise crossings of edges among all planar drawings of the graph. A graph G is k-crossing critical if it has crossing number k and any proper subgraph of G has a ...
• #### 5-Choosability of Planar-plus-two-edge Graphs
(University of Waterloo, 2018-01-02)
We prove that graphs that can be made planar by deleting two edges are 5-choosable. To arrive at this, first we prove an extension of a theorem of Thomassen. Second, we prove an extension of a theorem Postle and Thomas. ...
• #### Action of degenerate Bethe operators on representations of the symmetric group
(University of Waterloo, 2018-05-24)
Degenerate Bethe operators are elements defined by explicit sums in the center of the group algebra of the symmetric group. They are useful on account of their relation to the Gelfand-Zetlin algebra and the Young-Jucys-Murphy ...
• #### Acyclic Colouring of Graphs on Surfaces
(University of Waterloo, 2018-09-04)
An acyclic k-colouring of a graph G is a proper k-colouring of G with no bichromatic cycles. In 1979, Borodin proved that planar graphs are acyclically 5-colourable, an analog of the Four Colour Theorem. Kawarabayashi and ...
• #### ADMM for SDP Relaxation of GP
(University of Waterloo, 2016-08-30)
We consider the problem of partitioning the set of nodes of a graph G into k sets of given sizes in order to minimize the cut obtained after removing the k-th set. This is a variant of the well-known vertex separator ...
• #### Algebraic Analysis of Vertex-Distinguishing Edge-Colorings
(University of Waterloo, 2006)
Vertex-distinguishing edge-colorings (vdec colorings) are a restriction of proper edge-colorings. These special colorings require that the sets of edge colors incident to every vertex be distinct. This is a relatively ...
• #### Algebraic and combinatorial aspects of incidence groups and linear system non-local games arising from graphs
(University of Waterloo, 2019-06-06)
To every linear binary-constraint system (LinBCS) non-local game, there is an associated algebraic object called the solution group. Cleve, Liu, and Slofstra showed that a LinBCS game has a perfect quantum strategy if and ...
• #### Algebraic Aspects of Multi-Particle Quantum Walks
(University of Waterloo, 2012-12-04)
A continuous time quantum walk consists of a particle moving among the vertices of a graph G. Its movement is governed by the structure of the graph. More formally, the adjacency matrix A is the Hamiltonian that determines ...
• #### Algebraic Methods and Monotone Hurwitz Numbers
(University of Waterloo, 2012-09-21)
We develop algebraic methods to solve join-cut equations, which are partial differential equations that arise in the study of permutation factorizations. Using these techniques, we give a detailed study of the recently ...
• #### Algebraic Methods for Reducibility in Nowhere-Zero Flows
(University of Waterloo, 2007-09-25)
We study reducibility for nowhere-zero flows. A reducibility proof typically consists of showing that some induced subgraphs cannot appear in a minimum counter-example to some conjecture. We derive algebraic proofs of ...
• #### Algebraic Tori in Cryptography
(University of Waterloo, 2005)
Communicating bits over a network is expensive. Therefore, cryptosystems that transmit as little data as possible are valuable. This thesis studies several cryptosystems that require significantly less bandwidth than ...
• #### Analyzing Quantum Cryptographic Protocols Using Optimization Techniques
(University of Waterloo, 2012-05-22)
This thesis concerns the analysis of the unconditional security of quantum cryptographic protocols using convex optimization techniques. It is divided into the study of coin-flipping and oblivious transfer. We first examine ...
• #### Applications of Bilinear Maps in Cryptography
(University of Waterloo, 2002)
It was recently discovered by Joux [30] and Sakai, Ohgishi and Kasahara [47] that bilinear maps could be used to construct cryptographic schemes. Since then, bilinear maps have been used in applications as varied as ...
• #### Applications of Semidefinite Programming in Quantum Cryptography
(University of Waterloo, 2007-05-18)
Coin-flipping is the cryptographic task of generating a random coin-flip between two mistrustful parties. Kitaev discovered that the security of quantum coin-flipping protocols can be analyzed using semidefinite programming. ...
• #### Applications of Stochastic Gradient Descent to Nonnegative Matrix Factorization
(University of Waterloo, 2019-07-15)
We consider the application of stochastic gradient descent (SGD) to the nonnegative matrix factorization (NMF) problem and the unconstrained low-rank matrix factorization problem. While the literature on the SGD algorithm ...
• #### Applied Hilbert's Nullstellensatz for Combinatorial Problems
(University of Waterloo, 2016-09-23)
Various feasibility problems in Combinatorial Optimization can be stated using systems of polynomial equations. Determining the existence of a \textit{stable set} of a given size, finding the \textit{chromatic number} of ...
• #### Approximate Private Quantum Channels
(University of Waterloo, 2006)
This thesis includes a survey of the results known for private and approximate private quantum channels. We develop the best known upper bound for ε-randomizing maps, <em>n</em> + 2log(1/ε) + <em>c</em> ...
• #### Approximating Minimum-Size 2-Edge-Connected and 2-Vertex-Connected Spanning Subgraphs
(University of Waterloo, 2017-04-27)
We study the unweighted 2-edge-connected and 2-vertex-connected spanning subgraph problems. A graph is 2-edge-connected if it is connected on removal of an edge, and it is 2-vertex-connected if it is connected on removal ...
• #### Approximation Algorithms for (S,T)-Connectivity Problems
(University of Waterloo, 2010-08-03)
We study a directed network design problem called the $k$-$(S,T)$-connectivity problem; we design and analyze approximation algorithms and give hardness results. For each positive integer $k$, the minimum cost $k$-vertex ...
• #### Approximation Algorithms for Clustering and Facility Location Problems
(University of Waterloo, 2017-04-06)
Facility location problems arise in a wide range of applications such as plant or warehouse location problems, cache placement problems, and network design problems, and have been widely studied in Computer Science and ...
UWSpace
University of Waterloo Library
200 University Avenue West
|
# Thread: Proof of Union of Power Sets
1. ## Proof of Union of Power Sets
Prove that $\displaystyle \cup_{i=0}^{k} P_{i} x P_{k-i} \rightarrow P_{k}(X \cup Y)$ is a bijection (For X and Y disjoint sets) defined by (A,B) --> (AUB). (by P_k(X) I mean the power set of X, that is all of the possible subsets of X of cardinality k)
Deduce that:
$\displaystyle {m+n \choose k} = \sum_{i = 0}^{k} {m \choose i} {n \choose k- i}$
I have what (I think) are the makings of a proof of both parts but I'm really stuck.
If we take A⊆X and B⊆Y where A∪B ⊆X∪Y. Consider C⊆X∪Y which can be broken up into: C=C∩X+C∩Y since X and Y are disjoint. Then we can then map: f:P(X)× P(Y)→P(X∪Y) by f(A,B)= A∪B. Which has inverse f^(-1) (C)=(C∩X,C∩Y). If we look at sets of a particular size: if |A|= i and |B|= k-i then since X and Y are disjoint, and since they are disjoint we know |A∪B|= k We also know that since A is a subset of X and B is a subset of Y and in particular A is a subset of X with cardinality i and B is a subset of Y of cardinality k-i we know that A∈P_i (X) and B∈P_(k-i) (Y).
Inversely if If we take |C|=k then |C∩X+C∩Y|=k and |C∩X|+|C∩Y|=k. We now consider P_i (X)× P_(k-i) (Y)→P_k (X∪Y) as an injection we have to modify the domain for it to be a bijection. Looking at inverse function above with a new cardinality we take f^(-1):P_k (X∪Y)→$\displaystyle \cup_{i=0}^{k}$P_i (X)× P_(k-i) (Y) 〗 by g(C)=(C∩X),(C∩Y) since this has to map to the union of all of the P_i (X)× P_(k-i) (Y)as i varies from 0 to k in order to be well defined (since C∈P_k (X∪Y) and can have cardinalities between 0 and k. And now with a well defined inverse we can amend the domain of the original function such that: ⋃_(i=0)^k〖P_i (X)× P_(k-i) (Y)→P_k (X∪Y)〗 by f(A,B)= A∪B as stated in the beginning which is now not just an injection but also a bijection.
For the second part I have:
We define ((m+n)¦k)=|P_k (Z) | where |Z|=m+n in the same vein the right hand side can be re-written: ∑_(i=0)^k: (m¦i)(n¦(k-i)) =∑_(i=0)^k〖|P_i (X) ||P_(k-i) (Y)|〗 where |X|=m & |Y|=n. We can rewrite Z=X∪Y so now the problem looks like this: |P_k (X∪Y) |=∑_(i=0)^k▒〖|P_i (X) ||P_(k-i) (Y)|〗. P_k (X∪Y)is the union of all the subsets of (X∪Y) of cardinality k. So the cardinality of that is the number of possible subsets of size k. And I have no idea….
(Sorry I'm not very good with trying to do math symbols online. To make it easier here's a link to an image of what I wrote: http://pic100.picturetrail.com/VOL71.../383315256.jpg
It's also attached (Hopefully) I originally wrote it in a .docx file but I can't seem to upload it here anyway...
Thanks for any help. I'm really quite stuck!
2. Originally Posted by jmcq
Prove that $\displaystyle \cup_{i=0}^{k} P_{i} x P_{k-i} \rightarrow P_{k}(X \cup Y)$ is a bijection (For X and Y disjoint sets) defined by (A,B) --> (AUB). (by P_k(X) I mean the power set of X, that is all of the possible subsets of X of cardinality k)
Deduce that:
$\displaystyle {m+n \choose k} = \sum_{i = 0}^{k} {m \choose i} {n \choose k- i}$
I have what (I think) are the makings of a proof of both parts but I'm really stuck.
If we take A⊆X and B⊆Y where A∪B ⊆X∪Y. Consider C⊆X∪Y which can be broken up into: C=C∩X+C∩Y since X and Y are disjoint. Then we can then map: f:P(X)× P(Y)→P(X∪Y) by f(A,B)= A∪B. Which has inverse f^(-1) (C)=(C∩X,C∩Y). If we look at sets of a particular size: if |A|= i and |B|= k-i then since X and Y are disjoint, and since they are disjoint we know |A∪B|= k We also know that since A is a subset of X and B is a subset of Y and in particular A is a subset of X with cardinality i and B is a subset of Y of cardinality k-i we know that A∈P_i (X) and B∈P_(k-i) (Y).
Inversely if If we take |C|=k then |C∩X+C∩Y|=k and |C∩X|+|C∩Y|=k. We now consider P_i (X)× P_(k-i) (Y)→P_k (X∪Y) as an injection we have to modify the domain for it to be a bijection. Looking at inverse function above with a new cardinality we take f^(-1):P_k (X∪Y)→$\displaystyle \cup_{i=0}^{k}$P_i (X)× P_(k-i) (Y) 〗 by g(C)=(C∩X),(C∩Y) since this has to map to the union of all of the P_i (X)× P_(k-i) (Y)as i varies from 0 to k in order to be well defined (since C∈P_k (X∪Y) and can have cardinalities between 0 and k. And now with a well defined inverse we can amend the domain of the original function such that: ⋃_(i=0)^k〖P_i (X)× P_(k-i) (Y)→P_k (X∪Y)〗 by f(A,B)= A∪B as stated in the beginning which is now not just an injection but also a bijection.
For the second part I have:
We define ((m+n)¦k)=|P_k (Z) | where |Z|=m+n in the same vein the right hand side can be re-written: ∑_(i=0)^k: (m¦i)(n¦(k-i)) =∑_(i=0)^k〖|P_i (X) ||P_(k-i) (Y)|〗 where |X|=m & |Y|=n. We can rewrite Z=X∪Y so now the problem looks like this: |P_k (X∪Y) |=∑_(i=0)^k▒〖|P_i (X) ||P_(k-i) (Y)|〗. P_k (X∪Y)is the union of all the subsets of (X∪Y) of cardinality k. So the cardinality of that is the number of possible subsets of size k. And I have no idea….
(Sorry I'm not very good with trying to do math symbols online. To make it easier here's a link to an image of what I wrote: http://pic100.picturetrail.com/VOL71.../383315256.jpg
It's also attached (Hopefully) I originally wrote it in a .docx file but I can't seem to upload it here anyway...
Thanks for any help. I'm really quite stuck!
I'm not really sure what this is saying. Do you mean $\displaystyle f:\bigcup_{n=0}^{k}\left\{\mathcal{P}_n(X)\times\m athcal{P}_{k-n}(Y)\right\}\mapsto\mathcal{P}_k\left(X\cup Y\right)$? Ok, so assume that $\displaystyle f\left(A,B\right)=f\left(C,D\right)$ then $\displaystyle A\cup B=C\cup D$. So, if $\displaystyle x\in A\cup B$ then $\displaystyle x\in A\text{ or }x\in B$. It can't be in both so assume it's in $\displaystyle A$. Then, $\displaystyle x\in C\cup D$ but since $\displaystyle x\in A\subseteq X$ we see that $\displaystyle x\notin D\implies x\in C$ and so $\displaystyle A\subseteq C$. Using the exact same idea we see that $\displaystyle A=C,B=D\implies (A,B)=(C,D)$ and the conclusion follows.
To see that it's surjective let $\displaystyle M\in\mathcal{P}_K\left(X\cup Y\right)$, then $\displaystyle M=A\cup B$ where $\displaystyle A\subseteq X,B\subseteq Y$ and $\displaystyle \text{card }A+\text{card }B=k\implies \text{card B}=k-\text{card }A$. It clearly follows that $\displaystyle A\in\mathcal{P}_{\text{card }A}(X),B\in\mathcal{P}_{k-\text{card }A}(Y)$ and the conclusion follows.
3. Thanks and Yes $\displaystyle f:\bigcup_{i=0}^{k}\left\{\mathcal{P}_i(X)\times\m athcal{P}_{k-i}(Y)\right\}\mapsto\mathcal{P}_k\left(X\cup Y\right)$
Is what I am supposed to be showing is bijective.
So let me get this straight (sorry I'm a little slow) you're showing that it's injective by showing that f(A,b) = f(C,D) implies (A,B) = (C,D) (which is the definition of injective, I get that much. I don't see why this implies that AUB = CUD. Also I'm not sure where the conclusion that x is not an element of D comes from. The biggest problem with my understanding, is I'm not sure how it relates to the "power set" function. We were given the hint:
If $\displaystyle A\subseteq X$ and $\displaystyle B\subseteq Y$ , we can join them to make the subset $\displaystyle A\cup B$ of $\displaystyle X\cup Y$ . On the other hand, given $\displaystyle C\subseteq \left(X\cup Y\right)$ , we can break it up into $\displaystyle C\cap X$ and $\displaystyle C\cap Y$ .
This gives a bijection from [Math]{\mathcal{P} (X)\times\mathcal{P} (Y)}\mapsto\mathcal{P}\left(X\cup Y\right)[/tex], namely, f:(A,B) = $\displaystyle A\cup B$ and
f^-1: (C) = $\displaystyle (C\cap X, C\cap Y )$
Now look at sets of a particular size. If |A| = i and |B| =k-i, then |$\displaystyle (A\cup B)$| =k (A and B are disjoint). Inversely if |C| =k, what can you say about |$\displaystyle C\cap X$| and |$\displaystyle C\cap Y$|? Take it from here...
Which is what I tried to do but seem to be wandering in the dark.
On the second part of the question I know that it is basically the cardinality of the first part and that I have to prove that the Cardinality of the RHS (which is a Union) is the cardinality of the LHS (which is a Sum of products) but I'm also lost there...
4. Originally Posted by jmcq
Thanks and Yes $\displaystyle f:\bigcup_{i=0}^{k}\left\{\mathcal{P}_i(X)\times\m athcal{P}_{k-i}(Y)\right\}\mapsto\mathcal{P}_k\left(X\cup Y\right)$
Is what I am supposed to be showing is bijective.
So let me get this straight (sorry I'm a little slow) you're showing that it's injective by showing that f(A,b) = f(C,D) implies (A,B) = (C,D) (which is the definition of injective, I get that much. I don't see why this implies that AUB = CUD. Also I'm not sure where the conclusion that x is not an element of D comes from. The biggest problem with my understanding, is I'm not sure how it relates to the "power set" function. We were given the hint:
If $\displaystyle A\subseteq X$ and $\displaystyle B\subseteq Y$ , we can join them to make the subset $\displaystyle A\cup B$ of $\displaystyle X\cup Y$ . On the other hand, given $\displaystyle C\subseteq \left(X\cup Y\right)$ , we can break it up into $\displaystyle C\cap X$ and $\displaystyle C\cap Y$ .
This gives a bijection from [Math]{\mathcal{P} (X)\times\mathcal{P} (Y)}\mapsto\mathcal{P}\left(X\cup Y\right)[/tex], namely, fA,B) = $\displaystyle A\cup B$ and
f^-1: (C) = $\displaystyle (C\cap X, C\cap Y )$
Now look at sets of a particular size. If |A| = i and |B| =k-i, then |$\displaystyle (A\cup B)$| =k (A and B are disjoint). Inversely if |C| =k, what can you say about |$\displaystyle C\cap X$| and |$\displaystyle C\cap Y$|? Take it from here...
Which is what I tried to do but seem to be wandering in the dark.
On the second part of the question I know that it is basically the cardinality of the first part and that I have to prove that the Cardinality of the RHS (which is a Union) is the cardinality of the LHS (which is a Sum of products) but I'm also lost there...
I think you're making this much harder than it need be.
You agree that $\displaystyle f(A,B)=f(C,D)\implies (A,B)=(C,D)$ proves injectivity right?
Well, $\displaystyle f(A,B)=A\cup B$ and $\displaystyle f(C,D)=C\cup D$ so we see that $\displaystyle f(A,B)=f(C,D)\implies A\cup B=C\cup D$. But, $\displaystyle X\cap Y=\varnothing$. So, if $\displaystyle x\in A\cup B$ it has to either be in $\displaystyle A$ or $\displaystyle B$ but not both (since $\displaystyle A \subseteq X$ and $\displaystyle B\subseteq Y$). Thus, for all cases we make a modified argument of the following: assume $\displaystyle x\in A$ then $\displaystyle x\in A\cup B=C\cup D$ and so $\displaystyle x\in C\cup D$ but since $\displaystyle D\cap A\subseteq X\cap Y=\varnothing$ we see that $\displaystyle x\notin D$ and so it must be that $\displaystyle x\in C$. Thus, $\displaystyle A\subseteq C$. Doing this four times proves that $\displaystyle A=B,C=D$ and since an ordered tuple is equal if and only if it's coordinates are equal we see that this implies that $\displaystyle (A,B)=(C,D)$. Injectivity follows.
For surjectivity, we let $\displaystyle M\in\mathcal{P}_k(X\cup Y)$. Using the fact that $\displaystyle X\cap Y=\varnothing$ we may conclude that $\displaystyle M=A\cup B$ where $\displaystyle A\subseteq X,B\subseteq Y$ and $\displaystyle A\cap B=\varnothing$. But, we know that $\displaystyle k=\text{card }A\cup B=\text{card }A+\text{card }B+\text{card }A\cap B=\text{card }A+\text{card }B$ (using the fact, once again, that $\displaystyle A$ and $\displaystyle B$ are disjoint). Thus, $\displaystyle \text{card }B=k-\text{card }A$. We see then $\displaystyle A$ is a subset of $\displaystyle X$ of size $\displaystyle \ell$ and $\displaystyle B$ is a subset of size $\displaystyle k-\ell$. And thus $\displaystyle X\in\mathcal{P}_\ell(X)$ and $\displaystyle B\in\mathcal{P}_{k-\ell}(Y)$. So, $\displaystyle A\times B\in\mathcal{P}_\ell(X)\times\mathcal{P}_{k-\ell}(Y)$ and so $\displaystyle A\times B\in\bigcup_{n=0}^{k}\left\{\mathcal{P}_{n}(X)\tim es\mathcal{P}_{k-n}(Y)\right\}$. Therefore, $\displaystyle f(A,B)=A\cup B=M$. Since $\displaystyle M$ was arbitrary we know that $\displaystyle f$ is surjective.
Bijectivity follows by combination.
5. Ah I see how you did that now >.<. Thanks!
Now on to the second part of the question. I guess I'll have another look at it. Thanks again.
|
# Statistical whitening with 1D projections
Geometric intuition for our recent paper on statistical whitening using overcomplete bases.
A very old problem in statistics and signal processing is to statistically whiten a signal, i.e. to linearly transform a signal with covariance $${\bf C}$$ to one with identity covariance. The most common approach to do this to find the principal components of the signal (the eigenvectors of $${\bf C}$$), then scale the signal according to how much the it varies along each principal axis. The downside of this approach is that if the inputs change, then the principal axes need to be recomputed.
In our recent preprint (https://arxiv.org/abs/2301.11955), we introduce a completely different and novel approach to statistical whitening. We do away with finding principal components altogether, and instead develop a framework for whitening with a fixed frame (i.e. an overcomplete basis) using concepts borrowed from frame theory of linear algebra, and tomography, the science of reconstructing signals from projections.
The figure above shows the geometric concepts behind our approach. It’s useful to know that we can geometrically represent densities with covariance matrices $${\bf C}$$ as ellipsoids in $$N$$-dimensional space (top left panel, shaded black). Old work in tomography has shown that ellipsoids can be represented (reconstructed) from a series of 1D projections. The 1D projected densities are plotted as vertical lines in the middle panel, with colours corresponding to the axes along which the original density was projected, and colour saturation denoting probability at a given point. It turns out that if the density is Gaussian, then $$N(N+1)/2$$ projections along unique axes are necessary and sufficient to represent the original density. This number of required projections is the number of independent parameters in a covariance matrix. Importantly, the set of 1D projection axes can exclude the principal components of the original density, and is overcomplete, i.e. linearly dependent, since there are more than $$N$$ projections.
Unlike conventional tomographic approaches, the main goal of our study isn’t to reconstruct the ellipsoid, but rather to use the information derived from its projections to whiten the original signal. The top right plot shows each 1D marginal density’s variance; notice how the variance of the 1D projections is proportional to the length of the corresponding 1D slice, and that for this non-white joint density, the variances are quite variable. Meanwhile, for a whitened signal (bottom row), all projected variances equal 1! This geometric intuition involving 1D projections of Gaussian densities forms the foundation of our framework.
In the paper we show: 1) how to operationalize these geometric ideas into an optimization objective function to learn a statistical whitening transform; and 2) how to derive a recurrent neural network (RNN) that iteratively optimizes this objective, and converges to a steady-state solution where the outputs of the network are statistically white.
Mechanistically, this RNN adaptively whitens a signal by scaling it according to its marginal variance along a fixed, overcomplete set of projection axes, without ever learning the eigenvectors of the inputs. This is an attractive solution because constantly (re)learning eigenvectors with dynamically changing inputs may pose stability issues in a network. From a theoretical neuroscience perspective, our findings are particularly exciting because they generalize well-established ideas of single-neuron adaptive efficient coding via gain control to the level of a neural population.
tags:
updated:
|
## Physics (10th Edition)
We have $$\overline F=\frac{J}{\Delta t}=\frac{|p_f-p_0|}{\Delta t}$$ Region A: $\overline F_A=\frac{|8-4|}{4}=1N$ Region B: $\overline F_B=\frac{|8-8|}{1}=0$ Region C: $\overline F_C=\frac{|0-8|}{2}=4N$ Region D: $\overline F_D=\frac{|0-0|}{2}=0$ So, we find that the magnitude of the force in region C is the largest, and in region B and D it is the smallest. b) is correct.
|
# Does the twin paradox require both twins to be far away from any gravity field?
If one twin is on earth at 1 g and the other twin accelerates away from earth following a great big elliptical counterclockwise trajectory. He travels at .9 g for 20 years(according to earth time) as well as some small amount of left acceleration (.44 g since $$\sqrt{0.9^2+0.44^2}=1$$) to make the first semicircle . He then turns around and decelerates at .9 g in the opposite direction for 20 years(according to earth time) and now experiences some small amount of rightward acceleration (.44 g) to complete the semicircle. He then arrives at earth. Will they both be the same age?
Although the twin paradox has been discussed on this site on 42 pages of questions, there are only 4 pages that address the question of gravitational acceleration compared to motion acceleration. My intuition suggests that gravitational acceleration should have the same effect as motion acceleration so they should be the same age. In reading some of these questions I find contradictory answers. For example, this question
Gravitational Time Dilation vs Acceleration Time Dilation
suggests
a higher acceleration would yield the same results as more gravity
Why does only one twin travel in the twin paradox?
says
"that the earth twin experiences the same RELATIVE acceleration as the space twin (in the opposite direction) this is incorrect."
So which is the correct interpretation?
My twist to this question is the elliptical orbit. The direction of the principle component of acceleration (from the rear of the space ship) remains unchanged yet the lateral acceleration does change. That the lateral acceleration changes direction should not affect time dilation because that is a scalar quantity. Am I overlooking something when I make that statement?
Aside from the focus of the question in the title, there is a slight difference in the geometry. Gravitational acceleration gives a tidal effect whereas motion acceleration does not.This distinction though does not seem to enter into the calculation of time dilation though.
• I assumed you intended the magnitude of the acceleration to be 1 g, so I made some edits – Dale Jun 27 at 11:19
• Your question's title doesn't seem to correspond well with the body of your question. – PM 2Ring Jun 27 at 11:46
• please see edit – aquagremlin Jun 27 at 14:57
• Please do not make edits that invalidate responses already received. – Dale Jun 27 at 16:28
• The edit only clarified the objection. I will post again a CLEARER NEW question. – aquagremlin Jun 28 at 22:44
Will they both be the same age?
No, they will not. The traveling twin will be substantially younger. To calculate the age of each twin simply integrate the metric over their worldline: $$\tau = \int d\tau = \int \sqrt{g_{\mu\nu}\frac{dx^{\mu}}{d\lambda}\frac{dx^{\nu}}{d\lambda}}d\lambda$$ (in units where c=1). This procedure is general, it works for any twin motion and any spacetime, with or without gravity.
For example, this question Gravitational Time Dilation vs Acceleration Time Dilation suggests a higher acceleration would yield the same results as more gravity
Unfortunately, this one is worded a little poorly in a way that appears to be contributing to your confusion. Gravitational acceleration does not cause time dilation. Gravitational time dilation is caused by the gravitational potential. Also, the equivalence principle only applies over small enough regions of spacetime that spacetime curvature can be neglected.
So what would be true is that a clock on the ground on earth would tick slower than a clock raised 1 m off the ground on earth, and a clock on the back of the rocket would tick slower than a clock raised 1 m off the back of the rocket, and the difference in tick rate would be the same for both cases. That is how the equivalence principle would apply in this scenario.
That the lateral acceleration changes direction should not affect time dilation because that is a scalar quantity. Am I overlooking something when I make that statement?
Since the change in lateral acceleration is experimentally detectable it is in itself sufficient to break the symmetry. However, in this case it is rather irrelevant since acceleration does not cause time dilation anyway. But the two twins are in no way symmetric in this version of the problem.
Gravitational acceleration gives a tidal effect whereas motion acceleration does not.
This is correct. In fact, the tidal effect you mention is spacetime curvature. So the equivalence principle only is valid over regions of spacetime small enough that tidal effects are negligible.
• thank you for a direct answer. Though I am still troubled by your statement that " a clock on the back of the rocket would tick slower than a clock raised 1 m off the back of the rocket". Whether the clock is on the floor of the rocket or on a table on the floor of the rocket should make no difference. It is accelerating the same in both instances. – aquagremlin Jun 27 at 15:01
• My conclusion from your statement " Gravitational acceleration does not cause time dilation. Gravitational time dilation is caused by the gravitational potential. " is that gravity may feel the same as acceleration due to motion, but the effect on spacetime is very different. That we feel they are the same is an illusion caused by the limits of our senses. – aquagremlin Jun 27 at 15:04
• @aquagremlin as you said “It is accelerating the same in both instances.” That is indeed the whole point of the example. The acceleration is the same but the (pseudo) gravitational time dilation is different because time dilation is based on potential and not acceleration. – Dale Jun 27 at 15:46
• @aquagremlin you said “gravity may feel the same as acceleration due to motion, but the effect on spacetime is very different”. Locally they are identical. The problem I was addressing was not an incorrect application of the equivalence principle, but a misunderstanding of how gravitational time dilation itself works. It is a common mistake to think that gravitational time dilation depends on the gravitational acceleration when in fact it depends on the gravitational potential – Dale Jun 27 at 16:22
• So if you understand the no-gravity twin paradox 1st, it's the change in velocity times the distance to Earth that causes the Earth clock to jump forward on turn around. That is exactly why it's the potential, and not $g$-force alone that matters. – JEB Jun 27 at 22:56
Gravitational time dilation on the surface of the Earth vs. life at infinity is minuscule and has no impact on the Twin Paradox.
Time dilation due to acceleration does not play a role in the age difference for the traveling twin: all that matters is that he changes direction after traveling at high speed. Including linear acceleration only confuses the matter, and adding a big elliptical loop just adds another dimension.
It's best to understand the idealized twin paradox 1st: that is the one with instant acceleration. If Earth twin says each leg lasts $$T$$, then he ages $$2T$$ for whole trip.
Meanwhile he sees space-twin age $$T/\gamma$$ on each leg of the trip.
The paradox arises because space twin also sees himself age $$T/\gamma$$ on each leg of the trip, but he sees Earth twin age $$T/\gamma^2$$ on each leg of the trip.
Note that:
$$2T/\gamma^2 \ne 2T$$
so that space twin has a discrepancy of $$\Delta T = 2T(1-1/\gamma^2)$$.
Once can look too gravitational time dilation during the acceleration, but the problem is that space twin took $$0$$ seconds to turn around in his reference frame. He also to $$0$$ seconds to turn around Earth's reference frame.
Note that:
$$0 - 0 \ne \Delta T$$
However, at the turn around event, the Earth's clock is both at $$T/\gamma$$ (for the outgoing twin) and at $$T/\gamma + \Delta T$$ (for the ingoing twin) at the same time.
When the space twin switches reference frames, the Earth clock advances by $$\Delta T$$. You can work that a gravitational time dilation back on Earth would correspond to $$\Delta T$$, but that has the unfortunate property of being reversible, if the twin turns around again...and nobody wants to accept time going backwards, so considering it as gravitational time dilation is tricky.
I think it's better to remain in flat space time and be cognizant of the fact that the time on Earth at the turn event is not well defined, and depends on the velocity of the space twin. If he turns around, Earth time jumps forward; however if he decides to accelerate away from Earth even faster, then time on Earth can jump backwards.
• Thank you for answering but I’m afraid your answer has too much information. Especially when you say “ Time dilation due to acceleration does not play a role in the age difference for the traveling twin: all that matters is that he changes direction after traveling at high speed.“ This seems distracting and confusing. So you are saying that being in 1 g at earth gravity is not the same as accelerating at 1 g in a space ship? – aquagremlin Jun 27 at 2:58
• @aquagremlin You seem to have a misconception that acceleration resolves the twin paradox. It doesn’t. “Time dilation due to acceleration does not play a role in the age difference for the traveling twin: all that matters is that he changes direction after traveling at high speed.” - Is exactly the correct resolution of the paradox. If you find this “distracting and confusing”, then you don’t understand how the paradox is resolved and should give it more thought. This answer is correct +1 – safesphere Jun 27 at 4:52
• @aquagremlin I think that your confusion stems from your belief that acceleration is the key to the twin paradox. In the simplest version of the twin paradox, the age difference happens because the traveling twin changes inertial reference frame, but the stay at home twin doesn't change frames. True, acceleration is involved in the process of changing frames, but it's the frame change which is crucial. – PM 2Ring Jun 27 at 12:07
• please see edit – aquagremlin Jun 27 at 14:58
• It may help to consider the space twin to be a photon going say 5 light years and hitting a mirror. The whole journey out, T_earth = 0y. The nanosecond after reflection, T_earth = 10y, and it holds there the whole way back. On arrival, photon's age is 0, Earth is 10. All the 'missing' time occurs at the instant of reflection because "now" on Earth (from 5 light years out) can be any value within a 10 year span, depending only on your speed (see: the Andromeda Paradox). – JEB Jun 27 at 17:08
|
# All Questions
581 views
### How can photons interact with anything?
I read photons do not age because they move at the speed of light. So when a photon interacts with my eyes, aren't they apart in space-time by the difference of the time in the frame of reference of ...
645 views
### Some conceptual questions on the renormalization group
I recently followed courses on QFT and the subject of renormalization was discussed in some detail, largely following Peskin and Schroeder's book. However, the chapter on the renomalization group is ...
725 views
### Direction of X-rays from x-ray tube
Typically anode of X-ray tube is at angle of ~45 degrees. Many images show that emitted X-rays are mostly perpendicular to electron direction. Is that correct? I had an impression that x-rays will ...
660 views
### Three integrals in Peskin's Textbook
Peskin's QFT textbook 1.page 14 $$\int_0 ^\infty \mathrm{d}p\ p \sin px \ e^{-it\sqrt{p^2 +m^2}}$$ when $x^2\gg t^2$, how do I apply the method of stationary phase to get the book's answer. ...
151 views
### How symmetry is related to the degeneracy?
I have several questions about symmetry in quantum mechanics. It is often said that the degeneracy is the dimension of irreducible representation. I can understand that if the Hamiltonian has a ...
1k views
### Simultaneous Charging and Discharging Capacitor
sorry if I sound little noobish. Though I have a fairly good understanding of physics, I sometimes don't understand the electrical aspects. Say there is a capacitor. This capacitor is expected to act ...
50 views
### Practical Resonance Derivation?
To my understanding, practical resonance occurs when the amplitude is at a maximum. Is this correct? Also I have looked all over for a derivation of the formula for angular frequency of practical ...
64 views
### Metric to describe an expanding spacetime from coordinates reflecting the perspective of a local observer
The FLRW metric describes the metric expansion of spacetime from the perspective of comoving coordinates. Given the way this metric is usually formulated, comoving distances stay constant, and the ...
125 views
### Is there any connection between “Lagrangian and Eulerian formalism of fluid” and “Heisenberg and Shrodinger picture”
Is there any connection between "Lagrangian and Eulerian formalism of fluid" and "Heisenberg and Shrodinger picture of Quantum mechanics"? Thanks!
126 views
### Does nonlocal theory violate causality?
Let's talk about two kinds of nonlocal theories. The first one frequently derives from integrating out part of the degrees of freedom to obtain a kind of effective theory. Probably, we get an integral ...
29 views
### Good First year physics lecture notes [duplicate]
My course textbook is Halliday fundamental of physics, this book is huge and since each week, they cover a lot of material in lectures (something about 6 chapters of the textbook), I find it hard to ...
142 views
### Does the transmission axis matter for sending polarized light through polarized glass?
If I have polarized light and I send through only one polarized glass plane, does the transmission axis matter, or will the intensity be halved no matter what.
99 views
### Could a super conductor actually be used to repel gravity? [duplicate]
I've always been interested in anti-gravity and how you could do it. I know that it would make space travel easier because less fuel would be required, but is it possible?
82 views
### Correct formula to express the potential generated by a single layer charge distribution
Assume that the closed surface $S$ encircles a volume $V$, and that a surface charge with density $\sigma$ ("single layer") is distributed over $S$. My question regards the electrostatic potential ...
502 views
### Thought experiment using quantum entanglement in position and its effects
Consider we have two atoms $a$ and $b$. They are entangled with each other in position and momentum, with some wavefuction describing them in position space that is $\Psi(x_a, x_b)$. This ...
599 views
### Biot-Savart Law from linear to volume current distribution
Biot-Savart law for a linear current distribution is: $\displaystyle \vec{B}=\frac{\mu I}{4\pi}\int\frac{\vec{dl}\times \vec{r}}{r^{3}}$. In the book that my professor uses says that if we have ...
43 views
### When is the speed specified for an object experiencing an exponential force?
So this is the question given in my text book: A particle of mass m is at rest at the origin at time $t = 0$. It is subjected to a force $F (t) = F_0e^{–bt}$ in the $x$ direction. Its speed ...
77 views
### Solving the 1-d time-independent Schroedinger's equation with an infinite boundary
In my introductory modern physics class we have examined time-independent solutions to the Schrödinger equation in 1 dimension. We looked at a few cases without finite boundary, e.g., free particles ...
118 views
### Why is $D$ a $2$-form and $E$ a $1$-form?
Usually in electrostatics we start by introducing the vector field $\mathbf{E}$ representing the electric field due to some charge distribution. Later when we study fields in materials we consider the ...
196 views
### What forces are involved in this situation?
A old massless rope of 12 meters attached to a ceiling can sustain a maximum tension force of 1200N before breaking. An 85 kg person climbs up the rope. What is the minimum possible time in which ...
529 views
### If electromagnetic waves have magnetic fields, why beam of flashlight is not disrupted by a Magnet?
Wikipedia article about Electromagnetic Radiation says "As an electromagnetic wave, it has both electric and magnetic field components". And this discussion also confirms Light is EM wave. Since we ...
438 views
### Fermi wavelength of graphene
Does anybody know the Fermi wavelength of graphene? I searched the Internet for a while without success. I found, by inspection with the Fourier transform of an S.T.M. image 3.84e^{-10} \mathrm{m}. ...
410 views
### Sudden Approximation for Beta Decay of Tritium Atom
I am working out this problem right now, and I'm confused by the answers I'm getting. Problem: A tritium nucleus (Z = 1) in a tritium atom undergoes beta decay, i.e., a neutron in the nucleus emits an ...
195 views
### Speed of light that is traveling away from the observer
The second postulate of Special Relativity states: The speed of light in a vacuum is the same for all observers, regardless of their motion relative to the source. Now imagine the observer ...
796 views
### Deriving group velocity formula
A formula for the group velocity of waves is: $u=k*dv/dk + v$ But then, since $k=2π/λ$, this equation can be rewritten as: $u=v-dv/dλ*λ$ But how? My attempt: $k*dv/dk$ = $((2π/λ)*dv/d(2π/λ)$ ...
249 views
### SU(N) Yang-Mills $gg \to ggg$ scattering at tree level
When talking about the spinor-helicity formalism in his new textbook on quantum field theory, Matthew D. Schwartz claims as a highly nontrivial example, it is quite easy to use the Parke-Taylor ...
904 views
### Have I calculated the air flow of this fan correctly?
To calculate air flow capacity of a fan in cubic feet per minute (cfm): multiply the average air speed you measured in feet/minute (fpm) by the area of the fan face in square feet. (Area of circle =þ ...
875 views
### Does the Higgs field really explain mass or just reformulate it? What about charge?
The mass of a particle used to be considered a fundamental and intrinsic property of the particle; on the same level as other properties such as charge, spin, chirality/helicity. Due to the Higgs ...
226 views
### Can relativistic momentum (photons) be used as propulsion for 'free' after the initial generation?
In discussing this question about propelling a spacecraft with photons and their relativistic momentum, the author asked that I restate my comment as another question. If photons can really be used ...
193 views
### Definition of a spinor and applications to GR
I understand the construction of the Clifford algebra $C(r,s)$ and in turn the corresponding $Pin$ and $Spin$ groups. I would like first to clarify that $Spin(r,s)^e$ is the universal covering group ...
297 views
### Explicit supersymmetry breaking fermion mass terms
I hope you can clear up my following confusions. In Girardello's and Grisaru's paper (Nuclear Physics B, 194, 65 (1982)) where they analysed the most general soft explicit supersymmetry breaking ...
80 views
### The angle to shoot moving object
I am obliged to count very simple problem (at least it seemed that it is simple, I hope it isn't to simple for this site). So i got observer who is standing $H$ below the object. The object is fired ...
362 views
### If nothing can travel faster than the speed of light, how can there be parts of the universe we can't see? [duplicate]
Assuming we originated from a single infinitely dense point in space time in the big bang, how can there be parts of the universe that we can't see as the light has not reached us yet, if nothing can ...
41 views
### Differentiate wave speed, don't understand
The speed $v$ of some wave is $ω/k$ and I want to differentiate this with respect to $k$. Apparently this equals: $dv/dk = d(ω/k)/dk-ω/k^2$ But I don't understand why. Isn't this just saying "the ...
425 views
### Is every electromagnetic radiation considered “light”?
Somebody mentioned on Freenode chatroom for physics that All Electromagnetic Radiation are delivered in form of Photons not just light. Is it true? Does that mean if we get a THF electrical ...
161 views
### What is the intepretation of the electromagnetic tensor?
Let $A$ be the four-potential, then we know that we can form the electromagnetic tensor as $F=dA$. This is usually done as a way to have a better writing of Maxwell's equations. So, to simplify the ...
68 views
### Free fall from space [duplicate]
If you leave a ball weighing 10 kg at a height of 500 km above sea level (neglecting air friction). How can calculate how long the ball hits the ground and what will be its speed? I know that: On ...
315 views
### What makes different metals conduct better?
If a metal is a Fermi sea, what makes different metals better conductors? Clearly, one valance electron dominates. All things being equal, I would have assumed that the bigger atoms, with more ...
306 views
### Renormalizing composite operators
Consider the QED Lagrangian, {\cal L} = \bar{\psi} ^{(0)} ( i \partial_\mu \gamma^\mu - m ) \psi ^{(0)} - e A _\mu ^{(0)} \bar{\psi} ^{(0)} \gamma ^\mu \psi ^{(0)} - \frac{1}{4} ...
697 views
### Spin-statistics theorem proof details
Recently I have read one book where there was some incomprehensible proof of the Pauli's spin-statistics theorem. I want to ask about a few details of the proof. First, the author derives ...
49 views
### Atmospheric heating and the reduction in viscosity
The oceans are becoming less viscous as they are heated. I'd imagine a similar effect is likely occurring in the atmosphere as well. What, if any, effect would this reduction of viscosity have on ...
165 views
### Many times speed of light [duplicate]
http://www.huffingtonpost.com/2014/03/24/theory-of-everything-big-bang-discovery_n_5019126.html What does "many times speed of light" really mean in this context? For a layman it's easy to draw wrong ...
109 views
### What is the cheapest way to land a grain of sand on the moon? [closed]
I have a payload that is the size and density of a grain of sand. I want to land it intact on the moon, but I am not particular about location beyond that. What is the least expensive way to get it ...
78 views
### Could the phase factor $i$ be replaced by “matrix representation” totally in quantum mechanics? [duplicate]
It seems that $i$ plays an important role in quantum mechanics (Q.M.). On the other hand, linear algebra plays such an important role in Q.M. too. So would linear algebra, such as a matrix be able to ...
232 views
### Electricity Flow and Ground Wire
Pre-face: My step-father and I were turning the heat down on the water heater. He demonstrated that touching the ground wire doesn't shock you. My understanding is that the ground wire doesn't have ...
81 views
### Electromagnetic force interaction
As far as I know, the electromagnetic force only interacts on particles with electrical charge, but I was told that the electromagnetic force was involved in the following reaction: ...
152 views
I know I have posted this question before some time ago. But no one could help so I decided to put my problem in another background. The Schrödinger equation of a free scalar field is given by ...
590 views
### What's this about kinetic energy increasing with the fifth power of length?
I don't quite understand this quote from Stephen J. Gould's Ever since Darwin, where he talks about the compensating physical characteristics of organisms for their size. Other essential features ...
|
Polynomial Wizard
04-28-2021, 08:25 PM
Post: #1
Wes Loewer Senior Member Posts: 416 Joined: Jan 2014
Polynomial Wizard
(04-26-2021 03:21 PM)cyrille de brébisson Wrote: Here is the change log.
Code:
polynomial wizard improvements
Say, that's much better. It's simple, clean, and efficient. The automatic solving in either direction is perfect. The graph visual is a nice touch.
There are a few easily fixed tweaks to look at, one mathematical and the rest are aesthetic.
For the displayed equation, any complex coefficients need to be surrounded by parentheses or else you get incorrect order of operations. For instance,
coefficients [2+3*i,4,5,6,0] displays:
2+3*i*X^4+...
but it needs to be
(2+3*i)*X^4+...
Also for polynomials with complex coefficients, the graph should probably just be left blank rather than trying to draw the axes without a graph to go with it.
When displaying the equation, using lower-case, superscripts, implied multiplication, and subtraction would make it much easier to read.
x⁶-2x⁵+3x⁴-4x³+5x²-6x+7
vs
X^6+-2*X^5+3*X^4+-4*X^3+5*X^2+-6*X+7
Finally, I don't know if it is possible, but is there a way that the 2d matrix editor could be limited to a single line? I can just picture a student expanding the entry into a matrix. When you do this and enter values, the Poly field produces an appropriate error message, but entering a matrix in the Roots field produces a meaningless polynomial.
04-29-2021, 10:56 AM (This post was last modified: 04-29-2021 10:57 AM by jonmoore.)
Post: #2
jonmoore Member Posts: 224 Joined: Apr 2020
RE: Polynomial Wizard
I think this is an excellent new edition to the Prime. In many ways, it reminds me of the wizards on Casio calculators, and that's no bad thing.
I do have a couple of suggestions for improving the UX:
1.) It would be really useful if one of the softkeys enabled the user to copy the polynomial for use in the graphing apps (after apt transformation, if required).
2.) It would be better if the matrix brackets were implicit (self populated) that way the user would only need to enter the coefficient/root values separated by commas. The UX would be less prone to student error and cleaner.
BTW coefficient is currently spelt incorrectly.
04-29-2021, 04:22 PM
Post: #3
Wes Loewer Senior Member Posts: 416 Joined: Jan 2014
RE: Polynomial Wizard
(04-29-2021 10:56 AM)jonmoore Wrote: 1.) It would be really useful if one of the softkeys enabled the user to copy the polynomial for use in the graphing apps (after apt transformation, if required).
That's a great idea.
« Next Oldest | Next Newest »
User(s) browsing this thread: 1 Guest(s)
|
chapter 3
40 Pages
## Optimal Adaptive Control of Uncertain Linear Network Control Systems
An NCS that uses a real-time communication network in its feedback control loop has been considered as the next-generation control system. However, as observed in Chapter 1, inserting a network into the feedback loop brings many challenging issues due to network imperfections such as network-induced delays and packet losses that occur during exchanging data among devices. Moreover, these network imperfections can degrade the control system performance significantly, causing instability.
|
# What is the standard form of y= (x+2)(4x+10)-x^2-5x?
$3 {x}^{2} + 13 x + 20$
expanding $\left(x + 2\right) \left(4 x + 10\right) = 4 {x}^{2} + 18 x + 20$
and adding $- {x}^{2} - 5 x$
$3 {x}^{2} + 13 x + 20$
|
# Cloning MAC Addresses
## On RHEL-based systems
If your interface is eth0, edit /etc/sysconfig/network-scripts/ifcfg-eth0. If you see a HWADDR param, comment it out and add MACADDR=xx:xx:xx:xx:xx:xx. Here’s a sample:
DEVICE=eth0
BOOTPROTO=dhcp
ONBOOT=yes
# On CentOS 6
BOOTPROTO="dhcp"
Then restart the network service for this to take effect:
service network restart
## On BSD-based systems
If your interface is igb0, add a file called /etc/start_if.igb0 to specify the cloned MAC:
ifconfig igb0 ether 00:12:79:45:89:df
Then reboot your server.
|
Represent threefold whole-number products as volumes, e.g., to represent the associative property of multiplication. The associative property of multiplication states that when multiplying three or more real numbers, the product is always the same regardless of their regrouping. The commutative property of multiplication tells us that when multiplying numbers, the order of multiplication does not matter (3 x 4 = 4 x 3). One of the trickiest concepts to explain to my 3rd graders is the Distributive Property of Multiplication. example: (2 x … This worksheet focuses on the associative property, which states that when three or more numbers are multiplied together, the product is the same no matter how the factors are grouped. In math, the Math Worksheets and interactive content all 100% FREE! Associative Property Distributive Property Let’s take a closer look at how the properties relate to multiplication. For example 4 + 2 = 2 + 4 For example 4 + 2 = 2 + 4 Associative Property: When three or more numbers are added, the sum is the same regardless of the grouping of the addends. Commutative Property Changing the order of factors does not change their product. below to practice various math topics. Free math worksheets for almost every subject. Kindergarten, 1st Grade, 2nd Grade, 3rd Grade, 4th Grade, 5th Grade and more! Free Associative Property of Addition Worksheet. If a and b are integers, then their product commutes. Hence it is most popularly known as the Try the free Mathway calculator and problem solver below to practice various math topics. Do the math inside parentheses first!. Algebra • Associative Property of Multiplication You can use the Associative Property of Multiplication to multiply 3 factors. Search 3.OA.C.7 — Fluently multiply and divide within 100, using strategies such as the relationship between multiplication and division (e.g., knowing that 8 × 5 = 40, one knows 40 ÷ 5 = 8) or properties of operations. Only multiplication has the distributive property, which applies to … In propositional logic , associativity is a valid rule of replacement for expressions in logical proofs . Integer Properties of Multiplication Commutative Property. Your students will be fully engaged with these associative The associative property for multiplication is expressed as (a * b) * c = a * (b * c). Commutative & Associative Properties In this lesson students learn how to use the commutative and associative properties of multiplication to solve problems. Free Finding Associative Property of Multiplication flash cards. Sal uses the distributive property to multiply 87x63. It seems that every year I have a few students that have a hard time “seeing” why 3×4 =(3×3)+(3×1). According to the associative property, the addition or multiplication of a set of numbers is the same regardless of how the numbers are grouped. Commutative property worksheets. Example: a x b = b x a 4 x 20 = 20 x 4 Associative Property Changing the If you change the grouping of factors, the … Commutative Property Practice Associative Property Practice Discussing the Commutative Property Both addition and multiplication are commutative. In English to commute means to travel or to change location. The Commutative, Associative and Distributive Laws (or Properties) The Commutative Laws (or the Commutative Properties) The commutative laws state that the order in which you add or multiply two real numbers does not affect the result. You can use the properties of multiplication to evaluate expressions. Grade 3 » Operations & Algebraic Thinking » Understand properties of multiplication and the relationship between multiplication and division. It provides excellent practice learning the Associative Properties of Addition and familiarizing students with finding missing addends and sums. In mathematics, the associative property is a property of some binary operations, which means that rearranging the parentheses in an expression will not change the result. In this activity, students find the missing numbers and write the sum. All for free! That is, $a \cdot b = b \cdot a.\nonumber$ Associative Property. If a, b, and c are integers, then their » 5 Print this page Apply properties of operations as strategies to multiply and divide. Associative Property of Multiplication The Associative Property of Multiplication states that the product of a set of numbers is the same, no matter how they are grouped. Distributive property is one of the fundamental properties of numbers and basic operations of Mathematics. Identity property of multiplication The identity property of multiplication, also called the multiplication property of one says that a number does not change when that number is multiplied by 1. You can see this property readily with a printable multiplication chart.. In English to associate means to join or to connect. This property refers to the distribution of multiplication over addition or subtraction. Commutative property: When two numbers are added, the sum is the same regardless of the order of the addends. In past articles, we’ve talked about the two arithmetic properties known as the commutative property of addition and the distributive property.. Associative and Distributive Properties of Multiplication and Addition A look at the logic behind the associative and distributive properties of multiplication and addition. CCSS.MATH.CONTENT.5.MD.C.5.B Apply the formulas V = l × w × h and V = b × h for rectangular prisms to find volumes of right rectangular prisms with whole-number edge lengths in the context of solving real world and mathematical problems. Properties of Multiplication: Associative Understanding the properties of multiplication is an important part of 3rd grade math, and also comes into play later in school when kids learn algebra. No signup or app to download. In math, the commutative property of multiplication allows us to change 2 The commutative property states that changing the order of the factors does not change the product. The commutative, associative, and distributive properties tell about operations on real numbers. Using the associative property in math, answers to calculations will be the same no matter how the numbers are grouped together. 5 + 3 = 3 Learn what the associative properties of addition and multiplication are, and how the associative properties can help speed up arithmetic. The Associative Property of Multiplication The Distributive Property The Additive Identity Property The Additive Inverse Property The Multiplicative Identity Property The Multiplicative Inverse Property Coolmath privacy policy If you. Using the associative property of multiplication (where changing the grouping of a product does not change the value of it) we get (1.5 x ) y = 1.5( xy ) Distributive Properties The lesson features a video for students to see how they can use both (Associative property of multiplication.) 7.2 Cross product of two vectors results in another vector quantity as shown below , where and q is the angle between vectors and . Create your own daily (spiral) reviews, test, worksheets and even flash cards. Associative Property of Multiplication Task CardsDo your students need practice with associative property?If you answered yes, then you have come to the right place. This means that the order doesn't change the sum or the product. In this video, I'm going to multiply 87 times 63. The commutative property of multiplication states that you can multiply numbers in any order. But I'm not going to do it just by using some process, just showing you some steps. 2 Examples: If 6 × 4 = 24 is known, then 4 × 6 = 24 is also known. The associative law of multiplication also applies to the dot product. Apr 18, 2017 - Explore Bonnie's board "Distributive Property of multiplication", followed by 137 people on Pinterest. See more ideas about Distributive property, Multiplication, Middle school math. Students learn how to use the associative law of multiplication over addition or subtraction as shown below, and! 2017 - Explore Bonnie 's board Distributive Property, multiplication, Middle school math to will. Same no matter how the properties relate to multiplication to connect states that the., associative, and how the associative properties of multiplication '', followed by 137 people on Pinterest algebra associative. The commutative and associative properties of multiplication finding associative property of multiplication multiplication has the Distributive Property of multiplication division... % Free, 5th Grade and more or the product help speed up arithmetic sum or the product flash.. Learn what the associative properties in this lesson students learn how to use the commutative Property changing the of. In English to associate means to join or to connect, 2nd Grade, 4th Grade, 5th Grade more. Daily ( spiral ) reviews, test, Worksheets and even flash cards to explain to my 3rd graders the... No matter how the associative Property Distributive Property, multiplication, Middle school.! = a * ( b * c = a * ( b * c ) Bonnie 's board Distributive. Commutative & associative properties of multiplication you can use the associative properties of operations as strategies to and. Regardless of the addends missing addends and sums video, I 'm not going to multiply and divide %!... Does not change their product over addition or subtraction factors does not change their product answers! How to use the commutative Property: When two numbers are grouped together learning the associative for. Associative Property of addition Worksheet does not change the grouping of factors, the is... If 6 × 4 = 24 is known, then 4 × =... This video, I 'm going to do it just by using some process, showing... Process, just showing you some steps properties tell about operations on real numbers and divide 5 3... Are added, the … Free Finding associative Property of multiplication commutative Property changing the of... ] associative Property Free Finding associative Property to multiply 87 times 63 multiply and.! Multiplication to solve problems is known, then their product look at how the Property... Q is the same no matter how the associative Property Distributive Property, which to! The product 3rd graders is the angle between vectors and, Worksheets and interactive content all 100 %!. Operations as strategies to multiply and divide does n't change the product Free Finding associative Property the... The dot product commutative and associative properties can help speed up arithmetic as shown,. 7.2 Cross product of two vectors results in another vector quantity as shown below, where q. Or subtraction another vector quantity as shown below, where and q is the same of! Property in math, answers to calculations will be the same regardless of the addends is. Then 4 × 6 = 24 is known, then their product with these associative the associative law of flash. Multiply and divide Free associative Property this video, I 'm going multiply! All 100 % Free also applies to the dot product properties of addition Worksheet product commutes angle between vectors.! To associate means to join or to change location does not finding associative property of multiplication product. Relationship between multiplication and division this means that the order of the factors does not change the grouping factors... Multiplication has the Distributive Property of multiplication you can use the associative of... Property of multiplication does n't change the grouping of factors, the sum test, and. Be the same regardless of the order of the trickiest concepts to explain to my 3rd graders is the regardless... Will be fully engaged with these associative the associative properties can help speed up arithmetic on Pinterest multiply and.! Followed by 137 people on Pinterest Integer properties of multiplication to multiply and divide on real numbers see ideas... \Cdot a.\nonumber \ ] associative Property of multiplication commutative Property it just by using some process, showing... = b \cdot a.\nonumber \ ] associative Property of addition Worksheet Finding missing addends and.. Shown below, where and q is the same no matter how properties. Order does n't change the grouping of factors does not change the sum is the angle between vectors.! Properties relate to multiplication help speed up arithmetic 18, 2017 - Explore Bonnie 's board Distributive. Results in another vector quantity as shown below, where and q is the Distributive Property multiplication!: if 6 × 4 = 24 is also known 3 = 3 Free associative Property addition! Activity, students find the missing numbers and write the sum Grade 3 » operations Algebraic! To do it just by using some process, just showing you some steps * b ) * c a... Not change the product also known own daily ( spiral ) reviews, test, and. × 4 = 24 is also known you can use the commutative Property: When numbers! * c = a * b ) * c ) students with Finding missing addends and sums between! Order of the addends product commutes with Finding missing addends and sums in propositional logic, associativity is valid... Rule of replacement for expressions in logical proofs known, then 4 × 6 = 24 is,. Does n't change the grouping of factors, the sum to calculations will be fully engaged with associative. Also applies to … Integer properties of multiplication and the relationship between and! B \cdot a.\nonumber \ ] associative Property of multiplication you can use the associative of!, the sum is the Distributive Property Let ’ s take a closer look how... Worksheets and even flash cards valid rule of replacement for expressions in logical proofs on Pinterest the... The numbers are added, the sum is the same no matter the... Change their product commutes but I 'm not going to multiply 3 factors ) reviews, test Worksheets. Properties in this video, I 'm not going to multiply 87 times 63 solve problems missing addends and.... = a * ( b * c = a * ( b * c ) Free Finding Property! Not going to multiply 3 factors, associative, and Distributive properties about!, I 'm not going to multiply 87 times 63 does not change the of. See more ideas about Distributive Property, which applies to … Integer properties of multiplication to multiply divide! Finding missing addends and sums associate means to travel or to change location in English to commute to! Logic, associativity is a valid rule of replacement for expressions in logical.... To practice various math topics Property changing the order of the trickiest finding associative property of multiplication to to! Math, answers to calculations will be the same no matter how the properties. + 3 = 3 Free associative Property of multiplication flash cards my 3rd graders finding associative property of multiplication the Distributive of..., 5th Grade and more Distributive Property, multiplication, Middle school math then 4 × =! The dot product Grade 3 » operations & Algebraic Thinking » Understand properties addition. Sum is the Distributive Property, which applies to … Integer properties of multiplication flash cards learning associative. Excellent practice learning the associative law of multiplication to solve problems calculator and problem solver below to practice math. Multiplication over addition or subtraction Distributive Property, which applies to … Integer properties of addition.. Lesson students learn how to use the commutative and associative properties of commutative. When two numbers are grouped together, 1st Grade, 2nd Grade, 2nd Grade, 3rd Grade, Grade. Another vector quantity as shown below, where and q is the Distributive Property,,! Expressions in logical proofs a valid rule of replacement for expressions in logical proofs to connect change location commutative associative! Vector quantity as shown below, where and q is the Distributive,! People on Pinterest factors, the sum or the product, test Worksheets. \Cdot b = b \cdot a.\nonumber \ ] associative Property of multiplication '' followed. Are, and Distributive properties tell about operations on real numbers valid rule of replacement for expressions logical! = b \cdot a.\nonumber \ ] associative Property Distributive Property of multiplication you can use commutative... 3Rd Grade, 4th Grade, 4th Grade, 5th Grade and more Property changing the order of addends. Spiral ) reviews, test, Worksheets and even flash cards missing numbers and write sum!, 5th Grade and more Free Mathway calculator and problem solver below to practice various topics!, then their product commutes multiplication has the Distributive Property of multiplication 63! & associative properties of addition and familiarizing students with Finding missing addends and sums also... The dot product and familiarizing students with Finding missing addends and sums try the Free Mathway calculator problem... Multiply and divide product commutes 137 people on Pinterest test, Worksheets and content... Change the sum or the product b are integers, then 4 × 6 24... C ) followed by 137 people on Pinterest solver below to practice various math topics solver below to various. Reviews, test, Worksheets and interactive content all 100 % Free also known strategies multiply!: When two numbers are grouped together of two vectors results in another vector quantity as below. To my 3rd graders is the angle between vectors and how the relate... The numbers are added, the sum is the Distributive Property, which applies to … Integer properties multiplication. For multiplication is expressed as ( a * b ) * c ) product! Tell about operations on real numbers associative properties in this lesson students learn how to use the law! Properties tell about operations on real numbers * ( b * c = a * b ) c...
|
# Global economy
Topics:
## Global Connections
An economy that has no international linkages is called a closed economy, while one that participates in the global economy is called an open economy. The economic linkages among countries can take many forms, including:
• international trade flows, when goods and services that have been created in one country are sold in another.
• international income flows, when capital incomes (profit, rent, and interest), labor incomes, or transfer payments go from one country to another.
• international transactions in assets, when people trade in financial assets such as foreign bonds or currencies, or make investments in real foreign assets such as businesses or real estate.
• international flows of people, as people migrate from one country to another, either temporarily or permanently.
• international flows of technological knowledge, cultural products, and other intangibles, which can profoundly influence patterns of production and consumption, as well as tastes and life-styles.
• international sharing of common environmental resources, such as deep-sea fisheries and global climate patterns.
• the institutional environment created by international monetary institutions, international trade agreements, international military and aid arrangements, and banks, corporations, and other private entities that operate at an international scale.
Any one of these forms of interaction may be crucially important for understanding the macroeconomic experience of specific countries at specific times. Mexico and Turkey, for example, receive significant flows of income from remittances sent home by citizens working abroad. Biological hazards, such as diseases or insects that threaten human health or agriculture, can travel along with people and goods. Trade in “intellectual property” such as technology patents and music copyrights is currently an issue of hot dispute.
This article will lay out some basics of international trade and international finance, looking briefly at selected international institutions and the question of how global linkages can affect living standards and macroeconomic stabilization.
## Major Policy Tools
Governments can try to control the degree of “openness” or “closedness” of their economies through a variety of policy tools. The most drastic way to “close” an economy is to institute a trade ban. In theory a country could prohibit all international trade, but this hardly ever happens. More often countries make trade in selected goods illegal, or ban trade with particular countries (such as the United States ban on trade with Cuba). Inspections at the country’s borders, or at hubs of transportation such as airports, are used to enforce a ban.
A less drastic measure is a trade quota, which does not eliminate trade, but sets limits on the quantity of a good that can be imported or exported. A quota on imports, by restricting supply, generally raises the price that can be charged for the good within the country. An import quota helps domestic producers by shielding them from lower-price competition. It hurts foreign producers because it limits what they can sell in the domestic market. Foreign producers may, however, get some benefit in the form of extra revenues from the artificially higher price.
A third sort of policy—which has been used very often throughout history—is a tariff (or “duty”). Tariffs are taxes charged on imports or exports. Tariffs, like quotas, may serve to reduce trade since they make internationally traded goods more costly to buy or sell. Like quotas, import tariffs benefit domestic producers while raising prices to consumers. Unlike quotas, however, import tariffs provide monetary benefit to the government. Also unlike quotas, tariffs do not give foreign producers an opportunity to increase prices – in fact, foreign producers may be forced to lower prices in order to remain competitive with domestic producers who do not pay the tariff.
The last important major category of trade-related policies—trade-related subsidies—may be used to either expand or contract trade. Export subsidies, paid to domestic producers when they market their products abroad, are motivated by a desire to increase the flow of exports. Countries can also use subsidies to promote a policy of import substitution, by giving domestic producers extra payments to encourage the production of certain goods for domestic markets, with a goal of reducing the quantity of imports.
Government policies can also influence international capital transactions. Central banks often participate in foreign exchange markets with policy goals in mind (as will be discussed below). Countries sometimes institute capital controls, which are restrictions or taxes on transactions in financial assets such as currency, stocks, or bonds, and/or on foreign ownership of domestic assets such as businesses or land. Restrictions on how much currency a person can take out of a country, for example, are one type of capital control. Such controls are usually instituted to try to prevent sudden, destabilizing swings in the movement of financial capital.
Countries may also regulate the form that foreign business investments can take. Some have required that all business ventures be at least partially owned by domestic investors. Some have required that all traded manufactured goods include at least a given percentage of parts produced by domestically-owned companies. Sometimes such controls are related to a development strategy, while in other cases they simply reflect a desire to avoid excessive foreign control of domestic economic affairs.
Some trade polices are enacted to try to attract foreign investment, for example by giving foreign companies tax breaks and other incentives. A popular form of this is the foreign trade zone, a designated area of the country within which many tax, tariff, and perhaps regulatory policies that usually apply to manufacturing are not enforced. By attracting foreign investment, countries may hope to increase employment or gain access to important technologies. A well known example is the maquiladora policy in Mexico under which manufacturing plants can import components and produce goods for export free of tariffs.
Migration controls are another important aspect of policy. Countries generally impose restrictions on people visiting or moving into their territory, and a few also impose tight regulations on people leaving the country. While beliefs about race, national culture, and population size are often the most obvious influences behind the shaping of these controls, economic concerns also play a role. For example, policies may be affected by concerns about the skill composition of the domestic labor force or the desire to get remittances from out-migrants.
Countries do not necessarily choose sets of policies that consistently lead towards “openness” or consistently towards “closedness.” Often there is a mix—policies are chosen for a wide variety of reasons, and can even run at cross-purposes. Nor do countries choose their policies in a vacuum. Policymakers need to take account of the reactions of foreign governments to their policies. Increasingly they also need to pay attention to whether their policies are in compliance with international agreements.
## Patterns of Trade and Finance
Figure 1: Trade Expressed as a Percentage of Production, World and United States, 1965-2003. The worldwide volume of trade, expressed as a percentage of GDP, has been increasing over the past four decades. While the United States remains less “open” than many economies, trade has become more important here as well. (Source: GDAE)
International trade has grown immensely in recent years. Sometimes the sum of a country’s imports and exports of goods and services, measured as a percent of gross domestic product (GDP), is used as a measure of an economy’s “openness.” Growth in trade according to this measure is shown in Figure 1, for the years 1965-2003. While trade still remains relatively less important in the United States than in other countries, its importance has been increasing here as well.
Why has trade grown over time? One reason is improvements in transportation technology. The costs and time lags involved in shipping products by air, for example, are far reduced now from what they were in 1950. Fruit from Chile and flowers from Colombia are now flown into the U.S. every day—and are still fresh when they arrive. A second reason for increased trade is advances in telecommunications. The infrastructure for communication by phone, fax, and computer has improved dramatically, making it much easier for businesses to communicate with potential overseas suppliers and customers. Apparel companies in New York, for example, can communicate details about styles and sizes to their foreign suppliers almost instantaneously. Better telecommunications even make it possible for some kinds of services such as customer support to be directly imported from, for example, call centers in India. Thirdly, many governments have, over time, lowered their tariffs and other barriers to trade.
Figure 2: Top Purchasers of Goods from the United States and Suppliers of Goods to the United States, 2005. The United States' neighbors, Canada and Mexico have long been among its major trading partners. But China has been an increasingly important source of merchandise imports. (Source: GDAE)
Figure 2 shows the volume of exports that the U.S. sells to the top ten buyers of its goods, and the volume of its imports that come from the top ten countries which sell to it. Historically the near neighbors of the U.S.—Canada and Mexico—have been very important trading partners. Various western European economies, and Japan after it industrialized, have also, not surprisingly, played a strong role. For political reasons the U.S. government has historically encouraged trade with certain strategic allies, including South Korea and Taiwan, explaining their presence among the major trading partners.
The biggest development in recent years has been the emergence of The Peoples Republic of China as a major source of U.S. imports. Until about 1980, U.S. trade with China was negligible. Since then, U.S. importation of Chinese products—especially electronics (including computers and televisions), clothing, toys, and furniture—has boomed. While China buys some U.S. goods (including agricultural products and aircraft), the value of U.S. imports from China far exceeds the value of U.S. exports to China.
The volume of global financial transactions has also exploded in recent years. For example, foreign exchange flows in 2004 had average volumes of about $1.9 trillion-per day. This is a daily figure of nearly$300 per person on Earth. The volume in 2004 was over a third higher than it had been only three years earlier.
Is greater openness to international trade and finance a good thing, or a bad thing? You have probably heard arguments in the media about how globalization can “destroy” jobs by causing industries to move overseas. Many people feel that when capital moves too freely, interests of local communities and the environment can suffer. On the other hand, many commentators—with a number of economists often among them—argue that globalization is a good thing. While a few people may end up having to suffer temporary losses, they say, a more integrated global economy will bring greater overall benefits.
Free trade, by creating efficiencies in production and allocation that countries could not achieve on their own, leads to better living standards. Then we will examine the many other issues that may cause countries to maintain trade barriers, at least in regards to some goods and services.
|
# How to: Reference a Strong-Named Assembly
The process for referencing types or resources in a strong-named assembly is usually transparent. You can make the reference either at compile time (early binding) or at run time.
A compile-time reference occurs when you indicate to the compiler that your assembly explicitly references another assembly. When you use compile-time referencing, the compiler automatically gets the public key of the targeted strong-named assembly and places it in the assembly reference of the assembly being compiled.
Note
A strong-named assembly can only use types from other strong-named assemblies. Otherwise, the security of the strong-named assembly would be compromised.
### To make a compile-time reference to a strong-named assembly
1. At the command prompt, type the following command:
<compiler command> /reference:<assembly name>
In this command, compiler command is the compiler command for the language you are using, and assembly name is the name of the strong-named assembly being referenced. You can also use other compiler options, such as the /t:library option for creating a library assembly.
The following example creates an assembly called myAssembly.dll that references a strong-named assembly called myLibAssembly.dll from a code module called myAssembly.cs.
csc /t:library myAssembly.cs /reference:myLibAssembly.dll
### To make a run-time reference to a strong-named assembly
1. When you make a run-time reference to a strong-named assembly (for example, by using the Assembly.Load or Assembly.GetType method), you must use the display name of the referenced strong-named assembly. The syntax of a display name is as follows:
<assembly name>, <version number>, <culture>, <public key token>
For example:
myDll, Version=1.1.0.0, Culture=en, PublicKeyToken=03689116d3a4ae33
In this example, PublicKeyToken is the hexadecimal form of the public key token. If there is no culture value, use Culture=neutral.
The following code example shows how to use this information with the Assembly.Load method.
Assembly^ myDll =
Assembly myDll =
Dim myDll As Assembly = _
|
# Absolute Sums
Algebra Level 3
If $$a > 0 > b$$, what is the value of
$|a-b| + |a+b| + |b-a| + |b+a|?$
Notation: $$| \cdot |$$ denotes the absolute value function.
×
|
# What is the best Data Mining algorithm for prediction based on a single variable?
I have a variable whose value I would like to predict, and I would like to use only one variable as predictor. For instance, predict traffic density based on weather.
Initially, I thought about using Self-Organizing Maps (SOM), which performs unsupervised clustering + regression. However, since it has an important component of dimensionality reduction, I see it as more appropriated for a large number of variables.
Does it make sense to use it for a single variable as predictor? Maybe there are more adequate techniques for this simple case: I used "Data Mining" instead of "machine learning" in the title of my question, because I think maybe a linear regression could do the job...
Common rule in machine learning is to try simple things first. For predicting continuous variables there's nothing more basic than simple linear regression. "Simple" in the name means that there's only one predictor variable used (+ intercept, of course):
y = b0 + x*b1
where b0 is an intercept and b1 is a slope. For example, you may want to predict lemonade consumption in a park based on temperature:
cons = b0 + temp * b1
Temperature is in well-defined continuous variable. But if we talk about something more abstract like "weather", then it's harder to understand how we measure and encode it. It's ok if we say that the weather takes values {terrible, bad, normal, good, excellent} and assign values numbers from -2 to +2 (implying that "excellent" weather is twice as good as "good"). But what if the weather is given by words {shiny, rainy, cool, ...}? We can't give an order to these variables. We call such variables categorical. Since there's no natural order between different categories, we can't encode them as a single numerical variable (and linear regression expects numbers only), but we can use so-called dummy encoding: instead of a single variable weather we use 3 variables - [weather_shiny, weather_rainy, weather_cool], only one of which can take value 1, and others should take value 0. In fact, we will have to drop one variable because of collinearity. So model for predicting traffic from weather may look like this:
traffic = b0 + weather_shiny * b1 + weather_rainy * b2 # weather_cool dropped
where either b1 or b2 is 1, or both are 0.
Note that you can also encounter non-linear dependency between predictor and predicted variables (you can easily check it by plotting (x,y) pairs). Simplest way to deal with it without refusing linear model is to use polynomial features - simply add polynomials of your feature as new features. E.g. for temperature example (for dummy variables it doesn't make sense, cause 1^n and 0^n are still 1 and 0 for any n):
traffic = b0 + temp * b1 + temp^2 * b2 [+ temp^3 * b3 + ...]
• Hi ffriend, thanks for your detailed answer. I did not go into much details into the independent variable on purpose, because the focus of my question is the fact that I am using a single variable to predict another and I wanted to know the most suitable data mining techniques for this case. You confirmed my feeling that simple statistics (or not so simple) could be appropriated in this case, but the best is to try out things. Regarding the "weather" variable I was actually planning to use one metric and continuous variable such as visibility or rain, just to keep things simple. – doublebyte Oct 15 '14 at 8:42
• In fact, using 2 or more variables for linear regression is almost as simple as using only one, but can lead to a much more accurate predictions. Introduction to Statistical Learning (free PDF is available) is a great introduction to linear regression, its use cases and estimation metrics. If you are using specialized software like R, modelling different dependencies is deadly simple, boiling down to only few lines of code. – ffriend Oct 15 '14 at 9:05
I am more of an expert on data ETL and combining/aggregating than on the forumulas themselves. I work frequently with weather data. I like to give some suggestions on using weather data in analysis.
1. Two types of data are reported in US/Canada:
A. Measurements
B. Weather Type
As far as weather type (sunny, rainy, severe thunderstorm) they are either going to already be reflected in measurements (e.g., sunny, rainy) and are redundant or they are inclement weather conditions and are not necessarily reflected in the measurements.
For inclement weather types, I would have separate formulae.
For measurements, there are 7 standard daily measurements for Weather Station reporting in North America.
Temp Min/Max
Precipitation
Average Wind Speed
Average Cloudiness (percentage)
Total sunlight (minutes)
Snowfall
Snow Depth
Not all stations report all 7 daily measurements. Some report only Temp and Precipitation. So you may want to have one formula for Temp/Precipitation and an expanded formulae when all seven measurements are available.
The two links below are NOAA/NWS weather terms used in their datasets:
This document is the vocabulary for the annual summaries:
http://www1.ncdc.noaa.gov/pub/data/cdo/documentation/ANNUAL_documentation.pdf
This document is the vocabulary for the daily summaries
http://www1.ncdc.noaa.gov/pub/data/cdo/documentation/GHCND_documentation.pdf
• Thanks for your comments regarding the meaning of the weather variables; although that was not my original question, it was really helpfull for my problem. – doublebyte Oct 15 '14 at 13:52
|
### The Lady or the Lions
The King showed the Princess a map of the maze and the Princess was allowed to decide which room she would wait in. She was not allowed to send a copy to her lover who would have to guess which path to follow. Which room should she wait in to give her lover the greatest chance of finding her?
Four fair dice are marked differently on their six faces. Choose first ANY one of them. I can always choose another that will give me a better chance of winning. Investigate.
### Nines and Tens
Explain why it is that when you throw two dice you are more likely to get a score of 9 than of 10. What about the case of 3 dice? Is a score of 9 more likely then a score of 10 with 3 dice?
# In a Box
### Why do this problem?
This problem offers opportunties to consider different methods of listing systematically. It can be used to introduce or revisit sample space diagrams, and with some students, tree diagrams.
### Possible approach
Play the game a few times for real.
"Is this a fair game? How can we be sure?"
Class work in pairs trying to decide and to develop an argument to justify their conjectures.
After about ten minutes, stop to discuss the merits of different arguments and representations. This may be an appropriate point to highlight the benefits of different systematic methods for listing all possibilities, using sample space diagrams and, if pupils have met them before, tree diagrams.
Finding a fair game can become a class activity:
Students help to create a class list of all distinct starting points for the game (for example, four ribbons can be either $1R$ and $3B$ or $2R$ and $2B$). These are written on the board for $3, 4, 5, \ldots$ribbons.
Distribute the task of checking which combinations are fair and record them on the board as pairs of pupils decide.
There are not many solutions that work and if pupils are to notice a pattern amongst the combinations that are fair they may need to consider up to a total of $16$ ribbons.
Spend some time conjecturing about more than $16$ ribbons and test.
### Key questions
How can you decide if the game is fair?
How many goes do you think we need to be confident of the likelihood of winning?
Are there efficient systems for recording the different possible combinations?
|
Subsets and Splits
No community queries yet
The top public SQL queries from the community will appear here once available.