# Should I use Stateful or Stateless LSTM
I am trying to use an LSTM in Keras and I am not sure whether I should use a stateful or stateless LSTM. I have read many resources online, but they do not seem to apply to my case.
I have a long predictor series X=[X1,X2,...,Xn] and a long response series y=[0,0,...,1,1,0,...,0]. They have the same length, and the response can only take the value 1 or 0. My plan is to subsample the long predictor series and use short series (length 4) to predict the response for the next 3 time points. So my training data look like this:
```
[X1,X2,X3,X4], [y5,y6,y7]
[X2,X3,X4,X5], [y6,y7,y8]
...
```
If I use all these short series (samples), I think I should choose stateful. However, because there are a lot more 0s in y compared to 1s, I will keep all the samples that have a 1 in the short response series (e.g., keep the sample [y5=0, y6=1, y7=0]) but randomly drop a lot of the other samples just to balance the data.
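Roughly, this is how I construct and balance the samples (an illustrative sketch; it assumes X and y are NumPy arrays, and all names are made up):

```python
import numpy as np

def make_samples(X, y, in_len=4, out_len=3):
    """Slide a window over the series: predictors of length 4,
    responses for the next 3 time points."""
    samples = []
    for i in range(len(X) - in_len - out_len + 1):
        x_win = X[i : i + in_len]
        y_win = y[i + in_len : i + in_len + out_len]
        samples.append((x_win, y_win))
    return samples

def balance(samples, keep_frac=0.1, seed=0):
    """Keep every sample whose response window contains a 1;
    keep only a random fraction of the all-zero samples."""
    rng = np.random.default_rng(seed)
    return [(x, y) for (x, y) in samples
            if y.any() or rng.random() < keep_frac]
```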
I am not sure whether I should use stateful here, since some short series may be very far away from each other.
---
# Unsupervised Medical Image Denoising Using UNIT
This example shows how to generate high-quality computed tomography (CT) images from noisy low-dose CT images using a UNIT neural network.
This example uses an unsupervised image-to-image translation (UNIT) neural network trained on full images from a limited sample of data. For a similar approach using a CycleGAN neural network trained on patches of image data from a large sample of data, see Unsupervised Medical Image Denoising Using CycleGAN.
X-ray CT is a popular imaging modality used in clinical and industrial applications because it produces high-quality images and offers superior diagnostic capabilities. To protect the safety of patients, clinicians recommend a low radiation dose. However, a low radiation dose results in a lower signal-to-noise ratio (SNR) in the images, and therefore reduces the diagnostic accuracy.
Deep learning techniques offer solutions to improve the image quality for low-dose CT (LDCT) images. Using a generative adversarial network (GAN) for image-to-image translation, you can convert noisy LDCT images to images of the same quality as regular-dose CT images. For this application, the source domain consists of LDCT images and the target domain consists of regular-dose images.
CT image denoising requires a GAN that performs unsupervised training because clinicians do not typically acquire matching pairs of low-dose and regular-dose CT images of the same patient in the same session. This example uses a UNIT architecture that supports unsupervised training. For more information, see Get Started with GANs for Image-to-Image Translation.
This example uses data from the Low Dose CT Grand Challenge [2, 3, 4]. The data includes pairs of regular-dose CT images and simulated low-dose CT images for 99 head scans (labeled N for neuro), 100 chest scans (labeled C for chest), and 100 abdomen scans (labeled L for liver).
Set `dataDir` as the desired location of the data set. The data for this example requires 52 GB of memory.
`dataDir = fullfile(tempdir,"LDCT","LDCT-and-Projection-data");`
To download the data, go to the Cancer Imaging Archive website. This example uses only two patient scans from the chest. Download the chest files "C081" and "C120" from the "Images (DICOM, 952 GB)" data set using the NBIA Data Retriever. Specify the `dataDir` variable as the location of the downloaded data. When the download is successful, `dataDir` contains two subfolders named "C081" and "C120".
## Create Datastores for Training, Validation, and Testing
Specify the patient scans that are the source of each data set.
```
scanDirTrain = fullfile(dataDir,"C120","08-30-2018-97899");
scanDirTest = fullfile(dataDir,"C081","08-29-2018-10762");
```
Create `imageDatastore` objects that manage the low-dose and high-dose CT images for training and testing. The data set consists of DICOM images, so use the custom `ReadFcn` name-value argument in `imageDatastore` to enable reading the data.
```
exts = {'.dcm'};
readFcn = @(x)rescale(dicomread(x));
imdsLDTrain = imageDatastore(fullfile(scanDirTrain,"1.000000-Low Dose Images-71581"), ...
    FileExtensions=exts,ReadFcn=readFcn);
imdsHDTrain = imageDatastore(fullfile(scanDirTrain,"1.000000-Full dose images-34601"), ...
    FileExtensions=exts,ReadFcn=readFcn);
imdsLDTest = imageDatastore(fullfile(scanDirTest,"1.000000-Low Dose Images-32837"), ...
    FileExtensions=exts,ReadFcn=readFcn);
imdsHDTest = imageDatastore(fullfile(scanDirTest,"1.000000-Full dose images-95670"), ...
    FileExtensions=exts,ReadFcn=readFcn);
```
Preview a training image from the low-dose and high-dose CT training data sets.
```
lowDose = preview(imdsLDTrain);
highDose = preview(imdsHDTrain);
montage({lowDose,highDose})
```
## Preprocess and Augment Training Data
Specify the image input size for the source and target images.
`inputSize = [256,256,1];`
Augment and preprocess the training data by using the `transform` function with custom preprocessing operations specified by the `augmentDataForLD2HDCT` helper function. This function is attached to the example as a supporting file.
The `augmentDataForLD2HDCT` function performs these operations:
1. Resize the image to the specified input size using bicubic interpolation.
2. Randomly flip the image in the horizontal direction.
3. Scale the image to the range [-1, 1]. This range matches the range of the final `tanhLayer` (Deep Learning Toolbox) used in the generator.
```
imdsLDTrain = transform(imdsLDTrain, @(x)augmentDataForLD2HDCT(x,inputSize));
imdsHDTrain = transform(imdsHDTrain, @(x)augmentDataForLD2HDCT(x,inputSize));
```
The LDCT data set provides pairs of low-dose and high-dose CT images. However, the UNIT architecture requires unpaired data for unsupervised learning. This example simulates unpaired training and validation data by shuffling the data in each iteration of the training loop.
## Batch Training and Validation Data During Training
This example uses a custom training loop. The `minibatchqueue` (Deep Learning Toolbox) object is useful for managing the mini-batching of observations in custom training loops. The `minibatchqueue` object also casts data to a `dlarray` object that enables automatic differentiation in deep learning applications.
Specify the mini-batch data extraction format as `SSCB` (spatial, spatial, channel, batch). Set the `DispatchInBackground` name-value argument as the boolean returned by `canUseGPU`. If a supported GPU is available for computation, then the `minibatchqueue` object preprocesses mini-batches in the background in a parallel pool during training.
```
miniBatchSize = 1;
mbqLDTrain = minibatchqueue(imdsLDTrain,MiniBatchSize=miniBatchSize, ...
    MiniBatchFormat="SSCB",DispatchInBackground=canUseGPU);
mbqHDTrain = minibatchqueue(imdsHDTrain,MiniBatchSize=miniBatchSize, ...
    MiniBatchFormat="SSCB",DispatchInBackground=canUseGPU);
```
## Create Generator Network
The UNIT consists of one generator and two discriminators. The generator performs image-to-image translation from low dose to high dose. The discriminators are PatchGAN networks that return the patch-wise probability that the input data is real or generated. One discriminator distinguishes between the real and generated low-dose images and the other discriminator distinguishes between real and generated high-dose images.
Create a UNIT generator network using the `unitGenerator` function. The source and target encoder sections of the generator each consist of two downsampling blocks and five residual blocks, and the encoder sections share two of the five residual blocks. Likewise, the source and target decoder sections each consist of five residual blocks and two upsampling blocks, and the decoder sections share two of the five residual blocks.
`gen = unitGenerator(inputSize);`
Visualize the generator network.
`analyzeNetwork(gen)`
## Create Discriminator Networks
There are two discriminator networks, one for each of the image domains (low-dose CT and high-dose CT). Create the discriminators for the source and target domains using the `patchGANDiscriminator` function.
```
discLD = patchGANDiscriminator(inputSize,NumDownsamplingBlocks=4,FilterSize=3, ...
    ConvolutionWeightsInitializer="narrow-normal",NormalizationLayer="none");
discHD = patchGANDiscriminator(inputSize,NumDownsamplingBlocks=4,FilterSize=3, ...
    ConvolutionWeightsInitializer="narrow-normal",NormalizationLayer="none");
```
Visualize the discriminator networks.
```
analyzeNetwork(discLD);
analyzeNetwork(discHD);
```
## Define Model Gradients and Loss Functions
The `modelGradientDisc` and `modelGradientGen` helper functions calculate the gradients and losses for the discriminators and generator, respectively. These functions are defined in the Supporting Functions section of this example.
The objective of each discriminator is to correctly distinguish between real images (1) and translated images (0) for images in its domain. Each discriminator has a single loss function.
The objective of the generator is to generate translated images that the discriminators classify as real. The generator loss is a weighted sum of five types of losses: self-reconstruction loss, cycle consistency loss, hidden KL loss, cycle hidden KL loss, and adversarial loss.
Specify the weight factors for the various losses.
```
lossWeights.selfReconLossWeight = 10;
lossWeights.hiddenKLLossWeight = 0.01;
lossWeights.cycleConsisLossWeight = 10;
lossWeights.cycleHiddenKLLossWeight = 0.01;
lossWeights.advLossWeight = 1;
lossWeights.discLossWeight = 0.5;
```
## Specify Training Options
Specify the options for Adam optimization. Train the network for 26 epochs.
`numEpochs = 26;`
Specify identical options for the generator and discriminator networks.
• Specify a learning rate of 0.0001.
• Initialize the trailing average gradient and trailing average squared gradient values with `[]`.
• Use a gradient decay factor of 0.5 and a squared gradient decay factor of 0.999.
• Use weight decay regularization with a factor of 0.0001.
• Use a mini-batch size of 1 for training.
```
learnRate = 0.0001;
gradDecay = 0.5;
sqGradDecay = 0.999;
weightDecay = 0.0001;

genAvgGradient = [];
genAvgGradientSq = [];
discLDAvgGradient = [];
discLDAvgGradientSq = [];
discHDAvgGradient = [];
discHDAvgGradientSq = [];
```
By default, the example downloads a pretrained version of the UNIT generator for the NIH-AAPM-Mayo Clinic Low-Dose CT data set by using the helper function `downloadTrainedLD2HDCTUNITNet`. The helper function is attached to the example as a supporting file. The pretrained network enables you to run the entire example without waiting for training to complete.
To train the network, set the `doTraining` variable in the following code to `true`. Train the model in a custom training loop. For each iteration:
• Read the data for the current mini-batch using the `next` (Deep Learning Toolbox) function.
• Evaluate the discriminator model gradients using the `dlfeval` (Deep Learning Toolbox) function and the `modelGradientDisc` helper function.
• Update the parameters of the discriminator networks using the `adamupdate` (Deep Learning Toolbox) function.
• Evaluate the generator model gradients using the `dlfeval` function and the `modelGradientGen` helper function.
• Update the parameters of the generator network using the `adamupdate` function.
• Display the input and translated images for both the source and target domains after each epoch.
Train on a GPU if one is available. Using a GPU requires Parallel Computing Toolbox™ and a CUDA® enabled NVIDIA® GPU. For more information, see GPU Support by Release (Parallel Computing Toolbox). Training takes about 58 hours on an NVIDIA Titan RTX.
```
doTraining = false;
if doTraining
    % Create a figure to show the results
    figure("Units","Normalized");
    for iPlot = 1:4
        ax(iPlot) = subplot(2,2,iPlot);
    end
    iteration = 0;

    % Loop over epochs
    for epoch = 1:numEpochs
        % Shuffle data every epoch
        reset(mbqLDTrain);
        shuffle(mbqLDTrain);
        reset(mbqHDTrain);
        shuffle(mbqHDTrain);

        % Run the loop until all the images in the mini-batch queue
        % mbqLDTrain are processed
        while hasdata(mbqLDTrain)
            iteration = iteration + 1;

            % Read data from the low-dose domain
            imLowDose = next(mbqLDTrain);

            % Read data from the high-dose domain
            if hasdata(mbqHDTrain) == 0
                reset(mbqHDTrain);
                shuffle(mbqHDTrain);
            end
            imHighDose = next(mbqHDTrain);

            % Calculate discriminator gradients and losses
            [discLDGrads,discHDGrads,discLDLoss,discHDLoss] = dlfeval(@modelGradientDisc, ...
                gen,discLD,discHD,imLowDose,imHighDose,lossWeights.discLossWeight);

            % Apply weight decay regularization on low-dose discriminator gradients
            discLDGrads = dlupdate(@(g,w) g+weightDecay*w,discLDGrads,discLD.Learnables);

            % Update parameters of low-dose discriminator
            [discLD,discLDAvgGradient,discLDAvgGradientSq] = adamupdate(discLD,discLDGrads, ...
                discLDAvgGradient,discLDAvgGradientSq,iteration,learnRate,gradDecay,sqGradDecay);

            % Apply weight decay regularization on high-dose discriminator gradients
            discHDGrads = dlupdate(@(g,w) g+weightDecay*w,discHDGrads,discHD.Learnables);

            % Update parameters of high-dose discriminator
            [discHD,discHDAvgGradient,discHDAvgGradientSq] = adamupdate(discHD,discHDGrads, ...
                discHDAvgGradient,discHDAvgGradientSq,iteration,learnRate,gradDecay,sqGradDecay);

            % Calculate generator gradient and loss
            [genGrad,genLoss,images] = dlfeval(@modelGradientGen, ...
                gen,discLD,discHD,imLowDose,imHighDose,lossWeights);

            % Apply weight decay regularization on generator gradients
            genGrad = dlupdate(@(g,w) g+weightDecay*w,genGrad,gen.Learnables);

            % Update parameters of generator
            [gen,genAvgGradient,genAvgGradientSq] = adamupdate(gen,genGrad,genAvgGradient, ...
                genAvgGradientSq,iteration,learnRate,gradDecay,sqGradDecay);
        end

        % Display the results
        updateTrainingPlotLowDoseToHighDose(ax,images{:});
    end

    % Save the trained network
    modelDateTime = string(datetime("now",Format="yyyy-MM-dd-HH-mm-ss"));
    save(strcat("trainedLowDoseHighDoseUNITGeneratorNet-",modelDateTime, ...
        "-Epoch-",num2str(numEpochs),".mat"),'gen');
else
    net_url = "https://ssd.mathworks.com/supportfiles/vision/data/trainedLowDoseHighDoseUNITGeneratorNet.zip";
    downloadTrainedLD2HDCTUNITNet(net_url,dataDir);
    load(fullfile(dataDir,"trainedLowDoseHighDoseUNITGeneratorNet.mat"));
end
```
## Generate High-Dose Image Using Trained Network
Read and display an image from the datastore of low-dose test images.
```
idxToTest = 1;
imLowDoseTest = readimage(imdsLDTest,idxToTest);
figure
imshow(imLowDoseTest)
```
Convert the image to data type `single`. Rescale the image data to the range [-1, 1] as expected by the final layer of the generator network.
```
imLowDoseTest = im2single(imLowDoseTest);
imLowDoseTestRescaled = (imLowDoseTest-0.5)/0.5;
```
Create a `dlarray` object that inputs data to the generator. If a supported GPU is available for computation, then perform inference on a GPU by converting the data to a `gpuArray` object.
```
dlLowDoseImage = dlarray(imLowDoseTestRescaled,'SSCB');
if canUseGPU
    dlLowDoseImage = gpuArray(dlLowDoseImage);
end
```
Translate the input low-dose image to the high-dose domain using the `unitPredict` function. The generated image has pixel values in the range [-1, 1]. For display, rescale the activations to the range [0, 1].
```
dlImLowDoseToHighDose = unitPredict(gen,dlLowDoseImage);
imHighDoseGenerated = extractdata(gather(dlImLowDoseToHighDose));
imHighDoseGenerated = rescale(imHighDoseGenerated);
imshow(imHighDoseGenerated)
```
Read and display the ground truth high-dose image. The high-dose and low-dose test datastores are not shuffled, so the ground truth high-dose image corresponds directly to the low-dose test image.
```
imHighDoseGroundTruth = readimage(imdsHDTest,idxToTest);
imshow(imHighDoseGroundTruth)
```
Display the input low-dose CT, the generated high-dose version, and the ground truth high-dose image in a montage. Although the network is trained on data from a single patient scan, the network generalizes well to test images from other patient scans.
```
imshow([imLowDoseTest imHighDoseGenerated imHighDoseGroundTruth])
title(['Low-dose Test Image ',num2str(idxToTest),' with Generated High-dose Image and Ground Truth High-dose Image'])
```
## Supporting Functions
The `modelGradientGen` helper function calculates the gradients and loss for the generator.
```
function [genGrad,genLoss,images] = modelGradientGen(gen,discLD,discHD,imLD,imHD,lossWeights)
    [imLD2LD,imHD2LD,imLD2HD,imHD2HD] = forward(gen,imLD,imHD);
    hidden = forward(gen,imLD,imHD,Outputs="encoderSharedBlock");

    [~,imLD2HD2LD,imHD2LD2HD,~] = forward(gen,imHD2LD,imLD2HD);
    cycle_hidden = forward(gen,imHD2LD,imLD2HD,Outputs="encoderSharedBlock");

    % Calculate different losses
    selfReconLoss = computeReconLoss(imLD,imLD2LD) + computeReconLoss(imHD,imHD2HD);
    hiddenKLLoss = computeKLLoss(hidden);
    cycleReconLoss = computeReconLoss(imLD,imLD2HD2LD) + computeReconLoss(imHD,imHD2LD2HD);
    cycleHiddenKLLoss = computeKLLoss(cycle_hidden);
    outA = forward(discLD,imHD2LD);
    outB = forward(discHD,imLD2HD);
    advLoss = computeAdvLoss(outA) + computeAdvLoss(outB);

    % Calculate the total loss of the generator as a weighted sum of five losses
    genTotalLoss = ...
        selfReconLoss*lossWeights.selfReconLossWeight + ...
        hiddenKLLoss*lossWeights.hiddenKLLossWeight + ...
        cycleReconLoss*lossWeights.cycleConsisLossWeight + ...
        cycleHiddenKLLoss*lossWeights.cycleHiddenKLLossWeight + ...
        advLoss*lossWeights.advLossWeight;

    % Calculate the gradients of the total loss with respect to the generator learnables
    genGrad = dlgradient(genTotalLoss,gen.Learnables);

    % Convert the data type from dlarray to single
    genLoss = extractdata(genTotalLoss);
    images = {imLD,imLD2HD,imHD,imHD2LD};
end
```
The `modelGradientDisc` helper function calculates the gradients and loss for the two discriminators.
```
function [discLDGrads,discHDGrads,discLDLoss,discHDLoss] = modelGradientDisc(gen, ...
        discLD,discHD,imRealLD,imRealHD,discLossWeight)
    [~,imFakeLD,imFakeHD,~] = forward(gen,imRealLD,imRealHD);

    % Calculate loss of the discriminator for low-dose images
    outRealLD = forward(discLD,imRealLD);
    outFakeLD = forward(discLD,imFakeLD);
    discLDLoss = discLossWeight*computeDiscLoss(outRealLD,outFakeLD);

    % Calculate the gradients of the loss with respect to the low-dose discriminator learnables
    discLDGrads = dlgradient(discLDLoss,discLD.Learnables);

    % Calculate loss of the discriminator for high-dose images
    outRealHD = forward(discHD,imRealHD);
    outFakeHD = forward(discHD,imFakeHD);
    discHDLoss = discLossWeight*computeDiscLoss(outRealHD,outFakeHD);

    % Calculate the gradients of the loss with respect to the high-dose discriminator learnables
    discHDGrads = dlgradient(discHDLoss,discHD.Learnables);

    % Convert the data type from dlarray to single
    discLDLoss = extractdata(discLDLoss);
    discHDLoss = extractdata(discHDLoss);
end
```
### Loss Functions
The `computeDiscLoss` helper function calculates discriminator loss. Each discriminator loss is a sum of two components:
• The squared difference between a vector of ones and the predictions of the discriminator on real images, $Y_{\text{real}}$
• The squared difference between a vector of zeros and the predictions of the discriminator on generated images, $\hat{Y}_{\text{translated}}$

$\text{discriminatorLoss} = (1 - Y_{\text{real}})^2 + (0 - \hat{Y}_{\text{translated}})^2$
```
function discLoss = computeDiscLoss(Yreal,Ytranslated)
    discLoss = mean(((1-Yreal).^2),"all") + ...
        mean(((0-Ytranslated).^2),"all");
end
```
The `computeAdvLoss` helper function calculates adversarial loss for the generator. Adversarial loss is the squared difference between a vector of ones and the discriminator predictions on the translated image.
$\text{adversarialLoss} = (1 - \hat{Y}_{\text{translated}})^2$
```
function advLoss = computeAdvLoss(Ytranslated)
    advLoss = mean(((Ytranslated-1).^2),"all");
end
```
The `computeReconLoss` helper function calculates self-reconstruction loss and cycle consistency loss for the generator. Self-reconstruction loss is the $L^1$ distance between the input images and their self-reconstructed versions. Cycle consistency loss is the $L^1$ distance between the input images and their cycle-reconstructed versions.

$\text{selfReconstructionLoss} = \lVert Y_{\text{real}} - Y_{\text{self-reconstructed}} \rVert_1$

$\text{cycleConsistencyLoss} = \lVert Y_{\text{real}} - Y_{\text{cycle-reconstructed}} \rVert_1$
```
function reconLoss = computeReconLoss(Yreal,Yrecon)
    reconLoss = mean(abs(Yreal-Yrecon),"all");
end
```
The `computeKLLoss` helper function calculates hidden KL loss and cycle-hidden KL loss for the generator. Hidden KL loss is the squared difference between a vector of zeros and the `encoderSharedBlock` activation for the self-reconstruction stream. Cycle-hidden KL loss is the squared difference between a vector of zeros and the `encoderSharedBlock` activation for the cycle-reconstruction stream.

$\text{hiddenKLLoss} = (0 - Y_{\text{encoderSharedBlockActivation}})^2$

$\text{cycleHiddenKLLoss} = (0 - Y_{\text{encoderSharedBlockActivation}})^2$
```
function klLoss = computeKLLoss(hidden)
    klLoss = mean(abs(hidden.^2),"all");
end
```
## References
[1] Liu, Ming-Yu, Thomas Breuel, and Jan Kautz, "Unsupervised image-to-image translation networks". In Advances in Neural Information Processing Systems, 2017. https://arxiv.org/pdf/1703.00848.pdf.
[2] McCollough, C.H., Chen, B., Holmes, D., III, Duan, X., Yu, Z., Yu, L., Leng, S., Fletcher, J. (2020). Data from Low Dose CT Image and Projection Data [Data set]. The Cancer Imaging Archive. https://doi.org/10.7937/9npb-2637.
[3] Grants EB017095 and EB017185 (Cynthia McCollough, PI) from the National Institute of Biomedical Imaging and Bioengineering.
[4] Clark, Kenneth, Bruce Vendt, Kirk Smith, John Freymann, Justin Kirby, Paul Koppel, Stephen Moore, et al. “The Cancer Imaging Archive (TCIA): Maintaining and Operating a Public Information Repository.” Journal of Digital Imaging 26, no. 6 (December 2013): 1045–57. https://doi.org/10.1007/s10278-013-9622-7.
---
The previous post started forecasting the UK hospital peak (based on information through Dec. 21, 2021). We generated several considerations and ultimately focused on the Omicron doubling time, the peak number of cases, the current number of Omicron cases, and the seasonality. In addition to a reference class forecast based on seasonality, we assumed the case peak would be roughly governed by hospital capacity and used the calculation:
DateOfPeak = Dec. 21
+ 10 days to reach case peak (2.4-day doubling time and 4.1 doublings)
+ 9 days (case peak to hospital peak)
+ 3 days (lag of 7-day average)
= Jan. 12th
In this lecture we'll focus on going from this point estimate to a full probability distribution. This will involve two steps:
1. Asking "what invalidating considerations could cause this forecast to be totally wrong"?
2. Asking "which numerical quantities is my forecast most sensitive to, and how uncertain am I about them?"
The motivation for this is that most uncertainty comes either from your entire estimate being structurally wrong (invalidating considerations) or from the specific numbers going into your estimate being inaccurate (numerical sensitivity). In many (most?) cases, the first form of uncertainty dominates, so it's good to check both.
We'll work through both steps, then combine them into a final uncertainty estimate. At the end I've also included a Q&A with Misha Yagudin on how this approach compares with his approach to forecasting.
## Part 1: Invalidating Considerations
I did the brainstorming exercise of "If the previous estimate is totally off, why is that?" I recommend that you try this exercise as well before reading what I came up with.
(whitespace to avoid spoilers)
...
...
...
Okay, here's what I came up with:
1. If the UK cases are capped by herd immunity rather than hospital strain (17+ million cases instead of 6.7 million)
2. If the doubling time is actually 1.5 days (vs. 2.4 days), as suggested in some articles
3. If the peak happens due to people self-adjusting their behavior to make $R$ barely less than $1$, leading to a very long "peak".
Let's see how much each of these could affect the answer.
Consideration 1: herd immunity. This would add at most 2 more doublings, or ~5 days, to the date of the peak.
Consideration 2: short doubling time. Since we assumed around 4 doublings before, this would subtract only ~4 days from the date of the peak.
Consideration 3: extended peak. We calculated before that hospital capacity would correspond to around 6 million confirmed cases/week. Herd immunity was around 17 million cases, so this would mean 3 weeks to reach herd immunity. But I now realize that this is confirmed cases, and undertesting is around a factor of 2. So I think this would only really add 1.5 weeks, or ~9 days, unless people adjust their behavior to stay significantly below hospital capacity. I'll add another 3 days of wiggle room (12 days total) in case the extended peak is at 75% of hospital capacity rather than 100% of capacity, or in case I underestimated the herd immunity threshold.
If I consider how subjectively surprised I would feel in each of the 3 worlds above, and turn that into probabilities, I get: 15% (herd immunity), 15% (short doubling time), 10% (extended peak).
Exercise. Do you agree with the above probabilities?
Brainstorming exercise. What other considerations am I missing?
## Part 2: Numerical Sensitivity
Next I checked the numerical sensitivity of the mainline forecast. Our mainline forecast is based on several quantities:
• The current number of UK Omicron cases, estimated at $N_0 = 200,000$
• The total number of future Omicron cases, estimated at $N = 6,700,000$.
• The Omicron doubling time, estimated at $t = 2.4$ days
• The lag $\Delta_0$ between case peak and hospital peak, estimated at 9 days.
• The lag $\Delta_1$ between single-day hospital peak and 7-day average hospital peak, estimated at 3 days.
Our formula for the number of days until the peak is then
$\log_2(N/(2N_0)) \cdot t + \Delta_0 + \Delta_1$
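Plugging the point estimates into this formula reproduces the mainline answer (a quick sketch; the numbers are the ones listed above):

```python
import math

N0 = 200_000       # current UK Omicron cases
N  = 6_700_000     # total future Omicron cases (hospital-capacity cap)
t  = 2.4           # doubling time in days
delta0, delta1 = 9, 3   # case-peak-to-hospital-peak lag, 7-day-average lag

doublings = math.log2(N / (2 * N0))       # ~4.1
days = doublings * t + delta0 + delta1    # ~21.8
print(round(doublings, 1), round(days))   # Dec. 21 + ~22 days ≈ Jan. 12
```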
Let's assess the sensitivity of this formula to each consideration:
• If $N$ or $N_0$ is off by a factor of $2$, then our answer changes by $2.4$ days.
• If $t$ is $3.3$ instead of $2.4$, our answer changes by $3.7$ days.
• If $\Delta_0$ or $\Delta_1$ is off by $1$, our answer changes by $1$ day.
To make this more quantitative I put it into table form, including my $70\%$ uncertainty intervals for each number:
| Parameter | Point estimate | Range | Effect on answer |
|---|---|---|---|
| $N_0$ | $0.2 \times 10^6$ | $[0.15, 0.25] \times 10^6$ | $[-0.8, +1.0]$ |
| $N$ | $6.7 \times 10^6$ | $[5, 13] \times 10^6$ | $[-1.0, +2.3]$ |
| $t$ | $2.4$ | $[2.0, 3.3]$ | $[-1.6, +3.7]$ |
| $\Delta_0 + \Delta_1$ | $12$ | $[9, 14]$ | $[-3, +2]$ |
Considering that probably not all errors will occur in the same direction, when I combine these errors together I subjectively end up with a 70% confidence interval of $[-3.6, +4.9]$ relative to the Jan. 12th point estimate. (I estimated these as e.g. $3.6 = \sqrt{0.8^2 + 1.0^2 + 1.6^2 + 3^2}$ based on the premise that variances add for independent quantities. I don't think this is a logically valid calculation, but it gives a decent ballpark, and the final numbers also seemed reasonable to me.)
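For transparency, here is the same quadrature combination in code (with the same caveat that adding variances assumes independence):

```python
low  = (0.8**2 + 1.0**2 + 1.6**2 + 3.0**2) ** 0.5   # ≈ 3.6 days below the point estimate
high = (1.0**2 + 2.3**2 + 3.7**2 + 2.0**2) ** 0.5   # ≈ 4.9 days above it
print(round(low, 1), round(high, 1))                # 3.6 4.9
```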
Misha: I generally got a sense that your ranges are a bit too narrow, e.g., for doubling time. Metaculus is super uncertain about R_0 (their 70% CI 5.2 to 11.9), and “average” doubling guesstimates should probably be pretty uncertain given conflicting info, the impact of the holidays, impact of public concern, and government action. [Followed by some additional comments on why $N_0$ and $N$ should have higher uncertainty.]
I asked Misha if he also thought my final uncertainty estimates (given in the next section) were too small. He said:
Misha: Nope, I think they are fine (because the additional 45% went to extreme outcomes).
## Putting it Together
If we assume the mainline estimate is structurally correct and all errors are due to numerical sensitivity, then we end up with a 70% confidence interval of (rounded to whole numbers) Jan. 8th to Jan. 17th. That means there is a 15% chance of the peak being earlier than Jan. 8th and a 15% chance of it being later than Jan. 17th.
If we instead consider structural uncertainty, we get a 15% chance of +5 days (Jan. 17th), a 15% chance of -4 days (Jan. 8th), and a 10% chance of +12 days (Jan. 24th).
In reality, both forms of uncertainty are present. Overall, the uncertainty also skews a bit more towards later dates than earlier dates. If I subjectively combine this, I would put my overall forecast as follows:
• Median of Jan. 13th
• 10% chance of Jan. 24th or later
• 25% chance of Jan. 18th or later
• 25% chance of Jan. 7th or earlier
Exercise. Do you agree with this assessment?
## Concluding Q&A
Since this is the first lecture that presents a fully integrated forecasting method, I asked Misha how close it matches his approach to forecasting.
Jacob: How closely does the method discussed in this and the previous lecture map onto your own approach to forecasting? I.e.: generate and prioritize considerations, reduce uncertainty, construct a mainline estimate (or multiple mainline estimates), consider numerical sensitivity + structural uncertainty.
Misha: Well, I do all of the things from time to time. I do not do this explicitly in a step-by-step way. It’s more of playing it by ear and attending to whatever feels most informative.
The core step, which is missing from your writeups, is getting less confused about what's going on and assembling a world model. I usually start pretty cluelessly; for example, I was forecasting cultured meat progress last month. I spent a lot of time trying to understand how the processes might work, what the reference class might look like, and what the technological limitations are.
Until I had some understanding (still limited), I wasn’t looking for considerations. But after building a world model, I developed ways to approach most questions (sometimes very structurally uncertain).
To me, the key insight in your writeup was to look at beds/herd immunity and doubling. Everything else seems more like technical details, necessary for delivering a good forecast but not central to the process.
Jacob: Would you still consider [this lecture] good pedagogy for students?
Misha: I think it is a textbook example; it looks good to me. I think it puts a bit too much weight on legible steps and a bit less on "actual creative work." To be clear, these legible technical steps are important and worth having in front of you.
My takeaway is that the approach above is useful and valuable, but that it is also important to build a good world-model (especially when confronting a new domain). We'll hopefully have more to say about that in upcoming lectures.
---
# 'On hold' question that looks fine to me
The question below was put on hold for being off-topic, but I personally don't see any issue with it:
https://puzzling.stackexchange.com/questions/35097/friend-just-an-another-illusion-created-by-humans
The stated close reason was:
> may invite speculative answers, as the question is not fully defined.
Sounds like a possible definition of the word riddle to me ...
I personally don't see a need for the OP (or anyone else) to rephrase the question, because I think it's fine.
I just don't think a puzzling site should be "policed" this heavily. Other Stack Exchange sites, sure, but this one is special as it's speculative by nature (that's the whole point).
Questions may need improvement, sure; just say so. Putting them on hold is not really going to solve anything. Now this question is basically dead weight: you can't get new answers validated, find out the intended solution, and so on.
Questions that are off-topic or otherwise inappropriate should be handled, of course.
• Have you checked out the post linked in that close reason? A while ago, questions matching these criteria were deemed not a good fit for Puzzling, which prompted the creation of this close reason. (This question also went through the reopen votes review queue, where it received three Leave Closed votes.)
– user20, Jun 10 '16 at 8:42
• I did. I understand a line needs to be drawn in some form, but that's a grey area. With something as inherently obscure as puzzles, that line is far more blurred still. Also, you can't prove a sufficient amount of answers correct if people don't get the time to provide them. I have an answer in mind that might (or might not) be an objectively verifiable solution. I'm sure others do as well. I think the question was shut down a bit too rapidly perhaps. Jun 10 '16 at 8:56
• I don't think it's obviously off topic. I lean to allowing benefit of the doubt on a site like this, even if questions draw outside the expected lines. If people really don't like it they can downvote. Jun 10 '16 at 9:04
---
• Resource Evaluation •
### Suitability Evaluation of the Human Settlements Environment in an Arid Inland River Basin: A Case Study of the Shiyang River Basin
WEI Wei¹, SHI Pei-ji¹, FENG Hai-chun¹, WANG Xu-feng²
1. College of Geographical and Environment Science, Northwest Normal University, Lanzhou 730070, China;
2. Cold and Arid Regions Environmental and Engineering Research Institute, CAS, Lanzhou 730000, China
• Received: 2012-01-04; Revised: 2012-03-14; Online: 2012-11-20; Published: 2012-11-20
• About the first author: WEI Wei (1982–), male, a native of Zhuanglang, Gansu Province; lecturer, working on applications of GIS and RS. E-mail: [email protected]
• Funding: National Natural Science Foundation of China (40971078); Gansu Provincial Youth Science and Technology Fund (1107RJYA077); Northwest Normal University Young Teachers' Research Capability Improvement Program (SKQNYB10034).
Abstract:
This paper selects slope, aspect, relief degree of the land surface, vegetation index, hydrology, transportation and climate as evaluation indexes and sets up a Human Settlements Environment Index (HEI) model to evaluate the environmental suitability for human settlements in the Shiyang River Basin. GIS spatial analysis techniques such as spatial overlay analysis, buffer analysis and density analysis were used to establish the natural suitability and spatial pattern of human settlements. The results show that the index of natural suitability for human settlement in the Shiyang River Basin is between 17.13 and 84.32. In general, natural suitability for human settlement decreases from southwest to northeast. By area, the suitable region is mainly distributed in the Minqin oasis, the Wuwei oasis and the Changning basin, covering about 1080.01 km², or 2.59% of the total area. The comparatively suitable region is mainly distributed around the county seats of Gulang and Yongchang and the north of Tianzhu County, covering about 1100.30 km². The commonly suitable region is mainly distributed outside the county seats of Yongchang and Jinchuan and in most parts of Minqin County, covering about 23328.04 km², or 56.08% of the total area. The non-suitable region is mainly distributed upstream and north of the river, covering about 9937.60 km², or 23.89% of the total area. The most non-suitable region is distributed around the Qilian Mountains, which are covered by snow and cold desert, and in the intersection of the Tengger Desert and the Badain Jaran Desert; its total area is about 6154.05 km², or 14.79% of the total area. Regions suitable for human inhabitance are distributed mainly along rivers in ribbons and patches, while the rest are scattered; their distribution is identical with the residential spatial pattern. There is a clear logarithmic correlation between the residential environment and population: the coefficient of determination between the evaluation value of the residential environment and population density reaches 0.8512. There is also a positive correlation between the residential environment and the economy: the coefficient between the evaluation value of the residential environment and GDP reaches 0.8454. The results also show that the environment can hardly support the existing population in the Shiyang River Basin. The spatial distribution of population is profoundly affected by the severe environment, such as the expanding deserts, the undulating terrain, and the changeable climate. Surface water shortage and slow economic growth are the bottlenecks of natural suitability for human settlement in the Shiyang River Basin. According to these problems and the various plans, some residents need to be relocated in order to improve the residential environment.
• CLC number: X802.2
---
# How to vertically-center the text of the cells?
I have a simple table as follows:
```latex
\begin{table*}
  \centering
  \begin{tabular}{|l|c|c|c|c|p{2in}|}
    ...
    ...
  \end{tabular}
  \caption{The factors the camera solver depends on to evaluate the rules.}
  \label{table:factors}
\end{table*}
```
How is it possible to vertically-center the text of the cells?
---
This earlier question might be of help to you. – morbusg Dec 16 '10 at 12:34
Looking closer at your example, I realize you obviously have the array package loaded. p{...} aligns the content toward the top, m{...} aligns the content toward the center, while b{...} aligns it toward the bottom. – Jimi Oke Dec 17 '10 at 23:19
@Jimi: the example works even without array. The p specifier is standard. – Stefan Kottwitz Dec 18 '10 at 15:35
@Stefan: Oh, I didn't know that. Thanks! – Jimi Oke Dec 18 '10 at 16:18
Question, actually. How in the world would a person who knows nothing about code go about this? I'm drowning in information, here. – user44066 Jan 12 '14 at 23:33
```latex
\documentclass{article}
\usepackage[a4paper,vmargin=2cm,hmargin=1cm,showframe]{geometry}
\usepackage[demo]{graphicx}
\usepackage[table]{xcolor}
\usepackage{array}
\usepackage{longtable}

\parindent=0pt

\def\correction#1{%
  \abovedisplayshortskip=#1\baselineskip\relax\belowdisplayshortskip=#1\baselineskip\relax%
  \abovedisplayskip=#1\baselineskip\relax\belowdisplayskip=#1\baselineskip\relax}

\arrayrulewidth=1pt\relax
\tabcolsep=5pt\relax
\arrayrulecolor{red}
\fboxsep=\tabcolsep\relax
\fboxrule=\arrayrulewidth\relax

\newcolumntype{A}[2]{%
  >{\minipage{\dimexpr#1\linewidth-2\tabcolsep-#2\arrayrulewidth\relax}\vspace\tabcolsep}%
  c<{\vspace\tabcolsep\endminipage}}

\newenvironment{Table}[4]{%
  \longtable{%
    |A{#1}{1.5}% for figure
    |>{\centering$\displaystyle}A{#2}{1}<{$}% for inline equation
    |>{\correction{-1}\strut$}A{#3}{1}<{$\strut}% for displayed equation
    |>{\centering}A{#4}{1.5}% for text
    |}\hline\ignorespaces}{%
  \endlongtable\ignorespacesafterend}

\newcommand{\dummy}{%
  It is practically a big lie that \LaTeX\
  makes you focus on the content without
  worrying about the layout.}

\newcommand{\Row}{%
  \includegraphics[width=\linewidth]{newton}&
  \frac{a+b}{a-b}=0&
  \int_a^b f(x)\, \textrm{d}x=\frac{b-a}{b+a}&
  \fcolorbox{cyan}{yellow}{\parbox{\dimexpr\linewidth-2\fboxsep-2\fboxrule\relax}{\dummy}}
  \tabularnewline\hline}

\begin{document}

\begin{Table}{0.25}{0.25}{0.25}{0.25}
  \Row
  \Row
\end{Table}

\def\x{\centering$\displaystyle\int_a^bf(x)\,\textrm{d}x=\frac{a-b}{a+b}$}

\longtable{|A{0.2}{1.5}*2{|A{0.25}{1}}|A{0.3}{1.5}|}\hline
\x & \x & \multicolumn{2}{A{0.55}{1.5}|}{\x} \tabularnewline\hline
\multicolumn{2}{|A{0.45}{1.5}|}{\x} & \x & \x\tabularnewline\hline
\x & \multicolumn{2}{A{0.5}{1}|}{\x} & \x\tabularnewline\hline
\multicolumn{4}{|A{1}{2}|}{\x}\tabularnewline\hline
\endlongtable

\end{document}
```
---
Your solution is working absolutely fine, but isn't there a simpler one? Aligning contents vertically feels like it should be simpler than the proposed solution. – Rafid Dec 16 '10 at 19:42
Thanks, but what about the other solution below? – Rafid Dec 26 '10 at 20:48
@xport: Your edits have nothing to do with Rafid's question. For his question, the widths of the columns just don't matter. I think it's not good to include stuff that's really unrelated. – Hendrik Vogt Dec 28 '10 at 22:25
One easy way to do this would be to use the array package, specifying your column width with m{...}. For example:
```latex
\begin{tabular}{ m{4cm} m{1cm} }
  ... & ... \\
\end{tabular}
```
will give you a four-centimeter-wide column and a one-centimeter-wide column. In each cell, the contents will be vertically aligned to the center. Note, however, that the cell contents will be horizontally aligned left. If you also want to align all the cell contents toward the center in a horizontal sense, then you could do something like this:
```latex
\begin{tabular}{ >{\centering\arraybackslash} m{4cm} >{\centering\arraybackslash} m{4cm} }
  ... & ... \\
\end{tabular}
```
The point of \arraybackslash is to return \\ to its original meaning because the \centering command alters this and could possibly give you a noalign error during compilation.
If you have several columns and do not want your source to look cluttered, you could define new columns before your tabular environment, for example:
```latex
\newcolumntype{C}{ >{\centering\arraybackslash} m{4cm} }
\newcolumntype{D}{ >{\centering\arraybackslash} m{1cm} }

\begin{tabular}{ C D }
  ... & ... \\
\end{tabular}
```
There is a lot of useful information on tables in the wiki LaTeX guide, if you want to explore this further.
---
Are you sure that an image inclusion will be EXACTLY vertically centered using your method above? – xport Dec 19 '10 at 22:42
@xport: It might be relative to the first and last baselines of the cells, not the exact totalheight. – Martin Scharrer Jul 10 '11 at 15:53
When using this method, people should be cautious NOT to mix in other column types such as p. The height of a row AND the vertical alignment follow those of the cell with the maximum height in that row. It is fine if an m column cell has the maximum height, but otherwise the vertical alignment will not work. – Achimnol Dec 27 '12 at 15:22
There is a primitive \vcenter which vertically centers its content on the math axis. It can only be used in math mode.
Here is an example with plain XeTeX (compile with `xetex yourfilename.tex`):
```tex
{ \offinterlineskip
  \def\trule{\noalign{\hrule}}
  \def\hcenter#1{\hfil#1\hfil}
  \halign{\vrule#&&\hcenter{$\vcenter{\hbox{#}}$}\vrule\cr\trule
    &Lorem ipsum dolor sit amet&\XeTeXpicfile "test-pattern.jpg" &
    \TeX&$E=mc^2$&$\displaystyle{a^2-b^2\over c^2}$\cr\trule
    &Etiam quam lacus&\vrule width 4em height 5ex depth 2ex&\eTeX &
    $E\ne mc^2$&{\it \&} cetera\cr\trule}
}
\bye
```
---
## What are the differences between Agda and Idris?
Like Idris, Agda is a functional language with dependent types, supporting dependent pattern matching. Both can be used for writing programs and proofs. However, Idris has been designed from the start to emphasise general purpose programming rather than theorem proving. As such, it supports interoperability with systems libraries and C programs, and language constructs for domain-specific language implementation. It also includes higher level programming constructs such as interfaces (similar to type classes) and do notation.
Idris supports multiple back ends (C and JavaScript by default, with the ability to add more via plugins) and has a reference run time system, written in C, with a garbage collector and built-in message passing concurrency.
Idris is primarily a research tool for exploring the possibilities of software development with dependent types, meaning that the primary goal is not (yet) to make a system which could be used in production. As such, there are a few rough corners, and lots of missing libraries. Nobody is working on Idris full time, and we don’t have the resources at the moment to polish the system on our own. Therefore, we don’t recommend building your business around it!
Having said that, contributions which help towards making Idris suitable for use in production would be very welcome - this includes (but is not limited to) extra library support, polishing the run-time system (and ensuring it is robust), providing and maintaining a JVM back end, etc.
## Is there some documentation for the standard lib? List of functions?
API documentation for the shipped packages is listed on the documentation page.
Unfortunately, the default prelude and shipped packages for Idris are not necessarily complete with regards to documentation. Other ways to find functions include:
• REPL commands:
• Use :apropos to search for text in documentation and function names.
• Use :search to search for functions of a given type.
• Use :browse to list the contents of a given namespace.
• Use the REPL’s auto-complete functionality.
• Grep through the source code in libs/
If you find that the shipped packages are lacking in documentation, please feel free to write some. Or bug someone to do so. Idris has syntax for providing rich documentation, which is then viewable using the :doc command and listed in generated HTML API documentation.
## Why does Idris use eager evaluation rather than lazy?
Idris uses eager evaluation for more predictable performance, in particular because one of the longer term goals is to be able to write efficient and verified low level code such as device drivers and network infrastructure. Furthermore, the Idris type system allows us to state precisely the type of each value, and therefore the run-time form of each value. In a lazy language, consider a value of type Int:
```idris
thing : Int
```
What is the representation of thing at run-time? Is it a bit pattern representing an integer, or is it a pointer to some code which will compute an integer? In Idris, we have decided that we would like to make this distinction precise, in the type:
```idris
thing_val : Int
thing_comp : Lazy Int
```
Here, it is clear from the type that thing_val is guaranteed to be a concrete Int, whereas thing_comp is a computation which will produce an Int.
## How can I make lazy control structures?
You can make control structures using the special Lazy type. For example, if...then...else... in Idris expands to an application of a function named ifThenElse. The default implementation for Booleans is defined as follows in the library:
```idris
ifThenElse : Bool -> (t : Lazy a) -> (e : Lazy a) -> a
ifThenElse True  t e = t
ifThenElse False t e = e
```
The type Lazy a for t and e indicates that those arguments will only be evaluated if they are used, that is, they are evaluated lazily.
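You can define your own lazy combinators in the same style. For example, here is a sketch of a hand-rolled default-value function (not part of the library; the standard library's fromMaybe works similarly):

```idris
-- The default is only evaluated when the Maybe is Nothing.
orElse : Maybe a -> Lazy a -> a
orElse (Just x) _   = x
orElse Nothing  def = def
```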
## Evaluation at the REPL doesn’t behave as I expect. What’s going on?
Being a fully dependently typed language, Idris has two phases where it evaluates things, compile-time and run-time. At compile-time it will only evaluate things which it knows to be total (i.e. terminating and covering all possible inputs) in order to keep type checking decidable. The compile-time evaluator is part of the Idris kernel, and is implemented in Haskell using a HOAS (higher order abstract syntax) style representation of values. Since everything is known to have a normal form here, the evaluation strategy doesn’t actually matter because either way it will get the same answer, and in practice it will do whatever the Haskell run-time system chooses to do.
The REPL, for convenience, uses the compile-time notion of evaluation. As well as being easier to implement (because we have the evaluator available) this can be very useful to show how terms evaluate in the type checker. So you can see the difference between:
```
Idris> \n, m => (S n) + m
\n => \m => S (plus n m) : Nat -> Nat -> Nat
Idris> \n, m => n + (S m)
\n => \m => plus n (S m) : Nat -> Nat -> Nat
```
## Why can’t I use a function with no arguments in a type?
If you use a name in a type which begins with a lower case letter, and which is not applied to any arguments, then Idris will treat it as an implicitly bound argument. For example:
```idris
append : Vect n ty -> Vect m ty -> Vect (n + m) ty
```
Here, n, m, and ty are implicitly bound. This rule applies even if there are functions defined elsewhere with any of these names. For example, you may also have:
```idris
ty : Type
ty = String
```
Even in this case, ty is still considered implicitly bound in the definition of append, rather than making the type of append equivalent to…
```idris
append : Vect n String -> Vect m String -> Vect (n + m) String
```
…which is probably not what was intended! The reason for this rule is so that it is clear just from looking at the type of append, and no other context, what the implicitly bound names are.
If you want to use an unapplied name in a type, you have two options. You can either explicitly qualify it, for example, if ty is defined in the namespace Main you can do the following:
```idris
append : Vect n Main.ty -> Vect m Main.ty -> Vect (n + m) Main.ty
```
Alternatively, you can use a name which does not begin with a lower case letter, which will never be implicitly bound:
```idris
Ty : Type
Ty = String

append : Vect n Ty -> Vect m Ty -> Vect (n + m) Ty
```
As a convention, if a name is intended to be used as a type synonym, it is best for it to begin with a capital letter to avoid this restriction.
## I have an obviously terminating program, but Idris says it possibly isn’t total. Why is that?
Idris can’t decide in general whether a program is terminating due to the undecidability of the Halting Problem. It is possible, however, to identify some programs which are definitely terminating. Idris does this using “size change termination” which looks for recursive paths from a function back to itself. On such a path, there must be at least one argument which converges to a base case.
• Mutually recursive functions are supported
• However, all functions on the path must be fully applied. In particular, higher order applications are not supported
• Idris identifies arguments which converge to a base case by looking for recursive calls to syntactically smaller arguments of inputs. e.g. k is syntactically smaller than S (S k) because k is a subterm of S (S k), but (k, k) is not syntactically smaller than (S k, S k).
If you have a function which you believe to be terminating, but Idris does not, you can either restructure the program, or use the assert_total function.
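For example, in the following (hypothetical) sketch, the recursive call is on div n 2, which is smaller than n but not *syntactically* smaller, so Idris flags the function as possibly not total; wrapping the call in assert_total overrides the check:

```idris
log2 : Nat -> Nat
log2 Z     = Z
log2 (S Z) = Z
-- `div n 2` is not a subterm of `n`, so the size-change check fails here
log2 n     = S (assert_total (log2 (div n 2)))
```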
## When will Idris be self-hosting?
It’s not a priority, though not a bad idea in the long run. It would be a worthwhile effort in the short term to implement libraries in Idris to support self-hosting, such as argument parsing and a POSIX-compliant library for system interaction.
## Does Idris have universe polymorphism? What is the type of Type?
Rather than universe polymorphism, Idris has a cumulative hierarchy of universes; Type : Type 1, Type 1 : Type 2, etc. Cumulativity means that if x : Type n and n <= m, then x : Type m. Universe levels are always inferred by Idris, and cannot be specified explicitly. The REPL command :type Type 1 will result in an error, as will attempting to specify the universe level of any type.
## Why does Idris use Double instead of Float64?
Historically the C language and many other languages have used the names Float and Double to represent floating point numbers of size 32 and 64 respectively. Newer languages such as Rust and Julia have begun to follow the naming scheme described in IEEE Standard for Floating-Point Arithmetic (IEEE 754). This describes single and double precision numbers as Float32 and Float64; the size is described in the type name.
Due to developer familiarity with the older naming convention, and choice by the developers of Idris, Idris uses the C style convention. That is, the name Double is used to describe double precision numbers, and Idris does not support 32 bit floats at present.
## What is -ffreestanding?
The freestanding flag is used to build Idris binaries which have their libs and compiler in a relative path. This is useful for building binaries where the install directory is unknown at build time. When passing this flag, the IDRIS_LIB_DIR environment variable needs to be set to the path where the Idris libs reside relative to the idris executable. The IDRIS_TOOLCHAIN_DIR environment variable is optional, if that is set, Idris will use that path to find the C compiler. For example:
```bash
IDRIS_LIB_DIR="./libs" \
IDRIS_TOOLCHAIN_DIR="./mingw/bin" \
CABALFLAGS="-fffi -ffreestanding -frelease" \
make
```
## What does the name “Idris” mean?
British people of a certain age may be familiar with this singing dragon. If that doesn’t help, maybe you can invent a suitable acronym :-) .
## Will there be support for Unicode characters for operators?
There are several reasons why we should not support Unicode operators:
• It’s hard to type (this is important if you’re using someone else’s code, for example). Various editors have their own input methods, but you have to know what they are.
• Not every piece of software easily supports it. Rendering issues have been noted on some mobile email clients, terminal-based IRC clients, web browsers, etc. There are ways to resolve these rendering issues but they provide a barrier to entry to using Idris.
• Even if we leave it out of the standard library (which we will in any case!) as soon as people start using it in their library code, others have to deal with it.
• Too many characters look too similar. We had enough trouble with confusion between 0 and O without worrying about all the different kinds of colons and brackets.
• There seems to be a tendency to go over the top with use of Unicode. For example, using sharp and flat for delay and force (or is it the other way around?) in Agda seems gratuitous. We don’t want to encourage this sort of thing, when words are often better.
With care, Unicode operators can make things look pretty but so can lhs2TeX. Perhaps in a few years time things will be different and software will cope better and it will make sense to revisit this. For now, however, Idris will not be offering arbitrary Unicode symbols in operators.
This seems like an instance of Wadler’s Law in action.
This answer is based on Edwin Brady’s response in the following pull request.
## Where can I find the community standards for the Idris community?
The Idris Community Standards are stated here.
## Where can I find more answers?
There is an Unofficial FAQ on the wiki on GitHub which answers more technical questions and may be updated more often.
---
# Javis.jl examples series: The chase problem
Creation date: 2021-08-09
Tags: javis, animation, julia
Yesterday I listened to the third episode of the 3b1b podcast, which is an interview with the famous Steven Strogatz. He mentions two interesting puzzles at the beginning of the conversation. The first is a geometry problem, and the second one is what I want to "tackle" (visualize) in this blog post today.
Before I explain the problem, I would like to point you to the JuliaCon video of the presentation that Jacob Zelko and I gave on the latest state of Javis.
Mostly due to the excellent work of Arsh Sharma, we now have quite a bit more functionality in Javis than the last time I wrote about it back in June.
We now have layers, as well as convenience functions and macros to define objects without using the weird anonymous-function syntax. I would like to visualize interesting problems or concepts on this blog and talk about the code to create them over the next couple of weeks.
If you want to keep getting updates, please join the newsletter so that you don't miss one of those 😉
## The Problem
Okay let's start now with the problem formulation:
Let's assume we have a square, and in each of the corners there is a dog. Each dog tries to chase its clockwise neighbor. How long does it take each dog to catch its neighbor?
Now, they don't just run along the sides of the square, as that would be quite boring 😄 They try to take the fastest route possible and continuously update the direction they are running in. Furthermore, they run at a constant speed.
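Before animating it, here is a quick numeric sketch (plain Julia, independent of Javis) that integrates these dynamics with a small Euler step. It confirms the classic result that, for a square, the catch time is the side length divided by the speed:

```julia
function chase_time(side = 2.0, speed = 1.0, dt = 1e-3)
    half = side / 2
    positions = [(-half, -half), (half, -half), (half, half), (-half, half)]
    t = 0.0
    while true
        new_positions = similar(positions)
        for i in 1:4
            (x, y) = positions[i]
            (tx, ty) = positions[mod1(i + 1, 4)]   # the next dog around the square
            dx, dy = tx - x, ty - y
            dist = hypot(dx, dy)
            dist < 10dt && return t                # neighbor (essentially) caught
            # step toward the neighbor at constant speed
            new_positions[i] = (x + speed * dt * dx / dist,
                                y + speed * dt * dy / dist)
        end
        positions = new_positions
        t += dt
    end
end

println(chase_time())  # ≈ 2.0, i.e. side_length / speed
```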
## Animation
Now I want to visualize the dogs and the paths they take. Of course, as I'm one of the creators of Javis, I want to use that tool 😄
```julia
using Javis

function ground(args...)
    background("black")
    sethue("white")
end

function main()
    video = Video(1000, 1000)
    nframes = 300
    square_len = 800
    Background(1:nframes, ground)
    Object(1:nframes, JRect(Point(-square_len/2, -square_len/2), square_len, square_len; color="white"))
    render(video; pathname = "chase.gif")
end
```
This might partly look familiar if you have seen Javis code before but as I intend this series for newcomers I want to go over everything in as much detail as needed. For this I keep the animations short in general. Nevertheless I use quite some advanced and sometimes undocumented features of Javis so I'm sure there is something to learn for everyone.
Now I first of all need to have using Javis at some point and I like to have a main function to avoid having everything in the global scope.
I define the video with 1000x1000 pixels and then the number of frames and the size of the square.
For defining the background of the animation I have the ground function which takes in some arguments which aren't relevant which is the reason why I choose args... here as the parameters. I define a black background and the default color to being white (even though I never use it 😄).
Next up I define my object which should draw the square. In v0.6.1 of Javis some new convenience functions were introduced, including JRect which draws a rectangle. We define the frames of the object, which is the full number of frames here, then the upper left corner position of our square as well as the width and height. Additionally we define the color of the square with color. As we only want the square outline, we don't need to add an action keyword; if we wanted to fill it, we could have used action = :fill after defining the color.
Okay let's place the dogs in each corner.
For this we add the following lines before the render function:
dog_colors = Colors.JULIA_LOGO_COLORS
dog_positions = [
Point(-square_len / 2, -square_len / 2),
Point(square_len / 2, -square_len / 2),
Point(square_len / 2, square_len / 2),
Point(-square_len / 2, square_len / 2),
]
dogs = [
Object(
1:nframes,
JCircle(dog_positions[i], 15; action = :fill, color = dog_colors[i]),
) for i in 1:4
]
We start with defining the colors of the circles that we'll use to represent the dogs. I used the four colors of the Julialang logo for this. You need to also add using Colors at the top of the file.
We define the corners first and then create four objects. Each of them is defined using the function JCircle, which takes in the center of the circle and the radius plus some keyword arguments like before. We create the objects inside a list comprehension, in which we iterate i between 1 and 4 and use it to access both the position and the color of each dog.
Now we're going into the complicated part of this. How do we move the dogs?
## Action! 🎬
Well each dog wants to move into the direction of the neighboring dog and that with a certain constant speed. In various tutorials of Javis we only tackle simple movements like translate by a fixed value or rotate. We showed how to rotate around another object in the first tutorial which at least uses the pos function that I'll also use in a moment. However calculating the vector and then moving along that changing direction isn't that simple with the basic functionality provided by Javis. That said there is a way which one can use when one understands how Javis works from the ground up.
That is something I want to show you here. Once you understand this simple example you'll be able to create much more powerful animations yourself. Why isn't that documented then? Well... I would like to create an easier process for the user but at least will link to this post in the docs 😉
Alright let's check out a simple action first maybe.
for i in 1:4
act!(dogs[i], Action(1:nframes, anim_translate(100, 0)))
end
Now the dogs are moving very slowly to the right. We do this by applying an Action to each of the dogs, which is here defined for all frames as well and just tells in which direction the dogs should move. Each dog will end up at dog_positions .+ Point(100, 0).
Now anim_translate(100, 0) is under the hood just a function which returns an anonymous function. This means we can define our own action as well, as long as it has the same anonymous function style.
Let me show you what I mean. We add the following two functions:
function chase(dogs, from, to)
(args...) -> _chase(dogs, from, to)
end
function _chase(dogs, from, to)
println("$from chases$to")
end
and change our act! from before to:
act!(dogs[i], Action(1:nframes, chase(dogs, i, mod1(i+1, 4))))
When we call main() we will get an output of repeated:
1 chases 2
2 chases 3
3 chases 4
4 chases 1
⚠ Note
The mod1 function is very useful when working with modulo in the 1-index based language Julia. It basically also just wraps around, but instead of going from 0 to 3 it goes from 1 to 4, which can then be used to index our array at a later stage.
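For instance, mod1(3, 4) == 3 and mod1(4, 4) == 4, but mod1(5, 4) == 1, which is exactly how dog 4 ends up chasing dog 1 in the act! call above.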
The functionality that we need now to actually visualize the chasing positions is what the change method provides under the hood. We want to change the center of the dogs objects.
Therefore we need to call dogs[from].change_keywords[:center] = new_pos and compute the new position of the dog. And yes, I hear you all: this should be documented in the official documentation.
Okay now let's set the new_pos to the origin just to see that it works:
function _chase(dogs, from, to)
dogs[from].change_keywords[:center] = O
end
⚠ Note
The O is the letter O which stands for the origin and is the same as Point(0,0).
Well that isn't really an animation, is it? Let's compute the actual value of the dog's position. For this we also introduce the variable speed, which I set to speed = 3 right after defining square_len.
Then we replace the _chase function:
function _chase(dogs, from, to, speed)
animal = dogs[from]
chases = dogs[to]
diff = pos(chases) - pos(animal)
diff /= sqrt(diff.x^2 + diff.y^2)
new_pos = pos(animal) + speed * diff
animal.change_keywords[:center] = new_pos
end
So we take the current position of the dog we want to chase as well as the position of the current dog. Then we compute the difference, normalize it, and calculate the new position by adding the normalized "vector" diff, scaled by speed, to the current position.
The chase function needs the extra parameter speed:
function chase(dogs, from, to, speed)
(args...) -> _chase(dogs, from, to, speed)
end
as well as passing speed into the chase function:
act!(dogs[i], Action(1:nframes, chase(dogs, i, mod1(i+1, 4), speed)))
Unfortunately we get:
ERROR: MethodError: no method matching get_position(::Nothing)
Closest candidates are:
get_position(::Javis.Layer) at /home/ole/Julia/Javis/src/layers.jl:183
get_position(::Point) at /home/ole/Julia/Javis/src/object_values.jl:17
The problem here is that the dogs don't have their initial position calculated, as this would draw the dogs directly. We also compute the action before we draw the dog, i.e. before we call the object itself. This way the pos calls in our chase function return nothing.
We could either check whether pos returns something and only then do our calculation, or, as I decided: we simply set the dogs to the corners in frame one and let them chase starting at frame 2.
Therefore we use:
act!(dogs[i], Action(2:nframes, chase(dogs, i, mod1(i+1, 4), speed)))
This creates the hardest part of a lovely animation. For the final animation we want to do a bit more.
We first of all remove the square by commenting out the Object JRect line.
Then we create the path which is also done in our first tutorial.
Inside our act! for loop we'll add:
Object(1:nframes, (args...) -> path!(dogs_paths[i], pos(dogs[i]), dog_colors[i]))
and then we need to define the path! function as well as the dogs_paths vector.
⚠ Note
Here we can use the frames starting from 1 as the dogs are already evaluated. This is the case as we define the object after the dogs objects and it's an object itself and not an action acting on the dogs.
The dogs_paths vector will hold the path each of the dogs took and is initialized with:
dogs_paths = [Point[] for _ in 1:4]
Our path function is simply copied from the tutorial:
function path!(points, cpos, color)
sethue(color)
push!(points, cpos) # add pos to points
circle.(points, 2, :fill) # draws a circle for each point using broadcasting
end
It adds the position to the vector of points and uses broadcasting to draw a small circle at each position.
## Solution
Well you can still work out the solution to the original problem on your own 😉 I don't want to spoil anything and was much more interested in animating the problem at this stage.
Thanks for reading and share it with your friends if you like it. Think about subscribing to the newsletter or directly to Patreon to get everything 2 days earlier 😉
## Full code
using Colors
using Javis
function ground(args...)
background("black")
sethue("white")
end
function chase(dogs, from, to, speed)
(args...) -> _chase(dogs, from, to, speed)
end
function _chase(dogs, from, to, speed)
animal = dogs[from]
chases = dogs[to]
diff = pos(chases) - pos(animal)
diff /= sqrt(diff.x^2 + diff.y^2)
new_pos = pos(animal) + speed * diff
animal.change_keywords[:center] = new_pos
end
function path!(points, cpos, color)
sethue(color)
push!(points, cpos) # add pos to points
circle.(points, 2, :fill) # draws a circle for each point using broadcasting
end
function main()
video = Video(1000, 1000)
nframes = 300
speed = 3
square_len = 800
Background(1:nframes, ground)
# Object(1:nframes, JRect(Point(-square_len/2, -square_len/2), square_len, square_len; color="white"))
dog_colors = Colors.JULIA_LOGO_COLORS
dogs_paths = [Point[] for _ in 1:4]
dog_positions = [
Point(-square_len / 2, -square_len / 2),
Point(square_len / 2, -square_len / 2),
Point(square_len / 2, square_len / 2),
Point(-square_len / 2, square_len / 2),
]
dogs = [
Object(
1:nframes,
JCircle(dog_positions[i], 15; action = :fill, color = dog_colors[i]),
) for i in 1:4
]
for i in 1:4
act!(dogs[i], Action(2:nframes, chase(dogs, i, mod1(i+1, 4), speed)))
Object(1:nframes, (args...) -> path!(dogs_paths[i], pos(dogs[i]), dog_colors[i]))
end
render(video; pathname = "chase.gif")
end
Thanks to my 12 patrons!
Special special thanks to my >4$ patrons. The ones I thought couldn't be found 😄
• Anonymous
• Kangpyo
• Gurvesh Sanghera
• Szymon Bęczkowski
• Håkan Kjellerstrand
• Colin Phillips
• Jérémie Knuesel
For a donation of a single dollar per month you get early access to these posts. Your support will increase the time I can spend on working on this blog. There is also a special tier if you want to get some help for your own project. You can checkout my mentoring post if you're interested in that and feel free to write me an E-mail if you have questions: o.kroeger <at> opensourc.es
I'll keep you updated on Twitter OpenSourcES.
anim_translate
Animate the translation of the attached object (see act!).
# Example
Background(1:100, ground)
obj = Object((args...) -> circle(O, 50, :fill), Point(100, 0))
act!(obj, Action(1:50, anim_translate(10, 10)))
# Options
• anim_translate(x::Real, y::Real) define by how much the object should be translated. The end point will be current_pos + Point(x,y)
• anim_translate(tp::Point) define direction and length of the translation vector by using Point
• anim_translate(fp::Union{Object,Point}, tp::Union{Object,Point}) define the from and to point of a translation. It will be translated by tp - fp. An Object can be used to move to the position of another object.
mod1(x, y)
Modulus after flooring division, returning a value r such that mod(r, y) == mod(x, y), in the range (0, y] for positive y and in the range [y, 0) for negative y.
See also: fld1, fldmod1.
# Examples
julia> mod1(4, 2)
2
julia> mod1(4, 3)
1
change(s::Symbol, [val(s)])
Changes the keyword s of the parent Object from vals[1] to vals[2] in an animated way if vals is given as a Pair otherwise it sets the keyword s to val.
# Arguments
• s::Symbol Change the keyword with the name s
• vals::Pair If vals is given i.e 0 => 25 it will be animated from 0 to 25.
• The default is to use 0 => 1 or use the value given by the animation
defined in the Action
# Example
Background(1:100, ground)
obj = Object((args...; radius = 25, color="red") -> object(O, radius, color), Point(100, 0))
act!(obj, Action(1:50, change(:radius, 25 => 0)))
act!(Action(51:100, change(:color, "blue")))
|
# Math Help - A divisibility problem
1. ## A divisibility problem
Show that if $(a,b)=1$ and $p$ is an odd prime, then
$\left( a+b, \ \frac{a^p+b^p}{a+b} \right) = 1$ or $p$
2. Originally Posted by eyke
Show that if $(a,b)=1$ and $p$ is an odd prime, then
$\left( a+b, \ \frac{a^p+b^p}{a+b} \right) = 1$ or $p$
$\frac{a^p+b^p}{a+b}=\sum_{j=1}^p (-1)^{j-1} a^{p-j}b^{j-1}= \sum_{j=1}^p (-1)^{j-1} (a+b \ - \ b)^{p-j}b^{j-1} \equiv pb^{p-1} \mod a+b.$ similarly $\frac{a^p+b^p}{a+b} \equiv pa^{p-1} \mod a+b.$ so if $d \mid a+b$ and $d \mid \frac{a^p + b^p}{a+b},$ then $d \mid pa^{p-1}$ and $d \mid pb^{p-1}.$
thus $d \mid p \gcd(a^{p-1},b^{p-1})=p. \ \Box$
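For a quick sanity check: with $a=2$, $b=1$, $p=3$ we have $a+b=3$ and $\frac{a^3+b^3}{a+b}=3$, so the gcd is $3=p$; with $a=3$, $b=2$, $p=3$ we have $a+b=5$ and $\frac{a^3+b^3}{a+b}=7$, so the gcd is $1$.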
|
EndersWorld:
I’m confused *^*
8 months ago
EndersWorld:
$(-x^2y^{-7}z^1)^2$
8 months ago
EndersWorld:
Uhhhh...
8 months ago
Hero:
$(-x^2y^{-7}z^1)^2$
8 months ago
EndersWorld:
$(-x^2y^{-7}z^1)^2$
8 months ago
EndersWorld:
Yea, that
8 months ago
Hero:
First thing I would do since the whole thing is squared is this: $$(-x^2y^{-7}z^1)^2 = (-x^2y^{-7}z^1)(-x^2y^{-7}z^1)$$
8 months ago
Hero:
Hopefully you understand why that is necessary.
8 months ago
EndersWorld:
Multiply everything?
8 months ago
Hero:
Basically if you see a 2 outside the parentheses, it means to multiply whatever is in the parentheses twice.
8 months ago
Hero:
The next step is to pair like terms together: $$(-x^2y^{-7}z^1)^2 = (-x^2y^{-7}z^1)(-x^2y^{-7}z^1)$$ $$=(-x^2)(-x^2) \cdot (y^{-7})(y^{-7})\cdot(z^1)(z^1)$$
8 months ago
EndersWorld:
Aren’t the two z’s useless because they are $^1$ so they equal 1?
8 months ago
Hero:
And then go from there. There are three things to know how to do 1. How to multiply two negative numbers 2. How to multiply two exponents 3. How to multiply negative exponents
8 months ago
Hero:
Only in division will $$\dfrac{z}{z} = 1$$. Multiplying two $$z$$'s is different.
8 months ago
EndersWorld:
Two negatives multiplying is a positive. And I believe you add the exponent.
8 months ago
Hero:
Sounds good so far. What do you do with the negative exponents?
8 months ago
EndersWorld:
You add them and flip the sign to positive
8 months ago
Hero:
Actually, here's the rule for negative exponents: $$a^{-b} = \dfrac{1}{a^b}$$ In other words, expressions with negative exponents get converted to fractions.
8 months ago
EndersWorld:
That sounds... terrifying and painful..
8 months ago
Hero:
It's neither painful nor terrifying. Simply a rule to apply.
8 months ago
EndersWorld:
So... $\frac{ 1 }{ y^7}$
8 months ago
Hero:
yes $$y^{-7} = \dfrac{1}{y^7}$$
8 months ago
EndersWorld:
So I have it set up, now do I just combine like terms?
8 months ago
Hero:
Yes, go ahead and attempt to finish this. Post your result below.
8 months ago
EndersWorld:
$x^4\frac{ 1 }{ y ^{14}}$
8 months ago
Hero:
What happened to the z's? I tried to help you understand that you don't eliminate them. The rule for adding exponents still applies to the z's.
8 months ago
EndersWorld:
I told you I’m not the brightest LOL
8 months ago
EndersWorld:
No spamming :0 $x^4\frac{ 1 }{ y^14}z^2$
8 months ago
Hero:
When you express the result it should be expressed as one fraction with all the appropriate expressions in the numerator and denominator of the fraction.
8 months ago
Hero:
Remember that $$a \times \dfrac{1}{b} = \dfrac{a}{b}$$
8 months ago
Hero:
@EndersWorld I'm giving you an opportunity to express the result in the correct form.
8 months ago
EndersWorld:
$\frac{ x^4z^2 }{ y^14 }$
8 months ago
Hero:
\frac{ x^4z^2 }{ y^{14} }
8 months ago
Hero:
^Showing you the correct $$\LaTeX$$ format for your expression.
8 months ago
Hero:
Which produces this: $$\dfrac{ x^4z^2 }{ y^{14} }$$
8 months ago
EndersWorld:
So I was right :0
8 months ago
Hero:
Technically yes. Great job.
8 months ago
Hero:
Hopefully doing that one helped clear up some of your "confusion"
8 months ago
EndersWorld:
Got a different type of radical next.
8 months ago
|
# zbMATH — the first resource for mathematics
Differentiability of solutions of second-order functional differential equations with unbounded delay. (English) Zbl 1027.34090
The authors study the differentiability of solutions to second-order functional-differential equations with unbounded delay, especially, when the phase space is reflexive or at least has the Radon-Nikodym property. The obtained results are then applied to linear equations to characterize the infinitesimal generator of solution semigroups associated with the linear equations under consideration.
##### MSC:
34K30 Functional-differential equations in abstract spaces
34K05 General theory of functional-differential equations
34G10 Linear ODE in abstract spaces
##### Keywords:
asymptotic behaviour; continuous argument
|
Bujinkan
HATSUMI Masaaki studied for 15 years under TAKAMATSU Toshitsugu, the last actual ninja, known as the "Mongolian Tiger", and was the 34th Soke of Togakure-ryu Ninpo-Taijutsu, as well as the successor to 8 other styles. The Bujinkan Dojo, which was established by integrating the essence of nine schools, attracts many warriors from all over the world to seek the teachings of the unsurpassed bujutsu master HATSUMI Masaaki.
|
# Linux – X on one monitor, a bare, tty terminal on another? (linux)
display, linux, tty, xorg
The graphics card on my computer has outputs for (at least) two separate monitors. I have one monitor that is high resolution, and I like using it for X (anything graphical). My other monitor, however, is an OLD, low resolution, flat-panel monitor.
I'm wondering if it's possible to configure the monitors so that the tty terminal running X goes to monitor A (the high resolution monitor), and /dev/tty2, just running the bash shell, goes to B (the lower resolution monitor).
Would I use an xorg config file for this? I'm really not sure.
The problem you have with running the setup you mention is the keyboard. The keyboard will be captured by the x server running on your primary display (high-res). You will not be able to switch to the other terminal if you would like to type something in it.
Even if your secondary monitor is low resolution, you could run an xterm session on it that's separate from your main X screen. You will want to set up the 2 displays as completely different screens (not using xinerama). You will end up with 0.0 and 0.1 displays. Your primary display would be the 0.0, and your DISPLAY environment variable will be as follows:
export DISPLAY=:0.0
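To answer the xorg question: yes, a minimal xorg.conf along these lines sets up the two independent screens. The driver name and identifiers below are assumptions; adapt them to your hardware, and with a single dual-head card both Device sections typically need the same BusID:
Section "ServerLayout"
    Identifier "TwoScreens"
    Screen 0 "Screen0" 0 0
    Screen 1 "Screen1" RightOf "Screen0"
    Option "Xinerama" "off"   # keep the screens separate, no unified desktop
EndSection
Section "Device"
    Identifier "Card0"
    Driver "radeon"           # assumption: use your actual driver here
    Screen 0                  # first output of the dual-head card
EndSection
Section "Device"
    Identifier "Card1"
    Driver "radeon"
    Screen 1                  # second output of the same card
EndSection
Section "Screen"
    Identifier "Screen0"
    Device "Card0"
EndSection
Section "Screen"
    Identifier "Screen1"
    Device "Card1"
EndSection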
This configuration will allow you to move your mouse between the 2 screens to choose where your keyboard input will be passed. In your .xinitrc (in your home dir), you could then do something as follows:
#!/bin/bash
xsetroot -solid black
xsetroot -display :0.1 -solid darkblue
xterm -display :0.1 -fn 9x16 -geometry 86x36+1+1 &
startkde
This would start by setting the background of your primary display to black. Next it will set your secondary display background to darkblue (I use this color because I use my secondary screen for watching movies). Next line starts an xterm on your second display with a preset geometry. You will want to adjust the geometry to fit your screen the best for you. You cannot specify pixel width and height because the geometry for xterm measures in characters. If you choose the 9x16 font size as in my example and your secondary screen resolution is 800x600, you would do the following math:
font size = 9x16
screen size = 800x600
xterm width = ( 800 / 9 ) = 88.888
xterm height = ( 600 / 16 ) = 37.5
You want to round the number down some, especially for the width since you need to account for a scrollbar. You will not have a window manager on the secondary screen so there will be no xterm window title (unless you choose to run something light on the second monitor such as twm or fvwm). Basically, you will have to play with the numbers til you get it how you want it.
The last line in the .xinitrc file will launch the main window manager on your primary display. You can change this to gnome-session or whatever launches your favorite wm. You could also modify the existing .xinitrc for your distribution if you wish to preserve the ability to choose your window manager during login. There should be a skeleton file in your /etc/X11 for using as a base.
UPDATE:
Modern versions of KDE will control all screens now. You no longer need to maintain a separate window manager on the second screen. Not sure about the gnome wm since I don't use gnome.
|
# AlphaBeta pruning algorithm, returning best move.
## Recommended Posts
What's up forum!
I am on my way to implementing a simple chess AI. The score for the best move (actually the best score for a particular chess board) is returned by the AlphaBeta function. Code below.
int AlphaBeta(Node *root, int nDepth, int alfa, int beta)
{
    if(nDepth == 0)
        return Eval(root->board);
    GenMov(root);
    if(root->children.size() == 0)
        return Eval(root->board);
    for(int i = 0; i < (int)root->children.size(); i++){
        int val = -AlphaBeta(&root->children[i], nDepth - 1, -beta, -alfa);
        if(val >= beta){
            return beta;
        }
        if(val > alfa){
            alfa = val;
        }
        //root->children.pop_back();
    }
    return alfa;
}
In order to get the coordinates of the figure which is to be moved, I have to keep the whole first level of the game tree; what's more, every node has to be assigned its Eval(board) value; finally I search for the node with the value returned by the algorithm and extract the coordinates from the Node class.
This is rather ineffective, time and memory consuming method.
How to modify AlphaBeta to get coordinates without performing all those mentioned tasks?
[Edited by - tomekdd on August 28, 2010 6:37:38 AM]
##### Share on other sites
You don't need to do any such thing, and the alpha-beta algorithm should consume very little memory.
It doesn't look like your program has a notion of "move". Instead you just have boards that are children of a board. I recommend you make a class Move and have the generator create a list of these (much lighter objects than a board). You don't even need to have more than one board at all. Simply write functions to make a move and to undo it, and keep working on the same board all the time.
You are only interested in obtaining the best move at the root of the search tree. You should write a separate function to search the root, which can return a move. Once you implement iterative deepening, time control and printing of some search information, this function will be different enough from the general alpha-beta function that the little bit of code duplication won't be a problem.
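If it helps, a rough sketch of such a root search could look like this; Board, Move, GenerateMoves, MakeMove, UndoMove and INFINITY_SCORE are illustrative placeholders for whatever you end up writing, not an existing API:
// Root search: the same loop as alpha-beta, but it remembers WHICH move was best.
const int INFINITY_SCORE = 1000000;
Move SearchRoot(Board &board, int depth)
{
    std::vector<Move> moves;
    GenerateMoves(board, moves);   // assumes at least one legal move exists
    Move best = moves[0];
    int alpha = -INFINITY_SCORE;
    for(const Move &m : moves){
        MakeMove(board, m);        // mutate the single shared board
        int val = -AlphaBeta(board, depth - 1, -INFINITY_SCORE, -alpha);
        UndoMove(board, m);        // restore it before trying the next move
        if(val > alpha){ alpha = val; best = m; }
    }
    return best;
}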
##### Share on other sites
Let's assume I don't use iterative deepening and depth of the game tree is fixed and let's put alfabeta aside for a while.
Node and Move are defined as follows:
class Node{
public:
vector<Node> children;
char board[64];
char cPlayer;
int iBoardEval;
Move mov;
};
class Move{
public:
int iNewPosition;
int iOldPosition;
};
Function genMoves(char *board, char cPlayer) generates all possible moves and boards for a given player and stores results in vector<Node> children.
But this way only one level of the game tree is created for a particular node.
How to recursively create n-levels of game tree?
##### Share on other sites
You seem to have this notion that you need to have the tree in memory and then search it, but this is not the case.
The Move class I am thinking of would look something like this:
enum MoveType {
    MT_Normal,
    MT_PawnTwoStepsForward,
    MT_EnPassant,
    MT_Castle,
    MT_PromoteQueen,
    MT_PromoteKnight,
    MT_PromoteRook,
    MT_PromoteBishop
};
struct Move {
    Square from, to;
    MoveType move_type;
};
When you generate moves from a position, you only create these little (2 to 12 bytes) structures that describe only the move, not the resulting board. Then you can write a function that performs a move on a board and a function that undoes the effects of a move on a board.
With those tools in hand, you can write your search as a depth-first traversal of the tree using a recursive function, and you don't need to ever generate the whole tree in memory.
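To sketch that concretely (again with illustrative placeholder names, and the same caveats as in my previous post):
// Depth-first negamax alpha-beta over one shared board: make a move,
// recurse, undo it; the tree is never stored in memory.
int AlphaBeta(Board &board, int depth, int alpha, int beta)
{
    if(depth == 0)
        return Eval(board);
    std::vector<Move> moves;
    GenerateMoves(board, moves);   // only light Move structs are created
    if(moves.empty())
        return Eval(board);
    for(const Move &m : moves){
        MakeMove(board, m);
        int val = -AlphaBeta(board, depth - 1, -beta, -alpha);
        UndoMove(board, m);
        if(val >= beta)
            return beta;           // fail-hard beta cutoff
        if(val > alpha)
            alpha = val;
    }
    return alpha;
}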
##### Share on other sites
To tell the truth I am not a pro in implementing graph searching algorithms, but as far as I know, to use BFS the game tree has to be created at the very beginning (or a vector with all vertices?)
So eventually how is it going to differ from AlphaBeta? Both algorithms are meant to get the best board evaluation and return it to the root. I don't see the point..
What's more, I still can't figure out how one board is to suffice. If it's player A's turn I have to create all possible moves, then for every move I create all possible moves which apply to player B and so on... And eventually I will dig down to the leaves of the game tree.
Here's my slightly modified AlphaBeta :
Move AlfaBeta(Node *root, int nDepth, int alfa, int beta, int x, int y)
{
    Move tmp;
    if(nDepth == 0){
        tmp.iFinalPos = y;
        tmp.iInitPos = x;
        tmp.iEval = Eval(root->board);
        return tmp;
    }else
        GenMoves(root);
    if(root->children.size() == 0){
        tmp.iFinalPos = y;
        tmp.iInitPos = x;
        tmp.iEval = Eval(root->board);
        return tmp;
    }
    for(int i = 0; i < (int)root->children.size(); i++){
        Move v = AlfaBeta(&root->children[i], nDepth-1, -beta, -alfa,
                          root->children[i].move.iInitPos, root->children[i].move.iFinalPos);
        v.iEval = -v.iEval; //NegaMax??
        if(v.iEval >= beta){
            tmp.iEval = beta;
            tmp.iFinalPos = y;
            tmp.iInitPos = x;
            return tmp;
        }
        if(v.iEval > alfa){
            alfa = v.iEval;
            tmp.iFinalPos = y;
            tmp.iInitPos = x;
        }
    }
    tmp.iEval = alfa;
    return tmp;
}
class Move{
public:
    int iFinalPos;
    int iInitPos;
    int iEval;
    Move(){ iFinalPos = iInitPos = iEval = 0; }
};
class Node{
public:
    vector<Node> children;
    char board[64];
    char cPlayer;
    Move move;
    Node(char *b, char cPl, Move m){
        memcpy(board,b,64);
        cPlayer = cPl;
        move = m;
    }
};
AlphaBeta seems to return a valid evaluation of the board, but it doesn't work for InitPos and FinalPos.
This is how I see the solution to my problem.
Your method is probably way better, but I just don't get it, so if you can give some code snippets or a deeper explanation of your idea, or tell me what's wrong with my code, I would appreciate it!
##### Share on other sites
I am sort of busy this weekend, but I'll try to find some time to implement a simpler game using the ideas I described, and I'll try to keep it understandable.
##### Share on other sites
Quote:
Original post by tomekdd: To tell the truth I am not a pro in implementing graph searching algorithms, but as far as I know, to use BFS the game tree has to be created at the very beginning (or a vector with all vertices?)
I'll try to clarify alvaro's point with this analogy. If you have the set of all integers from zero to two million, do you first have to put all those numbers into a std::vector to be able to go through each of them and tell which of them are prime numbers? Of course not, since you can generate the numbers on demand: after having checked number 'i' for primality, you know you need to check the number 'i+1' next, unless 'i+1' > 2,000,000 when you stop. So, you just increment the previously checked number by one, and call the CheckIfPrime() function again for that newly generated number.
How did you avoid having to put all these numbers to a std::vector, but still be able to operate on them? You defined a Successor function which tells you the successor, or the next element, that comes after the element 'i', namely 'i+1'.
With minimax game tree search, it is no different. You do not have to put your whole game tree into memory to be able to operate on the tree. At any given time, you only need to look at one board state, so all you need to keep in memory is that board state. The rules of the game give you the equivalent of the Successor function, but this time, there are multiple Successors (children) of the given state, so you do need to keep track of which children you have already searched, and which you haven't. But this is very little amount of memory (at minimum, just a single integer at each parent of the current node telling which child index to look at next)
Using the rules of the game, you can transform the current board state to a next board state, on demand, with a legal Move action. Minimax is a bit more complicated than the prime-search example above, in that we have to be able to go back up the tree as well, so we solve this by generating Undo moves that match each Move action we have made.
Using the Move and Undo actions, we can successfully navigate the game tree without ever having to hold more than a single game board state in memory at once. To be able to quickly tell which node to look at next when we make an Undo move, we have to have some book-keeping information for each parent of the current node, but that is a very small amount of memory (only #current-search-depth entries of stack memory).
I hope that was a successful analogy.
##### Share on other sites
After reading a few papers on 'introductory to graph algorithms'
and your posts over and over again, I worked out a neat solution to my problem. Works perfectly fine!
Thanks for help folks.
|
Global stability of an age-structured cholera model
MBE 2014, 11(3): 641. doi: 10.3934/mbe.2014.11.641
In this paper, an age-structured epidemic model is formulated to describe the transmission dynamics of cholera. The PDE model incorporates direct and indirect transmission pathways, infection-age-dependent infectivity and variable periods of infectiousness. Under some suitable assumptions, the PDE model can be reduced to the multi-stage models investigated in the literature. By using the method of Lyapunov functions, we establish the dynamical properties of the PDE model, and the results show that the global dynamics of the model is completely determined by the basic reproduction number $\mathcal R_0$: if $\mathcal R_0 < 1$ the cholera dies out, and if $\mathcal R_0 > 1$ the disease will persist at the endemic equilibrium. Then the global results obtained for multi-stage models are extended to the general continuous age model.
|
# Testing Kasiski Test and Mutual Index of Coincidence on Vigenere Encryption on Vigenere Encryption?
I was learning about Vigenere ciphers and their various cryptanalytic attacks. Two of them are the Kasiski test and the Mutual Index of Coincidence. So, I was wondering: can the Kasiski test and the Mutual Index of Coincidence crack the Vigenere encryption of a Vigenere encryption (the keys may be different or the same), i.e. plaintext encrypted using the Vigenere cipher and the output of this encrypted again using the Vigenere cipher? Will such a scheme create appropriate confusion for the attacker or will it act as a vulnerability?
The Vigenere algorithm supposes that Alice wants to send a message (M) to Bob. The message is written in some language and has a length ML (the symbols that compose the message have a known frequency and belong to an alphabet A with a given size AL). To encrypt a message Alice uses a key (K), which in the Vigenere algorithm is a sequence of symbols called the worm, shorter (by definition) than the message; both Alice and Bob know K, and it does not need to be exchanged. The encryption is applied by adding to each message symbol the value of the corresponding worm symbol (the worm could be completely random), modulo AL.
Eg :
M = "hello world today is fine"
ML = 25
K = "abc"
KL = 3
M as values :
68 65 6C 6C 6F 20 77 6F 72 6C 64 20 74 6F 64 61 79 20 69 73 20 66 69 6E 65
K as values :
61 62 63
As you can see, because the worm is shorter than the message (by definition) we need to repeat it again and again to reach ML.
M as values :
68 65 6C 6C 6F 20 77 6F 72 6C 64 20 74 6F 64 61 79 20 69 73 20 66 69 6E 65
K as values (repeated) :
61 62 63 61 62 63 61 62 63 61 62 63 61 62 63 61 62 63 61 62 63 61 62 63 61
What we obtain is indeed a sequence of three Caesar encryptions (in the general case the number of Caesar encryptions is KL, the key length):
This is the first Caesar
M as values :
68 65 6C 6C 6F 20 77 6F 72 6C 64 20 74 6F 64 61 79 20 69 73 20 66 69 6E 65
| | | | | | | | |
K as values (repeated) :
61 62 63 61 62 63 61 62 63 61 62 63 61 62 63 61 62 63 61 62 63 61 62 63 61
Caesar SubMessage : 68 6C 77 6C 74 61 69 66 65
Caesar Key : 61
This is the second Caesar
M as values :
68 65 6C 6C 6F 20 77 6F 72 6C 64 20 74 6F 64 61 79 20 69 73 20 66 69 6E 65
| | | | | | | |
K as values (repeated) :
61 62 63 61 62 63 61 62 63 61 62 63 61 62 63 61 62 63 61 62 63 61 62 63 61
Caesar SubMessage : 65 6F 6F 64 6F 79 73 69
Caesar Key : 62
This is the third Caesar
M as values :
68 65 6C 6C 6F 20 77 6F 72 6C 64 20 74 6F 64 61 79 20 69 73 20 66 69 6E 65
| | | | | | | |
K as values (repeated) :
61 62 63 61 62 63 61 62 63 61 62 63 61 62 63 61 62 63 61 62 63 61 62 63 61
Caesar SubMessage : 6C 20 72 20 64 20 20 6E
Caesar Key : 63
Symbol frequencies of the main message remain the same if we "regularly split" it into the submessages, so the classical attacks on Caesar (the ones based on frequency) are still applicable to the three Caesar submessages.
As you can imagine, if you double the Vigenere encryption the complexity of the attack does not change (e.g. add K1 = "def").
M as values :
68 65 6C 6C 6F 20 77 6F 72 6C 64 20 74 6F 64 61 79 20 69 73 20 66 69 6E 65
K as values :
61 62 63
K1 as values :
64 65 66
This is because Vigenere encryption is composed of Caesar encryptions, and Caesar encryption (modular addition) is associative, so:
(( 68 + 61 ) mod AL + 64) mod AL = ( 68 + ( 61 + 64 ) mod AL ) mod AL
So a double encryption is equivalent to a change of the key in a single encryption, and the key's nature (random or not) does not fix any security problem of the algorithm.
In the example above (about the double encryption) the resulting key, obtained by combining K and K1, is called K2 (I have assumed that AL = 256 and A is the ASCII table):
K2 = C5 C7 C9
So the double encryption is equivalent to a single instance like the one below:
M as values :
68 65 6C 6C 6F 20 77 6F 72 6C 64 20 74 6F 64 61 79 20 69 73 20 66 69 6E 65
K2 as values (repeated) :
C5 C7 C9 C5 C7 C9 C5 C7 C9 C5 C7 C9 C5 C7 C9 C5 C7 C9 C5 C7 C9 C5 C7 C9 C5
That instance is certainly not stronger than the first one.
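A quick way to convince yourself of this equivalence is a few lines of Python (my own sketch, assuming AL = 256 as above; it works directly here because K and K1 have the same length):
def vigenere(msg, key, al=256):
    # add the repeated key to the message, symbol by symbol, modulo al
    return bytes((m + key[i % len(key)]) % al for i, m in enumerate(msg))

M  = b"hello world today is fine"
K  = b"abc"
K1 = b"def"
K2 = bytes((a + b) % 256 for a, b in zip(K, K1))   # gives C5 C7 C9

# double encryption with K then K1 equals single encryption with K2
assert vigenere(vigenere(M, K), K1) == vigenere(M, K2)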
• So you mean, that even if the Vigenere encryption is performed twice, the properties won't change thus making the tests easier and not affecting any complexity of the algorithm and thus two times encryption is of no significance? – kiner_shah Oct 3 '16 at 14:02
• The only change is that the encryption is more expensive for Alice and Bob, but for Eve the cryptanalysis is the same. As shown in the post, double encryption is equivalent to a single encryption with a different key. That gives no security gain in practice. – Skary Oct 3 '16 at 16:14
|
# Examples with tutorial¶
The online version of this tutorial contains embedded videos.
## Bouncing sphere¶
Following example is in file doc/sphinx/tutorial/01-bouncing-sphere.py.
# basic simulation showing a sphere falling under gravity,
# bouncing against another sphere representing the support
# DATA COMPONENTS
# add 2 particles to the simulation
# they have the default material (utils.defaultMat)
O.bodies.append([
    # fixed: particle's position in space will not change (support)
    sphere((0,0,0),.5,fixed=True),
    # this particle is free, subject to dynamics
    sphere((0,0,2),.5)
])
# FUNCTIONAL COMPONENTS
# simulation loop -- see presentation for the explanation
O.engines=[
ForceResetter(),
InsertionSortCollider([Bo1_Sphere_Aabb()]),
InteractionLoop(
[Ig2_Sphere_Sphere_ScGeom()], # collision geometry
[Ip2_FrictMat_FrictMat_FrictPhys()], # collision "physics"
[Law2_ScGeom_FrictPhys_CundallStrack()] # contact law -- apply forces
),
# Apply gravity force to particles. damping: numerical dissipation of energy.
NewtonIntegrator(gravity=(0,0,-9.81),damping=0.1)
]
# set timestep to a fraction of the critical timestep
# the fraction is very small, so that the simulation is not too fast
# and the motion can be observed
O.dt=.5e-4*PWaveTimeStep()
# save the simulation, so that it can be reloaded later, for experimentation
O.saveTmp()
## Gravity deposition¶
Following example is in file doc/sphinx/tutorial/02-gravity-deposition.py.
# gravity deposition in box, showing how to plot and save history of data,
# and how to control the simulation while it is running by calling
# python functions from within the simulation loop
# import yade modules that we will use below
from yade import pack, plot
# create rectangular box from facets
O.bodies.append(geom.facetBox((.5,.5,.5),(.5,.5,.5),wallMask=31))
# create empty sphere packing
# sphere packing is not equivalent to particles in simulation, it contains only the pure geometry
sp=pack.SpherePack()
# generate randomly spheres with uniform radius distribution
sp.makeCloud((0,0,0),(1,1,1),rMean=.05,rRelFuzz=.5)
# add the sphere pack to the simulation
sp.toSimulation()
O.engines=[
ForceResetter(),
InsertionSortCollider([Bo1_Sphere_Aabb(),Bo1_Facet_Aabb()]),
InteractionLoop(
# handle sphere+sphere and facet+sphere collisions
[Ig2_Sphere_Sphere_ScGeom(),Ig2_Facet_Sphere_ScGeom()],
[Ip2_FrictMat_FrictMat_FrictPhys()],
[Law2_ScGeom_FrictPhys_CundallStrack()]
),
NewtonIntegrator(gravity=(0,0,-9.81),damping=0.4),
# call the checkUnbalanced function (defined below) every 2 seconds
PyRunner(command='checkUnbalanced()',realPeriod=2),
# call the addPlotData function every 200 steps
PyRunner(command='addPlotData()',iterPeriod=200)
]
O.dt=.5*PWaveTimeStep()
# enable energy tracking; any simulation parts supporting it
# can create and update arbitrary energy types, which can be
# accessed as O.energy['energyName'] subsequently
O.trackEnergy=True
# if the unbalanced force goes below .05, the packing
# is considered stabilized, therefore we stop collecting
# data history and stop the simulation
def checkUnbalanced():
    if unbalancedForce()<.05:
        O.pause()
        plot.saveDataTxt('bbb.txt.bz2')
        # plot.saveGnuplot('bbb') is also possible
# collect history of data which will be plotted
# each item is given a name, by which it can then be used in plot.plots
# the **O.energy converts dictionary-like O.energy to plot.addData arguments
def addPlotData():
    plot.addData(i=O.iter,unbalanced=unbalancedForce(),**O.energy)
# define how to plot data: 'i' (step number) on the x-axis, unbalanced force
# on the left y-axis, all energies on the right y-axis
# (O.energy.keys is function which will be called to get all defined energies)
# None separates left and right y-axis
plot.plots={'i':('unbalanced',None,O.energy.keys)}
# show the plot on the screen, and update while the simulation runs
plot.plot()
O.saveTmp()
## Oedometric test¶
Following example is in file doc/sphinx/tutorial/03-oedometric-test.py.
# gravity deposition, continuing with oedometric test after stabilization
# shows also how to run parametric studies with yade-batch
# The components of the batch are:
# 1. table with parameters, one set of parameters per line (ccc.table)
# 2. readParamsFromTable which reads respective line from the parameter file
# 3. the simulation muse be run using yade-batch, not yade
#
#
# load parameters from file if run in batch
# default values are used if not run from batch
readParamsFromTable(rMean=.05,rRelFuzz=.3,maxLoad=1e6,unknownOk=True)
# make rMean, rRelFuzz, maxLoad accessible directly as variables later
from yade.params.table import *
# create box with free top, and create loose packing inside the box
from yade import pack, plot
O.bodies.append(geom.facetBox((.5,.5,.5),(.5,.5,.5),wallMask=31))
sp=pack.SpherePack()
sp.makeCloud((0,0,0),(1,1,1),rMean=rMean,rRelFuzz=rRelFuzz)
sp.toSimulation()
O.engines=[
ForceResetter(),
# sphere, facet, wall
InsertionSortCollider([Bo1_Sphere_Aabb(),Bo1_Facet_Aabb(),Bo1_Wall_Aabb()]),
InteractionLoop(
# the loading plate is a wall, we need to handle sphere+sphere, sphere+facet, sphere+wall
[Ig2_Sphere_Sphere_ScGeom(),Ig2_Facet_Sphere_ScGeom(),Ig2_Wall_Sphere_ScGeom()],
[Ip2_FrictMat_FrictMat_FrictPhys()],
[Law2_ScGeom_FrictPhys_CundallStrack()]
),
NewtonIntegrator(gravity=(0,0,-9.81),damping=0.5),
# the label creates an automatic variable referring to this engine
# we use it below to change its attributes from the functions called
PyRunner(command='checkUnbalanced()',realPeriod=2,label='checker'),
]
O.dt=.5*PWaveTimeStep()
# the following checkUnbalanced, unloadPlate and stopUnloading functions are all called by the 'checker'
# (the last engine) one after another; this sequence defines progression of different stages of the
# simulation, as each of the functions, when the condition is satisfied, updates 'checker' to call
# the next function when it is run from within the simulation next time
# check whether the gravity deposition has already finished
# if so, add wall on the top of the packing and start the oedometric test
def checkUnbalanced():
    # at the very start, unbalanced force can be low as there are only few contacts, but it does not mean the packing is stable
    if O.iter<5000: return
    # the rest will be run only if unbalanced is < .1 (stabilized packing)
    if unbalancedForce()>.1: return
    # add plate at the position on the top of the packing
    # the maximum finds the z-coordinate of the top of the topmost particle
    O.bodies.append(wall(max([b.state.pos[2]+b.shape.radius for b in O.bodies if isinstance(b.shape,Sphere)]),axis=2,sense=-1))
    global plate # without this line, the plate variable would only exist inside this function
    plate=O.bodies[-1] # the last particle is the plate
    # Wall objects are "fixed" by default, i.e. not subject to forces
    # prescribing a velocity will therefore make it move at constant velocity (downwards)
    plate.state.vel=(0,0,-.1)
    # start plotting the data now, it was not interesting before
    O.engines=O.engines+[PyRunner(command='addPlotData()',iterPeriod=200)]
    # next time, do not call this function anymore, but the next one (unloadPlate) instead
    checker.command='unloadPlate()'

def unloadPlate():
    # if the force on plate exceeds maximum load, start unloading
    if abs(O.forces.f(plate.id)[2])>maxLoad:
        plate.state.vel*=-1
        # next time, do not call this function anymore, but the next one (stopUnloading) instead
        checker.command='stopUnloading()'

def stopUnloading():
    if abs(O.forces.f(plate.id)[2])==0:
        # O.tags can be used to retrieve unique identifiers of the simulation
        # if running in batch, subsequent simulation would overwrite each other's output files otherwise
        # d (or description) is simulation description (composed of parameter values)
        # while the id is composed of time and process number
        plot.saveDataTxt(O.tags['d.id']+'.txt')
        O.pause()

def addPlotData():
    if not isinstance(O.bodies[-1].shape,Wall):
        plot.addData(); return
    Fz=O.forces.f(plate.id)[2]
    plot.addData(Fz=Fz,w=plate.state.pos[2]-plate.refPos[2],unbalanced=unbalancedForce(),i=O.iter)

# besides unbalanced force evolution, also plot the displacement-force diagram
plot.plots={'i':('unbalanced',),'w':('Fz',)}
plot.plot()
O.run()
# when running with yade-batch, the script must not finish until the simulation is done fully
# this command will wait for that (has no influence in the non-batch mode)
waitIfBatch()
### Batch table¶
To run the same script doc/sphinx/tutorial/03-oedometric-test.py in batch mode to test different parameters, execute command yade-batch 03-oedometric-test.table 03-oedometric-test.py, also visit page http://localhost:9080 to see the batch simulation progress.
rMean rRelFuzz maxLoad
.05 .1 1e6
.05 .2 1e6
.05 .3 1e6
## Periodic simple shear¶
Following example is in file doc/sphinx/tutorial/04-periodic-simple-shear.py.
# encoding: utf-8
# script for periodic simple shear test, with periodic boundary
# first compresses to attain some isotropic stress (checkStress),
# then loads in shear (checkDistorsion)
#
# the initial packing is either regular (hexagonal), with empty bands along the boundary,
# or periodic random cloud of spheres
#
# material friction angle is initially set to zero, so that the resulting packing is dense
# (sphere rearrangement is easier if there is no friction)
#
# setup the periodic boundary
from __future__ import print_function
O.periodic=True
O.cell.refSize=(2,2,2)
from yade import pack,plot
# the "if 0:" block will be never executed, therefore the "else:" block will be
# to use cloud instead of regular packing, change to "if 1:" or something similar
if 0:
    # create cloud of spheres and insert them into the simulation
    # we give corners, mean radius, radius variation
    sp=pack.SpherePack()
    sp.makeCloud((0,0,0),(2,2,2),rMean=.1,rRelFuzz=.6,periodic=True)
    # insert the packing into the simulation
    sp.toSimulation(color=(0,0,1)) # pure blue
else:
    # in this case, add dense packing
    O.bodies.append(
        regularHexa(pack.inAlignedBox((0,0,0),(2,2,2)),radius=.1,gap=0,color=(0,0,1))
    )
# create "dense" packing by setting friction to zero initially
O.materials[0].frictionAngle=0
# simulation loop (will be run at every step)
O.engines=[
ForceResetter(),
InsertionSortCollider([Bo1_Sphere_Aabb()]),
InteractionLoop(
[Ig2_Sphere_Sphere_ScGeom()],
[Ip2_FrictMat_FrictMat_FrictPhys()],
[Law2_ScGeom_FrictPhys_CundallStrack()]
),
NewtonIntegrator(damping=.4),
# run checkStress function (defined below) every second
# the label is arbitrary, and is used later to refer to this engine
PyRunner(command='checkStress()',realPeriod=1,label='checker'),
# record data for plotting every 100 steps; addData function is defined below
PyRunner(command='addData()',iterPeriod=100)
]
# set the integration timestep to be 1/2 of the "critical" timestep
O.dt=.5*PWaveTimeStep()
# prescribe isotropic normal deformation (constant strain rate)
# of the periodic cell
O.cell.velGrad=Matrix3(-.1,0,0, 0,-.1,0, 0,0,-.1)
# when to stop the isotropic compression (used inside checkStress)
limitMeanStress=-5e5
# called every second by the PyRunner engine
def checkStress():
    # stress tensor as the sum of normal and shear contributions
    # Matrix3.Zero is the initial value for sum(...)
    stress=sum(normalShearStressTensors(),Matrix3.Zero)
    print('mean stress',stress.trace()/3.)
    # if mean stress is below (bigger in absolute value) limitMeanStress, start shearing
    if stress.trace()/3.<limitMeanStress:
        # apply constant-rate distorsion on the periodic cell
        O.cell.velGrad=Matrix3(0,0,.1, 0,0,0, 0,0,0)
        # change the function called by the checker engine
        # (checkStress will not be called anymore)
        checker.command='checkDistorsion()'
        # block rotations of particles to increase tanPhi, if desired
        # disabled by default
        if 0:
            for b in O.bodies:
                # block X,Y,Z rotations, translations are free
                b.state.blockedDOFs='XYZ'
                # stop rotations if any, as blockedDOFs block accelerations really
                b.state.angVel=(0,0,0)
        # set friction angle back to non-zero value
        # tangensOfFrictionAngle is computed by the Ip2_* functor from material
        # for future contacts change material (there is only one material for all particles)
        O.materials[0].frictionAngle=.5 # radians
        # for existing contacts, set contact friction directly
        for i in O.interactions: i.phys.tangensOfFrictionAngle=tan(.5)
# called from the 'checker' engine periodically, during the shear phase
def checkDistorsion():
    # if the distorsion value is >.5, exit; otherwise do nothing
    if abs(O.cell.trsf[0,2])>.5:
        # save data from addData(...) before exiting into file
        # use O.tags['id'] to distinguish individual runs of the same simulation
        plot.saveDataTxt(O.tags['id']+'.txt')
        # exit the program
        #import sys
        #sys.exit(0) # no error (0)
        O.pause()
# called periodically to store data history
def addData():
    # get the stress tensor (as 3x3 matrix)
    stress=sum(normalShearStressTensors(),Matrix3.Zero)
    # give names to values we are interested in and save them
    plot.addData(exz=O.cell.trsf[0,2],szz=stress[2,2],sxz=stress[0,2],tanPhi=(stress[0,2]/stress[2,2]) if stress[2,2]!=0 else 0,i=O.iter)
    # color particles based on rotation amount
    for b in O.bodies:
        # rot() gives rotation vector between reference and current position
        b.shape.color=scalarOnColorScale(b.state.rot().norm(),0,pi/2.)
# define what to plot (3 plots in total)
## exz(i), [left y axis, separate by None:] szz(i), sxz(i)
## szz(exz), sxz(exz)
## tanPhi(i)
# note the space in 'i ' so that it does not overwrite the 'i' entry
plot.plots={'i':('exz',None,'szz','sxz'),'exz':('szz','sxz'),'i ':('tanPhi',)}
# better show rotation of particles
Gl1_Sphere.stripes=True
# open the plot on the screen
plot.plot()
O.saveTmp()
## 3d postprocessing¶
Following example is in file doc/sphinx/tutorial/05-3d-postprocessing.py. This example will run for 20000 iterations, saving *.png snapshots, then it will make a video 3d.mpeg out of those snapshots.
# demonstrate 3d postprocessing with yade
#
# 1. qt.SnapshotEngine saves images of the 3d view as it appears on the screen periodically
# makeVideo is then used to make real movie from those images
# 2. VTKRecorder saves data in files which can be opened with Paraview
# see the User's manual for an intro to Paraview
# generate loose packing
from yade import pack, qt
sp=pack.SpherePack()
sp.makeCloud((0,0,0),(2,2,2),rMean=.1,rRelFuzz=.6,periodic=True)
# add to scene, make it periodic
sp.toSimulation()
O.engines=[
ForceResetter(),
InsertionSortCollider([Bo1_Sphere_Aabb()]),
InteractionLoop(
[Ig2_Sphere_Sphere_ScGeom()],
[Ip2_FrictMat_FrictMat_FrictPhys()],
[Law2_ScGeom_FrictPhys_CundallStrack()]
),
NewtonIntegrator(damping=.4),
# save data for Paraview
VTKRecorder(fileName='3d-vtk-',recorders=['all'],iterPeriod=1000),
# save data from Yade's own 3d view
qt.SnapshotEngine(fileBase='3d-',iterPeriod=200,label='snapshot'),
# this engine will be called after 20000 steps, only once
PyRunner(command='finish()',iterPeriod=20000)
]
O.dt=.5*PWaveTimeStep()
# prescribe constant-strain deformation of the cell
O.cell.velGrad=Matrix3(-.1,0,0, 0,-.1,0, 0,0,-.1)
# we must open the view explicitly (limitation of the qt.SnapshotEngine)
qt.View()
# this function is called when the simulation is finished
def finish():
    # snapshot is label of qt.SnapshotEngine
    # the 'snapshots' attribute contains list of all saved files
    makeVideo(snapshot.snapshots,'3d.mpeg',fps=10,bps=10000)
    O.pause()
# set parameters of the renderer, to show network chains rather than particles
# these settings are accessible from the Controller window, on the second tab ("Display") as well
rr=yade.qt.Renderer()
rr.shape=False
rr.intrPhys=True
## Periodic triaxial test¶
Following example is in file doc/sphinx/tutorial/06-periodic-triaxial-test.py.
# encoding: utf-8
# periodic triaxial test simulation
#
# The initial packing is either
#
# 1. random cloud with uniform distribution, or
# 2. cloud with specified granulometry (radii and percentages), or
# 3. cloud of clumps, i.e. rigid aggregates of several particles
#
# The triaxial consists of 2 stages:
#
# 1. isotropic compaction, until sigmaIso is reached in all directions;
# this stage is ended by calling compactionFinished()
# 2. constant-strain deformation along the z-axis, while maintaining
# constant stress (sigmaIso) laterally; this stage is ended by calling
# triaxFinished()
#
# Controlling of strain and stresses is performed via PeriTriaxController,
# of which parameters determine type of control and also stability
# condition (maxUnbalanced) so that the packing is considered stabilized
# and the stage is done.
#
from __future__ import print_function
sigmaIso=-1e5
#import matplotlib
#matplotlib.use('Agg')
# generate loose packing
from yade import pack, qt, plot
O.periodic=True
sp=pack.SpherePack()
if 0:
    ## uniform distribution
    sp.makeCloud((0,0,0),(2,2,2),rMean=.1,rRelFuzz=.3,periodic=True)
else:
    ## create packing from clumps
    # configuration of one clump
    c1=pack.SpherePack([((0,0,0),.03333),((.03,0,0),.017),((0,.03,0),.017)])
    # make cloud using the configuration c1 (there could be c2, c3, ...; selection between them would be random)
    sp.makeClumpCloud((0,0,0),(2,2,2),[c1],periodic=True,num=500)
# setup periodic boundary, insert the packing
sp.toSimulation()
O.engines=[
ForceResetter(),
InsertionSortCollider([Bo1_Sphere_Aabb()]),
InteractionLoop(
[Ig2_Sphere_Sphere_ScGeom()],
[Ip2_FrictMat_FrictMat_FrictPhys()],
[Law2_ScGeom_FrictPhys_CundallStrack()]
),
PeriTriaxController(label='triax',
    # specify target values and whether they are strains or stresses
    goal=(sigmaIso,sigmaIso,sigmaIso),stressMask=7,
    # type of servo-control
    dynCell=True,maxStrainRate=(10,10,10),
    # wait until the unbalanced force goes below this value
    maxUnbalanced=.1,relStressTol=1e-3,
    # call this function when goal is reached and the packing is stable
    doneHook='compactionFinished()'
),
NewtonIntegrator(damping=.2),
PyRunner(command='addPlotData()',iterPeriod=100),
]
O.dt=.5*PWaveTimeStep()
def addPlotData():
    plot.addData(unbalanced=unbalancedForce(),i=O.iter,
        sxx=triax.stress[0],syy=triax.stress[1],szz=triax.stress[2],
        exx=triax.strain[0],eyy=triax.strain[1],ezz=triax.strain[2],
        # save all available energy data
        Etot=O.energy.total(),**O.energy
    )
# enable energy tracking in the code
O.trackEnergy=True
# define what to plot
plot.plots={'i':('unbalanced',),'i ':('sxx','syy','szz'),' i':('exx','eyy','ezz'),
# energy plot
' i ':(O.energy.keys,None,'Etot'),
}
# show the plot
plot.plot()
def compactionFinished():
    # set the current cell configuration to be the reference one
    O.cell.trsf=Matrix3.Identity
    # change control type: keep constant confinement in x,y, 20% compression in z
    triax.goal=(sigmaIso,sigmaIso,-.2)
|
# Coding challenge
The "golden ratio" is worth reading about on Wikipedia. As a number, it is $\rho=\frac{1}{2}(1+\sqrt{5})$, where $\rho$ is the golden ratio. Write code to show that
1. $\rho=\sqrt{1+\sqrt{1+\sqrt{1+\sqrt{1+...}}}}$
2. $\rho^n=\rho^{n-1}+\rho^{n-2}$
3. $\rho=\rho^{-1}+\rho^{-2}+\rho^{-3}+...$
(From Cheney, p. 669.)
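One possible Python sketch (the iteration counts and tolerance are my own choices):
from math import sqrt

rho = (1 + sqrt(5)) / 2

# 1. nested radicals: iterate x -> sqrt(1 + x); the fixed point is rho
x = 1.0
for _ in range(50):
    x = sqrt(1 + x)
print(x, rho)   # both print ~1.618033988749895

# 2. rho**n == rho**(n-1) + rho**(n-2), checked for a few n
for n in range(2, 10):
    assert abs(rho**n - (rho**(n - 1) + rho**(n - 2))) < 1e-9

# 3. the series of inverse powers converges to rho
s = sum(rho**(-k) for k in range(1, 60))
print(s, rho)   # the partial sum is already ~rho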
|
# Weak duality for packing edge-disjoint odd trails
Despite Menger’s famous duality between packings and coverings of $$(u, v)$$-paths in a graph, there is no duality when we require the paths be odd: a graph with no two edge-disjoint odd $$(u, v)$$-paths may need an arbitrarily large number of edges to cover all such paths. In this paper, we study the relaxed problem of packing odd trails. Our main result is an approximate duality for odd trails: if $$\nu(u, v)$$ denotes the maximum number of edge-disjoint $$(u, v)$$-trails of odd length in a graph $$G$$ and $$\tau(u, v)$$ denotes the minimum number of edges that intersect every such trail, then $$\nu(u,v) \leq \tau(u, v) \leq 8\nu(u,v).$$ The proof leads to a polynomial-time algorithm to find, for any given $$k$$, either $$k$$ edge-disjoint odd $$(u, v)$$-trails or a set of fewer than $$8k$$ edges intersecting all odd $$(u, v)$$-trails. This yields a constant factor approximation algorithm for the packing number $$\nu(u, v)$$.
This result generalizes to the setting of signed graphs and to the setting of group-labelled graphs, in which case “odd length” is replaced by “non-unit product of labels”. The motivation for this result comes from the study of totally odd graph immersions, and our results explain, in particular, why there is an essential difference between the totally odd weak and strong immersions.
Weak duality for packing edge-disjoint odd (u, v)-trails. Ross Churchley, Bojan Mohar, and Hehui Wu. Proceedings of the Twenty-Seventh Annual ACM-SIAM Symposium on Discrete Algorithms (SODA 2016), 2086–2094. January 2016.
|
# Language
The CASM language is an Abstract State Machine (ASM) modeling and specification language. Its syntax is influenced in part by CoreASM and other ASM languages, but it also includes language concepts that are well known from modern programming languages. Unlike existing ASM implementations, CASM is a statically typed ASM language with strong type inference. Before we describe the various language constructs, let's have a look at the Hello World specification to outline the basic structure of the CASM modeling language:
The Hello World application specified in CASM:
1 CASM
2 init HelloWorld
3 rule HelloWorld =
4 {
5 println( "Hello world!" )
6 }
Every CASM specification starts with a header containing the keyword CASM (line 1). The header part is followed by definitions. In line 2 an init definition is specified to set the starting rule of the single execution agent to the rule named HelloWorld. The rule HelloWorld is defined from line 3 to line 6 through a rule definition. Notice that CASM does not require symbol names to be declared before usage. Inside the HelloWorld rule, a block rule creates a parallel execution semantics scope (lines 4 and 6). Last but not least, the actual statement to print out the Hello world! string is defined by a call rule to a built-in function named println.
Unlike some other specification or programming languages, CASM does not require newlines, tabulators, or statement separators.
TBA
|
# Physics equations/Oscillations, waves, and interference
#### Simple harmonic motion
Simple harmonic motion occurs when the restoring force is directly proportional to the displacement. It can serve to model many physical systems. Hooke's law states that the restoring force F of a spring obeys ${\displaystyle \mathbf {F} =-k\mathbf {x} ,}$ where k is the spring constant and x is the displacement from the equilibrium position.
Since a = d²x/dt², the equation of motion becomes a linear second-order differential equation:
${\displaystyle {\frac {\mathrm {d} ^{2}x}{\mathrm {d} t^{2}}}=-\left({\frac {k}{m}}\right)x=-\omega ^{2}x\;,}$
where we define ${\displaystyle \omega ={\sqrt {k/m}}}$ as a (constant) parameter in the ordinary differential equation.
The general solution can be described in a variety of ways; all have exactly two arbitrary constants of motion that arise from the fact that the initial position and initial velocity must be determined in order to model the system:
${\displaystyle x(t)=c_{1}\cos \left(\omega t\right)+c_{2}\sin \left(\omega t\right)=A\cos \left(\omega t-\varphi \right)=a_{1}e^{i\omega t}+a_{2}e^{-i\omega t}}$
where ${\displaystyle \omega ={\sqrt {k/m}}}$ is the angular frequency; ωT = 2π and fT = 1 are useful for relating angular frequency, frequency, and period. Horizontal motion of a pendulum is one of many examples of a system that approximately obeys simple harmonic motion, with ω² = g/L, where L is the length of the pendulum. The velocity and acceleration as functions of time are:
${\displaystyle v(t)={\frac {\mathrm {d} x}{\mathrm {d} t}}=-A\omega \sin(\omega t-\varphi ),}$
${\displaystyle a(t)={\frac {\mathrm {d} ^{2}x}{\mathrm {d} t^{2}}}=-A\omega ^{2}\cos(\omega t-\varphi )=-\omega ^{2}x.}$
#### Energy of simple harmonic motion
The kinetic energy K of the system at time t is
${\displaystyle K(t)={\frac {1}{2}}mv^{2}(t)={\frac {1}{2}}m\omega ^{2}A^{2}\sin ^{2}(\omega t-\varphi )={\frac {1}{2}}kA^{2}\sin ^{2}(\omega t-\varphi ),}$
and the potential energy is
${\displaystyle U(t)={\frac {1}{2}}kx^{2}(t)={\frac {1}{2}}kA^{2}\cos ^{2}(\omega t-\varphi ).}$
The total mechanical energy of the system therefore has the constant value
${\displaystyle E=K+U={\frac {1}{2}}kA^{2}.}$
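A short numerical sketch of this result (parameter values are illustrative, not from the article): evaluating K(t) + U(t) along the solution x(t) = A cos(ωt − φ) shows that the total stays at kA²/2.

```python
import numpy as np

k, m = 4.0, 1.0                       # spring constant and mass (arbitrary)
omega = np.sqrt(k / m)
A, phi = 1.0, 0.3                     # amplitude and phase (arbitrary)

t = np.linspace(0, 10, 1001)
x = A * np.cos(omega * t - phi)
v = -A * omega * np.sin(omega * t - phi)

E = 0.5 * m * v**2 + 0.5 * k * x**2   # K + U
print(np.allclose(E, 0.5 * k * A**2)) # True: total energy is constant
```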
#### A simple travelling wave
(Figure: wavelength λ can be measured between any two corresponding points on a waveform.)
Although ψ (psi) is often associated with quantum theory, Lord Rayleigh used that symbol to describe sound waves. An idealized model of transverse waves on a stretched string assumes that the string propagates a signal of any shape, without distortion, in either direction.[1] The height of the wave is:
${\displaystyle \psi (x,t)=F(x-v_{0}t)+G(x+v_{0}t).\,}$
The two functions ${\displaystyle F}$ and ${\displaystyle G}$ represent two arbitrary wave forms travelling in opposite directions. For example, if ${\displaystyle G=0}$, the wave has the shape ${\displaystyle F(x)}$, and this shape travels to the right at velocity ${\displaystyle v_{0}}$. It can be shown that this wave obeys the partial differential equation:
${\displaystyle {\frac {1}{v_{0}^{2}}}{\frac {\partial ^{2}\psi }{\partial t^{2}}}={\frac {\partial ^{2}\psi }{\partial x^{2}}},\,}$ (where ${\displaystyle v_{0}}$ is a constant).
Travelling wave solutions to this wave equation can take on many forms; a simple but important solution is:
${\displaystyle \psi (x,t)=A\cos \left(kx-\omega t+\phi _{A}\right)+B\cos \left(kx-\omega t+\phi _{B}\right)}$
where A, B, φA, φB are arbitrary constants. A and B are called amplitudes, while φA and φB are phases. Another pair of constants is k and ω (wavenumber and angular frequency); they are constrained by |ω/k| = v0, which is called the phase speed. The relation between ω and k is called the dispersion relation. This dispersion relation has two branches, ω(k) = ±v0k. Other dispersion relations exist. For example, bending waves on a long thin rod have a dispersion relation, ω² = v0²k⁴, with four branches (not all of them real).
#### Beats and group velocity
Two waves of nearly the same frequency exhibit beats.
For example, consider two closely spaced frequencies, ${\displaystyle \omega _{1}}$ and ${\displaystyle \omega _{2}}$ :
${\displaystyle \omega _{1}={\bar {\omega }}-{\frac {1}{2}}\Delta \omega }$ and ${\displaystyle \omega _{2}={\bar {\omega }}+{\frac {1}{2}}\Delta \omega }$ (so that ${\displaystyle \Delta \omega =\omega _{2}-\omega _{1}}$ ).
It can be shown that for a wave with a sinusoidal time dependence:
${\displaystyle \psi (t)=\cos(\omega _{1}t)+\cos(\omega _{2}t)=A(t)\cos({\bar {\omega }}t)}$ where ${\displaystyle A(t)=2\cos \left({\frac {\Delta \omega }{2}}t\right)}$ is the envelope.
A 'wavetrain' is the infinite sequence of wavepackets that occurs when two 'plane waves' of equal amplitude are added. A true isolated wavepacket requires an infinite number of waves of the form cos(kx−ωt), but with a little algebra one can discern a great deal by studying wavetrains. If ${\displaystyle \Delta \omega \ll {\bar {\omega }}}$, the envelope, A(t), varies so slowly over time that it is essentially constant over many oscillations of the higher frequency. Defining Δt as the time between consecutive zeros of the envelope:
${\displaystyle (\Delta \omega )(\Delta t)=2\pi \qquad (\Delta k)(\Delta x)=2\pi }$
The corresponding result for a wavetrain that varies with x is also shown, as there is a one-to-one correspondence between ω and k in these equations. More rigorous definitions of Δω and Δk lead to Heisenberg's uncertainty principles, (Δω)(Δt) ≥ 1/2 and (Δk)(Δx) ≥ 1/2. In this formalism, wavepackets move with the group velocity, dx/dt = ∂ω/∂k. Strict plane waves, such as cos(kx−ωt), typically occur only if the medium is homogeneous (in time and space). But if the inhomogeneity is sufficiently gentle (see eikonal approximation), then we have another equation of motion: dk/dt = −∂ω/∂x. In this approximation, wavepackets move exactly as Newtonian particles do.
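A quick numerical illustration of the beat envelope (the two frequencies are arbitrary choices):

```python
import numpy as np

w1, w2 = 9.5, 10.5                     # two closely spaced frequencies
dw = w2 - w1

t = np.linspace(0, 20, 20001)
psi = np.cos(w1 * t) + np.cos(w2 * t)
envelope = 2 * np.cos(0.5 * dw * t)

# the signal is bounded by the envelope ...
print(np.all(np.abs(psi) <= np.abs(envelope) + 1e-12))
# ... and consecutive zeros of the envelope are separated by dt = 2*pi/dw
print(2 * np.pi / dw)
```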
The Algebra: The trigonometry is often more transparent if we use Euler's equation and take the real part, taking ${\displaystyle \psi }$ to be the real part of ${\displaystyle {\tilde {\psi }}}$ . Using well-known properties of exponents:
${\displaystyle {\tilde {\psi }}(t)=e^{i\omega _{1}t}+e^{i\omega _{2}t}=e^{i{\bar {\omega }}t-i{\frac {1}{2}}\Delta \omega t}+e^{i{\bar {\omega }}t+i{\frac {1}{2}}\Delta \omega t}=e^{i{\bar {\omega }}t}\left(e^{+i{\frac {1}{2}}\Delta \omega t}+e^{-i{\frac {1}{2}}\Delta \omega t}\right)=2\cos \left({\frac {\Delta \omega }{2}}t\right)e^{i{\bar {\omega }}t}}$
It is left as an exercise to the reader to perform this calculation for a travelling wave:
${\displaystyle \exp \left(i(k_{1}x-\omega _{1}t)\right)+\exp \left(i(k_{2}x-\omega _{2}t)\right)=2\cos \left({\frac {\Delta k}{2}}x-{\frac {\Delta \omega }{2}}t\right)e^{i({\bar {k}}x-{\bar {\omega }}t)}}$
#### A simple standing wave
Allowed standing waves on a string of length L are of the form sin(nπx/L) where n = 1, 2, 3, ...
The second derivative in Newton's second law of motion usually implies that two initial conditions (position and velocity) are necessary and sufficient to establish future motion. With waves it is necessary to establish these initial conditions for each of the infinitely many points along a string. For example, in the previous solutions to the wave equation, there are an infinite number of values that the angular frequency might take. We illustrate this with transverse waves on a string of length L, with both ends of the string held clamped. The general solution for this system can be written as,
${\displaystyle \psi (x,t)=\sum _{n=1}^{\infty }A_{n}\cos \left(\omega _{n}t-\phi _{n}\right)\sin \left({\frac {n\pi x}{L}}\right)\;,}$
where ${\displaystyle (A_{n},\phi _{n})}$ are arbitrary constants and each angular frequency obeys the dispersion relation, ${\displaystyle \omega _{n}=k_{n}v=n\pi v/L}$ . (For a string of linear mass density μ, the wave speed obeys v² = T/μ, where T is the tension in the string.) Fourier analysis can be used to find the arbitrary constants, provided the initial conditions are known.
Fourier Series Example: Suppose the initial conditions at ${\displaystyle t=0}$ are:
${\displaystyle \partial \psi /\partial t=0\;,\;}$ and
${\displaystyle \psi =f(x)}$
(where f(x) is a known function), then:
${\displaystyle \psi (x,t)=\sum _{n=1}^{\infty }A_{n}\cos(\omega _{n}t)\sin \left({\frac {n\pi x}{L}}\right)\;,}$
Using Fourier analysis, it can be shown that:${\displaystyle A_{n}={\frac {2}{L}}\int _{0}^{L}f(x)\sin {\frac {n\pi x}{L}}dx}$
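A small numerical sketch of this formula (the initial shape f(x) below is an arbitrary choice satisfying the clamped ends):

```python
import numpy as np

L = 1.0
x = np.linspace(0, L, 2001)
f = x * (L - x)                       # example initial shape with f(0) = f(L) = 0

# A_n = (2/L) * integral of f(x)*sin(n*pi*x/L) over [0, L], via the trapezoid rule
N = 25
A = [2 / L * np.trapz(f * np.sin(n * np.pi * x / L), x) for n in range(1, N + 1)]

# reconstruct f(x) from the truncated series at t = 0
recon = sum(A[n - 1] * np.sin(n * np.pi * x / L) for n in range(1, N + 1))
print(np.max(np.abs(recon - f)))      # small truncation error
```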
|
# RD Sharma Solutions for Class 9 : Maths
## Solutions for all the questions from Maths RD SHARMA, Class 9th
### RD SHARMA Class 9 | LINEAR EQUATIONS IN TWO VARIABLES
Draw the graphs of y=x and y=-x in the same graph. Also, find the coordinates of the point where the two lines intersect.
### RD SHARMA Class 9 | LINEAR EQUATIONS IN TWO VARIABLES
Draw the graphs of each of the following linear equations: x-2=0,x+5=0,2x+4=3x+1
### RD SHARMA Class 9 | LINEAR EQUATIONS IN TWO VARIABLES
Draw the graphs of the lines represented by equations x+y=4 and 2x-y=2 in the same graph. Also, find the coordinates of the point where the two lines intersect.
### RD SHARMA Class 9 | LINEAR EQUATIONS IN TWO VARIABLES
Draw a graph of the equation: 3x-2y=4 and x+y-3=0
RD Sharma class 9 Maths solutions are an extremely helpful resource for exam preparation. These CBSE class 9 Maths solutions are worked out by our Maths experts to help all students who are preparing for the class 9 Maths examinations score good marks. The questions given in the RD Sharma Maths textbook are prepared in accordance with CBSE, as per the CCE guidelines, and therefore have a higher chance of appearing on CBSE Maths question papers. These RD Sharma class 9 Maths solutions help students in exams and with daily homework. In the videos, not only is the answer to each question of the RD Sharma Maths textbook explained, but the underlying concepts are explained as well. We hope this will be helpful for all students preparing for the class 9 Maths exams. The solutions are arranged chapter-wise; select a chapter to view the RD Sharma solutions for that chapter.
Doubtnut has provided Class 9 students with top-quality solutions to help them achieve better results in their education, score higher in their Mathematics examination, and thereby pave a better way for their educational career. With the help of the subject expert RD Sharma's textbook, we have provided our Class 9 students with first-rate study material, the RD Sharma Class 9 Solutions, which will aid them in their Class 9 Mathematics examination.
We therefore recommend that all Class 9 students adopt the teaching techniques used in the RD Sharma Class 9 Solutions, build knowledge for their Class 9 mathematics examinations, and attain higher marks. Many students consider mathematics a hard subject and have trouble understanding it. RD Sharma Class 9 Maths Solutions, offered by Doubtnut, is an excellent learning tool for the maths subject, prepared around the book of the well-known author RD Sharma. Hence, we recommend all our Class 9 students to follow the techniques, practices and examples given in the book and achieve higher grades and ranks in their Class 9 Mathematics examinations.
## RD Sharma Class 9 Maths Solution
For all Class 9 students seeking solutions to their doubts, Doubtnut provides an excellent platform where they can resolve their doubts and receive clarifications on the website itself. Through the RD Sharma Class 9 Maths Solutions platform, Doubtnut gives all Class 9 students a resource built on the work of a renowned subject expert, resolving doubts and providing clear solutions to Class 9 mathematics problems. RD Sharma is a well-known author who has published various books on Mathematics.
The Doubtnut app has all the resources a Class 9 student needs for Mathematics; through it, students can resolve their doubts, get clarifications for their maths problems, and receive solutions very easily. The RD Sharma Class 9 Maths solutions provided on the Doubtnut website will help you resolve all kinds of maths doubts.
RD Sharma Class 9 Solutions, available on the Doubtnut app, can resolve all kinds of queries and problems in Class 9 mathematics and help students achieve higher marks. RD Sharma has vast experience with the Class 9 mathematics syllabus, has published various books, and has taught maths to students of all classes; his work is therefore an excellent basis for resolving doubts and getting clarifications.
Hence, knowing the best available platform for resolving any kind of Class 9 Maths problem, we strongly recommend that all Class 9 maths students use the RD Sharma Class 9 Maths Solutions on the Doubtnut website to resolve their doubts and get clarifications. Given our expertise in the education industry and the role our services play in the careers of Class 9 students, we strive to provide them with the best possible service.
### RD Sharma Class 9 Solutions Chapter 1: -Number System
This chapter is one of the foundations for all the remaining chapters of Class 9 Mathematics in the RD Sharma Solutions. Hence, it should be taught by a subject expert who can make the students understand the basics of the Number System and thereby help them achieve higher grades.
### RD Sharma Class 9 Solutions Chapter 2: -Exponents of Real Numbers
This chapter deals with Exponents, also termed Powers or Indices: the representation of a number in exponent form, which is a basic concept for Algebra. The chapter therefore needs careful treatment so that students learn all its important concepts and achieve higher grades.
### RD Sharma Class 9 Solutions Chapter 3: -Rationalization
Another important chapter, whose concepts can be used for solving complex functions and calculations: rationalization helps convert difficult-to-divide fractions into a simpler form, turning complex calculations into easy-to-solve problems. The chapter is best handled by subject experts who make the learning easier and help students solve complex problems in Class 9 Maths RD Sharma Solutions.
### RD Sharma Class 9 Solutions Chapter 4: - Algebraic Identities
This chapter involves concepts that help students solve some of the more complex and cumbersome problems in Maths. The concepts in this chapter help Class 9 students formulate algebraic relations from word problems. In addition, students of Class 9 Maths RD Sharma Solutions will learn some of the commonly used binomial, factorial and three-variable identities.
### RD Sharma Class 9 Solutions Chapter 5: - Factorization of Algebraic Expressions
The factorization chapter covers the opposite of expansion and is considered one of the most essential concepts for Class 9 students. It builds on previous algebra concepts, including identities and exponents. Using the concepts in this chapter, the student will be able to factorize lengthy expressions by utilizing common factors, grouping in pairs, differences of squares, and other methods.
### RD Sharma Class 9 Solutions Chapter 6: - Factorization of Polynomials
The concept of factorization of Polynomials is a very important concept that helps Class 9 students in the latter chapters of their curriculum and thereby aids them in achieving higher grades in their Class 9 mathematics. Hence, such concepts should be dealt with proficient and subject experts that can handle an easy understanding of such complex and important concepts for the students.
### RD Sharma Class 9 Solutions Chapter 7: - Introduction to Euclid’s Geometry
This chapter forms the basis for the Geometry concepts of later classes, and hence the student should be given detailed and careful teaching of these important concepts. The chapter mainly explains axioms and theorems dealing with the relations between points, lines, and planes.
### RD Sharma Class 9 Solutions Chapter 8: - Lines and Angles
Lines and angles in any geometrical figure form various relations with each other; this chapter is all about those relations. It gives the student a basic glimpse of what he or she will deal with later and forms the foundation for the Geometry concepts in the classes after Class 9.
### RD Sharma Class 9 Solutions Chapter 9: - Triangles and its Angles
In this chapter, the student will learn about the different angles formed in a triangle and the various properties of the three angles that make up a triangle, and will find out how to calculate the values of unknown angles formed in triangles. Such an important section should be handled by a subject expert with deep knowledge of the chapter.
### RD Sharma Class 9 Solutions Chapter 10: - Congruent Triangles
Using Euclidean theorems, a student can prove that two triangular 2D bodies are replicas of each other; to gain that knowledge, the student should study the chapter on Congruent Triangles. Only then can the student identify the features that establish the similarity between two triangular 2D shapes.
### RD Sharma Class 9 Solutions Chapter 11: - Coordinate Geometry
Cartesian Coordinate Geometry is named after the renowned mathematician René Descartes, who formulated the concept. In this chapter, students learn about the rectangular coordinate plane and various methods for finding the distance between two points in it.
### RD Sharma Class 9 Solutions Chapter 12: - Herons Formula
When students have information only about the lengths of the sides of a triangle, this chapter provides the formula for finding the area of such a triangle. In addition, the formula is also helpful in finding the areas of quadrilaterals.
### RD Sharma Class 9 Solutions Chapter 13: - Linear Equation in two variables
Whenever one has to deal with a straight line and find out its measurements, this chapter helps, by teaching the student how to give a geometric representation of various straight-line equations.
### RD Sharma Class 9 Solutions Chapter 14: - Quadrilaterals
In this chapter, the students will learn about the various properties of a quadrilateral and the various relations between its angles. They will also learn the different formulae that can be used to find the areas of different quadrilaterals with different measurements.
### RD Sharma Class 9 Solutions Chapter 15: - Areas of Parallelograms and Triangles
In this chapter, the student will learn the various formulae and other tools for finding the areas of various triangles, including the right-angled triangle, and of parallelograms such as the Rhombus and the Trapezium.
### RD Sharma Class 9 Solutions Chapter 16: - Circles
As in the earlier chapters, where students found the various formulae and methods for the measurements of triangles, straight lines and parallelograms, in this chapter they will learn the various formulae used for finding the properties and measurements of a circle.
### RD Sharma Class 9 Solutions Chapter 17: - Constructions
Through this chapter, the students will learn the skills and techniques used in constructing geometric figures using only a compass and a ruler. In addition, they learn about the construction of various angles, the bisection of lines, the process of inscribing triangles into figures, and much more.
### RD Sharma Class 9 Solutions Chapter 18: - Surface area and volume of Cuboid and Cube
In this chapter, the students will learn about the formulae and methods that can be followed to calculate the different measurements of the Cuboid and the Cube, such as Surface Area and Volume. The techniques learned in this chapter help students secure higher grades.
### RD Sharma Class 9 Solutions Chapter 19: - Surface area and volume of Right Circular Cylinder
The concept of a right circular cylinder is very important beyond mathematics as well, as it appears in many concepts in other subjects like Physics and Chemistry. Hence, learning this concept is useful to all Class 9 Mathematics students.
### RD Sharma Class 9 Solutions Chapter 20: - Surface area and volume of a Right Circular Cone
Through the concepts learned in this chapter, the student will be able to learn about the different formulae used to calculate the volume and surface area of a right circular cone. Hence, this chapter is essential for Class 9 Mathematics students.
### RD Sharma Class 9 Solutions Chapter 21: - Surface area and volume of a Sphere
This significantly important chapter contains concepts that have various implications for the later chapters. Hence, it is important for students to learn this chapter thoroughly.
### RD Sharma Class 9 Solutions Chapter 22: - Tabular Representation of Statistical Data
This important concept in Class 9 Mathematics is the science of collection, presentation, analysis and interpretation of numerical data; it is applied in various other fields as well.
### RD Sharma Class 9 Solutions Chapter 23: - Graphical Representation of Statistical Data
To understand the relationship between variables in a more elaborate manner, graphical representation helps students by visually presenting large volumes of statistical data through graphs.
### RD Sharma Class 9 Solutions Chapter 24: - Measures of Central Tendencies
This chapter deals with the different measures of central tendency, such as Mean, Median and Mode, using which one can identify where the bulk of the values in a distribution is located.
### RD Sharma Class 9 Solutions Chapter 25: - Probability
This chapter is also very important and contains some vital concepts that Class 9 students should take seriously.
#### Key Features of RD Sharma Class 9 Math Solutions
The concepts explained by RD Sharma on the Doubtnut website play a vital role in a student's life by offering easy explanations and thereby an easy understanding of the concepts included in Class 9 Maths. The RD Sharma Class 9 Solutions are also offered in a convenient format, the RD Sharma Class 9 PDF, for studying the Maths concepts included in the Class 9 curriculum. Hence, we strongly recommend that all students visit the Doubtnut website and download the RD Sharma Class 9 PDF.
|
# 1.18 Velocity (application) (Page 2/2)
## Constrained motion
Problem : Two particles A and B are connected by a rigid rod AB. The rod slides along perpendicular rails as shown here. The velocity of A moving down is 10 m/s. What is the velocity of B when angle θ = 60° ?
Solution : The velocity of B is not an independent velocity. It is tied to the velocity of particle “A”, as the two particles are connected by a rigid rod. The relationship between the two velocities is governed by the inter-particle separation, which is equal to the length of the rod.
The length of the rod, in turn, is linked to the positions of particles “A” and “B”. From the figure,
$x=\sqrt{\left({L}^{2}-{y}^{2}\right)}$
Differentiating with respect to time :
$⇒{v}_{x}=\frac{dx}{dt}=-\frac{2y}{2\sqrt{\left({L}^{2}-{y}^{2}\right)}}\times \frac{dy}{dt}=-\frac{y{v}_{y}}{\sqrt{\left({L}^{2}-{y}^{2}\right)}}=-{v}_{y}\mathrm{tan}\theta$
Considering magnitude only,
$⇒{v}_{x}={v}_{y}\mathrm{tan}\theta =10\mathrm{tan}{60}^{\circ }=10\sqrt{3}\phantom{\rule{1em}{0ex}}\frac{m}{s}$
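A small SymPy sketch (not part of the original solution) that reproduces this differentiation and the numerical value:

```python
import sympy as sp

t = sp.symbols('t')
L = sp.symbols('L', positive=True)
y = sp.Function('y')(t)

x = sp.sqrt(L**2 - y**2)            # rod constraint: x^2 + y^2 = L^2
vx = sp.diff(x, t)                  # -y*y'/sqrt(L^2 - y^2), i.e. -v_y*tan(theta)

# at theta = 60 deg: y = L*sin(theta), and dy/dt = 10 m/s
val = vx.subs(sp.Derivative(y, t), 10).subs(y, L * sp.sin(sp.pi / 3))
print(sp.simplify(val))             # -10*sqrt(3), so |v_x| = 10*tan(60°) m/s
```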
## Nature of velocity
Problem : The position vector of a particle is :
$\begin{array}{l}\mathbf{r}=a\mathrm{cos}\omega t\mathbf{i}+a\mathrm{sin}\omega t\mathbf{j}\end{array}$
where “a” is a constant. Show that velocity vector is perpendicular to position vector.
Solution : In order to prove as required, we shall use the fact that the scalar (dot) product of two perpendicular vectors is zero. Now, we need the expression for velocity to evaluate the dot product as intended. We can obtain it by differentiating the expression of the position vector with respect to time :
$\begin{array}{l}\mathbf{v}=\frac{d\mathbf{r}}{dt}=-a\omega \mathrm{sin}\omega t\mathbf{i}+a\omega \mathrm{cos}\omega t\mathbf{j}\end{array}$
To check whether velocity is perpendicular to the position vector, we evaluate the scalar product of r and v , which should be equal to zero.
$\begin{array}{l}\mathbf{r}\mathbf{.}\mathbf{v}=0\end{array}$
In this case,
$\begin{array}{l}⇒\mathbf{r}\mathbf{.}\mathbf{v}=\left(a\mathrm{cos}\omega t\mathbf{i}+a\mathrm{sin}\omega t\mathbf{j}\right)\phantom{\rule{2pt}{0ex}}\mathbf{.}\phantom{\rule{2pt}{0ex}}\left(-a\omega \mathrm{sin}\omega t\mathbf{i}+a\omega \mathrm{cos}\omega t\mathbf{j}\right)\\ ⇒-{a}^{2}\omega \mathrm{sin}\omega t\mathrm{cos}\omega t+{a}^{2}\omega \mathrm{sin}\omega t\mathrm{cos}\omega t=0\end{array}$
This means that the position vector and the velocity vector are at a right angle to each other. Hence, velocity is perpendicular to the position vector. It is pertinent to mention here that this result can also be inferred from the plot of motion. An inspection of the position vector reveals that it represents uniform circular motion, as shown in the figure here.
The position vector is always directed radially, whereas the velocity vector is always tangential to the circular path. These two vectors are, therefore, perpendicular to each other.
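The same check in a few lines of SymPy (a sketch, with symbols a and ω):

```python
import sympy as sp

t, a, w = sp.symbols('t a omega', positive=True)
r = sp.Matrix([a * sp.cos(w * t), a * sp.sin(w * t)])  # position vector
v = r.diff(t)                                          # velocity vector
print(sp.simplify(r.dot(v)))                           # 0: r is perpendicular to v
```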
Problem : Two particles are moving with the same constant speed, but in opposite directions. Under what circumstance will the separation between the two remain constant?
Solution : The condition of motion as stated in the question is possible if the particles are at diametrically opposite positions on a circular path. The two particles are then always separated by the diameter of the circular path. See the figure below to evaluate the motion and the separation between the particles.
## Comparing velocities
Problem : A car of width 2 m is approaching a crossing at a velocity of 8 m/s. A pedestrian at a distance of 4 m wishes to cross the road safely. What should be the minimum speed of pedestrian so that he/she crosses the road safely?
Solution : We draw the figure to illustrate the situation. Here, the car travels the linear distance (AB + CD) along its direction of motion in the time the pedestrian travels the linear distance BD. Let the pedestrian travel at a speed “v” along BD, which makes an angle “θ” with the direction of the car.
We must understand here that there may be a number of combinations of angle and speed for which the pedestrian will be able to cross safely before the car arrives. However, we are required to find the minimum speed. This speed, then, corresponds to a particular value of θ.
We can also observe that the pedestrian should move obliquely. In doing so, he/she gains extra time to cross the road.
From triangle BCD,
$\begin{array}{l}\mathrm{tan}\left(90-\theta \right)=\mathrm{cot}\theta =\frac{\mathrm{CD}}{\mathrm{BC}}=\frac{\mathrm{CD}}{2}\\ ⇒\mathrm{CD}=2\mathrm{cot}\theta \end{array}$
Also,
$\begin{array}{l}\mathrm{cos}\left(90-\theta \right)=\mathrm{sin}\theta =\frac{\mathrm{BC}}{\mathrm{BD}}=\frac{2}{\mathrm{BD}}\\ ⇒\mathrm{BD}=\frac{2}{\mathrm{sin}\theta }\end{array}$
According to the condition given in the question, the time taken by car and pedestrian should be equal for the situation outlined above :
$\begin{array}{l}t=\frac{4+2\mathrm{cot}\theta }{8}=\frac{\frac{2}{\mathrm{sin}\theta }}{v}\end{array}$
$\begin{array}{l}v=\frac{8}{2\mathrm{sin}\theta +\mathrm{cos}\theta }\end{array}$
For minimum value of speed, $\frac{dv}{d\theta }=0$ ,
$\begin{array}{l}⇒\frac{dv}{d\theta }=\frac{-8\left(2\mathrm{cos}\theta -\mathrm{sin}\theta \right)}{{\left(2\mathrm{sin}\theta +\mathrm{cos}\theta \right)}^{2}}=0\\ ⇒\left(2\mathrm{cos}\theta -\mathrm{sin}\theta \right)=0\\ ⇒\mathrm{tan}\theta =2\end{array}$
In order to evaluate the expression of velocity with trigonometric ratios, we take the help of right angle triangle as shown in the figure, which is consistent with the above result.
From the triangle, defining angle “θ”, we have :
$\begin{array}{l}\mathrm{sin}\theta =\frac{2}{\surd 5}\end{array}$
and
$\begin{array}{l}\mathrm{cos}\theta =\frac{1}{\surd 5}\end{array}$
The minimum velocity is :
$\begin{array}{l}v=\frac{8}{2\times \frac{2}{\surd 5}+\frac{1}{\surd 5}}=\frac{8}{\surd 5}=3.58\phantom{\rule{2pt}{0ex}}m/s\end{array}$
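A quick numerical confirmation of the minimum (a sketch; the grid resolution is arbitrary):

```python
import numpy as np

theta = np.linspace(0.01, np.pi / 2, 10000)
v = 8 / (2 * np.sin(theta) + np.cos(theta))
i = np.argmin(v)
print(np.tan(theta[i]))          # ~2.0, matching tan(theta) = 2
print(v[i], 8 / np.sqrt(5))      # ~3.58 m/s for both
```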
|
ControlDesign
Characterize
characterize all PID controllers for pole placement in a desired region
Calling Sequence
Characterize(sys, zeta, omegan, opts)
Parameters
sys - System; a DynamicSystems system object in continuous-time domain; must be single-input single-output (SISO)
zeta - realcons; the specified damping
omegan - realcons; the specified natural frequency
opts - (optional) equation(s) of the form option = value; specify options for the Characterize command
Options
• controller : one of P, PI, or PID
Specifies the controller type. The default value is PID.
• output : one of relativestability, damping, or all
Specifies the set of inequality conditions that must be returned. The default value is all.
Description
• The Characterize command characterizes all PID controllers for pole placement in a desired region. It returns a Boolean expression of inequalities in terms of the controller parameters that must be satisfied in order to place the closed-loop poles (under unity negative feedback) in the specified desired region. The desired region is specified by zeta and omegan and is defined based on relative stability and damping conditions as follows:
– Relative Stability: The desired region is the part of the complex left half plane (LHP) with real part less than $-\zeta \omega_n$. This is equivalent to the relative stability of the closed-loop system with respect to the line $s=j\omega -\zeta \omega_n$ (rather than the imaginary axis). Clearly, if zeta or omegan is set to zero, relative stability reduces to absolute stability with respect to the imaginary axis.
– Damping: The desired region is the part of the complex left half plane (LHP) inside the angle $\pm \mathrm{arccos}\left(\mathrm{\zeta }\right)$ measured from the negative real axis.
• The controller parameters are $\mathrm{kc}$ for a P controller, $\mathrm{kc},\mathrm{ki}$ for a PI controller, and $\mathrm{kc},\mathrm{ki},\mathrm{kd}$ for a PID controller, where $\mathrm{kc}$ is the proportional gain, $\mathrm{ki}$ is the integral gain, and $\mathrm{kd}$ is the derivative gain. The controller transfer function is then obtained as: $C\left(s\right)=\mathrm{kc}$, $C\left(s\right)=\mathrm{kc}+\frac{\mathrm{ki}}{s}$, or $C\left(s\right)=\mathrm{kc}+\frac{\mathrm{ki}}{s}+\mathrm{kd}s$ for the P, PI, and PID controllers, respectively.
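For intuition, the relative-stability region can be cross-checked numerically outside Maple. The Python sketch below uses the plant from the Examples section with a P controller; `closed_loop_poles` is a helper defined here, not a library routine:

```python
import numpy as np

# plant G(s) = (s + 2)/(s^3 + 12 s^2 + 17 s + 2) under unity negative feedback
num = np.array([0.0, 0.0, 1.0, 2.0])   # numerator padded to the denominator length
den = np.array([1.0, 12.0, 17.0, 2.0])

def closed_loop_poles(kc):
    # P controller C(s) = kc: the characteristic polynomial is den + kc*num
    return np.roots(den + kc * num)

zeta, omegan = 1.0 / 3.0, 2.0
for kc in (4.0, 10.0):                  # both values satisfy 0 < 9*kc - 29
    poles = closed_loop_poles(kc)
    print(kc, np.max(poles.real) < -zeta * omegan)   # True: poles left of -2/3
```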
Examples
> $\mathrm{with}\left(\mathrm{ControlDesign}\right):$
> $\mathrm{sys}≔\mathrm{DynamicSystems}:-\mathrm{NewSystem}\left(\frac{s+2}{{s}^{3}+12{s}^{2}+17s+2}\right)$
${\mathrm{sys}}{:=}\left[\begin{array}{c}{\mathbf{Transfer Function}}\\ {\mathrm{continuous}}\\ {\mathrm{1 output\left(s\right); 1 input\left(s\right)}}\\ {\mathrm{inputvariable}}{=}\left[{\mathrm{u1}}{}\left({s}\right)\right]\\ {\mathrm{outputvariable}}{=}\left[{\mathrm{y1}}{}\left({s}\right)\right]\end{array}\right]$ (1)
> $\mathrm{Characterize}\left(\mathrm{sys},\frac{1}{3},2,\mathrm{controller}=P,\mathrm{output}=\mathrm{relativestability}\right)$
${0}{<}{9}{}{\mathrm{kc}}{-}{29}\phantom{\rule[-0.0ex]{0.5em}{0.0ex}}{\mathbf{and}}\phantom{\rule[-0.0ex]{0.5em}{0.0ex}}{0}{<}{117}{}{\mathrm{kc}}{+}{373}$ (2)
> $\mathrm{Characterize}\left(\mathrm{sys},\frac{1}{3},2,\mathrm{controller}=P,\mathrm{output}=\mathrm{damping}\right)$
${0}{<}{5453}{-}{115}{}{\mathrm{kc}}\phantom{\rule[-0.0ex]{0.5em}{0.0ex}}{\mathbf{and}}\phantom{\rule[-0.0ex]{0.5em}{0.0ex}}{0}{<}{31}{}{\mathrm{kc}}{+}{895}{-}\frac{{54}{}\left({45}{}{{\mathrm{kc}}}^{{2}}{-}{342}{}{\mathrm{kc}}{+}{13437}\right)}{{5453}{-}{115}{}{\mathrm{kc}}}\phantom{\rule[-0.0ex]{0.5em}{0.0ex}}{\mathbf{and}}\phantom{\rule[-0.0ex]{0.5em}{0.0ex}}{0}{<}{5}{}{{\mathrm{kc}}}^{{2}}{-}{38}{}{\mathrm{kc}}{+}{1493}{-}\frac{{1}}{{9}}{}\frac{\left({5453}{-}{115}{}{\mathrm{kc}}\right){}\left({9}{}{{\mathrm{kc}}}^{{2}}{+}{162}{}{\mathrm{kc}}{+}{153}{-}\frac{{54}{}\left({216}{}{{\mathrm{kc}}}^{{2}}{+}{432}{}{\mathrm{kc}}{+}{216}\right)}{{5453}{-}{115}{}{\mathrm{kc}}}\right)}{{31}{}{\mathrm{kc}}{+}{895}{-}\frac{{54}{}\left({45}{}{{\mathrm{kc}}}^{{2}}{-}{342}{}{\mathrm{kc}}{+}{13437}\right)}{{5453}{-}{115}{}{\mathrm{kc}}}}\phantom{\rule[-0.0ex]{0.5em}{0.0ex}}{\mathbf{and}}\phantom{\rule[-0.0ex]{0.5em}{0.0ex}}{0}{<}{3}{}{{\mathrm{kc}}}^{{2}}{+}{54}{}{\mathrm{kc}}{+}{51}{-}\frac{{18}{}\left({216}{}{{\mathrm{kc}}}^{{2}}{+}{432}{}{\mathrm{kc}}{+}{216}\right)}{{5453}{-}{115}{}{\mathrm{kc}}}{-}\frac{{1}}{{3}}{}\frac{\left({31}{}{\mathrm{kc}}{+}{895}{-}\frac{{54}{}\left({45}{}{{\mathrm{kc}}}^{{2}}{-}{342}{}{\mathrm{kc}}{+}{13437}\right)}{{5453}{-}{115}{}{\mathrm{kc}}}\right){}\left({24}{}{{\mathrm{kc}}}^{{2}}{+}{48}{}{\mathrm{kc}}{+}{24}\right)}{{5}{}{{\mathrm{kc}}}^{{2}}{-}{38}{}{\mathrm{kc}}{+}{1493}{-}\frac{{1}}{{9}}{}\frac{\left({5453}{-}{115}{}{\mathrm{kc}}\right){}\left({9}{}{{\mathrm{kc}}}^{{2}}{+}{162}{}{\mathrm{kc}}{+}{153}{-}\frac{{54}{}\left({216}{}{{\mathrm{kc}}}^{{2}}{+}{432}{}{\mathrm{kc}}{+}{216}\right)}{{5453}{-}{115}{}{\mathrm{kc}}}\right)}{{31}{}{\mathrm{kc}}{+}{895}{-}\frac{{54}{}\left({45}{}{{\mathrm{kc}}}^{{2}}{-}{342}{}{\mathrm{kc}}{+}{13437}\right)}{{5453}{-}{115}{}{\mathrm{kc}}}}}\phantom{\rule[-0.0ex]{0.5em}{0.0ex}}{\mathbf{and}}\phantom{\rule[-0.0ex]{0.5em}{0.0ex}}{0}{<}{{\mathrm{kc}}}^{{2}}{+}{2}{}{\mathrm{kc}}{+}{1}$ (3)
> $\mathrm{Characterize}\left(\mathrm{sys},\frac{1}{3},2,\mathrm{controller}=P\right)$
${0}{<}{9}{}{\mathrm{kc}}{-}{29}\phantom{\rule[-0.0ex]{0.5em}{0.0ex}}{\mathbf{and}}\phantom{\rule[-0.0ex]{0.5em}{0.0ex}}{0}{<}{117}{}{\mathrm{kc}}{+}{373}\phantom{\rule[-0.0ex]{0.5em}{0.0ex}}{\mathbf{and}}\phantom{\rule[-0.0ex]{0.5em}{0.0ex}}{0}{<}{5453}{-}{115}{}{\mathrm{kc}}\phantom{\rule[-0.0ex]{0.5em}{0.0ex}}{\mathbf{and}}\phantom{\rule[-0.0ex]{0.5em}{0.0ex}}{0}{<}{895}{+}{31}{}{\mathrm{kc}}{-}\frac{{54}{}\left({45}{}{{\mathrm{kc}}}^{{2}}{-}{342}{}{\mathrm{kc}}{+}{13437}\right)}{{5453}{-}{115}{}{\mathrm{kc}}}\phantom{\rule[-0.0ex]{0.5em}{0.0ex}}{\mathbf{and}}\phantom{\rule[-0.0ex]{0.5em}{0.0ex}}{0}{<}{5}{}{{\mathrm{kc}}}^{{2}}{-}{38}{}{\mathrm{kc}}{+}{1493}{-}\frac{{1}}{{9}}{}\frac{\left({5453}{-}{115}{}{\mathrm{kc}}\right){}\left({9}{}{{\mathrm{kc}}}^{{2}}{+}{162}{}{\mathrm{kc}}{+}{153}{-}\frac{{54}{}\left({216}{}{{\mathrm{kc}}}^{{2}}{+}{432}{}{\mathrm{kc}}{+}{216}\right)}{{5453}{-}{115}{}{\mathrm{kc}}}\right)}{{895}{+}{31}{}{\mathrm{kc}}{-}\frac{{54}{}\left({45}{}{{\mathrm{kc}}}^{{2}}{-}{342}{}{\mathrm{kc}}{+}{13437}\right)}{{5453}{-}{115}{}{\mathrm{kc}}}}\phantom{\rule[-0.0ex]{0.5em}{0.0ex}}{\mathbf{and}}\phantom{\rule[-0.0ex]{0.5em}{0.0ex}}{0}{<}{3}{}{{\mathrm{kc}}}^{{2}}{+}{54}{}{\mathrm{kc}}{+}{51}{-}\frac{{18}{}\left({216}{}{{\mathrm{kc}}}^{{2}}{+}{432}{}{\mathrm{kc}}{+}{216}\right)}{{5453}{-}{115}{}{\mathrm{kc}}}{-}\frac{{1}}{{3}}{}\frac{\left({895}{+}{31}{}{\mathrm{kc}}{-}\frac{{54}{}\left({45}{}{{\mathrm{kc}}}^{{2}}{-}{342}{}{\mathrm{kc}}{+}{13437}\right)}{{5453}{-}{115}{}{\mathrm{kc}}}\right){}\left({24}{}{{\mathrm{kc}}}^{{2}}{+}{48}{}{\mathrm{kc}}{+}{24}\right)}{{5}{}{{\mathrm{kc}}}^{{2}}{-}{38}{}{\mathrm{kc}}{+}{1493}{-}\frac{{1}}{{9}}{}\frac{\left({5453}{-}{115}{}{\mathrm{kc}}\right){}\left({9}{}{{\mathrm{kc}}}^{{2}}{+}{162}{}{\mathrm{kc}}{+}{153}{-}\frac{{54}{}\left({216}{}{{\mathrm{kc}}}^{{2}}{+}{432}{}{\mathrm{kc}}{+}{216}\right)}{{5453}{-}{115}{}{\mathrm{kc}}}\right)}{{895}{+}{31}{}{\mathrm{kc}}{-}\frac{{54}{}\left({45}{}{{\mathrm{kc}}}^{{2}}{-}{342}{}{\mathrm{kc}}{+}{13437}\right)}{{5453}{-}{115}{}{\mathrm{kc}}}}}\phantom{\rule[-0.0ex]{0.5em}{0.0ex}}{\mathbf{and}}\phantom{\rule[-0.0ex]{0.5em}{0.0ex}}{0}{<}{{\mathrm{kc}}}^{{2}}{+}{2}{}{\mathrm{kc}}{+}{1}$ (4)
> $\mathrm{Characterize}\left(\mathrm{sys},\frac{1}{3},2,\mathrm{controller}=\mathrm{PI},\mathrm{output}=\mathrm{relativestability}\right)$
${0}{<}{-}{18}{}{\mathrm{kc}}{+}{27}{}{\mathrm{ki}}{+}{58}\phantom{\rule[-0.0ex]{0.5em}{0.0ex}}{\mathbf{and}}\phantom{\rule[-0.0ex]{0.5em}{0.0ex}}{0}{<}{2106}{}{\mathrm{kc}}{-}{8406}{-}{243}{}{\mathrm{ki}}\phantom{\rule[-0.0ex]{0.5em}{0.0ex}}{\mathbf{and}}\phantom{\rule[-0.0ex]{0.5em}{0.0ex}}{0}{<}{18}{}{\mathrm{kc}}{+}{27}{}{\mathrm{ki}}{-}{158}{-}\frac{{252}{}\left({-}{2016}{}{\mathrm{kc}}{+}{3024}{}{\mathrm{ki}}{+}{6496}\right)}{{2106}{}{\mathrm{kc}}{-}{8406}{-}{243}{}{\mathrm{ki}}}$ (5)
> $\mathrm{Characterize}\left(\mathrm{sys},\frac{1}{3},2,\mathrm{controller}=\mathrm{PI}\right)$
${0}{<}{-}{18}{}{\mathrm{kc}}{+}{27}{}{\mathrm{ki}}{+}{58}\phantom{\rule[-0.0ex]{0.5em}{0.0ex}}{\mathbf{and}}\phantom{\rule[-0.0ex]{0.5em}{0.0ex}}{0}{<}{2106}{}{\mathrm{kc}}{-}{8406}{-}{243}{}{\mathrm{ki}}\phantom{\rule[-0.0ex]{0.5em}{0.0ex}}{\mathbf{and}}\phantom{\rule[-0.0ex]{0.5em}{0.0ex}}{\dots}$ (6)
(The complete result is a lengthy Boolean conjunction of rational inequalities in kc and ki: the relative-stability conditions of (5) together with the additional damping conditions. Only the leading terms are reproduced here.)
|
Computational Fluid Dynamics: Markov Chain Monte Carlo in 2 dimensions
Stochastic modeling is a commonly used methodology in health economics and outcomes research (HEOR) with two main purposes: (1) to assess and predict the level of confidence in a chosen course of action and (2) to estimate the value of collecting additional data to better inform the decision.
We will consider a health related topic, the distribution of pollutants, to give some visible insight into the Markov Chain Monte Carlo modelling approach. Note that this concept can be transferred to the analysis of longitudinal studies.
A Brownian motion is a well known prototype of a stochastic process X, physically on a micro scale. Instead of solving the transport equation in the Eulerian coordinate system, given by the differential equation for the concentration C of the pollutant:
$$\frac{\partial{C}} {\partial{t}} = -\mathbf{u} \cdot \nabla C + \nabla \cdot (K\nabla C)$$
the Wiener process addresses the position X of one particle or molecule, which changes with each time step dt. The change of position dX is given as:
$$dX(t) = µdt + σdW(t),$$
with
• change of X is dX
• drift parameter µ
• variance parameter $$σ^2$$
• standard Brownian motion W
Note that the structure of this equation is:
$$\textrm{Total drift} = \textrm{mean drift} + \textrm{random drift}$$
There are other notations for the same concept:
• the mean drift parameter µ can be replaced by the mean velocity vector $$\bar{\mathbf{u}}$$
• the variance parameter $$σ^2$$ times the standard Brownian motion W can be replaced by the deviation vector from the mean vector, $$\acute{\mathbf{u}}$$
So we get the following notation for the same concept, describing the walk of a particle along the positions $$\mathbf{x}^n$$ in a turbulent flow in Lagrangian coordinates:
$$\begin{array}{lll} \mathbf{u} & = & \bar{\mathbf{u}} + \acute{\mathbf{u}}\\ d\mathbf{x} & = & \bar{\mathbf{u}} \cdot \Delta t + \acute{\mathbf{u}} \cdot \Delta t\\ \mathbf{x}^{n+1} & = & \mathbf{x}^{n} + d\mathbf{x}\\ \end{array}$$
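A minimal Python sketch of this update rule (illustrative values only; note that for a true Wiener increment the random term would scale with $$\sqrt{\Delta t}$$ rather than $$\Delta t$$):

```python
import numpy as np

rng = np.random.default_rng(0)

def step(x, u_bar, sigma, dt):
    # x^{n+1} = x^n + u_bar*dt + u_prime*dt, with a Gaussian random deviation u_prime
    u_prime = sigma * rng.standard_normal(2)
    return x + u_bar * dt + u_prime * dt

x = np.zeros(2)
for _ in range(1000):
    x = step(x, u_bar=np.array([1.0, 0.0]), sigma=0.5, dt=0.01)
```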
We will apply this concept on the distribution of pollutants. We will recognize that:
• the example is fully 2-dimensional
• the approach is able to handle inhomogeneous conditions for the mean drift $$\bar{\mathbf{u}}$$ and the random deviation $$\acute{\mathbf{u}}$$
• the final pollutant distribution is allowed to be fairly complex
• the approach is of Markov type, because there is a memoryless change of state which depends only on the present state
• the approach is of Monte Carlo type, because it is random driven to find the solution
• the approach is of Metropolis–Hastings type, because the sequence of positions to be visited is probability controlled
The approaches
Even at this point, there are still different approaches to find the solution.
The statistician's way is:
• describe the distribution first
• if it is unavoidable, include some physical principles
The physicist's way is:
• model the physical processes first
• if it is unavoidable, include some statistical principles
In the following example we will have a look at the distribution of pollutants. I think for this strongly inhomogeneous problem the physicist's way is easier to follow and to implement.
The statistician's language and algorithms
1. I want to sample from the probability distribution of a pollutant. With enough samples I get the concentration distribution of the pollutant.
2. But the probability distribution of a pollutant is extremely complex:
• it is 2- or 3-dimensional,
• the mean drift is drifting itself
• direct sampling from the distribution is not possible
• analytical calculations of the distribution is not possible
3. So let's use a Markov Chain Monte Carlo model
• construct a Markov chain that - when applied many times to an initial probability distribution - results in a stationary probability distribution.
• after the first few hundred iterations, each intermediate result is a sample from the stationary probability distribution
• estimate interesting quantities about the stationary probability distribution
4. There is still a question: how do I construct such a Markov chain?
• A general and easy construction strategy to find the Markov chain is available → the Metropolis–Hastings algorithm (a minimal sketch follows after this list)
• The Markov matrix with the transition probabilities will not be built explicitly. The numerical algorithm reproduces the resulting Markov chain locally.
• The Metropolis–Hastings algorithm visits the points in the solution space, that have a higher probability to be important.
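To make this concrete, here is a minimal one-dimensional random-walk Metropolis sketch (a generic illustration with an assumed standard-normal target density, separate from the pollutant model below):
import numpy as np

def metropolis_hastings(log_p, x0, n_steps, proposal_sd=1.0, seed=None):
    # random-walk Metropolis: the chain's stationary density is proportional to exp(log_p)
    rng = np.random.default_rng(seed)
    x = x0
    samples = np.empty(n_steps)
    for i in range(n_steps):
        x_new = x + rng.normal(0.0, proposal_sd)        # symmetric proposal
        # accept with probability min(1, p(x_new)/p(x)), computed in log space
        if np.log(rng.uniform()) < log_p(x_new) - log_p(x):
            x = x_new
        samples[i] = x
    return samples

# example: sample from a standard normal target, then do statistics on the draws
draws = metropolis_hastings(lambda x: -0.5 * x**2, x0=0.0, n_steps=5000, seed=1)
print(draws.mean(), draws.std())   # close to 0 and 1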
The physicist's language and algorithms
1. I want to know the concentration distribution of the pollutant.
2. It seems to be too difficult to find a solution by an analytical formula
3. So let's model numerically the processes step by step:
• let's model the mean drift. The generic model might be the Navier-Stokes equations.
• let's define a generic model of the random component, driven by a random number generator (= Monte Carlo process)
• let's assign a passive tracer particle as a sample. Let's follow this sample through the solution space (= Markov process).
• repeat this sample experiment many times and draw the conclusions by doing statistics on the results.
4. The sample particles visit the points that have a higher probability of being important - that is: they follow the current of the wind or the water (sounds somehow reasonable, doesn't it?)
Example 1: Distribution in inhomogeneous conditions
We generate the mean drift data as an inhomogeneous flow field. It might be a water current in a river bed or an air flow in a valley. Usually fluid dynamics is modelled by the Navier-Stokes equations. For the sake of brevity we have generated the flow field with an analytical formula.
The random component represents the turbulence which is superimposed on the mean flow. Also for the sake of brevity we simulate the turbulence by a single constant value (standard deviation). So in the following example the turbulence field is homogeneous.
In this first example turbulence is small. We see a pattern as can be expected from an air flow in cold nocturnal conditions.
#------ main: Mean Drift ----------
nx,ny = 40, 20
X,Y,x,y,xc,yc = get_grid(nx,ny)
u,v = get_flowfield(X,Y,'C')
#------ main: Random Walk ----------
nTraj, dt, sigma = 80, 0.1, 0.6
xts,yts,C = run_randomWalk(nTraj,dt,sigma)
plot_distribution(X,Y,u,v,xts,yts,xc,yc,C)
Example 2: Distribution on a convective summer day
On a warm summer day turbulence is enhanced. The random component is now much more important relative to the same mean drift as in example 1. At each time step the sample particle is kicked onto a different path of the mean drift.
#------ main: Random Walk ----------
nTraj, dt, sigma = 80, 0.1, 2.5
xts,yts,C = run_randomWalk(nTraj,dt,sigma)
plot_distribution(X,Y,u,v,xts,yts,xc,yc,C)
Python code
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
def get_grid(nx,ny):
    x = np.linspace(0,nx,nx)
    y = np.linspace(0,ny,ny)
    X = np.outer(x,np.ones_like(y))   # X = ix.T * ones(iy)
    Y = np.outer(np.ones_like(x),y)   # Y = ones(ix) * iy.T
    xc = X + 0.5*(x[1]-x[0])          # cell-centre coordinates
    yc = Y + 0.5*(y[1]-y[0])
    return X,Y,x,y,xc,yc

def get_flowfield(X,Y,case):
    # analytical stand-in for a flow field that would usually come from Navier-Stokes
    if case == 'A': u,v = -Y, X
    if case == 'C':
        x = 2.0*np.pi*X/(np.amax(X)-np.amin(X)) + np.pi
        y = 0.5*np.pi*Y/(np.amax(Y)-np.amin(Y))
        u = 1 + 0.1*np.sin(x)
        v = 0.8*np.cos(x)
    return u,v

def inField(jx,jy):
    # particle is inside the grid if its cell indices are in range (nx, ny are globals)
    return (0 <= jx <= nx) and (0 <= jy <= ny)

def fem4(ξ,ν):
    # bilinear shape functions on the unit cell, ξ,ν in [0,1]
    f1 = (1-ξ)*(1-ν); f2 = (ξ)*(1-ν)
    f3 = (ξ)*(ν);     f4 = (1-ξ)*(ν)
    return f1,f2,f3,f4
def get_V(xp,yp,jx,jy,u,v):
    # bilinear interpolation of the drift field at the particle position
    ξ = (xp - x[jx])/(x[jx+1]-x[jx])
    ν = (yp - y[jy])/(y[jy+1]-y[jy])
    f1,f2,f3,f4 = fem4(ξ,ν)
    up = f1*u[jx,jy] + f2*u[jx+1,jy] + f3*u[jx+1,jy+1] + f4*u[jx,jy+1]
    vp = f1*v[jx,jy] + f2*v[jx+1,jy] + f3*v[jx+1,jy+1] + f4*v[jx,jy+1]
    return up,vp

def get_cell(x,y,xp,yp, u,v, C):
    # locate the grid cell containing the particle, count the visit, interpolate the drift
    jx = np.argmax(x>=xp)-1
    jy = np.argmax(y>=yp)-1
    if inField(jx,jy):
        up,vp = get_V(xp,yp,jx,jy,u,v)
        C[jx,jy] += 1
        return up,vp,True
    else:
        return [],[],False

def run_randomWalk(nTraj,dt,sigma):
    xts = pd.DataFrame()
    yts = pd.DataFrame()
    C = np.zeros_like(xc)
    for iTraj in np.arange(nTraj):
        xp = 1; yp = y[0] + 0.5*(y[-1]-y[0])   # release point of each trajectory
        xtr = np.array([xp])
        ytr = np.array([yp])
        while True:
            up,vp,flag = get_cell(x,y,xp,yp,u,v, C)
            if flag:
                upp = np.random.normal(loc=0.0, scale=sigma)   # random drift u'
                vpp = np.random.normal(loc=0.0, scale=sigma)
                xp = xp + (up+upp)*dt; xtr = np.append(xtr, xp)
                yp = yp + (vp+vpp)*dt; ytr = np.append(ytr, yp)
            else:   # particle left the grid: store the finished trajectory
                xts = pd.concat([xts,pd.DataFrame(xtr)], axis=1)
                yts = pd.concat([yts,pd.DataFrame(ytr)], axis=1)
                break
    return xts,yts,C
plt.plot(xts, yts); plt.show()   # quick look at the stored trajectories
Graphics
def plot_flowfield(X,Y,u,v):
    R = (u*u + v*v)**(1/2)
    with plt.style.context('fivethirtyeight'):
        fig = plt.figure(figsize=(15,15))
        ax1 = fig.add_subplot(111)   # axis handle was missing in the original listing
        q0 = ax1.quiver(X, Y, u, v, R, angles='xy', alpha=.92, cmap=plt.cm.plasma)
        q1 = ax1.quiver(X, Y, u, v, edgecolor='k', facecolor='None', linewidth=.5)
        p = plt.quiverkey(q0,1,0.5,2,"2 m/s",coordinates='data',color='r')
        ax1.set_aspect('equal')

def plot_trajectory(X,Y,u,v,xtr,ytr):
    R = (u*u + v*v)**(1/2)
    with plt.style.context('fivethirtyeight'):
        fig = plt.figure(figsize=(15,15))
        ax1 = fig.add_subplot(111)
        q0 = ax1.quiver(X, Y, u, v, R, angles='xy', alpha=.92, cmap=plt.cm.plasma)
        q1 = ax1.quiver(X, Y, u, v, edgecolor='k', facecolor='None', linewidth=.5)
        p = plt.quiverkey(q0,1,0.5,2,"2 m/s",coordinates='data',color='r')
        ax1.plot(xtr,ytr,ls='-', color='k', lw=2, alpha=0.65)
        plt.title('Markov Chain Monte Carlo', fontsize=25, fontweight='bold')
        ax1.set_aspect('equal')

def plot_distribution(X,Y,u,v,xts,yts,xc,yc,C):
    R = (u*u + v*v)**(1/2)
    with plt.style.context('fivethirtyeight'):
        fig = plt.figure(figsize=(20,22))
        ax1 = fig.add_subplot(111)
        q0 = ax1.quiver(X, Y, u, v, R, angles='xy', alpha=.92, cmap=plt.cm.plasma)
        q1 = ax1.quiver(X, Y, u, v, edgecolor='k', facecolor='None', linewidth=.5)
        p = plt.quiverkey(q0,1,0.5,2,"2 m/s",coordinates='data',color='r')
        ax1.plot(xts,yts,'-k', lw=2)
        ax1.set_title('Markov Chain Monte Carlo Random Walk', fontsize=20, fontweight='bold')
        ax1.set_aspect('equal')
In [13]:
%pylab nbagg
import sys
from tvb.simulator.lab import *
LOG = get_logger('demo')
import scipy.stats
from sklearn.decomposition import FastICA
import time
import utils
Populating the interactive namespace from numpy and matplotlib
# Introduction¶
Fluctuations in brain activity in non-task conditions are now a well-established phenomenon in the literature. These fluctuations are not random but have been shown to exhibit spatial patterns, referred to as resting state networks (RSNs). Although they are most readily identified during rest, these networks are related to specific functions, and abnormalities in such RSNs have been associated with pathology.
In the following, we will demonstrate some starting points for modeling resting state networks in TVB, using the default data set.
# Setting up the simulation¶
In the following, we'll use a basic region-level simulation, with the generic oscillator set in an excitable regime, linear coupling with low strength, a stochastic integrator with low noise, and a temporal average monitor at 200 Hz.
These settings are a good starting point for modeling resting state patterns because no particular factor dominates the dynamics and a balance between the structural connectivity, moderate intrinsic excitability and noise comes into play.
In [14]:
def run_sim(conn, cs, D, cv=3.0, dt=0.5, simlen=1e3):
sim = simulator.Simulator(
model=models.Generic2dOscillator(a=0.0),
connectivity=conn,
coupling=coupling.Linear(a=cs),
integrator=integrators.HeunStochastic(dt=dt, noise=noise.Additive(nsig=array([D]))),
monitors=monitors.TemporalAverage(period=5.0) # 200 Hz
)
sim.configure()
(t, y), = sim.run(simulation_length=simlen)
return t, y[:, 0, :, 0]
conn = connectivity.Connectivity(load_default=True)
WARNING File 'hemispheres' not found in ZIP.
One of the common features of simulations is an initial transient, so we'll perform a ten-minute simulation, and as soon as the time series have been generated, we check that the transient has decayed:
In [15]:
tic = time.time()
t, y = run_sim(conn, 6e-2, 5e-4, simlen=10*60e3)
'simulation required %0.3f seconds.' % (time.time() - tic, )
Out[15]:
'simulation required 309.845 seconds.'
# Functional Connectivity¶
Next, to quickly assess the presence of a network structure in the time series, we window the time series into 1-second non-overlapping windows and obtain per-window correlation matrices.
In [16]:
cs = []
for i in range(int(t[-1]/1e3)):
cs.append(corrcoef(y[(t>(i*1e3))*(t<(1e3*(i+1)))].T))
cs = array(cs)
cs.shape
Out[16]:
(599L, 76L, 76L)
The strength of correlation can be assessed statistically by Fisher Z transforming the coefficients and applying a t-test,
In [17]:
cs_z = arctanh(cs)
for i in range(cs.shape[1]):
cs_z[:, i, i] = 0.0
_, p = scipy.stats.ttest_1samp(cs_z, 0.0)
C:\Users\mw\Downloads\TVB_Distribution\tvb_data\Lib\site-packages\IPython\kernel\__main__.py:1: RuntimeWarning: divide by zero encountered in arctanh
if __name__ == '__main__':
We then visualize the structural connectivity (left) and the functional connectivity (right) as adjacency matrices, applying a threshold on significance:
In [21]:
figure(figsize=(10, 4))
subplot(121), imshow(conn.weights, cmap='binary', interpolation='none')
subplot(122), imshow(log10(p)*(p < 0.05), cmap='gray', interpolation='none');
show()
C:\Users\mw\Downloads\TVB_Distribution\tvb_data\Lib\site-packages\IPython\kernel\__main__.py:3: RuntimeWarning: divide by zero encountered in log10
app.launch_new_instance()
We can see there are significant deviations in the FC from the SC which are especially prominent in the interhemispheric FC, where interactions are found despite limited interhemispheric SC.
We can then ask what degree of similarity there is between the average functional connectivity and the structural connectivity, and how it varies over time:
In [22]:
figure()
plot(r_[1:len(cs)+1], [corrcoef(cs_i.ravel(), conn.weights.ravel())[0, 1] for cs_i in cs])
ylim([0, 0.5])
ylabel('FC-SC correlation')
xlabel('Time (s)');
show()
# Seed-region correlation maps¶
A common visualization of FC specific to a given region is to pull out its row of the FC matrix and plot a colormap on the anatomy. We can do this with the simulated functional connectivity to identify which regions are highly coordinated with the seed region.
In [29]:
def plot_roi_corr_map(reg_name):
roi = find(conn.ordered_labels==reg_name)[0]
cs_m = cs[2:].mean(axis=0)
rm = utils.cortex.region_mapping
utils.multiview(cs_m[roi][rm], shaded=False, suptitle=reg_name, figsize=(10, 5))
As a few examples of such maps, seeding in the left motor cortex, right ventrolateral prefrontal cortex, and right superior parietal cortex:
In [30]:
for reg in 'lM1 rPFCVL rPCS'.split():
plot_roi_corr_map(reg)
## Stream: general
### Topic: Extracting un-named proofs from the goal state
#### Eric Wieser (Jan 12 2021 at 14:23):
Often I find myself needing a proof that is already part of the goal state, but having to rebuild it because I don't have a name on it. For example:
def foo : fin 2 → ℕ
| ⟨0, _⟩ := 1
| ⟨1, _⟩ := 2
| ⟨n + 2, h⟩ := false.elim $ n.not_lt_zero $ add_lt_iff_neg_right.mp h
example : ∀ x, foo x > 0 := begin
rintro ⟨(x|x|x), hx⟩; dsimp only [foo],
norm_num,
norm_num,
sorry -- _.elim > 0; the _ is a proof of false - how do I reuse it?
end
#### Reid Barton (Jan 12 2021 at 14:29):
This is a weird way to do it, but works
example : ∀ x, foo x > 0 := begin
rintro ⟨(x|x|x), hx⟩; dsimp only [foo],
norm_num,
norm_num,
let m : false := _,
change m.elim > 0,
end
#### Reid Barton (Jan 12 2021 at 14:31):
It requires you to write out the form of the goal though--maybe it would be useful to have a tactic that could search the goal (or really the type of any expression) for a proof of a given p
#### Reid Barton (Jan 12 2021 at 14:31):
in this case false
Oh, clever
#### Eric Wieser (Jan 12 2021 at 14:33):
Yeah, such a tactic is something I've found myself wanting multiple times now
#### Kyle Miller (Jan 12 2021 at 17:43):
It looks like this actually works:
lemma elim_elim {α : Type*} {p : α → Prop} {h : false} : p (false.elim h) := false.elim h
example : ∀ x, foo x > 0 := begin
rintro ⟨(x|x|x), hx⟩; dsimp only [foo],
norm_num,
norm_num,
exact elim_elim,
end
#### Eric Wieser (Jan 12 2021 at 17:45):
Yeah, that's basically the same trick as Reid's. I think a tactic that extracts proofs from hypotheses and goals would be the most general solution here
#### Kyle Miller (Jan 12 2021 at 17:54):
I thought this was interesting because Lean figured out p automatically, but I guess it can't handle this example without being explicit:
example {h : false} : 37^2 + (false.elim h)^7 = 0 := begin
apply @elim_elim _ (λ x, 37^2 + x^7 = 0),
end
#### Eric Wieser (Jan 12 2021 at 17:57):
Your approach is interesting because it suggests there is value to adding lemmas with false hypotheses!
#### Mario Carneiro (Jan 12 2021 at 18:02):
Eric Wieser said:
Yeah, that's basically the same trick as Reid's. I think a tactic that extracts proofs from hypotheses and goals would be the most general solution here
I have the vague memory of writing such a tactic back at the dawn of lean 3, no idea what happened to it
#### Mario Carneiro (Jan 12 2021 at 18:03):
this is also really useful for proving theorems about classical.some <nasty term you don't want to refer to>
#### Mario Carneiro (Jan 12 2021 at 18:07):
@Kyle Miller it will probably work if you mark elim_elim as elab_as_eliminator
#### Kyle Miller (Jan 12 2021 at 18:07):
That didn't seem to work here
#### Kyle Miller (Jan 12 2021 at 18:10):
With @[elab_as_eliminator]:
example {h : false} : false.elim h > 0 := elim_elim -- works
example {h : false} : false.elim h < 0 := elim_elim -- doesn't work
example {h : false} : false.elim h < 0 := @elim_elim _ (λ x, x < 0) _ -- works
AH I remember
#### Mario Carneiro (Jan 12 2021 at 18:12):
generalize_proofs
#### Bryan Gin-ge Chen (Jan 12 2021 at 18:16):
It's missing an invocation of add_tactic_doc: https://github.com/leanprover-community/mathlib/blob/da24addb74fe40009254cadce1e41593e876c82a/src/tactic/generalize_proofs.lean#L67
#### Eric Wieser (Jan 12 2021 at 18:16):
docs#tactic.interactive.generalize_proofs exists, hooray!
#### Kyle Miller (Jan 12 2021 at 18:17):
That's a nice tactic to have -- I've found myself generalizing proofs a few times by hand.
example : ∀ x, foo x > 0 := begin
rintro ⟨(x|x|x), hx⟩; dsimp only [foo],
norm_num,
norm_num,
generalize_proofs,
tauto,
end
#### Mario Carneiro (Jan 12 2021 at 18:17):
It was written in 2017, cut it some slack
#### Mario Carneiro (Jan 12 2021 at 18:17):
it actually predates mathlib
#### Bryan Gin-ge Chen (Jan 12 2021 at 18:28):
Feel free to add / edit #5714 which adds this as an example to the doc string and adds a tactic doc entry.
#### Mario Carneiro (Jan 12 2021 at 18:59):
Oops, it appears I made a competing PR #5715
#### Mario Carneiro (Jan 12 2021 at 19:00):
I also fixed a bug and added support for at h1 h2 |- to my PR though
#### Bryan Gin-ge Chen (Jan 12 2021 at 19:02):
I think you could add an example to the tactic doc string like in mine but otherwise it looks much better, thanks!
#### Mario Carneiro (Jan 12 2021 at 19:06):
the examples I use in the tests are a bit simpler than eric's original example
Three-digit numbers that are the sum of the cubes of their digits: 153, 370, 371, 407.
A cube is a solid three-dimensional shape bounded by six congruent square faces, with 12 edges and 8 vertices; it is the only regular hexahedron and one of the five Platonic solids. An edge is the line segment where two faces intersect. A cuboid is a polyhedron with six rectangular plane faces; if all the edges of a cuboid are equal, it forms a cube.
The cube of a number n is its third power: n³ = n × n² = n × n × n. For example, 6³ = 6 × 6 × 6 = 216. A perfect cube is an integer that is the cube of an integer, and every cube is either a multiple of 9 or one away from a multiple of 9. The cube root of a number is the value that, used in a multiplication three times, gives that number: 3 × 3 × 3 = 27, so the cube root of 27 is 3. The volume of a cube with edge length a is a³; a cube 1 unit tall, 1 unit wide and 1 unit long has a volume of one cubic unit.
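The digit-cube fact above is easy to verify computationally; here is a small Python sketch (the helper name is illustrative, not from the original text):
def is_perfect_cube(n: int) -> bool:
    # round the floating-point cube root, then verify exactly in integer arithmetic
    r = round(abs(n) ** (1 / 3))
    return r ** 3 == abs(n)

# the four three-digit numbers equal to the sum of the cubes of their digits
print([k for k in range(100, 1000)
       if k == sum(int(d) ** 3 for d in str(k))])   # [153, 370, 371, 407]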
Sample multiple-choice questions on estimation (Statistics 8, chapters 7 to 10), condensed with their correct answers:
1. A point estimate is a single value that is the best estimate of an unknown population parameter.
2. An interval estimate is a lower and upper confidence limit associated with a specific level of confidence.
3. If a 95% confidence interval for the population mean is calculated to be 75.29 to 81.45, the point estimate is the midpoint of the interval, 78.37.
4. The width of a confidence interval increases when the confidence level is increased and when the sample size is decreased; a 90% confidence interval for the population mean is therefore narrower than a 95% confidence interval.
5. A federal auditor who sampled 100 accounts at the First National Bank of a small town and found an average demand deposit balance of R549.82 would use that sample mean as the point estimate of the population mean for all accounts.
6. Sand is packed into bags which are then weighed on scales; if the recorded weight is normally distributed with mean μ kg (the intended weight) and standard deviation 0.36 kg, the mean of repeated weighings (for example 34.7 kg) is used to estimate the true weight of the bag.
7. Patricia bought a dress for $23.99 and a coat for $47.50; the best way to estimate the total cost is to round each price to the nearest dollar and add: about $24 + $48 = $72.
Uninstall Xfce Fedora, The Anatomy Of A Large-scale Hypertextual Web Search Engine Summary, Extends Meaning In Urdu, Ratpoison Csgo Discord, Army Heat Training Powerpoint, Sanding Old Wood Floors With Nails, Car Insurance Guide,
|
# Maple and probabilities
1. Apr 27, 2006
Gday,
I am trying to write some code in Maple involving probabilities. Here is an example of what I want to do:
with probability $$\frac{\sqrt{3}}{4}$$
y:=3
with probability $$\frac{\sqrt{3}}{4}$$
y:=4
with probability $$1-\frac{\sqrt{3}}{4}$$
y:=5
It's easy enough for a probability of 1/2, since you can generate a random 0 or 1: if the random number is 0 you do one thing, and if it is 1 you do something else. But with probabilities like $$\frac{\sqrt{3}}{4}$$ it has proven a bit difficult for me. Hope someone can help.
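(For reference, the standard trick, sketched here in Python since the same structure ports directly to Maple, is to compare one uniform draw on [0, 1) against cumulative thresholds. Note the three probabilities must sum to 1, so the last branch below is assumed to take whatever probability remains.)

```python
import math, random

p = math.sqrt(3) / 4
r = random.random()      # one uniform draw on [0, 1)
if r < p:                # probability sqrt(3)/4
    y = 3
elif r < 2 * p:          # probability sqrt(3)/4
    y = 4
else:                    # whatever probability remains
    y = 5
```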
thanks,
|
# 23.4 The national saving and investment identity (Page 4/16)
Page 4 / 16
In the short run, trade imbalances can be affected by whether an economy is in a recession or on the upswing. A recession tends to make a trade deficit smaller, or a trade surplus larger, while a period of strong economic growth tends to make a trade deficit larger, or a trade surplus smaller.
As an example, note in [link] that the U.S. trade deficit declined by almost half from 2006 to 2009. One primary reason for this change is that during the recession, as the U.S. economy slowed down, it purchased fewer of all goods, including fewer imports from abroad. However, buying power abroad fell less, and so U.S. exports did not fall by as much.
Conversely, in the mid-2000s, when the U.S. trade deficit became very large, a contributing short-term reason is that the U.S. economy was growing. As a result, there was lots of aggressive buying in the U.S. economy, including the buying of imports. Thus, a rapidly growing domestic economy is often accompanied by a trade deficit (or a much lower trade surplus), while a slowing or recessionary domestic economy is accompanied by a trade surplus (or a much lower trade deficit).
When the trade deficit rises, it necessarily means a greater net inflow of foreign financial capital . The national saving and investment identity teaches that the rest of the economy can absorb this inflow of foreign financial capital in several different ways. For example, the additional inflow of financial capital from abroad could be offset by reduced private savings, leaving domestic investment and public saving unchanged. Alternatively, the inflow of foreign financial capital could result in higher domestic investment, leaving private and public saving unchanged. Yet another possibility is that the inflow of foreign financial capital could be absorbed by greater government borrowing, leaving domestic saving and investment unchanged. The national saving and investment identity does not specify which of these scenarios, alone or in combination, will occur—only that one of them must occur.
## Key concepts and summary
The national saving and investment identity is based on the relationship that the total quantity of financial capital supplied from all sources must equal the total quantity of financial capital demanded from all sources. If S is private saving, T is taxes, G is government spending, M is imports, X is exports, and I is investment, then for an economy with a current account deficit and a budget deficit:

S + (M - X) = I + (G - T)
A recession tends to increase the trade balance (meaning a higher trade surplus or lower trade deficit), while economic boom will tend to decrease the trade balance (meaning a lower trade surplus or a larger trade deficit).
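As a quick worked illustration with made-up numbers: if private saving S is $1,000 billion, investment I is $1,100 billion, and the government runs a $100 billion budget deficit (G - T = $100 billion), then the identity requires M - X = I + (G - T) - S = $1,100 + $100 - $1,000 = $200 billion, that is, a $200 billion trade deficit matched by an equal net inflow of foreign financial capital.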
## Problems
Imagine that the U.S. economy finds itself in the following situation: a government budget deficit of $100 billion, total domestic savings of $1,500 billion, and total domestic physical capital investment of $1,600 billion. According to the national saving and investment identity, what will be the current account balance? What will be the current account balance if investment rises by $50 billion, while the budget deficit and national savings remain the same?
[link] provides some hypothetical data on macroeconomic accounts for three countries represented by A, B, and C and measured in billions of currency units. In [link] , private household saving is SH, tax revenue is T, government spending is G, and investment spending is I.
Macroeconomic accounts
        A    B    C
SH    700  500  600
T      00  500  500
G     600  350  650
I     800  400  450
1. Calculate the trade balance and the net inflow of foreign saving for each country.
2. State whether each one has a trade surplus or deficit (or balanced trade).
3. State whether each is a net lender or borrower internationally and explain.
Imagine that the economy of Germany finds itself in the following situation: the government budget has a surplus of 1% of Germany’s GDP; private savings is 20% of GDP; and physical investment is 18% of GDP.
1. Based on the national saving and investment identity, what is the current account balance?
2. If the government budget surplus falls to zero, how will this affect the current account balance?
|
# A sad duck with TikZ (or TikZducks?)
I am working on my homework, which includes a little website.
I like the tikzducks package very much, so I decided to add some illustrations there using the svg files generated from \duck command.
However, right now I am working on error pages (404 page, etc.). I need a sad duck to illustrate the page. But apparently it is not yet available in tikzducks.
So how to have a sad duck based on tikzducks' \duck?
Sorry for not providing a minimal example, but I just can't do it – my imagination is not as good as samcarter's.
tikzlings-based answers are also welcome, but tikzducks would be preferred for better consistency.
Edit: \duck[grumpy] is good enough for me. However, if you have any ideas, your answer is very appreciated.
• is the grumpy duck not suitable? – Ulrike Fischer Nov 19 '19 at 8:59
• you don't need to vote to close your own question. But you can also wait if someone comes along with some better idea. Or answer your own question if you thing the grumpy duck is good. – Ulrike Fischer Nov 19 '19 at 9:06
• sad and grumpy is different... I vote for not closing this question – Rmano Nov 19 '19 at 9:08
• The grumpy duck is a perhaps a little subtle; how about a crying duck? A teardrop would be easy enough to add (I may have a play later if I get time, but won't mind at all if someone beats me to it) – Chris H Nov 19 '19 at 9:36
• @ChrisH Maybe these droplets could be used. – user194703 Nov 19 '19 at 16:22
The following code was passed to me:
\documentclass{standalone}
\usepackage{tikzducks}
\pagecolor{gray!20!white}
\begin{document}
\begin{tikzpicture}
\duck[grumpy,eye=yellow!70!brown]
% redraw the eye whites, half-closed for a sad look
\fill[white!85!yellow] (0.9121,1.5426) .. controls (0.9357,1.6075) and (0.9015,1.6397) .. (0.8552,1.6566) .. controls (0.8088,1.6735) and (0.7652,1.6477) .. (0.7442,1.6038) .. controls (0.7205,1.5388) and (0.7390,1.4725) .. (0.7853,1.4557) .. controls (0.8317,1.4388) and (0.8885,1.4777) .. (0.9121,1.5426) -- cycle (0.6199,1.6197) .. controls (0.6415,1.6790) and (0.6260,1.7156) .. (0.5852,1.7304) .. controls (0.5443,1.7453) and (0.4937,1.7328) .. (0.4721,1.6735) .. controls (0.4505,1.6141) and (0.4661,1.5540) .. (0.5069,1.5391) .. controls (0.5477,1.5243) and (0.5983,1.5603) .. (0.6199,1.6197) -- cycle;
% pupils
\fill[black, rotate=-20] (0.26,1.7575) ellipse (0.0357 and 0.0714);
\fill[black, rotate=-20] (-0.03,1.73) ellipse (0.0286 and 0.0643);
% drooping upper eyelids
\fill[yellow!30!brown] (0.9778,1.6871) .. controls (0.9011,1.6753) and (0.8740,1.7030) .. (0.8531,1.7606) .. controls (0.8034,1.6833) and (0.9421,1.6177) .. (0.9778,1.6871) -- cycle (0.6229,1.8394) .. controls (0.5901,1.7822) and (0.5420,1.7734) .. (0.4966,1.8048) .. controls (0.5213,1.7300) and (0.6310,1.7565) .. (0.6229,1.8394) -- cycle;
% two teardrops below the eye
\fill[cyan!50!white] (0.9026,1.3929) .. controls (0.9135,1.3706) and (0.8889,1.3471) .. (0.8719,1.3471) .. controls (0.8549,1.3471) and (0.8303,1.3706) .. (0.8412,1.3929) .. controls (0.8519,1.4148) and (0.8549,1.4150) .. (0.8719,1.4388) .. controls (0.8827,1.4099) and (0.8904,1.4182) .. (0.9026,1.3929) -- cycle (0.9499,1.2931) .. controls (0.9608,1.2707) and (0.9362,1.2472) .. (0.9192,1.2472) .. controls (0.9022,1.2472) and (0.8776,1.2708) .. (0.8885,1.2931) .. controls (0.8992,1.3150) and (0.9022,1.3151) .. (0.9192,1.3389) .. controls (0.9300,1.3100) and (0.9377,1.3184) .. (0.9499,1.2931) -- cycle;
\end{tikzpicture}
\end{document}
I hear some rumours that it will find its way into the tikzducks package at some point.
• \duck[crying]? – Mast Nov 19 '19 at 19:57
• This is very nice :-) – Sebastiano Nov 19 '19 at 20:43
• (+25) I am very sad :-( – Someone Nov 20 '19 at 0:28
|
2020-05-08, 23:26 #1882
swellman
Jun 2012
2×3×491 Posts
Quote:
Originally Posted by EdH I'm playing with some alternate scripts for my machines that aren't working on the 198 team project. I might as well try to make them useful during this time. Is there a polyselect request that I might try some things with? I won't be trying to set too many values ATM, so something where I wouldn't duplicate effort and if I don't perform well, nothing is really detracted from the overall picture, would be preferred.
Ed - here is a C182 I’m chasing now
https://www.mersenneforum.org/showpo...9&postcount=74
I’m working c5 < 1M, using msieve-GPU. Any help welcome!
2020-05-09, 00:15 #1883
EdH
"Ed Hall"
Dec 2009
2·32·197 Posts
Quote:
Originally Posted by swellman Ed - here is a C182 I’m chasing now https://www.mersenneforum.org/showpo...9&postcount=74 I’m working c5 < 1M, using msieve-GPU. Any help welcome!
Thanks! I'll see if I can get something working here with CADO-NFS. I might look into setting up a Colab instance with msieve GPU as well. It's been a long time since i used msieve with a GPU. All my GPUs were 1.x architecture.
2020-05-09, 14:57 #1884
EdH
"Ed Hall"
Dec 2009
Adirondack Mtns
67328 Posts
I'm sure this is a poor poly, but am I on the right track?
Code:
skew: 23193644.637
c0: -27677017276391468847497030636368115901471780
c1: -10552743874205252207093977517839426343
c2: 1210164270654515308018946622990
c3: 43735071840910583726351
c4: -886925003073138
c5: 6165600
Y0: -120879887180705227225827835478045022
Y1: 1217953568360884501991
# MurphyE (Bf=4.295e+09,Bg=2.147e+09,area=2.416e+16) = 1.110e-08
# f(x) = 6165600*x^5-886925003073138*x^4+43735071840910583726351*x^3+1210164270654515308018946622990*x^2-10552743874205252207093977517839426343*x-27677017276391468847497030636368115901471780
# g(x) = 1217953568360884501991*x-120879887180705227225827835478045022
cownoise said:
Code:
Best Skew: 31354947.23598
. . .
31354947.23598 5.51074631e-14
. . .
Should I be using the Best Scores listing as a reference of what to look for?
2020-05-09, 16:01 #1885
VBCurtis
"Curtis"
Feb 2005
Riverside, CA
3×1,543 Posts
Quote:
Originally Posted by EdH Should I be using the Best Scores listing as a reference of what to look for?
Yep! I look to break the record for one digit tougher, or get within 5% of the current record if the first digit of my composite is 1 or 2. In this case, the C182-183 records are a few years old, so breaking them is fairly likely.
I'd guess 6.5e-14 is a good goal for this composite, and a 7 would not surprise me.
2020-05-09, 18:11 #1886
swellman
Jun 2012
B8216 Posts
C182
I got a decent poly searching down low with msieve-GPU
Code:
n: 26521232090195873108384905824300492852413283081683568418163219479089273132380406501680155963531361683795706304607082425988301635509432877463621844114521741860720947862338201013214619
# norm 1.051570e-17 alpha -9.121892 e 6.462e-14 rroots 5
skew: 148988649.66
c0: -6813699213478967095358229786320166183214771200
c1: 534615773158213719542512689605226366504
c2: 9522073202739549597183040910818
c3: -51164897082315737549521
c4: -420892801747988
c5: 965712
Y0: -122390924023389854749571531023242691
Y1: 2233936328262373
For reference, the goals for e-score per msieve are as follows:
Threshold: 6.28e-14
Objective: > 7.23e-14
Best found to date: 6.732e-14
Searching 8-10M next using msieve-GPU
2020-05-09, 21:24 #1887
EdH
"Ed Hall"
Dec 2009
Adirondack Mtns
DDA16 Posts
Thanks Guys! I've got another run going. I had to adjust some scripts. I couldn't get a Colab GPU session compiled. I thought I had made the necessary changes, but I need to go look through some threads to see if I missed something.
How do you derive threshold and objective?
2020-05-09, 21:55 #1888
swellman
Jun 2012
2·3·491 Posts
Quote:
Originally Posted by EdH How do you derive threshold and objective?
It’s output by msieve at the start of poly search. They are hard coded values based on GNFS difficulty. Finding a poly with a score above threshold is “good enough” to start sieving in most circumstances.
In this case, I mentioned this composite in the thread for finding better CADO parameters in case that work went beyond C180. We now have a baseline for a C182, if the work continues. But even though the poly found is above threshold, it was found in < 60 hours by msieve-GPU. CADO can probably do much better. Heck, even msieve-GPU will likely keep finding better polys with higher search ranges for c5. So I’ll go on searching until it’s clearly counterproductive.
Think we can find a 7-handle?
2020-05-09, 22:11 #1889
EdH
"Ed Hall"
Dec 2009
2·32·197 Posts
Quote:
Originally Posted by swellman It’s output by msieve at the start of poly search. They are hard coded values based on GNFS difficulty. Finding a poly with a score above threshold is “good enough” to start sieving in most circumstances. In this case, I mentioned this composite in the thread for finding better CADO parameters in case that work went beyond C180. We now have a baseline for a C182, if the work continues. But even though the poly found is above threshold, it was found in < 60 hours by msieve-GPU. CADO can probably do much better. Heck, even msieve-GPU will likely keep finding better polys with higher search ranges for c5. So I’ll go on searching until it’s clearly counterproductive. Think we can find a 7-handle?
I'm playing around with both CADO-NFS and trying to get a Colab msieve-GPU session working. I don't expect much from my current work, but since my interest is piqued, I kind of think once the 198 team sieve is finished, I will try a full "farm" poly search run and see what turns up. By then I should have my "poly" scripts fine tuned. (This is where my ego says, "7? No problem!")
See what you did by putting my name on a list?
Last fiddled with by EdH on 2020-05-09 at 22:12 Reason: Added two letter missing word - identification of which word is left as an exercise. . .
2020-05-10, 02:17 #1890
VBCurtis
"Curtis"
Feb 2005
Riverside, CA
3·1,543 Posts
I can't let a poly search just sit there with others having all the fun. Taking 14-15M on CADO with incr=2310, P=3M, nq=15625. That's about 3% of a full search, just to see what turns up.
2020-05-10, 02:54 #1891
EdH
"Ed Hall"
Dec 2009
2×32×197 Posts
Quote:
Originally Posted by VBCurtis I can't let a poly search just sit there with others having all the fun. Taking 14-15M on CADO with incr=2310, P=3M, nq=15625. That's about 3% of a full search, just to see what turns up.
Earlier today I started a default CADO-NFS run with the minor machines here. I expect to add the major machines when they finish sieving. How does my default run affect your 14-15M area? Is it full duplication, or is there enough randomness to not worry? I left everything at the values in your modified params.c180.
2020-05-10, 04:51 #1892
VBCurtis
"Curtis"
Feb 2005
Riverside, CA
3×1,543 Posts
I believe the default settings only go to 2-3M; I went for 14M to avoid any duplication. That said, I used admin 14e6 and admax 15e6; after running all the workunits, it thinks it is 99.6% done and won't proceed to rootopt. I've tried changing admax to a multiple of 2310, changing admax to far below 15e6 so that it is clearly more than 100% done, and tried setting admin/admax/adrange all to zero to try to force it to finish sizeopt, no joy. So, I guess I won't be providing a poly for this after all.
|
Calendar Contest Problem 6 — Ming Dynasty
View as PDF
Points: 25
Time limit: 1.0s
Memory limit: 128M
Author:
Problem type
After creating a calendar to Maximilien Robespierre's satisfaction and narrowly avoiding the guillotine, you were transported elsewhere and elsewhen. (Or maybe you were narrowly teleported just before the guillotine fell, in which case you deserve what's coming.)
Somehow, you ended up in the Forbidden City in what appeared to be the Ming dynasty, and were brought before the Emperor, who was engrossed in a debate on calendar reform with his ministers. So far, the Ming Dynasty's official calendar is based on the principle of 平气, dividing the year into 24 periods of equal length and making the start of each period a solar term. However, debate has been raging about switching to 定气, which determines the solar terms by ecliptic longitude. 定气 makes the solar terms track the true orbital motion of the Earth more closely, but creates the potential for two major solar terms (中气) to appear in a single month. You don't really understand the difference, but since 定气 is used in the 21st century, you inevitably lean towards that option. Since you are a programmer by trade, the Emperor decides to let you implement the proposed calendar, so he can see for himself how it compares to the previous calendar. Failure, of course, is not an option, as that typically proves fatal during those times.
The proposed calendar is essentially identical to the modern Chinese lunar calendar. Every year consists of 12 months (or 13 in leap years). Each month may contain 29 or 30 days. The following convention is used to name dates:
• The first month of the year is called 正月. The other month names are the Chinese numerals + the character 月, i.e. 二月, 三月, 四月, 五月, 六月, 七月, 八月, 九月, 十月. For aesthetic reasons, the 11th month is called 冬月 and the last month of the year 腊月.
• For a leap month, the month is named as 闰 + the name of the preceding month. For example, if there is a leap month between 十月 and 冬月, it would be called 闰十月. The exception is if the leap month comes after the 12th month, in which case the leap month is 腊月 and the 12th month is called 十二月.
• The days of the month are named using a slightly modified form of Chinese numerals. If the Chinese numerals is a single character, e.g. 三, the character 初 is added as a prefix, i.e. 初三. If the Chinese numerals is two characters long, nothing is done, e.g. 正月十五. For days 21-29, the Chinese numerals would be three characters long, e.g. 二十一, which is undesirable, so we use the character 廿 to represent 20, i.e. 廿一. (Interesting fact: 廿 is essentially 十 with two vertical strokes, compare 卅 for 30 and 卌 for 40.)
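As a concreteness check, the day-naming rule above can be transcribed into a short Python function (the treatment of days 20 and 30, which are already the two-character 二十 and 三十, is inferred from the rule):

```python
DIGITS = "一二三四五六七八九"

def day_name(d: int) -> str:
    if d < 10:
        return "初" + DIGITS[d - 1]   # single character gets the 初 prefix
    if d == 10:
        return "初十"                  # 十 alone is also one character
    if d < 20:
        return "十" + DIGITS[d - 11]   # 11-19 are already two characters
    if d == 20:
        return "二十"
    if d < 30:
        return "廿" + DIGITS[d - 21]   # 廿 stands in for 二十 on days 21-29
    return "三十"
```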
It is important to note that in this version of the calendar, the dates are computed using the mean solar time at longitude 116°25' E, unlike the modern incarnation which uses the mean solar time at 120° E. Also note that the solar year is divided into 24 solar terms (二十四节气). The odd numbered terms are deemed minor (节气), and the even numbered ones are deemed major (中气). The two solstices and the two equinoxes are all considered major terms. The solar terms are named as follows, starting with the first term after the winter solstice: 小寒, 大寒, 立春, 雨水, 惊蛰, 春分, 清明, 谷雨, 立夏, 小满, 芒种, 夏至, 小暑, 大暑, 立秋, 处暑, 白露, 秋分, 寒露, 霜降, 立冬, 小雪, 大雪, 冬至.
The rules of the proposed calendar are simple. Every month begins on the date of the new moon. The date of the winter solstice (冬至) must be in the 11th month (冬月). The time period from the 11th month of a year (inclusive) until the 11th month of the next year (exclusive) is termed a 岁. In a normal 岁, there are 12 months. If there are 13, it's a leap 岁. The first month in the 岁 to not have a major solar term is deemed a leap month.
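A minimal sketch of the leap-month logic in Python, assuming the new-moon and major-solar-term instants have already been obtained from the grader and converted to local civil dates:

```python
import bisect

def sui_months(new_moons, solstice_a, solstice_b):
    """Start dates of the months from the 11th month containing solstice_a
    (inclusive) up to the 11th month containing solstice_b (exclusive)."""
    # the month containing a date starts at the last new moon on or before it
    i = bisect.bisect_right(new_moons, solstice_a) - 1
    j = bisect.bisect_right(new_moons, solstice_b) - 1
    return new_moons[i:j + 1]   # one extra trailing start as the closing bound

def leap_month_index(starts, major_terms):
    """starts carries one extra trailing entry (the next 11th month's start).
    If the sui has 13 months, return the index of the first month containing
    no major solar term; otherwise return None."""
    if len(starts) - 1 != 13:
        return None
    for k in range(13):
        lo, hi = starts[k], starts[k + 1]
        if not any(lo <= t < hi for t in major_terms):
            return k
    return None
```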
Fortunately, you have a copy of skyfield installed on your laptop, and along with a copy of JPL ephemerides DE440 and DE441, you could calculate the date of any solar term and new moon.
Interaction Protocol
This problem is interactive. Since you cannot use skyfield or download ephemerides on the judge, you must instead ask the grader to compute the dates of solar terms and new moons.
The first line of the input will consist of one integer, the year to generate the calendar for. For your convenience, this will be given in the Gregorian calendar, and refers to the Chinese calendar year with the largest intersection with said Gregorian year.
During interaction with the grader, all times will be passed in UT1 in the ISO 8601 format, i.e. YYYY-mm-ddTHH:MM:SS. The year may have more than 4 digits, but its range is restricted to .
After receiving this line, your program can make the following queries:
• N <time>: this asks the grader for the time of the next new moon that happens at or after the specified time.
• S <time>: this asks the grader for the time of the next solar term that happens at or after the specified time.
• W <time>: this asks the grader for the time of the next winter solstice that happens at or after the specified time.
The grader will respond with the time followed by a new line character.
When your program is done generating the calendar, it should output DONE on its own line, and the calendar for that year. For every date in the year, you should output the date in Chinese, followed by a space. Then, you should output the corresponding Gregorian date in ISO 8601 format. If a solar term falls on this date, you should output the name of the solar term as well, preceded by a space. Your output should be encoded as UTF-8 with NFC normalization.
For your convenience, a list of dates of new moons, solar terms, and winter solstices are provided as plain text files. This should help you test your program locally.
Scoring
You will receive 75% of the points for this problem for generating the correct output.
The remaining points are awarded based on the number of queries made. For a perfect score, you must make at most 27 N queries, 30 S queries, and 3 W queries. For each query over the limit (up to 5), you are deducted 5% of the points for that test case. That is, if you make 5 or more queries over the limit, you will only receive 75% of the points for that test case.
Sample Interaction
>>> represents input from the interactor. The queries presented are only for reference purposes and may be neither necessary nor sufficient to solve the problem. Note that traditional characters are also accepted.
>>> 2021
W 2020-01-01T00:00:00
>>> 2020-12-21T10:02:20
S 2020-12-21T10:02:21
>>> 2021-01-05T03:23:26
N 2020-11-21T10:02:20
>>> 2020-12-14T16:16:35
DONE
|
1
JEE Main 2019 (Online) 12th January Morning Slot
MCQ (Single Correct Answer)
+4
-1
A particle of mass m moves in a circular orbit in a central potential field U(r) = $${1 \over 2}k{r^2}$$. If Bohr's quantization conditions are applied, radii of possible orbitals and energy levels vary with quantum number n as:
A
$$r_n \propto \sqrt n$$, $$E_n \propto n$$
B
$$r_n \propto \sqrt n$$, $$E_n \propto {1 \over n}$$
C
$$r_n \propto n$$, $$E_n \propto n$$
D
$$r_n \propto {n^2}$$, $$E_n \propto {1 \over {{n^2}}}$$
2
JEE Main 2019 (Online) 11th January Evening Slot
MCQ (Single Correct Answer)
+4
-1
In a hydrogen-like atom, when an electron jumps from the M-shell to the L-shell, the wavelength of emitted radiation is $$\lambda$$. If an electron jumps from the N-shell to the L-shell, the wavelength of emitted radiation will be:
A
$${{25} \over {16}}$$ $$\lambda$$
B
$${{27} \over {20}}$$ $$\lambda$$
C
$${{16} \over {25}}$$ $$\lambda$$
D
$${{20} \over {27}}$$ $$\lambda$$
3
JEE Main 2019 (Online) 11th January Morning Slot
MCQ (Single Correct Answer)
+4
-1
A hydrogen atom, initially in the ground state, is excited by absorbing a photon of wavelength 980 $$\mathop A\limits^ \circ$$. The radius of the atom in the excited state, in terms of the Bohr radius $$a_0$$, will be : (hc = 12500 eV$$\mathop A\limits^ \circ$$)
A
$$4{a_0}$$
B
$$9{a_0}$$
C
$$25{a_0}$$
D
$$16{a_0}$$
4
JEE Main 2019 (Online) 10th January Evening Slot
MCQ (Single Correct Answer)
+4
-1
Consider the nuclear fission
$$N{e^{20}} \to 2H{e^4} + {C^{12}}$$
Given that the binding energy/nucleon of $$N{e^{20}}$$, $$H{e^4}$$ and $${C^{12}}$$ are, respectively, 8.03 MeV, 7.07 MeV and 7.86 MeV, identify the correct statement -
A
8.3 MeV energy will be released
B
energy of 11.9 MeV has to be supplied
C
energy of 12.4 MeV will be supplied
D
energy of 3.6 MeV will be released
|
Analog 2-axis Thumb Joystick with Select Button + Breakout Board
PRODUCT ID: 512
\$5.95
QTY DISCOUNT
1-9: $5.95
10-99: $5.36
100+: $4.76
Description
This mini-kit makes it easy to mount a PSP/Xbox-like thumb joystick to your project. The thumbstick is an analog joystick - more accurate and sensitive than just 'directional' joysticks - with a 'press in to select' button. Since it's analog, you'll need two analog reading pins on your microcontroller to determine X and Y. Having an extra digital input will let you read the switch.
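Reading it from, say, CircuitPython takes only a few lines. This is just a sketch: the pin choices A0, A1 and D2 below are placeholders for whichever ADC and digital pins you actually wire up.

```python
import time
import board
import analogio
import digitalio

x_axis = analogio.AnalogIn(board.A0)   # Xout -> A0 (assumed wiring)
y_axis = analogio.AnalogIn(board.A1)   # Yout -> A1 (assumed wiring)

select = digitalio.DigitalInOut(board.D2)   # Sel -> D2 (assumed wiring)
select.direction = digitalio.Direction.INPUT
select.pull = digitalio.Pull.UP        # the switch pulls the line low when pressed

while True:
    # AnalogIn.value is 0-65535; a centered stick reads near 32768 on both axes
    print(x_axis.value, y_axis.value, "pressed" if not select.value else "")
    time.sleep(0.1)
```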
The pack comes in three parts - the joystick itself, a soft-touch rubber 'hat' and a nicely designed breakout board. We designed the breakout so that you can attach the joystick to a panel easily - every other breakout we wanted to carry had mounting holes placed so they were in the way of the joystick movement! A 5 pin 0.1" spaced header makes it easy to connect either in a perfboard/breadboard setting or free wiring. You'll need to solder the joystick into the PCB using a soldering iron and solder, but it's very simple and will only take a minute.
Technical Details
Dimensions: 1.5" wide x 1.5" long x 1.25" tall (when assembled)
Weight: 12 grams (0.4 oz)
Power: Usable with any voltage up to 5V, 2 analog outputs. 1 milliamp draw when used with 5V
|
eigenvalues_of_core_states.rst

.. _eigenvalues_of_core_states:

==========================
Eigenvalues of core states
==========================

Calculating eigenvalues for core states can be useful for XAS, XES and
core-level shift calculations. The eigenvalue of a core state k with a
wave function \phi_k^a(\mathbf{r}) located on atom number a, can be
calculated using this formula:

.. math::

    \epsilon_k = \frac{\partial E}{\partial f_k} =
    \frac{\partial}{\partial f_k}(\tilde{E} - \tilde{E}^a + E^a),

where f_k is the occupation of the core state. When f_k is varied,
Q_L^a and n_c^a(r) will also vary:

.. math::

    \frac{\partial Q_L^a}{\partial f_k} =
    \int d\mathbf{r} Y_{00} [\phi_k^a(\mathbf{r})]^2 \delta_{\ell,0} = Y_{00},

.. math::

    \frac{\partial n_c^a(r)}{\partial f_k} = [\phi_k^a(\mathbf{r})]^2.

Using the PAW expressions for the :ref:`energy contributions`, we get:

.. math::

    \frac{\partial \tilde{E}}{\partial f_k} =
    Y_{00} \int d\mathbf{r} \int d\mathbf{r}'
    \frac{\tilde{\rho}(\mathbf{r}') \hat{g}_{00}^a(\mathbf{r} - \mathbf{R}^a)}
    {|\mathbf{r} - \mathbf{r}'|} =
    Y_{00} \int d\mathbf{r} \tilde{v}_H(\mathbf{r})
    \hat{g}_{00}^a(\mathbf{r} - \mathbf{R}^a),

.. math::

    \frac{\partial \tilde{E}^a}{\partial f_k} = Y_{00} \int_{r
|
## Archive for the ‘Functional Programming’ Category
### Clojure 1.9 Hits the Streets!
Saturday, December 9th, 2017
Clojure 1.9 by Alex Miller.
From the post:
Clojure 1.9 is now available!
Clojure 1.9 introduces two major new features: integration with spec and command line tools.
spec (rationale, guide) is a library for describing the structure of data and functions with support for:
• Validation
• Error reporting
• Destructuring
• Instrumentation
• Test-data generation
• Generative test generation
• Documentation
Clojure integrates spec via two new libraries (still in alpha):
This modularization facilitates refinement of spec separate from the Clojure release cycle.
The command line tools (getting started, guide, reference) provide:
• Quick and easy install
• Clojure REPL and runner
• Use of Maven and local dependencies
• A functional API for classpath management (tools.deps.alpha)
The installer is available for Mac developers in brew, for Linux users in a script, and for more platforms in the future.
Being interested in documentation, I followed the link to spec rationale and found:
Map specs should be of keysets only
Most systems for specifying structures conflate the specification of the key set (e.g. of keys in a map, fields in an object) with the specification of the values designated by those keys. I.e. in such approaches the schema for a map might say :a-key’s type is x-type and :b-key’s type is y-type. This is a major source of rigidity and redundancy.
In Clojure we gain power by dynamically composing, merging and building up maps. We routinely deal with optional and partial data, data produced by unreliable external sources, dynamic queries etc. These maps represent various sets, subsets, intersections and unions of the same keys, and in general ought to have the same semantic for the same key wherever it is used. Defining specifications of every subset/union/intersection, and then redundantly stating the semantic of each key is both an antipattern and unworkable in the most dynamic cases.
Decomplect maps/keys/values
Keep map (keyset) specs separate from attribute (key→value) specs. Encourage and support attribute-granularity specs of namespaced keyword to value-spec. Combining keys into sets (to specify maps) becomes orthogonal, and checking becomes possible in the fully-dynamic case, i.e. even when no map spec is present, attributes (key-values) can be checked.
Sets (maps) are about membership, that’s it
As per above, maps defining the details of the values at their keys is a fundamental complecting of concerns that will not be supported. Map specs detail required/optional keys (i.e. set membership things) and keyword/attr/value semantics are independent. Map checking is two-phase, required key presence then key/value conformance. The latter can be done even when the (namespace-qualified) keys present at runtime are not in the map spec. This is vital for composition and dynamicity.
The idea of checking keys separate from their values strikes me as a valuable idea for processing of topic maps.
Keys not allowed in a topic or proxy, could signal an error, as in authoring, could be silently discarded depending upon your processing goals, or could be maintained while not considered or processed for merging purposes.
Thoughts?
Friday, December 8th, 2017
Manuel Uberti’s post:
Since my first baby steps in the world of Functional Programming, Haskell has been there. Like the enchanting music of a Siren, it has been luring me with promises of a new set of skills and a better understanding of the lambda calculus.
I refused to oblige at first. A bit of Scheme and my eventual move to Clojure occupied my mind and my daily activities. Truth be told, the odious warfare between dynamic types troopers and static types zealots didn’t help steering my enthusiasm towards Haskell.
Still, my curiosity is stoic and hard to kill and the Haskell Siren was becoming too tempting to resist any further. The Pragmatic Programmer in me knew it was the right thing to do. My knowledge portfolio is always reaching out for something new.
My journey began with the much praised Programming in Haskell. I kept track of the exercises only to soon discover this wasn’t the right book for me. A bit too terse and schematic, I needed something that could ease me in in a different way. I needed more focus on the basics, the roots of the language.
As I usually do, I sought help online. I don’t know many Haskell developers, but I know there are crazy guys in the Emacs community. Steve Purcell was kind and patient enough to introduce me to Haskell Programming From First Principles.
This is a huge book (nearly 1300 pages), but it just took the authors' prefaces to hook me. Julie Moronuki's words in particular resonated heavily with me. Unlike Julie I have experience in programming, but I felt exactly like her when it comes to approaching Haskell teaching materials.
So here I am, armed with Stack and Intero and ready to abandon myself to the depths and wonders of static typing and pure functional programming. I will track my progress and maybe report back here. I already have a project in mind, but my Haskell needs to get really good before starting any serious work.
May the lambda be with me.
Uberti’s post was short enough to quote in full and offers something to offset the grimness the experience with 2017 promises to arrive in 2018.
We will all take to Twitter, Facebook, etc. in 2018 to vent our opinions but at the end of the year, finger exercise is all we will have to show for it.
Following Uberti’s plan, with Haskell, or Clojure, Category Theory, ARM Exploitation, etc., whatever best fits your interest, will see 2018 end with your possessing an expanded skill set.
Your call, finger exercise or an expanded skill set (skills you can use for your cause).
### Reflecting on Haskell in 2017
Tuesday, December 5th, 2017
From the post:
Alas, another year has come and gone. It feels like just yesterday I was writing the last reflection blog post on my flight back to Boston for Christmas. I’ve spent most of the last year traveling and working in Europe, meeting a lot of new Haskellers and putting a lot of faces to names.
Haskell has had a great year and 2017 was defined by vast quantities of new code, including 14,000 new Haskell projects on GitHub. The amount of writing this year was voluminous and my list of interesting work is eight times as large as last year. At least seven new companies came into existence and many existing firms unexpectedly dropped large open source Haskell projects into the public sphere. Driven by a lot of software catastrophes, the intersection of security, software correctness and formal methods has become quite an active area of investment and research across both industry and academia. It's really never been an easier and more exciting time to be programming professionally in the world's most advanced (yet usable) statically typed language.
Per what I guess is now a tradition, I will write my end of year retrospective on my highlights of what happened in the Haskell scene in retrospect.
This reading list will occupy you until Reflecting on Haskell in 2018 appears and beyond.
If you're not, well, there's no point in getting further behind!
BTW, this is a great example of how to write a year end summary for language. Some generalities but enough specifics for readers to plot their own course.
### Introduction to ClojureScript
Friday, October 27th, 2017
From the webpage:
### Requirements
It’s nice to know the following:
• React.js
• Basics of functional programming
### Help during the workshop
Here’s a couple of useful resources that will help you during the workshop:
Four hour workshop with a full set of topics and useful links.
Ah, one missing: React.js. 😉
Enjoy!
### xqerl (“We always do it nice and rough” Tina Turner)
Thursday, September 14th, 2017
xqerl
From the webpage:
Erlang XQuery 3.1 Processor
This is currently a draft/proof-of-concept. Please don't try to use it for “real” computing!
It is passing about 91% of its (~25k) test cases.
#### Features it has:
• Module Feature
• Higher-Order Function Feature
#### Features it does not have, but might later:
• XQuery Update Facility
• Schema Aware Feature
• Typed Data Feature
• Static Typing Feature
• Serialization Feature
If you want to combine an interest in Erlang along with XQuery 3.1, you have arrived!
Decide for yourself which is the “nice” part and which is the “rough.”
Enjoy!
### The International Conference on Functional Programming – 2017
Tuesday, September 5th, 2017
The International Conference on Functional Programming – 2017 – Papers
If you are on the Gulf or East coast of the United States, take this opportunity to download papers to read following landfall of Irma.
You may not have Internet service but if you have printed several papers out as emergency preparedness, you won’t be at a loss for reading materials.
I’ve been in the impact zone of several hurricanes and while reading materials don’t make repairs go any faster, they do help pass the time.
### Fundamentals of Functional Programming (email lessons)
Tuesday, February 14th, 2017
From the post:
If you’re a software developer, you’ve probably noticed a growing trend: software applications keep getting more complicated.
It falls on our shoulders as developers to build, test, maintain, and scale these complex systems. To do so, we have to create well-structured code that is easy to understand, write, debug, reuse, and maintain.
But actually writing programs like this requires much more than just practice and patience.
In my upcoming course, Learning Functional JavaScript the Right Way, I’ll teach you how to use functional programming to create well-structured code.
But before jumping into that course (and I hope you will!), there’s an important prerequisite: building a strong foundation in the underlying principles of functional programming.
So I’ve created a new free email course that will take you on a fun and exploratory journey into understanding some of these core principles.
Let’s take a look at what the email course will cover, so you can decide how it fits into your programming education.
…(emphasis in original)
I haven’t taken an email oriented course in quite some time so interested to see how this contrasts with video lectures, etc.
Enjoy!
### Functional Programming in Erlang – MOOC – 20 Feb. 2017
Wednesday, February 8th, 2017
Functional Programming in Erlang with Simon Thompson (co-author of Erlang Programming)
From the webpage:
Functional programming is increasingly important in providing global-scale applications on the internet. For example, it’s the basis of the WhatsApp messaging system, which has over a billion users worldwide.
This free online course is designed to teach the principles of functional programming to anyone who’s already able to program, but wants to find out more about the novel approach of Erlang.
Learn the theory of functional programming and apply it in Erlang
The course combines the theory of functional programming and the practice of how that works in Erlang. You’ll get the opportunity to reinforce what you learn through practical exercises and more substantial, optional practical projects.
Over three weeks, you’ll:
• learn why Erlang was developed, how its design was shaped by the context in which it was used, and how Erlang can be used in practice today;
• write programs using the concepts of functional programming, including, in particular, recursion, pattern matching and immutable data;
• apply your knowledge of lists and other Erlang data types in your programs;
• and implement higher-order functions using generic patterns.
The course will also help you if you are interested in Elixir, which is based on the same virtual machine as Erlang, and shares its fundamental approach as well as its libraries, and indeed will help you to get going with any functional language, and any message-passing concurrency language – for example, Google Go and the Akka library for Scala/Java.
If you are not excited already, remember that XQuery is a functional programming language. What if your documents were “immutable data?”
Use #FLerlangfunc to see Twitter discussions on the course.
That looks like a committee drafted hashtag. 😉
### Reflecting on Haskell in 2016
Monday, December 26th, 2016
Reflecting on Haskell in 2016 by Stephen Diehl.
From the post:
Well, 2016 … that just happened. About the only thing I can put in perspective at the closing of this year is progress and innovation in the Haskell ecosystem. There was a lot of inspiring work and progress that pushed the state of the art forward.
This was a monumental year of Haskell in production. There were dozens of talks given about success stories with an unprecedented amount of commercially funded work from small startups to international banks. Several very honest accounts of the good and the bad were published, which gave us a rare glimpse into what it takes to plant Haskell in a corporate environment and foster its growth.
If you are at all interested in Haskell and/or functional programming, don’t miss this collection of comments and links. It will save you hours of surfing, looking for equivalent content.
### Clojure/conj 2016 – Videos – Sorted
Monday, December 5th, 2016
Clojure/conj 2016 has posted videos of all presentations (thanks!) to YouTube, which displays them in no particular order.
To help with my viewing and perhaps yours, here are the videos in title order:
1. Adventures in Understanding Documents – Scott Tuddenham
2. Audyx.com 40k locs to build the first web – based sonogram – Asher Coren
3. Barliman: trying the halting problem backwards, blindfolded – William Byrd, Greg Rosenblatt
4. Becoming Omniscient with Sayid – Bill Piel
5. Building a powerful Double Entry Accounting system – Lucas Cavalcanti
6. Building composable abstractions – Eric Normand
7. Charting the English Language…in pure Clojure – Alexander Mann
8. Clarifying Rules Engines with Clara Rules – Mike Rodriguez
9. Clojure at DataStax: The Long Road From Python to Clojure – Nick Bailey
10. A Clojure DSL for defining CI/CD orchestrations at scale – Rohit Kumar, Viraj Purang
11. Composing music with clojure.spec – Wojciech Franke
12. In situ model-based learning in PAMELA – Paul Robertson, Tom Marble
13. Juggling Patterns and Programs – Steve Miner
14. Overcoming the Challenges of Mentoring – Kim Crayton
15. A Peek Inside SAT Solvers – Jon Smock
16. Powderkeg: teaching Clojure to Spark – Igor Ges, Christophe Grand
17. Production Rules on Databases – Paula Gearon
18. Programming What Cannot Be Programmed: Aesthetics and Narrative – D. Schmüdde
19. Proto REPL, a New Clojure Development and Visualization Tool – Jason Gilman
20. Simplifying ETL with Clojure and Datomic – Stuart Halloway
21. Spec-ulation Keynote – Rich Hickey
22. Spectrum, a library for statically "typing" clojure.spec – Allen Rohner
23. Using Clojure with C APIs for crypto and more – lvh
24. WormBase database migration to Datomic on AWS: A case Study – Adam Wright
Enjoy!
### Type-driven Development … [Further Reading]
Saturday, October 1st, 2016
The Further Reading slide from Edwin Brady’s presentation Type-driven Development of Communicating Systems in Idris (Lamda World, 2016) was tweeted as an image, eliminating the advantages of hyperlinks.
I have reproduced that slide with the links as follows:
On total functional programming
On interactive programming with dependent types
On types for communicating systems:
On Wadler’s paper, you may enjoy the video of his presentation, Propositions as Sessions or his slides (2016), Propositions as Sessions, Philip Wadler, University of Edinburgh, Betty Summer School, Limassol, Monday 27 June 2016.
### Exotic Functional Data Structures: Hitchhiker Trees
Sunday, September 18th, 2016
Description:
Functional data structures are awesome–they’re the foundation of many functional programming languages, allowing us to express complex logic immutably and efficiently. There is one unfortunate limitation: these data structures must fit on the heap, limiting their lifetime to that of the process. Several years ago, Datomic appeared as the first functional database that addresses these limitations. However, there hasn’t been much activity in the realm of scalable (gigabytes to terabytes) functional data structures.
In this talk, we’ll first review some of the fundamental principles of functional data structures, particularly trees. Next, we’ll review what a B tree is and why it’s better than other trees for storage. Then, we’ll learn about a cool variant of a B tree called a fractal tree, how it can be made functional, and why it has phenomenal performance. Finally, we’ll unify these concepts to understand the Hitchhiker tree, an open-source functionally persistent fractal tree. We’ll also briefly look at an example API for using Hitchhiker trees that allows your application’s state to be stored off-heap, in the spirit of the 2014 paper “Fast Database Restarts at Facebook”.
David Greenberg (profile)
Hitchhiker Trees (GitHub)
You could have searched for all the information I have included, but isn’t it more convenient to have it “already found?”
### FPCasts
Tuesday, September 13th, 2016
FPCasts – Your source for Functional Programming Related Podcasts
Ten (10) sources of podcasts, with a link to the latest podcast from each source.
Not a problem but took me by surprise on my first visit.
As useful as this will be, indexed podcasts where you could jump to a subject of interest would be even better.
Enjoy!
### Category Theory 1.2
Tuesday, August 30th, 2016
Category Theory 1.2 by Bartosz Milewski.
Brief notes on the first couple of minutes:
Our toolset includes:
Abstraction – lose the details – things that were different are now the same
Composition – combine simple pieces into more complex wholes
Identity – what is identical or considered to be identical
Composition and Identity define category theory.
Despite the bad press about category theory, I was disappointed when the video ended after approximately 48 minutes.
Yes, it was that entertaining!
Or try Category Theory for Programmers: The Preface, also by Bartosz Milewski.
### Elementary Category Theory and Some Insightful Examples
Saturday, August 13th, 2016
From the description:
Eddie Grutman
July 27, 2016
It turns out that much of Haskell can be understood through a branch of mathematics called Category Theory. Concepts such as Functor, Adjoints, Monads and others all have a basis in the Category Theory. In this talk, basic categorical concepts, starting with categories and building through functors, natural transformations, and universality, will be introduced. To illustrate these, some mathematical concepts such as homology and homotopy, monoids and groups will be discussed as well (proofs omitted).
Kudos to the NYC Haskell User’s Group for posting videos of its presentations.
For those of us unable to attend such meetings, these videos are a great way to remain current.
### ARGUS
Tuesday, August 9th, 2016
From the post:
This is one post in a series about programming models and languages for distributed computing that I’m writing as part of my history of distributed programming techniques.
• Abstraction Mechanisms in CLU, Liskov, Barbara and Snyder, Alan and Atkinson, Russell and Schaffert, Craig, CACM 1977 (Liskov et al. 1977).
• Guardians and Actions: Linguistic Support for Robust, Distributed Programs, Liskov, Barbara and Scheifler, Robert, TOPLAS 1982 (Liskov and Scheifler 1983).
• Orphan Detection in the Argus System, Walker, Edward Franklin, DTIC 1984 (Walker 1984).
• Implementation of Argus, Liskov, Barbara and Curtis, Dorothy and Johnson, Paul and Scheifer, Robert, SIGOPS 1987 (Liskov et al. 1987).
• Distributed Programming in Argus, Liskov, Barbara CACM 1988 (Liskov 1988).
I’m thinking about how to fix an XFCE trackpad problem and while I think about that, wanted to touch up the references from Christopher’s post.
Apologies but I was unable to find a public version of: Implementation of Argus, Liskov, Barbara and Curtis, Dorothy and Johnson, Paul and Scheifer, Robert, SIGOPS 1987 (Liskov et al. 1987).
Enjoy!
### Functional TypeScript
Wednesday, August 3rd, 2016
Functional TypeScript by Victor Savkin.
From the post:
And to do that we will use the following three techniques:
• Use Functions Instead of Simple Values
• Model Data Transformations as a Pipeline
• Extract Generic Functions
Let’s get started!
Parallel processing has been cited as a driver for functional programming for many years. See It's Time to Get Good at Functional Programming.
The movement of the United States government towards being a “franchise” is another important driver for functional programming.
Code that has no-side effects can be more easily repurposed, depending on the needs of a particular buyer.
The NSA wants terabytes of telephone metadata to maintain its “data mining as useful activity” fiction, China wants telephone metadata on its financial investments, other groups are spying on themselves and/or others.
Wasteful, not to mention expensive, to maintain side-effect ridden code bases for each customer.
Prepare for universal parallel processing and governments as franchises, start thinking functionally today!
### QML: A Functional Quantum Programming Language
Wednesday, August 3rd, 2016
QML: A Functional Quantum Programming Language
From the post:
QML is a functional language for quantum computations on finite types. The language introduces quantum data and quantum control structures, and integrates reversible and irreversible quantum computation. QML is based on strict linear logic, hence weakenings, which may lead to decoherence, have to be explicit.
The design of QML is guided by its categorical semantics: QML programs are interpreted as morphisms in the category FQC of Finite Quantum Computations. This provides a constructive semantics of irreversible quantum computations realisable as quantum gates. The relationships between the category FQC and its classical reversible counterpart, FCC (Finite Classical Computations), are also explored.
The operational semantics of QML programs is presented using standard quantum circuits, while a denotational semantics is given using superoperators.
This research has been supported by the EPSRC, via the MathFIT initiative, grant number GR/S30818/01. We are also involved in the EPSRC research network on the Semantics of Quantum Computation (QNET).
Having closely read Commercial National Security Algorithm Suite and Quantum Computing FAQ from the NSA, or it more popular summary, NSA Warns of the Dangers of Quantum Computing by Todd Jaquith, I know you are following every substantive publication on quantum computing.
By “substantive publication” I mean publications that have the potential to offer some insight into the development or use of quantum computers. The publications listed here qualify as “substantive” by that criteria.
With regard to the “dangers” of quantum computing, I see two choices:
1. Reliance on government agencies who “promise” to obey the law in the future (who have broken laws in the past), or
2. Obtain the advantages of quantum computing before such government agencies. (Or master their use more quickly.)
Unless you view “freedom” as being at the sufferance of government, may I suggest pursuit of #2 as much as interest and resources permit?
### Functor Fact @FunctorFact [+ Tip for Selling Topic Maps]
Tuesday, June 28th, 2016
JohnDCook has started @FunctorFact, tweets “..about category theory and functional programming.”
John has a page listing his Twitter accounts. It needs to be updated to reflect the addition of @FunctorFact.
BTW, just by accident I’m sure, John’s blog post for today is titled: Category theory and Koine Greek. It has the following lesson for topic map practitioners and theorists:
Another lesson from that workshop, the one I want to focus on here, is that you don’t always need to convey how you arrived at an idea. Specifically, the leader of the workshop said that if you discover something interesting from reading the New Testament in Greek, you can usually present your point persuasively using the text in your audience’s language without appealing to Greek. This isn’t always possible—you may need to explore the meaning of a Greek word or two—but you can use Greek for your personal study without necessarily sharing it publicly. The point isn’t to hide anything, only to consider your audience. In a room full of Greek scholars, bring out the Greek.
This story came up in a recent conversation about category theory. You might discover something via category theory but then share it without discussing category theory. If your audience is well versed in category theory, then go ahead and bring out your categories. But otherwise your audience might be bored or intimidated, as many people would be listening to an argument based on the finer points of Koine Greek grammar. Microsoft’s LINQ software, for example, was inspired by category theory principles, but you’d be hard pressed to find any reference to this because most programmers don’t want to know or need to know where it came from. They just want to know how to use it.
Sure, it is possible to recursively map subject identities in order to arrive at a useful and maintainable mapping between subject domains, but the people with the checkbook are only interested in a viable result.
How you got there could involve enslaved pixies for all they care. They do care about negative publicity so keep your use of pixies to yourself.
Looking forward to tweets from @FunctorFact!
### ClojureBridge…beginner friendly alternative to the official Clojure docs
Tuesday, June 14th, 2016
Get into Clojure with ClojureBridge
Welcome to ClojureBridge CommunityDocs, the central location for material supporting and extending the core ClojureBridge curriculum (https://github.com/ClojureBridge/curriculum). Our goal is to provide additional labs and explanations from coaches to address the needs of attendees from a wide range of cultural and technical backgrounds.
Arne Brasseur tweeted earlier today:
Little known fact, the ClojureBridge community docs are a beginner friendly alternative to the official Clojure docs
Pass this along and contribute to the “beginner friendly alternative” so this becomes a well known fact.
Enjoy!
### Modeling data with functional programming – State based systems
Sunday, May 22nd, 2016
Brian has just released chapter 8 of his Modeling data with functional programming in R, State based systems.
BTW, Brian mentions that his editor is looking for more proof reviewers.
Enjoy!
### SOAP and ODBC Erlang Libraries!
Friday, April 22nd, 2016
From the post:
Online bookie Bet365 has released code into the GitHub open-source library to encourage enterprise developers to use the Erlang functional programming language.
The company has used Erlang since 2012 to overcome the challenges of using higher performance hardware to support ever-increasing volumes of web traffic.
“Erlang is a precision tool for developing distributed systems that demand scale, concurrency and resilience. It has been a superb technology choice in a business such as ours that deals in high traffic volumes,” said Chandru Mullaparthi, head of software architecture at Bet365.
I checked, the SOAP library is out and the ODBC library is forthcoming.
Cliff’s post ends with this cryptic sentence:
These releases represent the first phase of a support programme that will aim to address each of the major issues surrounding the uptake of Erlang.
That sounds promising!
Following @cmullaparthi to catch developing news.
### Clojure/west 2016 – Videos! [+ Unix Sort Trick]
Monday, April 18th, 2016
I started seeing references to Clojure/west 2016 videos and, to marginally increase your access to them, I have sorted them by author and, with a Unix sort trick, by title.
Unix Sort Trick (truthfully, just a new switch to me)
Having the videos in author order is useful but other people may remember a title and not the author.
I want to sort the already created <li> elements with sort, but you can see the obvious problem.
By default, sort uses the entire line for sorting, which given the urls, isn’t going to give the order I want.
To the rescue, the -k switch for sort, which allows you to define which field and character offset in that field to use for sorting.
In this case, I used field 1 (the default, which here spans the whole line) and character offset 74, the first character following the > of the <a> element, so sort compares the link text (the title) rather than the URL.
Resulted in:
In full: sort -k 1.74 sort-file.txt > sorted-file.txt
### SICP [In Modern HTML]
Monday, April 18th, 2016
SICP by Andres Raba.
Poorly formatted or styled HTML for CS texts is a choice, not a necessity.
As proof I offer this new HTML5 and EPUB3 version of “Structure and Interpretation of Computer Programs” by Abelson, Sussman, and Sussman.
From the webpage:
Modern solutions such as scalable vector graphics, mathematical markup with MathML and MathJax, embedded web fonts, and syntax highlighting are used. Rudimentary scaffolding for responsive design is in place, which adapts the page for viewing on pocket devices and tablets. More tests on small screens are needed to adjust the font size and formatting, so I encourage feedback from smartphone and tablet owners.
Enjoy!
### Brave Clojure: Become a Better Programmer
Wednesday, March 23rd, 2016
From the post:
Next week I’m re-launching www.braveclojure.com as Brave Clojure. The site will continue featuring Clojure for the Brave and True, but I’m expanding its scope a bit. Instead of just housing the book, the purpose of the site will be to help you and the people you cherish become better programmers.
Like many other Clojurists, I fell in love with the language because learning it made me a better programmer. I started learning it because I was a bit bored and burnt out on the languages and tools I had been using. Ruby, Javascript, Objective-C weren’t radically different from each other, and after using them for many years I felt like I was stagnating.
But Clojure, with its radically different approach to computation (and those exotic parentheses) drew me out of my programming funk and made it fun to code again. It gave me new tools for thinking about software, and a concomitant feeling that I had an unfair advantage over my colleagues. So of course the subtitle of Clojure for the Brave and True is learn the ultimate language and become a better programmer.
And, four years since I first encountered Rich Hickey’s fractal hair, I still find Clojure to be an exceptional tool for becoming a better programmer. This is because Clojure is a fantastic tool for exploring programming concepts, and the talented community has created exceptional libraries for such diverse approaches as forward-chaining rules engines and constraint programming and logic programming, just to name a few.
Mark your calendar to help drive the stats for Daniel’s relaunch of www.braveclojure.com as Brave Clojure.
Email, tweet, blog, etc., to help others drive not only the relaunch stats but the stats for following weeks as well.
This could be one of those situations where your early participation and contributions will shape the scope and the nature of this effort.
Enjoy!
### LANGSEC: Taming the Weird Machines (Subject Identities in Code/Data)
Saturday, March 19th, 2016
From the post:
Introduction
I want to get some of my opinions on the current state of computer security out there, but first I want to highlight some of the most exciting, and in my views, promising recent developments in security: language-theoretic security (LangSec). Feel free to skip the next few paragraphs of background if you are familiar with the concepts to get to my analysis, otherwise, buckle up for a little ride!
Background
If I were to distill the core of the LangSec movement into a single thesis it would be this: The complexity of our computing systems (both software and hardware) has reached such a degree that data must be treated as formally as code. A concrete example of this is return-oriented programming (ROP), where instead of executing shellcode loaded into memory by the attacker, a number of gadgets are found in existing code (such as libc) and their addresses chained together on the stack, and as the ret instruction is repeatedly called, the semantics of the gadgets is executed. This hybrid execution environment of using existing code and driving it with a buffer-overflow of data is one example of a weird machine.
Such weird machines crop up in many sorts of places: viz. the Intel x86 MMU that has been shown to be Turing-complete, the meta-data of ELF executable files that can drive execution in the loading & dynamic-linking stage, etc… This highlights the fact that data can be treated as instructions or code on these weird machines, much like Java byte-code is data to an x86 CPU, it is interpreted as code by the JVM. The JVM is a formal, explicit machine, much like the x86 CPU; weird machines on the other hand are ad hoc, implicit and generally not intentionally created. Many exploits are simply shellcode developed for a weird machine instead of the native CPU.
The “…data must be formally treated as code…” caught my eye as the reverse of “…code-as-data…,” which is a characteristic of Lisp and Clojure.
From a topic map/subject identity perspective, the problem is accepting implied subject identities and therefore implied properties and associations.
Being “implied” and not “explicit,” the interaction of subjects can change when someone, perhaps a hacker (or a fat-fingered user), supplies values that fall within the range of implied subject identities, properties, or associations.
Implied subject identities, properties, or associations, in code or data, reside in the minds of programmers, making detection well nigh impossible. At least prior to some hacker discovering an implied subject identity, property or association.
Avoiding implied subject identities, properties and associations will require work, loathsome to all programmers, but making subject identities explicit, enumerating their properties and allowed associations, in code and data, is a countable activity.
Having made subject identities explicit, capturing those results makes code based on those explicit subject identities more robust. You won’t be piling implied subject identities on top of implied subject identities, or in plainer English, you won’t be writing cybersecurity software.
PS: Using a subject identity discipline does not mean you must document all of your code using XTM. You could but DSLs designed for your code/data may be more efficient.
### Open Source Clojure Projects
Monday, March 14th, 2016
Daniel Higginbotham of Clojure for the Brave and True, has posted this listing of open source Clojure projects with the blurb:
Looking to improve your skills and work with real code? These projects are under active development and welcome new contributors.
You can see the source at: https://github.com/braveclojure/open-source, where it says:
Pull requests welcome!
Do you know of any other open source Clojure projects that welcome new contributors?
Like yours?
Just by way of example, marked as “beginner friendly,” you will find:
alda – A general purpose music programming language
Avi – A lively vi (a spec & implementation of vim)
clj-rethinkdb – An idiomatic RethinkDB client for Clojure
For the more sure-footed:
ClojureCL – Parallel computations on the GPU with OpenCL 2.0 in Clojure
Enjoy!
### Elm explained
Sunday, March 13th, 2016
From the webpage:
Some demonstration code and commentary to explain various fundamental features of the Elm language. The idea is mainly just to be able to read and understand Elm code, not so much how to use it well.
I will still be posting about the FBI’s efforts to rape Apple but I want to get back to delivering more technical content as well.
Enjoy!
I first saw this in a tweet by Jessica Kerr.
### Program Derivation for Functional Languages – Tuesday, March 29, 2016, Utrecht
Wednesday, March 9th, 2016
Program Derivation for Functional Languages by Felienne Hermans.
From the webpage:
Program Derivation for Functional Languages
Program derivation of course was all the rage in the era of Dijkstra, but is it still relevant today in the age of TDD and model checking? Felienne thinks so!
In this session she will show you how to systematically and step-by-step derive a program from a specification. Functional languages especially are very suited to derive programs for, as they are close to the mathematical notation used for proofs.
You will be surprised to know that you already know and apply many techniques for derivation, like Introduce Parameter as supported by Resharper. Did you know that it is actually a program derivation technique called generalization?
I don’t normally post about local meetups but as it says in the original post, Felienne is an extraordinary speaker and the topic is an important one.
Personally I am hopeful that at least slides and/or perhaps even video will emerge from this presentation.
If you can attend, please do!
In the meantime, if you need something to tide you over, consider:
A Calculus of Functions for Program Derivation by Richard Bird (1987).
Lectures on Constructive Functional Programming by R.S. Bird (1988).
A brief introduction to the derivation of programs by Juris Reinfelds (1986).
### Dimpl: An Efficient and Expressive DSL for Discrete Mathematics
Sunday, February 28th, 2016
Abstract:
This paper describes the language DIMPL, a domain-specific language (DSL) for discrete mathematics. Based on Haskell, DIMPL carries all the advantages of a purely functional programming language. Besides containing a comprehensive library of types and efficient functions covering the areas of logic, set theory, combinatorics, graph theory, number theory and algebra, the DSL also has a notation akin to the one used in these fields of study. This paper also demonstrates the benefits of DIMPL by comparing it with C, Fortran, MATLAB and Python, languages that are commonly used in mathematical programming.
From the comparison, solving simultaneous linear equations:
Much more is promised in the future for DIMPL:
Future versions of DIMPL will have an extended library comprising of modules for lattices, groups, rings, monoids and other discrete structures. They will also contain additional functions for the existing modules such as Graph and Tree. Moreover, incorporating Haskell’s support for pure parallelism and explicit concurrency in the library functions could significantly improve the efficiency of some functions on multi-core machines.
Can you guess the one thing that Ronit left out of his paper?
You guessed it!
The Github URL for the repository. 😉
You should check out his homepage as well.
I have only touched the edges of this paper but it looks important.
|
### 2512
Combined 23Na/39K MRI for the quantification of Na+ and K+ concentrations in human skeletal muscle at 7 T
Lena V. Gast1, Max Müller1, Bernhard Hensel2, Michael Uder1, and Armin M. Nagel1,3
1Institute of Radiology, University Hospital Erlangen, Friedrich-Alexander-Universität Erlangen-Nürnberg (FAU), Erlangen, Germany, 2Center for Medical Physics and Engineering, Friedrich-Alexander-Universität Erlangen-Nürnberg (FAU), Erlangen, Germany, 3Division of Medical Physics in Radiology, German Cancer Research Center (DKFZ), Heidelberg, Germany
### Synopsis
A non-invasive determination of Na+ and K+ concentrations in skeletal muscle tissue is desirable to gain insights into pathological processes connected to various diseases. In this work, the feasibility of combined quantitative 23Na/39K MRI at 7 T using a double-tuned 23Na/39K birdcage calf coil was examined. In phantom measurements, a 23Na/39K SNR ratio of 46.8 was found. Moreover, Na+ and K+ concentrations close to the real concentrations were determined. In skeletal muscle tissue, fast transverse relaxation of 39K leads to underestimation of K+ concentrations if no relaxation correction is applied.
### Introduction
Sodium (Na+) and potassium (K+) ions play a vital role in many cellular processes. In healthy tissue, Na+ exhibits a high concentration in the extracellular space ([Na+ext] = 145 mM) and a low concentration in the intracellular space ([Na+int] = 10-15 mM).1 In contrast, K+ ions are mainly concentrated in the intracellular space ([K+int] = 140 mM) with only a small extracellular concentration ([K+ext] = 2.5-3.5 mM). However, alterations of the K+ ion homeostasis can currently be analyzed only in extracellular body fluids (e.g. blood samples). Therefore, a non-invasive determination of the Na+ and K+ concentration using 23Na and 39K MRI might help gain insights into pathological processes also connected to the intracellular space. For example, in dialysis patients, excessive K+ cannot be excreted by the kidneys and is partly buffered in the intracellular space.2 A major challenge of 23Na and especially 39K MRI is the low SNR due to low in vivo concentrations and gyromagnetic ratios (γNa = 11.27 MHz/T, γK = 1.99 MHz/T). So far, 39K MRI in humans was performed using single-tuned coils.3,4,5 The aim of this work was to examine the feasibility of combined 23Na/39K MRI at 7 T using a double-tuned 23Na/39K coil.
### Methods
Measurements were performed on a 7 T whole-body MR system (Magnetom Terra, Siemens Healthineers, Erlangen, Germany) using a double-tuned 23Na/39K birdcage coil with inner diameter 20 cm (Rapid Biomedical, Rimpar, Germany). A phantom containing 50/120 mM of NaCl/KCl in combination with 4% agarose was used for the verification of the quantification procedure. Moreover, the lower leg of two healthy male volunteers was examined. For the determination of Na+ and K+ concentrations, a five-compartment reference phantom containing NaCl and KCl solution (see Fig. 1) was used.
23Na and 39K images were acquired with a 3D density-adapted radial readout.6 To minimize relaxation weighting, a long repetition time (TR = 150 ms) and echo times as short as possible were used. Parameters: 23Na MRI: TR/TE= 150/0.3 ms, FA = 90°, nominal spatial resolution Δx = 4x4x16 mm3, acquisition time TAcq = 11 min 7 s; 39K MRI: TR/TE =150/0.55 ms, FA = 90°, Δx = 10x10x30 mm3, TAcq = 9 min 59 s.
For the calculation of the Na+ concentration, a linear fit of the signal intensities, measured within the five reference compartments, to their nominal concentrations was performed to determine the conversion factor between signal intensity and concentration. 39K signal intensities were transformed to concentrations using the signal intensity of the reference compartment containing KCl solution:
$\left[K^+\right] = \frac{S_{^{39}K}}{S_{^{39}K,ref}} \left[K^+\right]_{ref} = \frac{S_{^{39}K}}{S_{^{39}K,ref}} \text{150 mM}$
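As an illustrative numerical example (the signal ratio here is assumed for illustration, chosen to reproduce the in vivo value reported below): a muscle region whose 39K signal is 48% of the reference compartment signal would map to $\left[K^+\right] = 0.48 \times \text{150 mM} = \text{72 mM}$.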
SNR was determined according to the National Electrical Manufacturers Association (NEMA) definition7 using the mean signal intensities of unfiltered 23Na and 39K images acquired with the same nominal resolution (Δx = 10x10x30 mm3) and the magnitude signal of corresponding noise-only images.
### Results
Figure 2 shows Na+ and K+ concentration maps of the agarose phantom. Mean concentrations of [Na+] = 50±2 mM and [K+] = 119±13 mM were determined. Moreover, a 23Na/39K SNR ratio of 46.8 was calculated. The concentration maps of the in vivo measurements are shown in Figure 3. Mean muscle tissue concentrations of [Na+] = 18±2 mM/20±4 mM and [K+] = 72±8 mM/74±7 mM (volunteer 1/2) were measured. 39K nuclei in muscle tissue exhibit very short relaxation times compared to KCl solution (see Table 1). To account for signal losses due to T2 relaxation as well as T1 weighting, relaxation correction factors were calculated.3 A comparison of the uncorrected and relaxation corrected K+ concentration values is given in Table 2.
### Discussion
Na+ and K+ concentrations determined for the agarose phantom are in good agreement with the real concentrations. Moreover, the measured 23Na/39K SNR ratio of 46.8 is within the range of theoretically expected values based on the noise model (32.0–117.5, sample-dominated or electric loss dominated).9 In human muscle tissue, because of fast 39K T2 relaxation and the relatively long TE (0.55 ms), necessary due to hardware restrictions, a significant 39K signal proportion has already decayed at the start of the signal acquisition. Therefore, K+ concentrations in muscle tissue are underestimated when calculated based on a KCl solution reference. This effect can be mitigated using a relaxation correction. However, the assumed relaxation times might deviate from the real relaxation, which might introduce a bias. A further improvement of the concentration determination could be achieved using a partial volume correction.
### Conclusion
Combined Na+ and K+ concentration determination is feasible using a dual-tuned 23Na/39K coil at 7 T. However, fast transverse relaxation of 39K ions in muscle tissue leads to an underestimation of the K+ concentration if no corrections are applied.
### Acknowledgements
No acknowledgement found.
### References
1. Robinson JD, Flashner MS. The (Na+ + K+)-activated ATPase: enzymatic and transport properties. Biochim Biophys Acta 1979;549(2):145–176.
2. Palmer BF. Regulation of Potassium Homeostasis. Clin J Am Soc Nephrol. 2015; 10(6): 1050-60.
3. Umathum R, Rösler MB, Nagel AM. In Vivo 39K MRI of Human Muscle and Brain. Radiology 2013; 269(2): 569-576.
4. Rösler MB, Nagel AM, Umathum R, Bachert P and Benkhedah N. In vivo observation of quadrupolar splitting in 39K magnetic resonance spectroscopy of human muscle tissue. NMR in Biomed 2016; 29: 451-457.
5. Atkinson IC, Claiborne TC, Thulborn KR. Feasibility of 39-potassium MR imaging of a human brain at 9.4 Tesla. Magn Reson Med 2014;71(5):1819-25.
6. Nagel AM, Laun FB, Weber MA, et al. Sodium MRI using a density-adapted 3D radial acquisition technique. Magn Reson Med 2009: 62:1565–1573.
7. National Electrical Manufacturers Association. Determination of Signal-to-Noise Ratio (SNR) in Diagnostic Magnetic Resonance Imaging. NEMA Standards Publication MS 1-2001; 2001.
8. Nagel AM, Umathum R, Rösler MB et al. 39K and 23Na relaxation times and MRI of rat head at 21.1T. NMR in Biomed 2016; 29: 759-766.
9. Hoult DI, Lauterbur PC. The Sensitivity of the Zeugmatographic Experiment Involving Human Samples. J Magn Reson 1979; 34(2):425-433.
### Figures
Figure 1: Schematic drawing of the reference phantom containing different concentrations of NaCl solution and one larger compartment containing both NaCl and KCl solution.
Figure 2: Na+ and K+ concentration maps of phantom containing 50 mM of NaCl and 120 mM of KCl within 4% agarose gel. Both 23Na and 39K images were reconstructed using a Hamming filter. Moreover, the 39K image was zero-filled to match the 23Na matrix size. The measured Na+/K+ concentrations are close to the expected values.
Figure 3: In vivo Na+ and K+ concentration maps of two healthy volunteers. Mean Na+ concentrations of 18±2 mM and 20±4 mM were determined within muscle tissue for volunteer 1 and 2, respectively. K+ concentrations of 72±8 mM and 74±7 mM were calculated. Due to fast T2 relaxation and partial volume effects, K+ concentrations are underestimated.
Table 1: 39K relaxation times in KCl solution, 4% agarose gel and muscle tissue together with resulting relaxation correction factors for quantification based on KCl solution signal. A repetition time of TR = 150 ms and an echo time of TE = 0.55 ms were used.
Table 2: K+ concentrations measured within the agarose phantom and calf muscle tissue of two healthy volunteers before and after relaxation correction with factors summarized in Table 1.
Proc. Intl. Soc. Mag. Reson. Med. 27 (2019)
2512
|
A few years back XKCD, as it so often does, got me thinking; this time about my choice of passwords. Up until that point I had always seen it as an intellectual challenge to memorize long random passwords for all my logins; usually 16 random characters, numbers, and symbols. But I’ve since learned that it is easier, and more importantly, more secure, to pick easy to memorize passwords. But how can we do this and still ensure the password is actually secure, it seemed a more in-depth analysis was in order.
The first thing we need to break down is the bits of entropy mentioned in the original XKCD comic; what is that all about? Simply put, if I had a password that was a single bit long it would be either a 1 or a 0, therefore it has a single bit of entropy. This means there are only 2 possible passwords, and as such it would be extremely easy to guess. However, when we evaluate words, instead of letters or numbers, we have to ask “how many possible words are there?”. If we were truly picking from every single word in the English language this would be huge; there are tens or even hundreds of thousands of words in the English language. However, in reality your vocabulary probably isn’t quite so comprehensive, and even if it were, it might be hard to remember a password like this one.
Pseudoturbinal_Huswifery_Climbable_Eburin
Instead we want to choose a more limited, and easier to remember, vocabulary from which to generate our passwords. Similarly if we want to calculate the difficulty in hacking a password, via a brute force attack, we should recognize the possible vocabulary we want to target. For example if I chose to write a script to hack your account, and the script tries every possible combination of the 200 most common vocabulary words in various sequences, then it will eventually hack your account; it is just a question of how long it would take.
Let’s run those numbers and see what we get. How long would it take to hack your password if you picked 4 words from the 200 most common English words, assuming the script used to hack the account could make 1,000 guesses per second?
$$200^4 = 1,600,000,000$$ possible passwords
$$\frac{1,600,000,000}{1,000} = 1,600,000$$ seconds
$$\frac{1,600,000}{60} = 26,667$$ minutes
$$\frac{26,667}{60} = 444$$ hours
$$\frac{444}{24} = 18.5$$ days
For fun we can also figure out how many bits of entropy that represents.
$$1,600,000,000 = 2^n$$
$$n = \frac{log(1,600,000,000)}{log(2)}$$
$$n \approx 30.575$$
Of course we can’t have a fraction of a bit, so we round n up to the nearest whole number, indicating that the current password scheme gives us about 31 bits of entropy.
It is clear from the math above that there are 1,600,000,000 possible passwords that can be created when choosing any combination of 4 words from the 200 most common English words. If we can try each possible combination at a rate of 1,000 per second, then it would take only 18.5 days to determine the password. In reality this rate is doable on a basic desktop computer, and the password could be cracked in a much shorter time on more powerful hardware. Clearly that won’t do.
If we truly want a secure password we are going to have to increase the number of bits of entropy, but hopefully without making the password significantly more difficult to remember. One way we can do that, as we already discussed, is simply by increasing the vocabulary used to generate your password. Lets try it with a vocabulary of 5,000 words.
$${5,000}^{4} \approx 6.25 \cdot {10}^{14}$$ possible passwords
$$\frac{6.25 \cdot {10}^{14}}{1,000} \approx 6.25 \cdot {10}^{11}$$ seconds
$$\frac{6.25 \cdot {10}^{11}}{60} \approx 1.041 \cdot {10}^{10}$$ minutes
$$\frac{1.041 \cdot {10}^{10}}{60} \approx 1.736 \cdot {10}^{8}$$ hours
$$\frac{1.736 \cdot {10}^{8}}{24} \approx 7.233 \cdot {10}^{6}$$ days
$$\frac{7.233 \cdot {10}^{6}}{365} \approx 19,818.61$$ Years
and the entropy…
$$6.25 \cdot {10}^{14} = 2^n$$
$$n = \frac{log(6.25 \cdot {10}^{14})}{log(2)}$$
$$n \approx 49.151$$
Well, that looks much better. Now we have 50 bits of entropy, which would take about 19,818.61 years at 1,000 tries per second to figure out your password. That is probably secure enough, though dedicated hardware making millions of guesses per second could still crack it in under a year. But unless the CIA is trying to get into your account, you are most likely safe if you want to stop there. For reference, check out this site for a list of the 5,000 most common words in the English language. You can even use this list to generate your password by picking words from it at random.
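If you want to check these numbers yourself, here is a small C program that reproduces the arithmetic above; the vocabulary size, word count, and guess rate are parameters you can change (compile with something like cc entropy.c -lm):

#include <stdio.h>
#include <math.h>

int main(void)
{
    double vocab = 5000.0; /* size of the vocabulary we pick from */
    double words = 4.0;    /* number of words in the password */
    double rate = 1000.0;  /* guesses per second */

    double combos = pow(vocab, words);  /* possible passwords */
    double bits = log2(combos);         /* bits of entropy */
    double years = combos / rate / (60.0 * 60.0 * 24.0 * 365.0);

    printf("possible passwords: %.3e\n", combos);               /* ~6.25e14 */
    printf("bits of entropy: %.3f\n", bits);                    /* ~49.151 */
    printf("years to exhaust at %.0f/s: %.2f\n", rate, years);  /* ~19818.61 */
    return 0;
}

Plugging in 200 and 4 reproduces the 18.5-day figure from the first calculation as well.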
Still I am left wondering if there are any other simple steps we can take to increase security without sacrificing anything. One thing you can do is add one more word into the mix that is not in the list of 5,000 most common English words, but is still easy to remember; even better if the word is entirely unique. Some good examples might be a friend’s nickname, an uncommon last name of a friend, or a proper noun for an obscure person, place, or thing; a good example would be a character from a little-known book or play. This would expand the vocabulary needed to hack the account well beyond the 5,000-word vocabulary and as such would significantly increase the security of the chosen password.
## Make It Easy to Remember
Now that we know how to pick a secure password, it still may not be terribly easy for you to remember. One trick you can use to get around that is to make your passwords into sentences that are easier to remember, but obscure enough that it isn’t easy for a computer to guess. The following are a few examples.
SunDestroysVirginFlowers
AppleHurtsBabyTongue
LispKillsNewDevelopersJoy
To make it even easier to remember you can pick passwords which might remind you of the topic of the website it is to be used on. For example the first password of “SunDestroysVirginFlowers” might be the perfect password to use on a website for gardening. It may also help to remember if you make the sentence structure the same for every password. Here are some examples all using the same sentence structure.
HelpCatsWantLove
MakeJuiceQuenchThirst
In all these cases we have a verb, followed by a noun, followed by the reason the action was taken. By following a consistent structure your brain will have more clues to remember the password, which will make remembering it much easier. Also you will notice I capitalized the first letter of each word; one variant that is slightly more obscure is to pick some other pattern of capitalization. You could capitalize the last letter of every word, or even the second letter; just be consistent with all your passwords so it is easier to remember.
## Make Sure It Can Be Used
The only other consideration is to make sure the password can actually be used on the desired site. Also if you tend to reuse passwords, which isn’t the best of ideas, then you want to make sure the password will be accepted by most websites. Since websites have some rules to make sure a password is acceptable we should pick a password that can pass most of these rules. Usually you are covered if your password has at least one of each of the following: capital letter, lower case letter, digit, and symbol. We already covered the capitalization, so all we need to throw into the mix are some digits and punctuation. Punctuation is easy, since we are already using sentence-like structures we can just throw in some punctuation.
Help!CatsWantLove.
MakeJuice,QuenchThirst!
Digits are a little more difficult. You have two options; the first is to replace letters with a number that looks similar to the letter being replaced.
3 -> B
1 -> I
7 -> T
The other option is to use the number to represent a word that has the same sound.
2 -> to, too
4 -> for
6 -> sex
8 -> ate
The key here, again, is to be consistent so you don’t get confused. So for example if you choose to replace the letter with the number it looks like, make sure you replace the same letter with the same number in all your passwords. This way you don’t need to remember what pattern you used on a case by case basis. It is important to note if you use the number to represent an entire word it should not replace one of the four typed words chosen earlier but rather should be in addition to it. This ensures you don’t reduce the overall entropy of your password.
In the end you should wind up with some passwords like the following:
Help!Cats8AllTheFood!
|
# Reducing run time of a numerical calculation using a mex file in Matlab
I wrote a Matlab code that involves doing a numeric calculation (relaxation), but it is quite slow. I learned of the possibility of using a mex file to run a C code and integrate it into Matlab, so I was thinking of doing the numerical calculation (which is relatively simple but involves loops and takes time) in C, and the rest (before and after) in Matlab.
The part of my Matlab code where the calculation is done:
% evolution of the potentials %
% note : for the index directions with periodic boundary conditions: index=mod(index-1,L)+1 . for index=index+1 it is mod(index,L)+1 , and for index=index-1 it is mod(index-2,L)+1 %
for i_t=1:max_relaxation_iterations
for q=1:length(i_eff_V_bounded) % this is set instead of running i=2:(L-1), j=1:L , k=1:L and ending up going over sites that are 0 in our effective system %
i=i_eff_V_bounded(q);
j=j_eff_V_bounded(q);
k=k_eff_V_bounded(q);
V0=V(i,j,k);
V1=( V(i+1,j,k)+V(i-1,j,k)+V(i,mod(j,L)+1,k)+V(i,mod(j-2,L)+1,k)+V(i,j,mod(k,L)+1)+V(i,j,mod(k-2,L)+1) )/( system(i+1,j,k)+system(i-1,j,k)+system(i,mod(j,L)+1,k)+system(i,mod(j-2,L)+1,k)+system(i,j,mod(k,L)+1)+system(i,j,mod(k-2,L)+1) ); % evolving the potential as the average of its occupied neighbors %
V(i,j,k)=V0+(V1-V0)*over_relaxation_factor; % evolving the potentials in time with the over relaxation factor %
delta_V_rms(i_t)=delta_V_rms(i_t)+(V1-V0)^2; % for each t at a given p, we sum over (V1-V0)^2 in order to eventually calculate delta_V_rms_avg %
delta_V_abs(i_t)=delta_V_abs(i_t)+abs(V1-V0); % for each t at a given p, we sum over |V1-V0| in order to eventually calculate delta_V_abs_avg %
delta_V_max(i_t)=max(abs(V1-V0),delta_V_max(i_t)); % for each t at a given p, we take the max of |V1-V0| from all the sites in order to eventually calculate delta_V_max_avg %
end
end
So in C it should be something like:
#include <stdio.h>
int mod(int x,int N) /* a function for the modulo operator (instead of the remainder operator which is the % operator) assuming N is positive (x can be negative) */
{
return (x%N+N)%N;
}
double d_abs(double x) /* a function for the absolute value operator */
{
if (x < 0)
{
return -x;
}
else
{
return x;
}
}
double max(double x,double y) /* a function for the max operator */
{
if (x > y)
{
return x;
}
else
{
return y;
}
}
/* evolution of the potentials */
/* note : periodic boundary conditions for the j,k directions */
void potentials_evolution(int L,int max_relax_iters,int N_eff_occ_sites,const int i_eff_V_bounded[],const int j_eff_V_bounded[],const int k_eff_V_bounded[],const int system[L][L][L],double over_relax_fact,double V[L][L][L],double delta_V_rms[],double delta_V_abs[],double delta_V_max[]) /* L is passed first so the variable-length array parameters can use it; C arrays do not carry their own size, and over_relax_fact needs an explicit type */
{
int i_t,q,i,j,k;
double V0,V1;
for(i_t=0;i_t<max_relax_iters;i_t++)
{
for(q=0;q<N_eff_occ_sites;q++) /* going over only the occupied sites left in our effective system */
{
i=i_eff_V_bounded[q];
j=j_eff_V_bounded[q];
k=k_eff_V_bounded[q];
V0=V[i][j][k];
V1=( V[i+1][j][k]+V[i-1][j][k]+V[i][mod(j+1,L)][k]+V[i][mod(j-1,L)][k]+V[i][j][mod(k+1,L)]+V[i][j][mod(k-1,L)] )/( system[i+1][j][k]+system[i-1][j][k]+system[i][mod(j+1,L)][k]+system[i][mod(j-1,L)][k]+system[i][j][mod(k+1,L)]+system[i][j][mod(k-1,L)] ); /* evolving the potential as the average of its occupied neighbors */
V[i][j][k]=V0+(V1-V0)*over_relax_fact; /* evolving the potentials in time with the over relaxation factor */
delta_V_rms[i_t]=delta_V_rms[i_t]+(V1-V0)*(V1-V0); /* for each t at a given p, we sum over (V1-V0)^2 in order to eventually calculate delta_V_rms_avg */
delta_V_abs[i_t]=delta_V_abs[i_t]+d_abs(V1-V0); /* for each t at a given p, we sum over |V1-V0| in order to eventually calculate delta_V_abs_avg */
delta_V_max[i_t]=max(d_abs(V1-V0),delta_V_max[i_t]); /* for each t at a given p, we take the max of |V1-V0| from all the sites in order to eventually calculate delta_V_max_avg */
}
}
}
And so in Matlab I will replace the part of my Matlab code shown above with something like:
potentials_evolution(max_relax_iters,N_eff_occ_sites,i_eff_V_bounded,j_eff_V_bounded,k_eff_V_bounded,system,over_relax_fact,V,delta_V_rms,delta_V_abs,delta_V_max);
How do I implement this? I tried looking for a simple way to do it but I couldn't figure out how to properly do it.
Note 1: This numeric calculation is done not just once but many times for different systems that are generated randomly (there is a for loop going over the different systems).
Note 2: My C is quite rusty.
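For reference, here is a minimal sketch of the gateway function a MEX file needs. The argument order, the choice to return V as an output instead of modifying it in place, and the omitted index conversion are my assumptions, not part of the original code:

#include "mex.h"

/* Gateway: MATLAB calls this instead of main(). Compile with: mex potentials_evolution.c */
void mexFunction(int nlhs, mxArray *plhs[], int nrhs, const mxArray *prhs[])
{
    int max_relax_iters = (int)mxGetScalar(prhs[0]);    /* scalar inputs */
    double over_relax_fact = mxGetScalar(prhs[1]);
    const double *i_idx = mxGetPr(prhs[2]);             /* MATLAB passes doubles, 1-based indices */
    size_t n_sites = mxGetNumberOfElements(prhs[2]);

    plhs[0] = mxDuplicateArray(prhs[3]);                /* copy V so the caller's array is untouched */
    double *V = mxGetPr(plhs[0]);
    mwSize L = mxGetDimensions(prhs[3])[0];             /* first dimension of the L x L x L array */

    /* MATLAB arrays are column-major: element (i,j,k), 1-based, lives at
       V[(i-1) + L*(j-1) + L*L*(k-1)]. Convert the index vectors to 0-based,
       run the relaxation loops on V, and fill plhs[1..3] with the delta_V_* outputs. */
    (void)max_relax_iters; (void)over_relax_fact; (void)i_idx; (void)n_sites; (void)V; (void)L;
}

From MATLAB you would then call [V, delta_V_rms, delta_V_abs, delta_V_max] = potentials_evolution(...), since a MEX function returns results through its left-hand-side outputs rather than modifying its arguments in place.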
• I think it would be easier if you stated your mathematical problem instead of writing the code... In this form it is not clear if your computation can be written in a vectorized way, thus improving the speed. – Beni Bogosel Aug 9 '19 at 11:20
• @BeniBogosel That would be a different question. Taking the calculation as is (as presented here), my aim is to run it in C (so it will be much faster) using a mex file. Looking around, I couldn't figure out how to properly implement it. – TensoR Aug 9 '19 at 11:45
• It would be a different question, but people would tell you their opinions on it. Off the top of my head, I wouldn't use a three dimensional array, but a one dimensional one for V. In this way, the computation of V1 could be implemented as a matrix-vector product, eliminating the need of the inner loop. – Beni Bogosel Aug 9 '19 at 19:59
• @BeniBogosel Turning V into a one dimensional array doesn't make much sense to me, but even if you can do it properly in a manner that makes sense (not sure how at the moment), it would still be much slower than running the code in C, no? – TensoR Aug 10 '19 at 15:13
• If you could simplify the inner loop using some sparse matrix *vector multiplications, those are quite efficient in Matlab. In some of my codes, using array multiplications instead of loops increased the speed 100 fold. Of course, it all depends if this is possible in your case or not. – Beni Bogosel Aug 11 '19 at 16:14
|
Computing intersection numbers of Chern classes
KTH, School of Engineering Sciences (SCI), Mathematics (Dept.), Mathematics (Div.).
##### Abstract [en]
Let $Z \subset \mathbf{P}^r$ be a smooth variety of dimension $n$ and let $c_0, \ldots, c_n$ be the Chern classes of $Z$. We present an algorithm to compute the degree of any monomial in $\{c_0, \ldots, c_n\}$. The method is based on intersection theory and may be implemented as a numeric, symbolic, or as a numeric/symbolic hybrid algorithm.
Mathematics
##### Identifiers
OAI: oai:DiVA.org:kth-26109
DiVA: diva2:370129
##### Note
QC 20101115. Available from: 2010-11-15. Created: 2010-11-15. Last updated: 2010-11-15. Bibliographically approved.
##### In thesis
1. Topics in computation, numerical methods and algebraic geometry
2010 (English)Doctoral thesis, comprehensive summary (Other academic)
##### Abstract [en]
This thesis concerns computation and algebraic geometry. On the computational side we have focused on numerical homotopy methods. These procedures may be used to numerically solve systems of polynomial equations. The thesis contains four papers.
In Paper I and Paper II we apply continuation techniques, as well as symbolic algorithms, to formulate methods to compute Chern classes of smooth algebraic varieties. More specifically, in Paper I we give an algorithm to compute the degrees of the Chern classes of smooth projective varieties and in Paper II we extend these ideas to cover also the degrees of intersections of Chern classes.
In Paper III we formulate a numerical homotopy to compute the intersection of two complementary dimensional subvarieties of a smooth quadric hypersurface in projective space. If the two subvarieties intersect transversely, then the number of homotopy paths is optimal. As an application we give a new solution to the inverse kinematics problem of a six-revolute serial-link mechanism.
Paper IV is a study of curves on certain special quartic surfaces in projective 3-space. The surfaces are invariant under the action of a finite group called the level (2,2) Heisenberg group. In the paper, we determine the Picard group of a very general member of this family of quartics. We have found that the general Heisenberg invariant quartic contains 320 smooth conics and we prove that in the very general case, this collection of conics generates the Picard group.
##### Place, publisher, year, edition, pages
Stockholm: KTH, 2010. v, 20 p.
##### Series
Trita-MAT. MA, ISSN 1401-2278 ; 10:13
Mathematics
##### Identifiers
urn:nbn:se:kth:diva-25941 (URN)978-91-7415-770-3 (ISBN)
##### Public defence
2010-11-29, Sal F3, Lindstedtsvägen 26, KTH, Stockholm, 13:00 (English)
##### Note
QC 20101115. Available from: 2010-11-15. Created: 2010-11-05. Last updated: 2010-11-15. Bibliographically approved.
##### Author
Eklund, David
|
CFA | Annualized Convexity
rajesh.c
New Member
Hi All
Could someone please explain why we divide convexity by periodicity squared to get annualized convexity? Thanks for the help.
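One way to see it (a sketch from unit analysis, not from this thread): convexity is a second derivative with respect to yield, so it carries units of time squared. With $$m$$ compounding periods per year, a time of $$t$$ periods is $$t/m$$ years, so converting the squared time units from periods to years divides by $$m^2$$:

$$C_{\text{annual}} = \frac{C_{\text{per period}}}{m^2}$$

This is the same reason periodic Macaulay duration, which carries units of time to the first power, is divided by $$m$$ to annualize it.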
|
# How to Multiply Fractions | Math Review
Multiplying Fractions
## Multiplying Fractions
The process for multiplying two fractions is fairly straightforward. All you have to do is multiply the two numerators together and the two denominators together. Your new numerator will be the product of the two numerators, and your new denominator will be the product of the two denominators.
Let’s take a few examples. First, we have 2 over 3 times 4 over 5. Now, what we have to do is multiply 2 by 4 to get 8, and that’s our new numerator. We multiply 3 by 5 to get 15 for our new denominator. That gives us 8 over 15, and that’s your answer.
When you start getting into fractions that have larger numbers, like this one here, it may be beneficial to do what’s called cancelling before you start multiplying your numbers. Let’s take 3 over 22 times 11 over 15.
Now, canceling is taking numbers in the numerator and numbers in the denominator of opposite fractions and dividing them by a common factor. For instance, 3 and 15 are both divisible by 3. If we divide 3 by 3 we get 1, and if we divide 15 by 3 we get 5.
11 and 22 are both divisible by 11. We can divide 11 by 11 to get 1, and 22 by 11 to get 2. Now, let’s rewrite this to see what we have left. We have 1 over 2 times 1 over 5. Now, it’s a trivial thing to take the product of these: 1 times 1 is 1, and 2 times 5 is 10, so the answer is 1 over 10.
Let’s take one final example: 5 over 12 times 9 over 10. Once again, we’re going to cancel. We’ll divide 5 and 10 by 5: 5 over 5 is 1, and 10 over 5 is 2. Now with the 9 and the 12, we can’t divide 12 by 9, but we can divide both numbers by 3.
If we divide 9 by 3 we get 3, and if we divide 12 by 3 we get 4. Let’s rewrite this and see what we have left: 1 over 4 times 3 over 2. Now we can multiply: 1 times 3 is 3, and 4 times 2 is 8, so the answer is 3 over 8.
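In symbols, that last example with the cancellation applied reads:

$$\frac{5}{12} \times \frac{9}{10} = \frac{1}{4} \times \frac{3}{2} = \frac{3}{8}$$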
by Mometrix Test Preparation | Last Updated: May 8, 2019
|
# Gravitational effects and metric spaces
Could somebody please explain something regarding the Nordstrom metric?
In particular, I am referring to the last part of question 3 on this sheet -- about the freely falling massive bodies.
My thoughts: The gravitational effects would be significant since, for a massive body, the geodesic is timelike. There would thus be a term $\eta^{\mu\delta}\partial_\delta \phi \,\dot x^\beta \dot x_\beta$ in the equation of motion, which is not of the form $f(\lambda)\dot x^\mu$, so the affine parametrization does not eliminate this term containing the gravitational potential $\phi$.
Does this argument make any sense at all? Also, what more can I say about the geodesics of such massive particles?
Thanks.
-
I did a quick Google and found loads of stuff about geodesics in Nordstrom gravity ... – John Rennie Jan 22 '13 at 10:08
@JohnRennie: Yes, but I haven't been able to find anything directly addressing my problem. – hetherson Jan 22 '13 at 10:45
|
# GDP Revised Upward to 2.7% Growth for Q3 2012
Q3 2012 real GDP shows 2.7% annualized growth, revised from 2.0% in the advance report. There was a significant upward revision to inventories, yet consumer spending was revised down. Exports were revised up as trade statistics became more complete. Q2 GDP was 1.25%.
A quarterly GDP of 2.66%, unrounded, is slightly above treading-water economic growth. Consumer spending was barely breathing in Q3. Government spending alone accounted for 0.67 percentage points of Q3's 2.66% GDP. The change in private inventories alone contributed 0.77 percentage points to Q3 GDP. The drought negatively impacted economic growth as farm inventories reduced GDP by –0.39 percentage points. Government spending and inventory build-up isn't exactly the kind of economic activity news we want to see. While these both add to economic growth, overall weak demand is still present in the economy.
As a reminder, GDP is made up of: $Y=C+I+G+{\left(X-M\right)}$ where Y=GDP, C=Consumption, I=Investment, G=Government Spending, (X-M)=Net Exports, X=Exports, M=Imports.
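As a quick sanity check, this toy C snippet (values copied from the Q3 column of the table below; everything about it is illustrative) confirms that the component point contributions sum to the headline growth number:

#include <stdio.h>

int main(void)
{
    /* Q3 2012 percentage point contributions; M is already reported with its sign */
    double C = 0.99, I = 0.86, G = 0.67, X = 0.16, M = -0.02;
    printf("Q3 2012 GDP growth: %.2f percentage points\n", C + I + G + X + M); /* 2.66 */
    return 0;
}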
The below table shows the percentage point spread breakdown from Q2 to Q3 GDP major components. GDP percentage point component contributions are calculated individually.
Comparison of Q3 2012 and Q2 2012 GDP Components
Component | Q3 2012 | Q2 2012 | Change
GDP | +2.66 | +1.24 | +1.42
C | +0.99 | +1.06 | -0.07
I | +0.86 | +0.09 | +0.77
G | +0.67 | –0.14 | +0.81
X | +0.16 | +0.72 | -0.56
M | –0.02 | –0.49 | +0.47
The below table summarizes the revisions to Q3 GDP components from this revision and the advance report. These revisions are in terms of percentage point contributions to Q3 GDP growth. We expect exports and imports to be revised again, for there is a lag in trade statistics coming into the Census.
Comparison of Q3 2012 GDP Component Revisions
Component | Q3 Advance | Q3 Revised | Revision
GDP | +2.01 | +2.66 | +0.65
C | +1.42 | +0.99 | -0.43
I | +0.07 | +0.86 | +0.79
G | +0.71 | +0.67 | -0.04
X | –0.23 | +0.16 | +0.39
M | +0.04 | –0.02 | -0.06
Consumer spending, C in our GDP equation, showed slightly negative to flat growth in comparison to the 2nd quarter. Durable goods consumer spending contributed 0.64 percentage points to personal consumption expenditures. Motor vehicles & parts alone was revised upward from a 0.18 percentage point contribution to 0.25 percentage points. In nondurable goods spending, adjusted for prices, gasoline personal consumption subtracted –0.11 percentage points from Q3 GDP. People used less gas, which is interesting. Final consumption expenditures of nonprofit institutions serving households, or NISH, added 0.20 percentage points to Q3 GDP. NISH is really charity, or spending on behalf of households by nonprofits, and the largest area of this spending is health care. Graphed below is PCE with the quarterly annualized percentage change breakdown of durable goods (red or bright red), nondurable goods (blue) versus services (maroon).
Imports and exports, M & X are greatly impacted by real values and adjustments for prices from overseas and they are usually revised again in the next estimate. Import growth in real dollars has slowed, subtracting only -0.02 percentage points from GDP growth while export growth increased. Exports were significantly revised up as trade data comes in from various ports. Exports still are much less of a contribution to GDP than in Q2. The below graph shows real imports vs. exports in billions. The break down of the GDP percentage change to point contributions gives a clear picture on how much the trade deficit stunts U.S. economic growth.
Government spending, G was 0.67 percentage points or 25.22% of Q3's GDP growth. This was all federal spending, and of that Federal 0.71 percentage point GDP contribution, 0.64 of it was national defense. State and local governments subtracted -0.04 percentage points from Q3 GDP and was revised downward from the original -0.01 percentage points. Local and state governments are still hurting and contracting in their expenditures. Below is the percentage quarterly change of government spending, adjusted for prices, annualized.
Investment, I is made up of fixed investment and changes to private inventories. The change in private inventories alone gave a 0.77 percentage point contribution to Q3 GDP, or 30% of Q3's growth is due to changes in inventories. Below are the change in real private inventories and the next graph is the change in that value from the previous quarter.
The drought is causing farms to reduce their livestock, and crop yields are way down. While nonfarm inventory level changes contributed 1.16 percentage points to GDP, farm inventories reduced GDP by -0.39 percentage points, or 14.7% of Q3 GDP growth.
Below is the GDP percentage point contribution for the change in nonfarm inventories. This was significantly revised, from 0.30 to 1.16 percentage points. The change in nonfarm inventories was 43.6% of Q3 GDP growth.
Fixed investment is residential and nonresidential. Residential fixed investment was a +0.32 percentage point contribution to Q3 GDP. One can see the housing bubble collapse in the below graph and also how there is no meteoric recovery for Q3, in spite of all of the housing data hype. What's happening is there is activity in residential real estate, but by volume it will not return to the housing bubble years.
Nonresidential fixed investment was negatively revised, from a –0.13 percentage point contribution to a –0.23 Q3 GDP percentage point contribution. This shows regular investment is sorely lacking in our economy. Structures, or commercial real estate, contributed -0.03 percentage points and was revised upward by 0.10. Equipment and software was revised to a –0.20 GDP percentage point contribution. Individually, computers and other peripheral equipment was almost all of it, with a –0.19 percentage point contribution. The other negative was transportation equipment, which contributed a –0.22 point to fixed investment's overall measly +0.10 contribution to Q3 GDP.
Motor Vehicles as a whole was revised significantly, from a –0.47 percentage point GDP contribution to -0.24. Computer final sales, contributed to GDP by 0.12 percentage points. These two categories are different from personal consumption, or C sub-components, such as auto & parts. These are overall separate indices to show how much they added to GDP overall. Motor vehicles, computers are bought as investment, as fleets, in bulk, by the government, as well as part of consumer spending, government spending and so on.
The price index for gross domestic purchases was revised down slightly to 1.4% for Q3, whereas Q2 was 0.7%. In other words, there was twice as much inflation as last quarter, corresponding to the rise in oil prices in Q3. The core price index, or prices excluding food and energy products, was revised down 0.2 percentage points to 1.1%. Q2's core price index was 1.4%.
Nominal GDP: In current dollars, not adjusted for prices, Q3 GDP, or the U.S. output, is $15.797 trillion, an upward revision of $21 billion from the Q3 advance estimate and a 5.5% annualized increase from Q2. The 2nd quarter saw a 2.8% increase. Applying the price indexes, or chained, real 2005 dollars, Q3 2012 GDP was $13.638 trillion. All figures are annualized.

Gross domestic purchases are what U.S. consumers bought, no matter whether it was made in Ohio or China. It's defined as GDP plus imports and minus exports, or using our above equation: $P=Y-X+M$ where P = real gross domestic purchases. Real gross domestic purchases were revised up, from 2.1% to 2.4%, whereas last quarter was 1.0%. Exports are subtracted off because they are outta here, you can't buy 'em, but imports, as we all know all too well, are available for purchase at your local Walmart. When gross domestic purchases exceed GDP, that's actually bad news; it means America is buying imports instead of goods made domestically.

Below are real final sales of domestic product, or GDP minus the change in inventories. This gives a better feel for real demand in the economy, because while private inventories represent economic activity, the stuff is sitting on the shelf; it's not demanded or sold. While real final sales increased, they were revised down for Q3, from 2.1% to 1.9%. In other words, with this revision demand became weaker than in the advance report.

GNP - Gross National Product: Real gross national product, GNP, is the goods and services produced by the labor and property supplied by U.S. residents. GNP = GDP + (income receipts from the rest of the world) - (income payments to the rest of the world). Nominal GNP was $16.048 trillion for Q3; real GNP was $13.854 trillion. Real GNP increased 2.7% in Q3 whereas in Q2 GNP increased 2.1%. Net receipts of income from the rest of the world increased $1.3 billion in the third quarter after increasing $27.4 billion in Q2. In Q3, receipts decreased $1.6 billion, and payments decreased $2.8 billion.
Below are the percentage changes of Q3 2012 GDP components, from Q2. There is a difference between percentage change and percentage point change. Point change adds up to the total GDP percentage change and is reported above. The below is the individual quarterly percentage change, against themselves, of each component which makes up overall GDP. Additionally these changes are seasonally adjusted and reported by the BEA in annualized format. On imports, services include offshore outsourcing and services imports increased 5.9% for Q3. Services exports increased 3.2%.
Q3 2012 Component Percentage Change
(annualized)
Component Percentage Change from Q2
GDP +2.7%
C +1.4%
I +6.7%
G +3.5%
X +1.1%
M +0.1%
The BEA's comparisons in percentage change breakdown of third quarter GDP components are below. Changes to private inventories are a component of I.
C: Real personal consumption expenditures increased 1.4 percent in the third quarter, compared with an increase of 1.5 percent in the second. Durable goods increased 8.7 percent, in contrast to a decrease of 0.2 percent. Nondurable goods increased 1.1 percent, compared with an increase of 0.6 percent. Services increased 0.3 percent, compared with an increase of 2.1 percent.
I: Real nonresidential fixed investment decreased 2.2 percent, in contrast to an increase of 3.6 percent. Nonresidential structures decreased 1.1 percent, in contrast to an increase of 0.6 percent. Equipment and software decreased 2.7 percent, in contrast to an increase of 4.8 percent. Real residential fixed investment increased 14.2 percent, compared with an increase of 8.5 percent.
X & M: Real exports of goods and services increased 1.1 percent in the third quarter, compared with an increase of 5.3 percent in the second. Real imports of goods and services increased 0.1 percent, compared with an increase of 2.8 percent.
G: Real federal government consumption expenditures and gross investment increased 9.6 percent in the third quarter, in contrast to a decrease of 0.2 percent in the second. National defense increased 13.0 percent, in contrast to a decrease of 0.2 percent. Nondefense increased 3.0 percent, in contrast to a decrease of 0.4 percent. Real state and local government consumption expenditures and gross investment decreased 0.1 percent, compared with a decrease of 1.0 percent.
Here is our overview of the previous Q2 GDP estimate; other reports on gross domestic product can be found here.
## Forum Categories:
### GDP overview is a lot of material with a lot of numbers
Folks, if you see an error or a confusing statement, please let us know. GDP is a massive statistical release, and we hand-calculate percentages not covered by other sites in order to amplify and illustrate the BEA statistical release.
The idea here is to amplify the report and we want to know if we have succeeded in that effort.
|
## CryptoDB
### Jiun-Ming Chen
#### Publications
Year
Venue
Title
2007
FSE
2006
EPRINT
This was a short note that deals with the design of Rainbow or “stagewise unbalanced oil-and-vinegar” multivariate signature schemes. We exhibit new cryptanalysis that relates to flawed choices of system parameters in current schemes. These can be ameliorated according to an updated list of security design criteria.
2004
CHES
2004
EPRINT
We herein discuss two modes of attack on multivariate public-key cryptosystems. A 2000 Goubin-Courtois article applied these techniques against a special class of multivariate PKC's called “Triangular-Plus-Minus” (TPM), and may explain in part the present dearth of research on “true” multivariates -- multivariate PKC's in which the middle map is not really taken in a much larger field. These attacks operate by finding linear combinations of matrices with a given rank. Indeed, we can describe the two attacks very aptly as “high-rank” and “low-rank”. However, TPM was not general enough to cover all pertinent true multivariate PKC's. Tame-like PKC's, multivariates with relatively few terms per equation in the central map and an easy inverse, are a superset of TPM that can enjoy both fast private maps and short set-up times. However, inattention can still let rank attacks succeed in tame-like PKC's. The TTS (Tame Transformation Signatures) family of digital signature schemes lies at this cusp of contention. Previous TTS instances (proposed at ICISC '03) claim good resistance to other known attacks. But we show how careless construction in current TTS instances (TTS/4 and TTS/2') exacerbates the security concern of rank, and show two different cryptanalyses in under $2^{57}$ AES units. TTS is not the only tame-like PKC with these liabilities -- they are shared by a few other misconstructed schemes. A suitable equilibrium between speed and security must be struck. We suggest a generic way to craft tame-like PKC's more resistant to rank attacks. A demonstrative TTS variant with similar dimensions is built for which a rank attack takes $>2^{80}$ AES units, while remaining very fast and as resistant to other attacks. The proposed TTS variants can scale up. In short: we show that rank attacks apply to the wider class of tame-like PKC's, sometimes even better than previously described. However, this is relativized by the realization that we can build adequately resistant tame-like multivariate PKC's, so the general theme still seems viable compared to more traditional or large-field multivariate alternatives.
- 2003, EPRINT
In 2002 the new genre of digital signature scheme TTS (Tame Transformation Signatures) was introduced along with a sample scheme TTS/2. TTS is from the family of multivariate cryptographic schemes to which the NESSIE primitive SFLASH also belongs. It is a realization of Moh's theory for digital signatures, based on Tame Transformations or Tame Maps. Properties of multivariate cryptosystems are determined mainly by their central maps. TTS uses Tame Maps as its central portion for even greater speed than $C^\ast$-related schemes (which use monomials in a large field for the central portion), previously usually acknowledged as the fastest. We show a small flaw in TTS/2 and present an improved TTS implementation which we call TTS/4. We will examine in some detail how well TTS/4 performs, how it stands up to previously known attacks, and why it represents an advance over TTS/2. Based on this topical assessment, we consider TTS in general and TTS/4 in particular to be competitive or superior in several aspects to other schemes, partly because the theoretical roots of TTS induce many good traits. One specific area in which TTS/4 should excel is low-cost smartcards. It seems that the genre has great potential for practical deployment and deserves further attention from the cryptological community.
- 2001, EPRINT
In the paper [1], published at Asiacrypt 2000, L. Goubin and N.T. Courtois propose an attack on the TTM cryptosystem. In [1] they misrepresent the TTM cryptosystem, then jump from an attack on one example of TTM to the general TTM cryptosystem. Finally they conclude: "There is very little hope that a secure triangular system (Tame transformation system in our terminology) will ever be proposed." This is a serious challenge to many people working in the field. In this paper, we will show that their attack is full of gaps in section 5. Even their attack on one implementation of TTM is questionable. We write a lengthy introduction to restate the TTM cryptosystem and point out many possible implementations. It will be clear that their attack on one implementation cannot be generalized to attacks on other implementations. As is usually said, "truth is in the fine details"; we quote and analyze their TPM system at the end of the introduction and §2. We further state one implementation of the TTM cryptosystem in §3. We analyze their MiniRank(r) attack in §4 and show that it is infeasible. We conclude that the attack of [1] on the TTM cryptosystem is infeasible and full of gaps. There are no known attacks which can crack the TTM cryptosystem.
#### Coauthors
Daniel J. Bernstein (1)
Owen Chia-Hsin Chen (1)
Yen-Hung Chen (1)
Jintai Ding (1)
Lei Hu (1)
T. MOH (1)
Bo-Yin Yang (5)
|
# Proof of basic subtraction rules for natural numbers
For $$m, n ∈ \mathbb{N}_0$$ we define a relation $$≥$$ by $$m ≥ n ⇔ ∃r ∈ \mathbb{N}_0, m = r + n$$. We denote $$r$$ by $$m - n$$ and call it the difference, which is thus defined only when $$m ≥ n$$.
How can we verify the basic subtraction rules involving natural number, specifically:
1. $$m – (n – r) = (m – n) + r$$ for $$m ≥ n ≥ r$$,
2. $$m + (n – r) = (m + n) – r$$ for $$n ≥ r$$,
3. $$m(n – r) = mn – mr$$ for $$n ≥ r$$.
E.g. for (3.) we can set $$n = s + r$$ and thus $$mn = m(s + r) = ms + mr$$, which by definition gives $$mn – mr = ms = m(n – r)$$ since $$s = n – r$$.
Any hint in the right direction would be welcome. Thanks in advance.
Since (3.) is shown already in the question itself, I detailed below only the presumed proofs for (1.) and (2.).
For (1.) setting $$n=s+r$$ since $$n ≥ r$$ and $$m=w+n$$ since $$m ≥ n$$ thus
• $$m - n = w$$ since $$m ≥ n$$
• $$m = w + n = w + (s + r)$$ since $$m ≥ n$$ and $$n ≥ r$$
• $$m = w + (s + r) = w + (r + s) = (w + r) + s$$ by associativity and commutativity of addition
• $$m = ((m - n) + r) + (n - r)$$ by definition
• $$m - (n - r) = (m - n) + r$$ by definition, since $$m ≥ n ≥ s = n - r$$ guarantees that the left-hand side is defined
For (2.) setting $$n=s + r$$ since $$n ≥ r$$ thus
• $$m + n = m + (s + r)$$
• $$m + n = (m + s) + r$$ by associativity of addition
• $$m + s = (m + n) - r$$ by definition, since $$n ≥ r$$
• $$m + (n-r) = (m + n) - r$$ since $$s = n - r$$
I would still appreciate a double check.
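For an independent double check, rules (1.) and (2.) can be verified mechanically in Lean 4, whose truncated subtraction on `Nat` agrees with the definition above whenever the side conditions hold (a sketch; `omega` is the built-in linear-arithmetic tactic):

```lean
-- Truncated subtraction on Nat: m - n is the r with m = r + n when n ≤ m.
-- `omega` decides linear arithmetic over Nat, including this subtraction.

-- Rule (1.): m - (n - r) = (m - n) + r for r ≤ n ≤ m
example (m n r : Nat) (h1 : n ≤ m) (h2 : r ≤ n) :
    m - (n - r) = (m - n) + r := by omega

-- Rule (2.): m + (n - r) = (m + n) - r for r ≤ n
example (m n r : Nat) (h : r ≤ n) :
    m + (n - r) = (m + n) - r := by omega

-- Rule (3.) involves multiplication, so omega does not apply directly;
-- a distributivity lemma (e.g. Mathlib's Nat.mul_sub, whose exact name
-- may vary by version) closes it instead.
```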
|
# Is there a way to check this SMPS transformer (schematic)?
#### Lumenosity
Joined Mar 1, 2017
636
Hello,
I am having a hard time determining whether this transformer is functioning properly or not.
It is in a 300watt Modified Sine Wave Inverter and receives 12v as shown in the schematic below.
Each pin on the transformer is numbered below. I'm not even sure which side is primary and which is secondary.
I have attempted to apply 12v DC voltage to the transformer removed from the circuit and the results were confusing.
I connected 12v + to pins 3 and 4 and 12v - to pins 5 and 6. What I got from pins 10 and 11 was AC 1v at varying frequency from 30Hz to 181Hz
This is not what I was expecting. I was expecting a higher DC voltage output.
When the VOM meter was set to DC I got absolutely nothing. No reading at all from pins 10 and 11.
I also have a DE-5000 inductance meter.
I got 3.376 mH across pins 10 and 11
28.9uH between 5/6 and 3/4
28.9uH between 1/2 and 3/4
116uH between 1/2 and 5/6
Any insight or guidance appreciated. I'm somewhat lost.
Thanks
And here is the schematic of it in circuit.....
#### Ian0
Joined Aug 7, 2020
3,784
It's a push-pull primary, with V+ connected to pins 3/4 (labelled 4 on your schematic) and the MOSFET drains connected to pins 1/2 and 5/6 (labelled 3 and 5 on your schematic)
The output is on pins 10/11 (labelled 1 and 2 on your schematic)
#### Lumenosity
Joined Mar 1, 2017
636
Any idea what the output on pins 10/11 should be ?
#### Ian0
Joined Aug 7, 2020
3,784
Any idea what the output on pins 10/11 should be ?
$$V_{sec}=V_{pri}\sqrt\frac{L_{sec}}{L_{pri}}$$
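Plugging the inductances measured earlier in the thread into that formula gives a rough sanity check (a sketch only: it assumes the 28.9 µH half-winding is the driven primary, and it ignores core loading, losses, and whatever stage follows the secondary in the actual inverter):

```cpp
#include <cmath>
#include <cstdio>

int main() {
    // Values quoted from the DE-5000 measurements above
    double L_sec = 3.376e-3;  // pins 10-11, in henries
    double L_pri = 28.9e-6;   // one primary half (pins 3/4 to 5/6), in henries

    // Turns ratio follows from inductance scaling with N^2
    double ratio = std::sqrt(L_sec / L_pri);          // ~10.8 : 1
    std::printf("turns ratio ~ %.1f : 1\n", ratio);
    std::printf("12 V drive -> roughly %.0f V peak on the secondary\n",
                12.0 * ratio);                        // ~130 V
    return 0;
}
```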
#### LesJones
Joined Jan 8, 2017
3,572
Lumenosity, as you seem to think a transformer will work with DC, you need to read up on transformer theory. If you don't understand how they work, you have no chance of understanding how circuits using them work.
Les.
#### Lumenosity
Joined Mar 1, 2017
636
Lumenosity, as you seem to think a transformer will work with DC, you need to read up on transformer theory. If you don't understand how they work, you have no chance of understanding how circuits using them work.
Les.
I understand. That simple statement has just changed my direction so thanks.
I have read quite a few articles on transformer theory and inductance etc.
I admit I must be a slow learner at this point (yes, I'm a senior citizen trying to learn electronics)
It gets VERY confusing.
https://www.haloelectronics.com/products/dc-dc-transformers/
HALO offers a full line of DC/DC transformers for PoE, PoE+, and high voltage isolation applications. Please contact the factory or your local representative for non-standard turns ratios or custom design requests.
The hope was that through help and from continued study the proverbial lightbulb will eventually come on.
#### Lumenosity
Joined Mar 1, 2017
636
Lumenosity, as you seem to think a transformer will work with DC, you need to read up on transformer theory. If you don't understand how they work, you have no chance of understanding how circuits using them work.
Les.
In the schematic I included, can you tell me how AC is fed to the transformer when it is connected to DC?
Are the MOSFETs there to create the AC current?
Thanks
|
# Program to print nodes in the Top View of Binary Tree using C++
In this tutorial, we will be discussing a program to print all the nodes that appear in the top view of a given binary tree.
For a particular binary tree, a node appears in its top view if it is the very first node encountered at its horizontal distance. The root has horizontal distance 0; if a node has horizontal distance d, its left child has horizontal distance d-1 and its right child has horizontal distance d+1.
To solve this, we will do a level order traversal so that, for each horizontal distance, we reach the topmost node at that distance before any other node there. Further, we will use a map keyed by horizontal distance to check whether the selected node is visible in the top view or not.
## Example
#include <iostream>
#include<queue>
#include<map>
using namespace std;
struct Node{
Node * left;
Node* right;
int h_dist;
int data;
};
Node* create_node(int key){
Node* node=new Node();
node->left = node->right = NULL;
node->data=key;
return node;
}
void print_topview(Node* root){
   if(root==NULL)
      return;
   queue<Node*> q;
   map<int,int> m; // horizontal distance -> data of the topmost node there
   root->h_dist=0;
   q.push(root);
   cout<< "Top View for the given tree:" << endl;
   while(!q.empty()){
      Node* curr=q.front();
      q.pop(); // pop before reading; never call front() on an empty queue
      int h_dist=curr->h_dist;
      // level order guarantees the first node seen at a distance is topmost
      if(m.count(h_dist)==0)
         m[h_dist]=curr->data;
      if(curr->left){
         curr->left->h_dist=h_dist-1;
         q.push(curr->left);
      }
      if(curr->right){
         curr->right->h_dist=h_dist+1;
         q.push(curr->right);
      }
   }
for(auto i=m.begin();i!=m.end();i++){
cout<<i->second<< " ";
}
}
int main(){
Node* root = create_node(11);
root->left = create_node(23);
root->right = create_node(35);
root->left->right = create_node(47);
root->left->right->right = create_node(59);
root->left->right->right->right = create_node(68);
print_topview(root);
return 0;
}
## Output
Top View for the given tree:
23 11 35 68
|
Tag Info
Density of States of Supercells
The density of states reads: $$\tag{1} g(E)=\sum_{n}\int\frac{d\mathbf{k}}{(2\pi)^3}\delta(E-E_{n\mathbf{k}}),$$ where $E_{n\mathbf{k}}$ are the electronic energies and the integral is over the ...
Can I run an nscf calculation in Quantum ESPRESSO with disk_io='none'?
Eventually, I figured out what was wrong happening with my DFT calculation. As I understand now, each processor saves and maintains (write recursively as the calculation progresses) its own file ...
How does electronic iteration work in a VASP relaxation calculation?
The figure below represents very well the self-consistent field (SCF) procedure used to solve the Kohn-Sham (KS) equations under the Density Functional Theory (DFT) approach: I think that this ...
Why don't we use the principal quantum number when building the projected density of state?
I think the reason is many-fold. In addition to what Tristan mentioned, there are some other possible reasons: The principle quantum number is a relatively ill-defined concept for an atom in a ...
What do we mean by spin-splitting energy?
I see now how your initial questions were related, as they all fall under the scope of crystal field theory. I wrote a bit about this in a previous answer. At least in the context of molecular crystal ...
There are many applications/advantages for supercell band unfolding. Take the band unfolding program KPROJ as an example: Nano Lett. 14, 5189 (2014) A k-projection technique (supercell band ...
Validity of interpolation for density of states?
The density of states (DOS) is the number of different states at a particular energy level that electrons are permitted to occupy, i.e. the number of electron states per unit volume per unit energy. ...
|
# μSR (optional)
This is an optional part of this course, and still under construction.
μSR, or muon spin rotation, is another method that makes use of hyperfine interactions. It is quite widely used and therefore deserves its place in this overview. No dedicated lecture material is foreseen so far, but as a starter we’ll give you some information collected from the web.
One good question to think about when going through this material is: where would you put μSR on VIP2? When trying to do that, you’ll realize that this method is somewhat different from the others we’ve discussed.
|
# Is it possible to define commands in LaTeX documents? (LaTeX Help)
1. Aug 26, 2008
### Dragonfall
Is it possible to define in LaTeX documents something like \ket{X} to be \left| X\right> so I don't have to type that every time?
2. Aug 26, 2008
### cristo
Staff Emeritus
Re: Latex Help
If you put this in your preamble \newcommand{\ket}[1]{{\left| {#1} \right>}}, then you can use the command \ket{X}:
$$\newcommand{\ket}[1]{{\left| {#1} \right>}} \ket{X}$$
You can write similar commands for \bra and \braket, say.
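For reference, those companion macros might look something like this (a sketch; the exact spacing and macro names are a matter of taste):

```latex
\newcommand{\bra}[1]{\left\langle {#1} \right|}
\newcommand{\braket}[2]{\left\langle {#1} \,\middle|\, {#2} \right\rangle}
% usage: \bra{\phi}, \braket{\phi}{\psi}
```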
3. Aug 26, 2008
### Dragonfall
Re: Latex Help
Awesome, thanks!
|
# My donation for 2009 (guest post from Dario Amodei)
This is a guest post from Dario Amodei about how he decided what charity to support for his most recent donation. Dario and GiveWell staff had several in-depth conversations as he worked through his decision, so we invited him to share his thought process here. Note that GiveWell has made minor editing suggestions for this post (though Dario determined the final content).
Before I get into the details of my donation decision, I’d like to first share a bit about myself: I’m a graduate student in physics at Princeton, and am interested, very broadly, in what I can do to make the world a better place. I feel that giving away a significant portion of my income is an important part of that, and since 2006 I’ve been donating to organizations that try to improve life in the developing world. I’ve always tried my best to make my donations as effective as possible, but on my own I was never able to give this task as much attention as it deserved. I happened upon GiveWell in 2008 through a link from an economics blog, and to date it’s been the single most useful resource I’ve found in deciding where to donate. Last year I gave $10,000 through GiveWell’s pledge fund, and ultimately decided to allocate all of this money to Village Reach. Holden and Elie have asked me to share the thought process I went through in making my decision, in the hopes that it might be of use to other donors facing a similar choice.

My focus has always been on developing-world health interventions, because I believe these interventions address some of the world’s most urgent needs in a highly tangible way. Six out of 12 of GiveWell’s recommended charities operate in this area, including some health charities I’ve donated to in the past. Reading GiveWell’s reports on these charities, it quickly became clear to me that the “three-star” organizations — Village Reach (VR) and Stop TB — really do stand out above the others. Though I respect and am impressed by the two-star organizations, they all seem to have sizable holes in their case for efficacy: for instance, PIH seems to (completely?) lack data on medical outcomes, and the Global Fund seems to have problems with how to use additional funds (William Easterly also seems to have a strongly negative assessment of it in this diavlog). Thus, I decided to focus on VR (which aims to improve operational logistics for child vaccinations) and Stop TB (which provides governments with funds for tuberculosis treatment).

Choosing between these very compelling charities proved difficult, but I don’t regret the considerable effort I put into my choice — as I tried to constantly remind myself, this choice should involve every bit as much effort as buying a $10,000 item for myself. I considered three relevant factors —
1. Cost-effectiveness
2. Execution
3. “Incentive effects” (explained more below)
Cost-effectiveness
GiveWell makes explicit cost effectiveness estimates (based in part on those of the Disease Control Priorities report) for both organizations: ~$545 per infant death averted for Village Reach, and ~$150-750 per death averted for Stop TB. These are roughly comparable, but don’t take into account the fact that Stop TB mainly treats adults, while VR mainly treats infants and children. I feel that adults are capable of deeper and more meaningful experiences than are infants, and also deeper connections with other people, so an adult death seems worse to me than an infant death (though both are of course bad). Trying to quantify exactly how much worse is very subjective and can also seem calculating (“how many babies would you kill to save an adult?”), but on a practical level one is forced to make difficult decisions with limited funds, and in my case I’d say that I think an adult death is perhaps 2 or 3 times worse than an infant’s death. Thus, adjusted for my personal values, I’d say that Stop TB is ~2-3 times more cost-effective than VR, though I understand that others may validly disagree with this subjective assessment.
Execution
The second factor, execution, is the one I find most important. By execution I mean all the factors that are assumed to go right in an ideal cost-effectiveness calculation, but could go wrong in practice. I take Murphy’s Law very seriously, and think it’s best to view complex undertakings as going wrong by default, while requiring extremely careful management to go right. This problem is especially severe in charity, where recipients have no direct way of telling donors whether an intervention is working. The situation is worse yet in the developing world, where projects cannot count on the reliable infrastructure and basic social trust we take for granted in the developed world. Given all these problems, what I look for in a charity is a simple and short chain of execution in which relatively few things can go wrong, together with rigorous efforts to close whatever loopholes do exist. As far as I can tell, VR fits these criteria better than any other charity I’ve encountered. Vaccines unquestionably save lives if correctly administered, so it’s generally enough to show that functional vaccines are being correctly delivered and administered. Roughly, the major questions I want answered about a vaccination program are:
(a) are the vaccines actually delivered to health clinics?
(b) do the vaccines remain effective during transport and storage?
(c) once in storage, are the vaccines actually administered, and safely so?
(d) does the program have a clear plan for spending additional money, so that donations actually translate to more vaccines?
(e) are vaccination rates measured to check that the whole chain is working?
I won’t go through the details, which are in GiveWell’s report, but VR makes a systematic effort to address each question. Deliveries are tracked by phone in real-time (e.g. (a)), VR takes an active role in providing power for refrigerators to keep vaccines cold (e.g. (b)), sterilization equipment is provided and stock outs are tracked (which at least suggests successful administration (c)), VR has a clear plan (d) for how to use additional funds, and changes in vaccination rates are measured with controls (e). These steps aren’t perfect – for example, there is apparently no systematic reporting confirming the actual correct administration of vaccines, so step (c) has some room for error — but overall the chain of execution is tighter than any I’ve seen, and the potential holes seem small enough to be manageable.
By contrast, in Stop TB’s case, such a chain (if I could even write it down) would be much longer — Stop TB hands drugs over to governments (involving several layers of administration, differing from country to country) which then must perform all the logistical details VR must perform, plus diagnostics, recurring treatments, and in some cases second-line treatment. There is also the possibility of TB evolving resistance if treatments are not correctly administered. Stop TB’s random inspections, cure rate data, and external auditing seem suggestive of positive results, but my inability to examine in detail a process that I know is quite complex ultimately leaves me very suspicious about efficacy. This isn’t just a matter of Stop TB being a large organization; rather, the problem is that I can’t see the full process of treatment setup and administration, whether applied to one person or a million. Lacking that clear and full view of Stop TB, I have to conclude that VR is the winner on execution.
Incentive effects
Given only VR’s superiority on execution and StopTB’s superiority on cost-effectiveness, I would be about equally inclined to support either, with perhaps a small edge to VR because execution is so critical. However, it’s important to look at the incentive effects of my donation — the money I give out is not just a one-shot intervention, but also a vote on what I want the philanthropic sector to look like in the future. Along these lines, I see three additional advantages to VR, which make it the clear winner in my mind:
1. VR’s small size means that funds given to it through GiveWell could greatly change its funding situation (GiveWell seems to have been responsible for a sizable fraction of VR’s total donations last year). What happens to Village Reach could make a notable impression on other charities, which badly need to hear that focusing on efficacy can pay off.
2. In my view, incentivizing careful execution is a higher priority right now than incentivizing cost-effectiveness. Cost-effectiveness would be important if there were many good charitable opportunities and not enough money to fund them all. Instead, the current situation seems to be that a lot of programs are probably a waste of money. It thus makes sense, from an incentive point of view, to reward charities that focus maximally on execution — such as VR.
3. Logistics and efficiency are extremely important, but don’t make for good headlines. VR should be getting a lot more money than it is, and I want to tell the philanthropic sector that charities can succeed without being flashy.
In addition to all the arguments listed above, there were a number of other factors which I thought about (some of which were raised in GiveWell’s reports and posts) but ultimately had a hard time getting a handle on and so did not give much weight to. I considered too many factors to list them all, but here are a few examples:
• By lowering child mortality, could VR have different effects on population growth than Stop TB? If so, is population growth beneficial or harmful?
• A vaccination or treatment doesn’t only save one person; it also impedes the spread of the disease. Could TB treatment and child vaccinations differ in how much they do this?
• Stop TB treats people who live in less isolated areas and thus have more opportunity to interact with others and indirectly improve their lives. How important is this?
• VR’s logistics ideas could be applied to many health interventions. If VR’s model spreads and proves effective on a wider scale, how large would the overall benefits be?
Any one of these effects could theoretically be important enough to outweigh all my arguments for VR, so this list serves as a reminder that there can never be any guarantees of efficacy, let alone optimality. Uncertainty, however, is simply part of life, and all I can do is go with my best guess, so I decided to give to VR.
I hope (though I cannot be sure) that my donation will save the lives of 20 children (which is what the cost-effectiveness numbers work out to). That’s a truly staggering benefit, and honestly it came at very little cost to myself: I don’t much miss the new car I didn’t buy, and I’ll gladly make the same sacrifice next year in order to donate again. What did feel very emotionally taxing was reading (and in most cases, agreeing with) all the negative analysis of charities at GiveWell and elsewhere. I found it difficult to evaluate everything in a critical fashion while still holding on to the compassion and optimism that originally inspired me to donate. It’s tough to find the right balance between caring and hard-nosed realism, but it is possible, and it is, as far as I know, the only way to truly change the world.
• jsalvati on June 3, 2010 at 3:10 pm said:
This was absolutely fantastic; thank you.
• Jason Fehr on June 3, 2010 at 9:25 pm said:
Excellent post, Dario…I’m glad to see there are others out there who apply such rigorous logic when it comes to making a difference. I only wish every donor did the same.
• Hassan Sachedina on June 4, 2010 at 9:33 am said:
An incredible, thoughtful and insightful posting that made me really think. I’ve just started working with an organization that is grappling with the questions of how and what services to deliver to hundreds of thousands of people in Africa. Dario provides an excellent background of why execution is so important, and why it’s so important to keep it simple.
• Sam L on June 7, 2010 at 2:27 pm said:
Thanks for sharing your thoughts. Another reason I also favor VillageReach is related to scalability: not only can they apply the same model in other locations, their model can potentially (if reaching a larger scale and/or getting more awareness) be replicated / borrowed by other organizations, given effective drug delivery (including but not only vaccine) is such a generic problem.
On a related note, they have started open-sourcing the software to manage the logistics:
http://openlmis.org/
• Dario A on June 8, 2010 at 5:11 am said:
Thanks for the kind words, all!
Sam — I agree, the novelty of Village Reach’s model, and the fact that it could be widely applied to general health infrastructure if scaled up, are another strong point in its favor. On the other hand, a new idea is always riskier than an established one, though VR’s model has at least been rigorously tested on a small scale, so this concern is perhaps not as severe as it usually would be.
Thanks also for pointing out the logistics software; I wasn’t aware of this effort.
• Parent on June 9, 2010 at 11:23 pm said:
Dario, regarding the cost effectiveness of your choice: You are obviously an extremely bright and intense young man. And you are just as obviously not a parent. I guarantee you that if you ever have a child, you would put his or her life above any adult you know. The objective reasoning for your choice is admirable, elegant, and made in a precisely scientific mode. However, with limited medical services and vaccines, asking parents to sacrifice the health of their children for the good of the adult community will never happen.
Thank you for your compassion and the effort you put into making the world a better place.
• Jonah S on June 10, 2010 at 1:51 am said:
Parent,
I agree that nearly all of us (even those of us who care a lot about making the world a better place) place higher value on the well being of those who we love than on people who we don’t even know – this is part of human nature and is not going to go away through force of will.
At the same time, for those of us fortunate enough to live in a wealthy country like America, most parents can do a great deal to help humanity without sacrificing the health of their children. America spends only 15% of its GDP on health care.
It’s true that parents who donate money to charity can’t spend as much money on their children. But children learn by example, and donating money to help the less fortunate sends one’s children a powerful message about how one feels about the importance of helping others. For parents who want children who care about helping other people, giving to charity is, up to a point, a better use of money than however else they would have spent it on their children.
And assuming that one is giving some money away to help others, one might as well make sure that this money goes as far as possible toward meeting the intended goal.
Just by chance, two days ago I wrote a blog post directly relevant to your remarks: check it out http://towardabetterworld.wordpress.com/2010/06/08/altruism-and-sacrifice/
• Jonah S on June 10, 2010 at 12:57 pm said:
Parent,
One more thought. After responding I realized that you were probably reacting to Dario’s statement “in my case I’d say that I think an adult death is perhaps 2 or 3 times worse than an infant’s death.”
From your comment I infer that you’re thinking something like “if I were living in a poor country, I’d rather that my child be saved than some adult in my community be saved.” This may form the grounds for a legitimate difference of opinion between you and Dario (based on you having had different life experiences, etc.).
However, consider the following. I think that a more relevant thought experiment is: “if I were living in a poor country and had to choose between saving *myself* or saving 2 or 3 of *my own infants*, what would I choose?” I don’t know enough about the developing world to be confident about what I would want. Certainly parents are sometimes willing to sacrifice their own lives for the lives of their children.
But a relevant fact to me seems to be that the fertility rates are very high in third world countries, with each person having 4 or more children http://en.wikipedia.org/wiki/Total_fertility_rate . If I died instead of 2 or 3 of my children, I wouldn’t be able to look after my aging parents or any of my children (not only the 2 or 3 who would be saved in my stead, but the others as well). In the developing world there’s not nearly as good a support network for orphans as there is here and a child’s parent dying may be very damaging to the child’s future prospects.
I don’t have any detailed knowledge of conditions in the developing world and my thought experiment is an imperfect proxy to judging the true trade off being made, I’m just saying that on closer examination you might find that the comparison that Dario suggests is aligned with your own values.
I’d be interested in hearing any further thoughts that you have
• Holden on June 10, 2010 at 4:20 pm said:
I am not sure whether Parent intends to say “If you were a parent, you would not sacrifice your own child for someone else” or “If you were a parent, you would not value the lives of adult strangers over the lives of child strangers.”
The former does not seem relevant to Dario’s post.
I would also bet against the latter. The Global Burden of Disease report’s discussion of age-weighting DALYs states:
The 1990 GBD study weighted a year of healthy life lived at young ages and older ages lower than years lived at other ages. This choice was based on a number of studies that indicated a broad social preference to value a year lived by a young adult more highly than a year lived by a young child or an older adult (Murray 1996).
Here’s one study I’ve run across that seems to support such a conclusion. It seems that Dario’s expressed preference is common across the general population.
• Jason Fehr on June 10, 2010 at 4:52 pm said:
Holden, did you mean “…over the lives of child strangers”?
• Holden on June 11, 2010 at 8:01 am said:
Yes, thanks. Corrected.
• Toby Ord on June 12, 2010 at 1:47 pm said:
Holden,
I don’t think that the GBD study actually supports Dario’s approach to weighing adult and children’s lives. It says that the life-years should be weighted to give more value to years in the middle of a life (and there is a version of DALYs which does just such a weighting). However, children will live through all the same years of life as remain for an adult, plus some additional adolescent years. Thus the life-year weighting approach still suggests that it is more important to save children.
Two factors could change this: the chance that the child dies before adulthood and the discount rate, but these shouldn’t have an effect on the scale that Dario envisaged.
• Holden on June 14, 2010 at 6:43 pm said:
Toby, I was a little sloppy in what I quoted, but the GBD does indeed provide support for the position I’m laying out and in fact acknowledges this as a weakness with the standard DALY metric. The following page of the GBD (401) states:
Age weights are perhaps the most controversial choice built into the DALY. Criticisms of age weights [include:] Age weights do not reflect social values; for example, the DALY [including age-weighting by year] values the life of a newborn about equally to that of a 20-year-old, whereas the empirical data suggest a fourfold difference.
|
# Fantasia
Time Limit: 10000/5000 MS (Java/Others)
Memory Limit: 65536/65536 K (Java/Others)
## Description
Professor Zhang has an undirected graph $G$ with $n$ vertices and $m$ edges. Each vertex is attached with a weight $w_i$. Let $G_i$ be the graph after deleting the $i$-th vertex from graph $G$. Professor Zhang wants to find the weight of $G_1, G_2, ..., G_n$.
The weight of a graph $G$ is defined as follows:
1. If $G$ is connected, then the weight of $G$ is the product of the weight of each vertex in $G$.
2. Otherwise, the weight of $G$ is the sum of the weight of all the connected components of $G$.
A connected component of an undirected graph $G$ is a subgraph in which any two vertices are connected to each other by paths, and which is connected to no additional vertices in $G$.
## Input
There are multiple test cases. The first line of input contains an integer $T$, indicating the number of test cases. For each test case:
The first line contains two integers $n$ and $m$ $(2 \le n \le 10^5, 1 \le m \le 2 \times 10^5)$ -- the number of vertices and the number of edges.
The second line contains $n$ integers $w_1, w_2, ..., w_n$ $(1 \le w_i \le 10^9)$, denoting the weight of each vertex.
Each of the next $m$ lines contains two integers $x_i$ and $y_i$ $(1 \le x_i, y_i \le n, x_i \ne y_i)$, denoting an undirected edge.
There are at most $1000$ test cases and $\sum n, \sum m \le 1.5 \times 10^6$.
## Output
For each test case, output an integer $S = (\sum\limits_{i=1}^{n}i\cdot z_i) \text{ mod } (10^9 + 7)$, where $z_i$ is the weight of $G_i$.
## Sample Input
1
3 2
1 2 3
1 2
2 3
## Sample Output
20
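Not a solution meeting the stated limits, but a brute-force sketch (hypothetical code, hard-wired to the sample input) makes the weight definition concrete. Since each connected component is itself connected, "sum of per-component products" also covers the connected case; re-running a BFS for every deleted vertex is $O(n(n+m))$ per test, far too slow for the full constraints, where block-cut-tree style techniques would presumably be needed:

```cpp
#include <bits/stdc++.h>
using namespace std;
const long long MOD = 1000000007LL;

int main() {
    int n = 3;
    vector<long long> w = {0, 1, 2, 3};              // 1-indexed weights
    vector<pair<int,int>> edges = {{1, 2}, {2, 3}};
    vector<vector<int>> adj(n + 1);
    for (auto& e : edges) {
        adj[e.first].push_back(e.second);
        adj[e.second].push_back(e.first);
    }
    long long S = 0;
    for (int del = 1; del <= n; ++del) {
        vector<char> seen(n + 1, 0);
        seen[del] = 1;                               // vertex `del` is removed
        long long z = 0;                             // weight of G_del
        for (int s = 1; s <= n; ++s) {
            if (seen[s]) continue;
            long long prod = 1;                      // product over one component
            queue<int> q;
            q.push(s); seen[s] = 1;
            while (!q.empty()) {
                int u = q.front(); q.pop();
                prod = prod * (w[u] % MOD) % MOD;
                for (int v : adj[u]) if (!seen[v]) { seen[v] = 1; q.push(v); }
            }
            z = (z + prod) % MOD;                    // sum over components
        }
        S = (S + (long long)del * z) % MOD;
    }
    cout << S << "\n";                               // prints 20 for the sample
    return 0;
}
```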
## Source
2016 Multi-University Training Contest 2
|
# zbMATH — the first resource for mathematics
Analytic sharp fronts for the surface quasi-geostrophic equation. (English) Zbl 1228.35010
This work is concerned with the evolution of sharp fronts for surface quasi-geostrophic flows. The nonlinear integro-differential equation governing the evolution of wave fronts in such flows had already been obtained. The authors consider a simplified version of this equation and study the existence of analytic solutions through extensive calculations. By carefully investigating the evolution of the second space derivative of the unknown function, the authors prove that the new system fits well into the abstract version of the celebrated Cauchy-Kovalevskaya theorem.
##### MSC:
35A10 Cauchy-Kovalevskaya theorems
76B15 Water waves, gravity waves; dispersion and scattering, nonlinear interaction
35R11 Fractional partial differential equations
|
# conjecture regarding the cosine fixed point
### context/motivation
if the angle on a calculator is set to radians, then it is very easy to demonstrate that iteration of $\cos x$ (for arbitrary initial $x$) converges - simply keep pressing the $\cos$ button! this unique fixed point $\alpha$ might reasonably be expected to be a transcendental number. (perhaps the answer to that is already known?) the conjecture outlined here suggests that $\alpha$ is an upper bound for a whole family of numbers defined in terms of iteration of particular sequences of cosine and sine functions. since these mixed iterations give rise to limit cycles rather than fixed points, we use the Cesaro mean to give a characteristic number for each cycle. the cycles i initially considered are easily defined in terms of the periodic binary representation of fractions whose denominator is not a power of 2. however whilst these are the only numbers corresponding to the sequences of sine and cosine that converge towards stable orbits, it seems likely that the periodicity itself is not the key factor ensuring Cesaro convergence, but that this is achieved due to a weaker asymptotic density condition which is necessary but not sufficient for periodicity. i apologise for any mistakes or lack of clarity in my (necessarily brief) presentation. the basic idea is simpler than may appear from a first glimpse of the definitions.
### preliminary definitions
let $I$ be the closed unit interval $[0,1]$ so that the sine and cosine functions restrict to injective maps of $I$ into itself.
for integers $n \gt 0$ define $\beta_n:I \rightarrow \{0,1\}$ to be the $n^{th}$ binary digit of its argument, so $\beta_n(\lambda)=\lfloor 2^n\lambda \rfloor \bmod 2$
now define $\psi:\{0,1\} \times I \rightarrow I$ by: $$\psi(0,\theta) = \cos \theta \\ \psi(1,\theta) = \sin \theta$$
every $\lambda \in I$ can be associated with a function $\Psi_{\lambda}:I \rightarrow I^{\omega}$ which generates a sequence in $I$, i.e.
$$\forall \theta \in I, \Psi_{\lambda}(\theta) = \{\theta_n\}_{n=0,1,2,...}$$
with $\theta_0=\theta$ and for $n \ge 0$ $$\theta_{n+1} = \psi(\beta_{n+1}(\lambda),\theta_n)$$
let us now call $\lambda \in I$ a $\beta$-number if an asymptotic density of $1$s in its binary representation exists, i.e. if the sequence $\{\beta_n(\lambda)\}$ has a Cesaro-mean. this mean, if it exists, we may denote by $\beta^*(\lambda)$
let us also define $\alpha$ as the unique fixed point in I of the cosine function, i.e.
$$\cos \alpha = \alpha$$
### conjecture
1. $\forall \theta \in I$ the sequence $\Psi_{\lambda}(\theta)$ has a Cesaro mean if and only if $\lambda$ is a $\beta$-number, and in this case the Cesaro mean is independent of $\theta$ and may be denoted $\Psi_{\lambda}^*$
2. if the sequence $\Psi_{\lambda}(\theta)$ has Cesaro mean, then this mean is equal to $\alpha$ if and only if $\beta^*(\lambda)=0$
3. for any $\beta$-number $\lambda$ if $\beta^*(\lambda) \gt 0$ then $\Psi_{\lambda}^*\lt \alpha$
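a quick numerical sanity check of the objects above (a sketch; the starting values and iteration counts are assumptions of the experiment, not part of the conjecture):

```cpp
#include <cmath>
#include <cstdio>

int main() {
    // fixed point of cos ("keep pressing the cos button")
    double alpha = 1.0;
    for (int i = 0; i < 200; ++i) alpha = std::cos(alpha);
    std::printf("alpha ~ %.15f\n", alpha);   // ~0.739085133215161

    // mixed iteration for lambda = 1/3 (binary 0.0101..., beta* = 1/2):
    // beta_1 = 0 -> cos, beta_2 = 1 -> sin, alternating thereafter
    double theta = 0.5, sum = 0.0;
    const long long N = 1000000;
    for (long long n = 1; n <= N; ++n) {
        theta = (n % 2 == 1) ? std::cos(theta) : std::sin(theta);
        sum += theta;
    }
    // the orbit settles into a 2-cycle; its Cesaro mean (~0.731)
    // does come out below alpha, consistent with item 3
    std::printf("Cesaro mean for lambda = 1/3 ~ %.6f\n", sum / N);
    return 0;
}
```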
|
Solutions to Problem Set 23
Math 211-03
11-2-2017
[Alternating series]
1. Determine whether the series converges or diverges.
The terms alternate, and
If , then
Hence, the terms decrease in absolute value.
Therefore, the series converges by the Alternating Series Test.
2. Determine whether the series converges or diverges.
The terms alternate, and
If , then
Hence, the terms decrease in absolute value.
Therefore, the series converges by the Alternating Series Test.
3. Determine whether the series converges or diverges.
The terms alternate, but
The series diverges by the Zero Limit Test.
4. Determine whether the series converges or diverges.
The terms alternate, and
If , then
Now for , so for . Hence, the terms decrease in absolute value.
The series converges by the Alternating Series Test.
5. Determine whether the series converges or diverges.
The terms alternate, and by L'Hôpital's rule
Let . Then
Hence, the terms decrease in absolute value.
Therefore, the series converges by the Alternating Series Test.
6. Determine whether the series converges or diverges.
The terms alternate. Using L'Hôpital's Rule, I have
Hence,
Therefore, the series diverges by the Zero Limit Test.
7. Consider the convergent alternating series .
Find the smallest value of n for which the partial sum approximates the actual sum to within 0.1.
The partial sum differs from the actual sum s by no more than the absolute value of the next term :
I can ensure that if . Then
So
Thus, .
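To see how this bound gets used with a concrete series (an illustrative example only; it need not be the series of this problem): for the alternating harmonic series $\sum_{n=1}^{\infty} \frac{(-1)^{n+1}}{n}$,

$$|s - s_n| \le a_{n+1} = \frac{1}{n+1} < 0.1 \quad\Longleftrightarrow\quad n + 1 > 10 \quad\Longleftrightarrow\quad n \ge 10,$$

so $n = 10$ is the smallest value that guarantees the partial sum is within 0.1 of the actual sum.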
8. Consider the convergent alternating series .
Find the smallest value of n for which the partial sum approximates the actual sum to within 0.001.
The partial sum differs from the actual sum s by no more than the absolute value of the next term :
I can ensure that if . Then
That is,
However, I can't solve this inequality algebraically. Therefore, I'll do this by trial and error by making a table. (I'm only showing some of the values.)
I see that for the first time when . So I need to use to estimate the sum to within 0.001.
Without work, all life goes rotten. But when work is soulless, life stifles and dies. - Albert Camus
Contact information
Copyright 2017 by Bruce Ikenaga
|
# Math Help - Arithmetic Progression - Logarithm
1. ## Arithmetic Progression - Logarithm
If $\log_k x, \log_m x, \log_n x$ are in A.P. then prove that $n^2 = (kn)^{\log_k m}$.
If the terms are in A.P. then $2\log_m x = \log_k x + \log_n x$, which, using $\log_m x = \frac{1}{\log_x m}$ etc., becomes $\frac{2}{\log_x m} = \frac{1}{\log_x k} + \frac{1}{\log_x n}$,
i.e. $\frac{2}{\log_x m} = \frac{\log_x n + \log_x k}{\log_x k \log_x n}$. Can we solve this way or not? Please guide.
2. ## Re: Arithmetic Progression - Logarithm
$2\log_{m}x = \log_{k}x + \log_{n}x.$
Start by using the change of base formula so that all logs are to base $k.$
Cancel the $\log_{k}x$ throughout, cross multiply, and that gets you a $2\log_{k}n$ (which becomes $\log_{k}n^2$) on the LHS.
Try finishing from there.
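(For completeness, one way the remaining steps might go: changing base gives

$$\frac{2\log_k x}{\log_k m} = \log_k x + \frac{\log_k x}{\log_k n} \;\Longrightarrow\; \frac{2}{\log_k m} = \frac{\log_k n + 1}{\log_k n} \;\Longrightarrow\; 2\log_k n = \log_k m\,(\log_k n + \log_k k) = \log_k m \cdot \log_k (kn),$$

hence $\log_k n^2 = \log_k\big((kn)^{\log_k m}\big)$, i.e. $n^2 = (kn)^{\log_k m}$.)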
3. ## Re: Arithmetic Progression - Logarithm
thanks a lot....i got it..regards,sachin
|
## \savebox
\sbox{cmd}{text}
\savebox{cmd}[width][pos]{text}
These commands typeset text in a box just as for \mbox or \makebox. However, instead of printing the resulting box, they save it in the LaTeX command cmd, which must have been declared with \newsavebox. The saved text is accessed by a \usebox command.
The \sbox and \savebox commands are declarations with the usual rules for scope.
The \sbox command is robust; the \savebox command is fragile.
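A minimal usage sketch (the box name \mybox is arbitrary):

```latex
\documentclass{article}
\newsavebox{\mybox}                         % declare the storage bin
\begin{document}
\savebox{\mybox}{Some \textbf{boxed} text}  % typeset and save; prints nothing
Use it here: \usebox{\mybox}.
And reuse it here: \usebox{\mybox}.         % no re-typesetting needed
\end{document}
```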
See Spaces and Boxes
See \newsavebox, \usebox
|
# American Institute of Mathematical Sciences
November 2016, 15(6): 2221-2245. doi: 10.3934/cpaa.2016035
## Existence and upper semicontinuity of attractors for non-autonomous stochastic lattice systems with random coupled coefficients
1. School of Mathematical Science, Huaiyin Normal University, Huaian 223300, China
2. Department of Mathematics, Zhejiang Normal University, Jinhua 321004, China
Received: January 2016. Revised: June 2016. Published: September 2016.
In this paper, we consider the existence of random attractors in a weighted space $l_\rho ^2$ for first-order non-autonomous stochastic lattice system with random coupled coefficients and multiplicative/additive white noise, and establish the upper semicontinuity of random attractors as the intensity of noise approaches zero.
Citation: Zhaojuan Wang, Shengfan Zhou. Existence and upper semicontinuity of attractors for non-autonomous stochastic lattice systems with random coupled coefficients. Communications on Pure and Applied Analysis, 2016, 15 (6) : 2221-2245. doi: 10.3934/cpaa.2016035
[1] Zhaojuan Wang, Shengfan Zhou. Random attractor and random exponential attractor for stochastic non-autonomous damped cubic wave equation with linear multiplicative white noise. Discrete and Continuous Dynamical Systems, 2018, 38 (9) : 4767-4817. doi: 10.3934/dcds.2018210 [2] Zhaojuan Wang, Shengfan Zhou. Random attractor for stochastic non-autonomous damped wave equation with critical exponent. Discrete and Continuous Dynamical Systems, 2017, 37 (1) : 545-573. doi: 10.3934/dcds.2017022 [3] Shengfan Zhou, Min Zhao. Fractal dimension of random attractor for stochastic non-autonomous damped wave equation with linear multiplicative white noise. Discrete and Continuous Dynamical Systems, 2016, 36 (5) : 2887-2914. doi: 10.3934/dcds.2016.36.2887 [4] Shuang Yang, Yangrong Li. Forward controllability of a random attractor for the non-autonomous stochastic sine-Gordon equation on an unbounded domain. Evolution Equations and Control Theory, 2020, 9 (3) : 581-604. doi: 10.3934/eect.2020025 [5] Ling Xu, Jianhua Huang, Qiaozhen Ma. Random exponential attractor for stochastic non-autonomous suspension bridge equation with additive white noise. Discrete and Continuous Dynamical Systems - B, 2022 doi: 10.3934/dcdsb.2021318 [6] Xiaoyue Li, Xuerong Mao. Population dynamical behavior of non-autonomous Lotka-Volterra competitive system with random perturbation. Discrete and Continuous Dynamical Systems, 2009, 24 (2) : 523-545. doi: 10.3934/dcds.2009.24.523 [7] Bixiang Wang. Random attractors for non-autonomous stochastic wave equations with multiplicative noise. Discrete and Continuous Dynamical Systems, 2014, 34 (1) : 269-300. doi: 10.3934/dcds.2014.34.269 [8] Bixiang Wang. Multivalued non-autonomous random dynamical systems for wave equations without uniqueness. Discrete and Continuous Dynamical Systems - B, 2017, 22 (5) : 2011-2051. doi: 10.3934/dcdsb.2017119 [9] Ling Xu, Jianhua Huang, Qiaozhen Ma. Upper semicontinuity of random attractors for the stochastic non-autonomous suspension bridge equation with memory. Discrete and Continuous Dynamical Systems - B, 2019, 24 (11) : 5959-5979. doi: 10.3934/dcdsb.2019115 [10] Hong Lu, Jiangang Qi, Bixiang Wang, Mingji Zhang. Random attractors for non-autonomous fractional stochastic parabolic equations on unbounded domains. Discrete and Continuous Dynamical Systems, 2019, 39 (2) : 683-706. doi: 10.3934/dcds.2019028 [11] Abiti Adili, Bixiang Wang. Random attractors for stochastic FitzHugh-Nagumo systems driven by deterministic non-autonomous forcing. Discrete and Continuous Dynamical Systems - B, 2013, 18 (3) : 643-666. doi: 10.3934/dcdsb.2013.18.643 [12] Yangrong Li, Shuang Yang, Qiangheng Zhang. Odd random attractors for stochastic non-autonomous Kuramoto-Sivashinsky equations without dissipation. Electronic Research Archive, 2020, 28 (4) : 1529-1544. doi: 10.3934/era.2020080 [13] Dingshi Li, Xuemin Wang. Regular random attractors for non-autonomous stochastic reaction-diffusion equations on thin domains. Electronic Research Archive, 2021, 29 (2) : 1969-1990. doi: 10.3934/era.2020100 [14] Abiti Adili, Bixiang Wang. Random attractors for non-autonomous stochastic FitzHugh-Nagumo systems with multiplicative noise. Conference Publications, 2013, 2013 (special) : 1-10. doi: 10.3934/proc.2013.2013.1 [15] Zhaojuan Wang, Shengfan Zhou. Existence and upper semicontinuity of random attractors for non-autonomous stochastic strongly damped wave equation with multiplicative noise. Discrete and Continuous Dynamical Systems, 2017, 37 (5) : 2787-2812. 
doi: 10.3934/dcds.2017120 [16] Shu Wang, Mengmeng Si, Rong Yang. Random attractors for non-autonomous stochastic Brinkman-Forchheimer equations on unbounded domains. Communications on Pure and Applied Analysis, 2022, 21 (5) : 1621-1636. doi: 10.3934/cpaa.2022034 [17] Junyi Tu, Yuncheng You. Random attractor of stochastic Brusselator system with multiplicative noise. Discrete and Continuous Dynamical Systems, 2016, 36 (5) : 2757-2779. doi: 10.3934/dcds.2016.36.2757 [18] Xinyuan Liao, Caidi Zhao, Shengfan Zhou. Compact uniform attractors for dissipative non-autonomous lattice dynamical systems. Communications on Pure and Applied Analysis, 2007, 6 (4) : 1087-1111. doi: 10.3934/cpaa.2007.6.1087 [19] Tomás Caraballo, David Cheban. On the structure of the global attractor for non-autonomous dynamical systems with weak convergence. Communications on Pure and Applied Analysis, 2012, 11 (2) : 809-828. doi: 10.3934/cpaa.2012.11.809 [20] Alexandre N. Carvalho, José A. Langa, James C. Robinson. Non-autonomous dynamical systems. Discrete and Continuous Dynamical Systems - B, 2015, 20 (3) : 703-747. doi: 10.3934/dcdsb.2015.20.703
2020 Impact Factor: 1.916
|
# Debate:Are moral decisions best made by rational thinking?
This is a Debate page.
This debate was created by Radioactive afikomen.
This debate was inspired by a (now removed) line from Essay:Why religion is bullshit. It asserted that "Moral decisions are best made by rational thinking. For example, no rational person would begin a war, because it is obvious that it is deleterious for society and economy."
One minor correction to the above: there is technically no such thing as a "moral decision". Morals, strictly speaking, are what you believe in, nothing more. What the quote above meant is ethics, which is how you act on those beliefs, and when those beliefs start to affect others. It is possible for someone to be a "moral person", but still commit reprehensible (unethical) acts. -- Radioactive Misanthrope 07:32, 1 February 2008 (EST)
## From the essay's talk page
Actually, before I do, do you really want to go with this sentence: Moral decisions are best made by rational thinking. I think it may go a bit further than I would want to go. For instance there could be circumstances where starting wars, stealing or killing people might be the most rational choice - but not necessarily the most moral one.--Bobbing up 06:16, 27 January 2008 (EST)
Well, but if for example one steals things, then he will have no authority to forbid others to steal from him. So while he will have more possessions in the short term, this is not true in the long term. Immoral decisions that appear rational usually are irrational if you think through all the consequences. Can you provide a counterexample? --Rational Thinker 06:30, 27 January 2008 (EST)
While it might be reasonable to conclude that if one steals things one could not forbid others from stealing from you, this is clearly not the case. Mafia bosses, for example, whose businesses are based on criminal activities, have no problem forbidding others from stealing from them. Another question is - how long term? It is easy to imagine wealthy families whose fortunes were originally obtained by morally dubious means and who have yet to pay a price for selling slaves or weapons or whatever. They were rational but immoral (at least in today's terms) but the long term has yet to punish them.--Bobbing up 08:19, 27 January 2008 (EST)
Mafiosi have a track record of killing each other for various reasons, so I wouldn't count that as a particularly rational lifestyle. Of course you might just not be caught stealing, but is it rational to live with the constant risk? Then maybe we have somewhat diverging notions of rationality. I mean, you can also survive Russian Roulette, but that doesn't make it a particularly rational pastime, does it? --Rational Thinker 09:13, 27 January 2008 (EST)
While rational decisions may be made on the level of risk involved, I don't think moral ones should be. Logically, if I had literally zero risk of getting caught robbing a bank then it would be rational to do it. But it would clearly be wrong morally. Please note that I in no way make this point in an effort to support religious morality, but only to point out that suggesting that morality is based on rationality is a dubious philosophy.--Bobbing up 11:40, 27 January 2008 (EST)
Well, if you really want, edit that out, it's not the main point anyway ... I'm still not convinced however: your last example is definitely not a real-life example. And clearly, if you can rob a bank safely, then someone else should be able to do that, too. And if everyone starts robbing banks, the economy breaks down and money becomes worthless, so the robbery was not such a rational thing to do... But as I said, it's not the main point, so you can edit it... though I wonder, if you say that morality is based neither on religion nor on rationality, then on what? --Rational Thinker 12:11, 27 January 2008 (EST)
That is a very good question which we have debated here (somewhere) before. It seems to me that it comes from some sort of nebulous social development. Consider that this site includes deists, theists, agnostics, and atheists. But we all feel that, for example, slavery is wrong. A few hundred years ago we would probably not have thought this as a group. How is it that people of such different philosophical backgrounds would reach such similar conclusions? It would seem remarkably coincidental if we had independently and simultaneously reached this moral conclusion based on our theism, agnosticism, atheism or whatever. The logical conclusion is that there is something which influences us independently of these views; and that we then back-reason this conclusion into our existing philosophies. What is it? The only thing that occurs to me is the ongoing moral evolution of society. I must say that I rather don't like even this interpretation very much as it seems to suggest that "evolution" means "better", which isn't necessarily the case either. But it's the best I've got. But OK with that out of the way I'll join in this one. :-) --Bobbing up 12:51, 27 January 2008 (EST)
## Current debate below here
There already is a moral philosophy based off of pure rational thought. It's called utilitarianism. To quote one textbook, and to be brutal and unfairly reduce an entire moral philosophy into a couple sentences, "the theory's most fundamental idea is that in order to determine whether an action would be right, we should look at what will happen as a result of doing it." As to what the goal of these results should be, it should be "the greatest good for the greatest number [of people]." Utilitarianism generally leads us to the same conclusions that a "rational" approach would, as Rational Thinker would probably describe it.
However, utilitarianism has largely fallen out of favor among philosophers because of some not insignificant issues. One example of these issues is that in utilitarianism, if suspending a civil right leads to greater happiness, then yes, that right can be suspended. A lengthier example would be if a murder is committed in a small town. An innocent man stands accused, but the murder, and subsequent delay of the execution of the accused, has caused riots to break out, causing even more needless deaths, trauma, and gross property damage. Utilitarianism holds that, if doing so would stop the riots then, yes, that innocent man not only can, but should be executed. Furthermore, if you also lived in this same town, utilitarianism would obligate you to bear false witness against the accused in order to speed the execution along and stop the riots faster. To summarize this long-winded example, utilitarianism does not view "innocence" and "guilt" as intrinsically important. For that matter, neither does it hold a human being as intrinsically valuable. And finally, utilitarianism considers the motive behind actions to be irrelevant.
See what being "purely rational" in your morality gets you? -- Radioactive Misanthrope 08:30, 1 February 2008 (EST)
The "utilitarian" solution in your last example is not what I would consider rational. It's just a stop-gap "solution" to a problem caused by the irrational behavior of the rioting people. The real rational solution would be to get the people to act rationally, not to appease them temporarily by executing an innocent. --Rational Thinker 09:16, 1 February 2008 (EST)
Follow the parameters of the hypothetical situation, RT. The people are already not acting rationally. Here, I'll express it in numbers for you:
Riots: 15 people dead; trauma; $250,000 property damages VS Executing innocent man: only 1 person dead; grief felt by his family/friends; collective guilt when town realizes they executed an innocent man (assuming they ever do).
As the oh-so-proud rational thinker here, which would you choose? -- Radioactive Misanthrope 09:36, 1 February 2008 (EST)
Short term balance only. If you don't educate people to think and act rationally, they'll do it again and again and again. So you get Executing innocent man: only 1 person dead ... only 2 persons dead ... only N persons dead, $N \to \infty$. --Rational Thinker 10:07, 1 February 2008 (EST)
Fine, I'll let go of the utilitarianism-failed argument. Moving on...
Eventually they will stop rioting? That's very... laissez faire of you. And I can use the same argument against yours: Eventually they will stop: where t=time and n=persons dead, $\lim_{t \to \infty}n=\infty$ "Eventually" suffers the same problem. -- Radioactive Misanthrope 17:36, 1 February 2008 (EST)
What is it that makes any of that rational or not-rational? --AKjeldsenGodspeed! 10:26, 1 February 2008 (EST)
Kill an innocent man, and the killing will have no end. Refuse to kill an innocent man, and eventually people must realize that all their rioting bought them nothing but more death and destruction, and thus they will stop. The former: irrational. The latter: rational. Simple as that. --Rational Thinker 10:44, 1 February 2008 (EST)
How do you propose to make people rational, RT? -- Radioactive Misanthrope 11:51, 1 February 2008 (EST)
Why, you just kill all the irrashional peoplez! --Rashunell tinkar 13:25, 1 February 2008 (EST)
RT, please don't make asinine comments right now. I'm trying to discuss something, and you're basically just trolling. -- Radioactive Misanthrope 17:08, 1 February 2008 (EST)
ZOMG a troll on teh intertubes! killkillkillkillkill!!!
You make them rational by being rational yourself. They will see that their rioting and killing does no good and stop (unless they're irreparably stupid, in which case there is nothing to do anyway). If on the other hand you bear false witness and deceive them into thinking that somehow "justice" was done by killing an innocent person, they will never realise that something went wrong and will behave the same way every time. --Rational Thinker 17:39, 1 February 2008 (EST)
Do you have any idea how long it takes for an angry mob/riot to calm down? -- Radioactive Misanthrope 17:43, 1 February 2008 (EST)
Throw some cold water on them. "Execute" a dummy. Call the army and have those people locked into jail. In reality there is always a way out. You're making up a totally unrealistic scenario in an attempt to find a situation in which the "moral" decision (not killing an innocent) is not "rational" because there is this totally unstoppable "mob" which is killing ... whom? why? Doesn't make sense... --Rational Thinker 17:58, 1 February 2008 (EST)
I think a significant question is what exactly rationality is (and I note with some irony that RationalWiki does not have an article on it), or whether there exists some kind of objectively rational behaviour, as some people believe (and interestingly enough, they usually identify it as what they in particular consider to be rational behaviour)? There are some serious philosophical issues hidden there, which the proponents of rationalism usually seem rather oblivious to, possibly because they tend to be of a more... shall we say, natural scientific mindset. With that objection in mind, I really find it difficult to see why "rational thinking" should be neither better nor worse than e.g. religion as a basis for moral decisions. --AKjeldsenGodspeed! 09:18, 1 February 2008 (EST)
Actually (and without furthering the debate very much) we do have a stub on Rationalism.--Bobbing up 09:38, 1 February 2008 (EST)
Rationality is the property of being rational. In other words, it is the property of being like me. --Rashunell tinkar 13:27, 1 February 2008 (EST)
Whee, back to Philosophy of Ethics college class... Simply put, Utilitarianism is "the end justifies the means". As long as the means is less 'bad' than the end, do it. This really irks civil libertarians. Another example is "should you stay in a job that you don't like because it makes your co-workers happy?"
You also have the question of 'how do you measure utility?' The sum of the happiness of the society? This gets awkward if you have a community of... let's say cannibalistic rapists (bear with me for a bit) that rapes, kills, and eats anyone who visits from out of town. This really makes them happy. Some readings of utilitarianism say go ahead, that was the right thing to do.
The 'simple' answer is that if everyone behaved perfectly rationally, then moral decisions could be made rationally too. Give Kant a good read.
Until then, find a karaoke bar and enjoy [1]. Remember, Plato had it right... the best place to talk about philosophy and women are with a bunch of guys with beer and/or wine in hand. Karaoke will have to do. --Shagie 18:32, 1 February 2008 (EST)
Philosophy Ethics Utilitariapastafarism bla bla bla Kant Plato Karaoke crap and shit. I AM SOOO K00L!!! — Unsigned, by: 87.5.17.75 / talk / contribs
## Fresh start; ignore the crappy hypothetical made above
Rational Thinker, could you define for me what you mean when you say "rational"? If you would supply your definition of it, we can proceed from that common ground. -- Radioactive Misanthrope 19:13, 1 February 2008 (EST)
Based on logical thought and not on authority. --Rational Thinker 19:20, 1 February 2008 (EST)
Right. and "logical" means based on rational thought. duh. — Unsigned, by: 87.5.17.75 / talk / contribs
While most philosophies can (sort of) be condensed to a single sentence, like what you just said, they can't be fully appreciated without a more elaborate description. Could you describe the larger framework this logical thought operates on (i.e. what assumptions does it have about the world, what things does it hold as intrinsically valuable, and how would it go about solving common ethical dilemmas)? -- Radioactive Misanthrope 19:33, 1 February 2008 (EST)
All assumptions can be derived by scientific observation. For example, we observe a life instinct in humans, thus life must be preserved. "Common ethical dilemmas"? As in "weird scenarios that never happen in real life"? --Rational Thinker 19:42, 1 February 2008 (EST)
Oh, come on. Common ethical dilemmas, you know, stuff that happens in real life, every frickin' day. For example, "Am I morally obligated to boycott businesses that use sweatshops?" or "Is it ethical to buy myself a new car, even though I don't really need it, instead of spending the money on charity to Darfur?" or "can I shoot a burglar who breaks into my house, with a strong possibility of killing him by doing so, even though he is probably only going to steal my valuables?" or maybe "Is it immoral to not provide health care to those who can't afford it?" Stuff like that. -- Radioactive Misanthrope 19:56, 1 February 2008 (EST)
Also, to directly address something you said above, "we observe a life instinct in humans, thus life must be preserved", animals—including many insects—also possess a strong survival instinct. Should we preserve their lives too? What happens when we have to choose between helping humans and helping animals—how would you justify treating animals as being less valuable than humans? Using what standards? And how would those standards not be arbitrary? -- Radioactive Misanthrope 20:09, 1 February 2008 (EST)
May I butt in at this point to note that something never happening in everyday life is no reason not to discuss it; Rational Thinker's a great fan of the idea that if a theory doesn't work in every case, it should be discarded. Also, thought experiments. --מְתֻרְגְּמָן שְׁלֹום
I liek cars. --Rashunell tinkar 20:17, 1 February 2008 (EST)
So, i liek mudkipz. Refute that! --מְתֻרְגְּמָן שְׁלֹום
## Opening up (albeit with tweezers) a 2-month-old debate
If a decision cannot be rationalized using any kind of logic (except the CP sysops') then it cannot be justified, and so cannot be judged under normal standards. For example, the decision to drop a nuclear bomb on Hiroshima can be rationalized, but much of the logic relied upon to justify it is opinion instead of fact, thus rendering the decision neither good nor bad, but debatable. So while rational thinking can lead to good decisions, rational thinking is really just thinking. Rational logic, however, is not rational thinking, and it can lead to good decisions. Good decisions are always based on good morals and good initiatives, so if both of these things are rationally proven to be good then the decision is automatically a good one. Lyra Belaqua Communique Delegate scorecard 18:12, 25 March 2008 (EDT)
I think (!) "rational thinking" meant, by implication, "rational logic". Anywho, to address your main point(s)... every moral philosophy ever described has used (or at least attempted to use) its own internal, rational logic (yes, even the "feelings"-based ones—please, no sidetracking into definitions). The real challenge with rational logic has been fitting that internal logic into the "external logic" of the real world.
All moral systems suppose that certain things are more valuable than others. How do we logically, rationally arrive at these assigned values, in a way that isn't arbitrary?
P.S. Thanks for reopening this debate, Lyra. I miss these sorts of discussions. -- Radioactive Misanthrope 18:37, 25 March 2008 (EDT)
The moral values stem from primal instinct and culture. For example, the Aztecs valued (for the purpose of this discussion) pleasing the gods over human life. The way that they arrived at this system of morals is very logical and not at all arbitrary. Culture is influenced by setting, and we'd all currently have the morals of the Aztecs if we had all seen evidence that there were real gods that must be pleased. The Aztecs during their period of cultural development must have been influenced by some sort of perceived evidence. The evidence itself could be arbitrary, I agree, but not the method. So moral decisions are entirely based on rational thinking that is set by your system of morals.
Rational thinking is defined as thinking that can be rationalized by your set of morals. This would reinforce my argument that rational thinking cannot yield basically moral or immoral decisions. "Moral" is in the eye of the beholder.
P.S. Thanks. Do you have to explain the afikomen thing to people or do people get it that aren't Jewish? Lyra Belaqua Communique Delegate scorecard 19:14, 25 March 2008 (EDT)
## Moot point
Ok, it doesn't look like any conclusions were reached here, unless you've become convinced by all the subtle hints many of you seem to have left that this is probably a moot point, so I might as well try to conclude this. Let us pretend that we can consistently distinguish a "moral decision" from a "decision that does not involve morality" (to be clear, I'm not sure I could consistently make such a distinction, but I'm not ruling out the possibility). Let us then arbitrarily decide, for the sake of argument, that "moral decisions" should not be made using rational thought. What kind of thought should then be used? What other kind of thought is there? Random thought? Are humans capable of genuinely random thought? Have you ever tried to think of random things? Say random things? I have, and it never works. After taking a second look at whatever I said/thought, I always conclude that there was a pattern to those things. I don't believe that rational thought can ever really be "turned off". It can be deceived by false data, bad assumptions, incomplete awareness, etc. So can any computer. If the postulates are not consistent with reality, no analysis, however logical and extensive, can make any reliable predictions about reality. That's how religion, sun god worship, pseudoscience, and related phenomena come into existence. The people considering these things have incomplete understanding, generally, in these examples, a lack of understanding of the scientific method of testing and falsifying predictions systematically using experimental data. They also likely lack awareness of the personal gain sought by proponents of such things. If they understood how to systematically analyze these claims and had valid information to begin with, they would consistently reject them. As it is, most people don't properly understand these tools, so they have to improvise with the best system of rational thought they can manage. Really, if you back up and look at the really big picture here, that's all any of us do. None of us have perfect understanding of the universe, so all of us will engage in some amount of rational thinking based on incorrect or incomplete data, knowledge, or postulates. Call them what you will. So, in my attempt to render this a moot point, I'll ask again: what can be used to make moral decisions other than rational thought?
P.S. (since "P.S."s seem to be the popular thing recently in this debate): RA certainly shouldn't have to explain "afikomen" to any of us wiki-savvy folk who can simply type in "afikomen" on Wikipedia and read all about it (or just click on my handy wiki-link). BTW, this aspiring physicist, with his modest understanding of radiation, does not recommend consuming a radioactive afikomen. It would most definitely ruin your Passover. Good night. OneForLogic 01:19, 26 July 2008 (EDT)
## Bringing a years-old argument forward...
In "Moot Point" above, the important issue of people's inability to make completely rational decisions is brought up. I fully agree, but feel that this issue is far more complicated. While I wouldn't agree that the argument is futile (primarily because there is something to be gained even in the act of discussing it), it is certainly nearly impossible to nail down as a universal truth, appropriate in all times and all places.
The point is made that rational thought cannot, generally, be turned off. This seems generally to be true: every decision we make is made for a reason; that reason comprises many (often a GREAT many) minor and major thought processes that lead us to making that decision. This is, in a nutshell, the process of rationalization. In terms of application to actions, clearly some things are hard-wired and require (or even allow) no rational thought; retracting one's hand from a hot object, for instance, is an instinctual response; no rationalization necessary or allowed. However, any action or decision that doesn't fall under the category of raw instinct is arrived at through rationalization.
This is not to say that the rationale used is logical. Perhaps it's all semantics, but it strikes me that this is actually what this discussion is dealing with. As an example, a person in an abusive relationship often rationalizes staying in that relationship. They may attempt to apply logic (as best they are able to) to the situation, but this logic is imperfect; emotion is involved. Thus, their rationalization of the topic is tainted, and the conclusion often fails to be purely logical, potentially leading to a decision which is harmful, often to multiple parties.
To make my argument, I must stipulate the following:
1. Human Societal Structure has at least some of its basis in human emotional reactions to stimuli.
2. The morals of actions are often interpreted differently with respect to different human societies at different and/or similar points in history.
3. Decisions which are perfect in their logic-based rationality are unemotional decisions.
The first two premises are not complicated, and are quite obvious to see without a lengthy proof; if necessary, I (or any number of people on RW) could easily supply both proofs, as the nature of the statements would require a single consistency in order for them to have perfect truth values. The third premise is quite a bit more complicated -- it's a long-standing topic of philosophical discussion. Considering this community, I imagine it is something that the vast majority of us believe very strongly. (If there is a potential weak point in this argument this would be it; tackle it as you see fit.)
In considering the first two premises, it seems that logical rationality cannot make universally moral decisions. This is because assuming premise 1 (human society is at least partially emotionally rooted) and premise 2 (morals are interpreted differently in different societies) are true, there is a level at which emotion is inextricably tied to morals. If this is in fact the case, then premise 3 (decisions which have perfect logic-based rationality are unemotional) suggests that decisions which are perfectly logical in terms of their rationality cannot possibly be applied to morals. --Silent Tadpolesexes your brain 19:35, 6 June 2010 (UTC)
## Bazer63
Yes, otherwise you get stupid ideals of 'morality' like 'no gay marriage.' Bazer63 (talk) 09:06, 12 July 2014 (UTC)
## Moral decisions require free will
Moral decisions can only be made in this way since only when we use reason are we truly free. Furthermore, when we think rationally, instincts, emotions and other parts of our self play their part. However, when we do not use rational thinking, a part of our self, namely our rational self-conscious brain, is excluded from the decision. This causes our consciousness to suffer unnecessarily. How can we call a decision which leads to unnecessary suffering moral?
You can't selectively activate some parts of the brain at the exclusion of others at will. A healthy adult human will be using their rational and emotional mind together at all times (whilst conscious). How can we define necessary suffering? Where there is a choice between two forms of suffering, which form is by definition rational? There are priorities and judgment calls involved in moralising - ones which cannot be streamlined into a simple 'X is better than Y' format. Hypothetical: An Aid worker in an impoverished part of the world is lonely. Sex work is commonplace there. Is paying for sex (and alleviating someone's hunger - and possibly their child's hunger) more or less moral than allowing them to avoid the indignities of sex work by staying home alone? Both options necessitate suffering. One has to decide which form of suffering is the most emotionally uncomfortable, or the easiest to rationalise/justify. ~ Guest
kissDE diffExpressedVariants Error in nls
ovidiu, 4.0 years ago
Dear Community,
I am trying to use kissDE diffExpressedVariants on a count table from paired-end RNA-Seq data. The input file is in kissplice type0a format. I have 4 samples with three replicates each. When I try to run diffExpressedVariants I get the following error. Any suggestions are welcome! Thank you!
Error in nls(modelNB, data = event.mean.variance.df, start = list(theta = 100)): step factor 0.000488281 reduced below 'minFactor' of 0.000976562\n An error occured, unable to fit models on data.
  events.names      events.length counts1 counts2 counts3 counts4 counts5 counts6 counts7 counts8 counts9 counts10 counts11 counts12
1 bcc_99967|Cycle_0            83      50      55      35       0       0       0      77      66      75        0        0        0
2 bcc_99967|Cycle_0            83       0       0       0      82      88      88       0       0       0       45       33       76
3 bcc_99898|Cycle_0            83       0       0       0      52      62      59       0       0       0       25       37       39
4 bcc_99898|Cycle_0            83      52      37      67       0       0       0      49      76      49        0        0        0
5 bcc_99803|Cycle_0            83       0       0       0     113     121     153       0       0       0       81      122      116
6 bcc_99803|Cycle_0            83     110      70      81       0       0       0      62      85      82        0        0        0
### UPDATE
What I found out is that the error is caused by the starting value (theta = 100) passed when calling the nls function. Letting nls do its own cheap guessing of the starting values solves the problem.
I'm not sure how this changes the results at the end.
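For readers hitting the same error, here is a minimal sketch of the workaround. The mean-variance model and the toy data below are only stand-ins for kissDE's internal fit (its actual formula and its event.mean.variance.df are not reproduced here); the point is simply the presence or absence of the start argument to nls.

```r
# Illustrative negative-binomial-style mean-variance relation, NOT kissDE's internals
set.seed(1)
m   <- seq(10, 200, length.out = 50)
toy <- data.frame(mean = m, variance = m + m^2 / 20 + rnorm(50, sd = 5))

# As shipped: an explicit starting value for theta. On the poster's data this
# is where nls() aborted with the 'minFactor' error; on clean toy data it may
# well converge, which is why the bug only shows up on some datasets.
fit_fixed <- try(nls(variance ~ mean + mean^2 / theta, data = toy,
                     start = list(theta = 100)))

# The workaround: omit 'start', letting nls() fall back on its documented
# "cheap guess" for the initial value (it warns and initializes theta to 1).
fit_guess <- nls(variance ~ mean + mean^2 / theta, data = toy)
coef(fit_guess)  # theta close to the simulated value of 20
```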
Dear Vincent,
Thank you for your response and sorry for the delay. Regarding the conditions, yes, I knew that kissDE compares only two conditions. I modified my code so that I compare all conditions pairwise. Thank you for the hint!
Best, Ovidiu
3.9 years ago
Dear Ovidiu, We double-checked on several datasets, and using the cheap guessing of the nls function indeed works well, as it does not affect the final results. We will likely update the code accordingly in the next release of kissDE. Thanks for pointing this out. In the meantime, other users confronted with a similar bug can safely use Ovidiu's fix.
On the other hand, I notice that your differential analysis concerns 4 conditions.
KissDE has only been tested for comparing 2 conditions. When comparing 4 conditions, KissDE should output variants whose frequency is significantly different in at least one condition compared to the three others. However, we never tested it explicitly. In most of the datasets we used, even when more than 2 conditions were available we were focusing on the difference between 2 conditions (or two groups of conditions). If you are interested in the difference between only 2 conditions out of the 4 you have, you can use the same input data and launch kissDE using the following conditions vector (as explained in Section 2.1.1 of the manual):
myConditions <- c(rep("condition_1", 3), rep("*", 3), rep("*", 3), rep("condition_4", 3))
This will compare condition 1 with condition 4.
Best,
Vincent
# Mean Value Theorem and Inequality.
Using the mean value theorem, prove the inequality below.
$$\frac{1}{2\sqrt{x}} (x-1)<\sqrt{x}-1<\frac{1}{2}(x-1)$$ for $x > 1$.
I don't understand how these inequalities are related. Am I supposed to work out the first one and then the second and so on? I would also be really grateful if anyone had the time to give some insight into what this problem is asking of me.
I really wish someone could give a very simple solution.
Apply the mean value theorem to the function $f(t) = \sqrt{t}$ on the interval $[1,x]$ to deduce
$$\sqrt{x} - 1 = \frac{1}{2\sqrt{c}}(x - 1)$$
for some $c \in (1,x)$. Use the fact that
$$\frac{1}{2\sqrt{x}} < \frac{1}{2\sqrt{c}} < \frac{1}{2}$$
to conclude.
• How did you know $$f(t)=\sqrt{t}$$? Can you show me a different approach too, if possible? – Sherlock Homies May 30 '15 at 21:27
• @SherlockHomies note $\sqrt{x} - 1 = \sqrt{x} - \sqrt{1} = f(x) - f(1)$, where $f(t) = \sqrt{t}$. If you want to prove the inequalities without the use of MVT, then rationalize the numerator to get $$\sqrt{x} - 1 = \frac{x-1}{\sqrt{x} + 1}$$ and use the fact that for $x > 1$, $$\frac{1}{2\sqrt{x}} < \frac{1}{\sqrt{x} + 1} < \frac{1}{2}.$$ – kobe May 30 '15 at 21:30
• Wait a sec, how would you piece it all together to get to the end of the problem, like in the question above? I mean the last step. Using the mean value theorem again at the end? – Sherlock Homies May 30 '15 at 21:49
• @SherlockHomies multiply the inequalities $\frac{1}{2\sqrt{x}} < \frac{1}{2\sqrt{c}} < \frac{1}{2}$ by $x - 1$ and use the fact $\sqrt{x} - 1 = \frac{1}{2\sqrt{c}}(x - 1)$ to get the desired inequalities. – kobe May 30 '15 at 21:52
By the MVT, there is $c \in ]1,x[$ such that $$\sqrt{x} - 1 = \frac{1}{2\sqrt{c}}(x-1), \qquad \qquad \left[f(b)-f(a) = f'(c)(b-a)\right]$$ so you use that: $$c < x \implies \sqrt{c} < \sqrt{x} \implies 2 \sqrt{c} < 2\sqrt{x} \implies \frac{1}{2\sqrt{x}}<\frac{1}{2\sqrt{c}}$$ to get one side, and use that: $$c > 1 \implies \sqrt{c} > 1 \implies 2\sqrt{c} > 2 \implies \frac{1}{2\sqrt{c}}< \frac{1}{2}$$ to get the other.
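For anyone who wants a quick sanity check of the double inequality alongside the proofs above, here is a short numeric spot-check in R at a few arbitrary values of $x > 1$:

```r
x      <- c(1.5, 2, 4, 100)            # arbitrary test points with x > 1
lower  <- (x - 1) / (2 * sqrt(x))
middle <- sqrt(x) - 1
upper  <- (x - 1) / 2
all(lower < middle & middle < upper)   # TRUE at every test point
```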
# The following observations have been arranged in ascending order. If the median of the data is 63, find the value of x: 29, 32, 48, 50, x, x + 2, 72, 78, 84, 95
It can be observed that the total number of observations in the given data is 10 (even number).
Therefore the median of this data will be the mean of the $$\left(\frac{10}{2}\right)^{th}$$, i.e. 5th, and $$\left(\frac{10}{2} + 1\right)^{th}$$, i.e. 6th, observations.
$$\therefore$$ median of data
= $$\frac{\text{5th observation} + \text{6th observation}}{2}$$
$$\Rightarrow$$ $$63 = \frac{x + x + 2}{2}$$
$$\Rightarrow$$ $$63 = \frac{2x + 2}{2}$$
$$\Rightarrow$$ $$63 = x + 1$$
$$\Rightarrow$$ $$x = 62$$
Hence, the value of x is 62.
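As a quick numerical check (a one-line sketch in R; the vector below just substitutes x = 62 into the data):

```r
obs <- c(29, 32, 48, 50, 62, 64, 72, 78, 84, 95)  # x = 62, x + 2 = 64
median(obs)  # 63, matching the given median
```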
# (4)/(1)*(1)/(16) - multiplication of fractions
## (4)/(1)*(1)/(16) - step by step solution for the given fractions. Multiplication of fractions, full explanation.
### Solution for the given fractions
$\frac{4}{1}*\frac{1}{16}=?$
$\frac{(1*4)}{(1*16)}=\frac{4}{16}$
$\frac{4}{16}=\frac{1}{4}$
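The steps can also be verified numerically, for example in R (MASS::fractions merely pretty-prints the decimal result as a fraction):

```r
(4 / 1) * (1 / 16)                    # 0.25
MASS::fractions((4 / 1) * (1 / 16))   # 1/4
```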
# What is F.D?
## Recommended Posts
what is f vector dot ds vector
options are
A)Torque
B)Impulse
C)Momentum
D)Work
Edited by Jay Sharma
There is not enough information there to give a better answer than
A scalar.
ok so it means that it is work done since that is the only scalar whose equation is this
any more suggestions??
Do you mean $f \cdot ds$? And, is f a vector field?
If the above is true, $f \cdot ds$ is just the dot product of the vector field f and an infinitesimal length ds.
Force over a distance travelled is work done. Is this what you were looking at?
https://en.wikipedia.org/wiki/Work_%28physics%29
$W=F.d$
what is f vector dot ds vector
Considering what you have revealed about your status elsewhere I guess that you have not yet studied vectors in detail.
So you may be asking what the dot product is all about, as distinct from what this particular dot product is.
If that is the case then the following may help.
Ordinary numbers (scalars) have one single type of product.
5 x 3 is always 15 and that is all there is to it.
Vectors, on the other hand have three distinct types of product.
The simplest is the product of a scalar and a vector eg (aZ ) which results in another vector a times the magnitude of Z but along the same line of action.
This product is called the multiplication of a vector by a scalar (it is not the so called scalar product)
The second also results in another vector and comes from multiplying two vectors together. This product is more complicated as it results in a single new vector that is at right angles to the plane containing the original vectors and of magnitude Z = XY sin(a), where a is the angle between them.
This product is called the vector or cross product.
The product you are asking about is called the scalar or dot product.
The result of the dot product is a scalar (with no direction) of magnitude m = XY cos(a), where a is again the angle between them.
In your original question F and ds are both vectors. Note I have used bold to show vectors, a common convention.
Does this help?
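To make the three products concrete, here is a short illustrative sketch in R (the component values are invented; base R has no built-in cross product, so a small helper is defined by hand):

```r
a <- 2                             # a scalar
X <- c(1, 2, 3); Y <- c(4, 5, 6)   # two vectors

a * X        # multiplication of a vector by a scalar: a vector along X
sum(X * Y)   # scalar (dot) product: a single number, XY cos(a)

cross <- function(u, v)            # vector (cross) product: perpendicular to both
  c(u[2] * v[3] - u[3] * v[2],
    u[3] * v[1] - u[1] * v[3],
    u[1] * v[2] - u[2] * v[1])
cross(X, Y)  # a new vector, of magnitude XY sin(a)
```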
ok so is it f vector dot d vector??
thnx for that, that helped
Well, last time I looked, Force and displacement were both vectors. Remember you get no result with a dot product if the vectors are perpendicular - and that tallies with the fact that no work is done when a force acts orthogonally to the motion.
ok so is the answer among the options work done????
!
Moderator Note
Members should be aware that the OP Jay Sharma has been banned as a sockpuppet of Rajnish Kaushik.
if I'm not wrong then f is force and d is displacement, so according to dimensions work done is correct
I know that he has been banned again (this time in the guise of Maddy) but it might be worth pointing out that my reply to the OP was made when it only said
"what is f vector dot ds vector"
Since it's a dot product, you know it's a scalar quantity.
I'm not certain, but I think that is enough to answer the question once you know what the options are.
BTW, Maddy/ Jay / Rajnish Kaushik
it's bad manners to change your post like that after someone has replied.
This topic is now closed to further replies.
• Special Column on Deployable Space Structures •
### Structural design of deployable parabolic cylindrical truss-mesh antenna reflector
WANG Xiaokai,LI Xianghua,DU Jianghua,LIU Tianming,ZHONG Hantian,Chen Chuanzhi,LI Ming,ZHOU Xin
1. 1 Aerospace System Engineering Shanghai,Shanghai 201109,China
2 Nanjing University of Aeronautics and Astronautics,Nanjing 210016,China
• Online:2023-02-25 Published:2023-01-13
Abstract: For the requirement of space-based large-size cylindrical working surfaces, a deployable parabolic cylindrical truss-mesh antenna reflector constructed from quadrangular prism modules was proposed. The deploying and folding process of the parabolic cylindrical truss-mesh antenna reflector was realized by the driving assembly of the modular truss. The fitting method for a parabolic working surface was extended to the parabolic cylindrical surface. The key design nodes of the supporting deployable struts were obtained by the radial projection of the uniformly distributed nodes in the fitting circle. The front net nodes on the parabolic curve and the rear net nodes on the proposed catenary were obtained by the same method. Based on the front net nodes and rear net nodes, the topological configuration of the parabolic cylindrical cable-nets was constructed. The pretension optimization design was performed using the nonlinear element method. The optimization results indicate that the maximum error ratio of pre-tension in the parabolic direction to its average value was 12.3% and the maximum error ratio of pre-tension in the cylindrical direction to its average value was 7.6%. Finally, a prototype with the size of 12 m × 12 m was developed. Deployment tests and shape measurements were performed, and the parabolic cylindrical surface errors were within 2 mm RMS. The results show that the proposed deployable parabolic cylindrical truss-mesh antenna reflector has excellent deploying performance and a high surface precision.
## Physics (10th Edition)
Recall that $\epsilon_{0}=NBA\omega$, where $\epsilon_{0}$ is the maximum emf, N is the number of turns, B is the magnetic field, A is the area, and $\omega$ is the angular speed. Thus, we find: $A= \frac{\epsilon_{0}}{NB\omega}=\frac{75.0\ \text{V}}{248\times 0.170\ \text{T}\times 79.1\ \text{rad/s}}=0.0225\ \text{m}^{2}$. Length of one side $=\sqrt{A}=\sqrt{0.0225\ \text{m}^{2}}=0.150\ \text{m}$.
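The arithmetic can be replayed directly, for instance with this small R check:

```r
eps0 <- 75.0; N <- 248; B <- 0.170; omega <- 79.1
A <- eps0 / (N * B * omega)   # area from epsilon_0 = N * B * A * omega
c(area = A, side = sqrt(A))   # ~0.0225 m^2 and ~0.150 m
```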
# Exact sequence in a category with zero morphisms
Let $C$ be a category with zero morphisms (equivalently, $\mathsf{Set}_*$-enriched), for example it could be a linear category. Then we can talk about kernels and cokernels of morphisms in $C$. I wonder if the following definition is already established and appears somewhere in the literature:
Definition: If $f : A \to B$ and $g : B \to C$ are morphisms, then $0 \to A \xrightarrow{f} B \xrightarrow{g} C \to 0$ is called exact if $f$ is a kernel of $g$ and $g$ is a cokernel of $f$.
Here, "$0 \to A$" and "$C \to 0$" are just notation; I don't require that a zero object exists.
The definition is well-known when $C$ is abelian (and simplifies a bit in that special case).
In a pointed category, a sequence of morphisms $$1 \longrightarrow K \stackrel{k}{\longrightarrow} A \stackrel{q}{\longrightarrow} Q \longrightarrow 1$$ is a short exact sequence when $k = \ker q$ and $q = \operatorname{coker} k$.
Note that their definition of pointed category includes a zero object (denoted by $1$).
1. ## metric space
Hello
2. Originally Posted by patricia-donnelly
Let (X,d) be a metric space and say A is a subset of X. If x is an accumulation point of A, prove that every r-neighbourhood of x actually contains an infinite number of distinct points of A (where r>0).
Suppose $x$ is an accumulation point of a set $A$. Then there exist distinct $x_n \in A$ such that $x_n \to x$. Let $r>0$; then by the definition of convergence, there is an $N$ with $d(x,x_n) < r$ for all $n\geq N$. Thus, $x_N,x_{N+1},\dots$ are distinct elements which lie in $B(x,r)$, and hence there are infinitely many.
Using this, prove that any finite subset of X is closed.
A set is closed if and only if it contains all its accumulation points. A finite set clearly has no accumulation points (by the result above, an accumulation point would force infinitely many points of the set), so its set of accumulation points is empty, and it trivially contains the empty set.
3. You've been a great help. Thank you very much
# Time and Work 1/2
7. A is thrice as good a workman as B and takes 10 days less to do a piece of work than B takes. B can do the work in:
a. 12 days
b. 15 days
c. 20 days
d. 30 days
8. A can complete a job in 9 days, B in 10 days and C in 15 days. B and C start the work and are forced to leave after 2 days. The time taken to complete the remaining work is:
a. 6 days
b. 9 days
c. 10 days
d. 13 days
9. A completes a work in 6 days, B works $1\displaystyle\frac{1}{2}$ times as fast as A. How many days will it take for A and B together to complete the work?
a. $4\displaystyle\frac{7}{{12}}$
b. $3\displaystyle\frac{5}{{12}}$
c. $4\displaystyle\frac{4}{5}$
d. None of these
10. Twelve men can complete a work in 8 days. Three days after they started the work, 3 more men joined them. In how many days will all of them together complete the remaining work?
a. 2
b. 4
c. 5
d. 6
11. A and B can complete a work in 10 days and 15 days respectively. B starts the work and after 5 days A also joins him. In all, the work would be completed in:
a. 7 days
b. 9 days
c. 11days
d. None of these
12. A can do a piece of work in 80 days. He works at it for 10 days and then B alone finishes the work in 42 days. The two together could complete the work in:
a. 24 days
b. 25 days
c. 30 days
d. 35 days
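As an illustration of the rate-based reasoning behind these problems, question 7 can be checked numerically: if A is k times as good a workman as B, A needs 1/k of B's time, so B's time t satisfies t - t/k = 10. A sketch in R:

```r
k <- 3; gap <- 10           # A is thrice as good and takes 10 days less
t_B <- gap * k / (k - 1)    # from t - t/k = gap
t_B                         # 15 days, i.e. option (b) of question 7
```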
Online ISSN : 1884-7560
Print ISSN : 0367-6110
ISSN-L : 0367-6110
Free access
2016 Volume 51 Issue 1 Pages 19-26
The accident at the Fukushima nuclear power plant in 2011 caused the release of large amounts of tellurium (Te) isotopes, with radio-cesium (Cs) and radio-iodine (I), into the environment. The total amounts of 127mTe and 129mTe released from the nuclear power plant were estimated as 1.1 × 1015 and 3.3 × 1015 Bq, respectively. At the location where the deposition of 129mTe was relatively large, the ratio of the radioactivity of 129mTe to that of 137Cs reportedly reached 1.49 on June 14, 2011. Since 127mTe has a relatively long half-life, it possibly contributed to the internal radiation dose at the early stage after the accident. In this paper, the ratio of the committed effective dose of 127mTe to that of 137Cs after the oral ingestion of rice was estimated by using various reported parameters. The relevant parameters are: 1) the deposition ratios of 127mTe, 129mTe, and 134Cs to 137Cs; 2) the deposition ratio of 127mTe to 129mTe; 3) the transfer factors of Te and Cs; and 4) the effective dose coefficients for 127mTe, 129mTe, 134Cs, and 137Cs. The ratios of the committed effective dose of 127mTe to that of 137Cs were calculated for adults after a single ingestion at the time of the rice harvest. The ratio was 0.45 where the 129mTe/137Cs in the soil was higher and 0.05 where the level of 129mTe/137Cs was average. The ratio of the committed effective dose from 129mTe and 127mTe to that from 137Cs for one year reached 0.55 and 9.03 at the location where the level of 129mTe/137Cs in the soil was higher. These data could indicate that radioactive Te should not be disregarded in reconstructing the internal radiation dose from food for one year after the accident.
# Polytropic index n is given by
This question was previously asked in
ISRO Scientist ME 2010 Paper
1. $$\frac{{\ln \left( {\frac{{{p_2}}}{{{p_1}}}} \right)}}{{\ln \left( {\frac{{{v_1}}}{{{v_2}}}} \right)}}$$
2. $$\frac{{\ln \left( {\frac{{{p_1}}}{{{p_2}}}} \right)}}{{\ln \left( {\frac{{{v_1}}}{{{v_2}}}} \right)}}$$
3. $$\frac{{\ln \left( {\frac{{{v_1}}}{{{v_2}}}} \right)}}{{\ln \left( {\frac{{{p_2}}}{{{p_1}}}} \right)}}$$
4. $$\frac{{\ln \left( {\frac{{{v_2}}}{{{v_1}}}} \right)}}{{\ln \left( {\frac{{{p_2}}}{{{p_1}}}} \right)}}$$
Option 1 : $$\frac{{\ln \left( {\frac{{{p_2}}}{{{p_1}}}} \right)}}{{\ln \left( {\frac{{{v_1}}}{{{v_2}}}} \right)}}$$
## Detailed Solution
Concept:
For a polytropic process, the pressure-volume relation between two states gives:
$${P_1}V_1^n = {P_2}V_2^n$$
$$\Rightarrow {\left( {\frac{{{V_1}}}{{{V_2}}}} \right)^n } = \frac{{{P_2}}}{{{P_1}}}$$
Taking log on both sides
$$n \ln \left( {\frac{{{V_1}}}{{{V_2}}}} \right) = \ln \left( {\frac{{{P_2}}}{{{P_1}}}} \right)$$
$$\Rightarrow n = \frac{{\ln \left( {\frac{{{P_2}}}{{{P_1}}}} \right)}}{{\ln \left( {\frac{{{V_1}}}{{{V_2}}}} \right)}}$$
Work done for a polytropic process: $$W_{1-2}=\frac{P_1V_1 - P_2V_2}{n-1}$$
Heat transfer for a polytropic process: $$Q = W\left(\frac{\gamma - n}{\gamma - 1}\right)$$
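A quick numeric check of the derived formula, with the two states constructed so that n = 1.3 by design:

```r
n_true <- 1.3
p1 <- 100; v1 <- 1.0
v2 <- 0.5
p2 <- p1 * (v1 / v2)^n_true    # second state from p * V^n = constant
log(p2 / p1) / log(v1 / v2)    # recovers 1.3, matching option 1
```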
# Changes between Version 1 and Version 2 of TracInstall
Timestamp:
Jan 14, 2010 3:19:43 PM (4 years ago)
Comment:
--
= Trac Installation Guide for 0.11 =
[[TracGuideToc]]

Trac is a lightweight project management tool that is implemented as a web-based application. Trac is written in the Python programming language and needs a database: [http://sqlite.org/ SQLite], [http://www.postgresql.org/ PostgreSQL], or [http://mysql.com/ MySQL]. For HTML rendering, Trac uses the [http://genshi.edgewall.org Genshi] templating system.

What follows are generic instructions for installing and setting up Trac and its requirements. While you can find instructions for installing Trac on specific systems at [trac:TracInstallPlatforms TracInstallPlatforms] on the main Trac site, please be sure to '''first read through these general instructions''' to get a good understanding of the tasks involved.

See TracUpgrade for instructions on how to upgrade an existing installation.

== Quick Install a Released Version ==
For a quick install, first make sure you have [http://python.org/download Python] (2.3-2.6) and [http://peak.telecommunity.com/DevCenter/EasyInstall#installing-easy-install easy_install]. Then enter (''omitting 'sudo' if not applicable''):
{{{
sudo easy_install Trac
}}}
to install Trac, SQLite, and Genshi.

== Requirements ==
The hardware requirements for running Trac obviously depend on the expected data volume (number of wiki pages, tickets, revisions) and traffic. Very small projects will run fine with a 500MHz processor and 128MB RAM using SQLite. In general, the more RAM, the better. A fast hard disk also helps.

To install Trac, the following software packages must be installed:
 * [http://www.python.org/ Python], version >= 2.3 and < 3.0
   * If using mod_python together with xml-related things, use Python 2.5: expat is namespaced there and no longer causes Apache to crash (see [http://www.dscpl.com.au/wiki/ModPython/Articles/ExpatCausingApacheCrash here] for details).
   * For RPM-based systems you might also need the python-devel and python-xml packages.
 * [wiki:setuptools], version >= 0.6
 * [http://genshi.edgewall.org/wiki/Download Genshi], version >= 0.5 (was version >= 0.4.1 on previous 0.11 release candidates)
 * You also need a database system and the corresponding Python drivers for it. The database can be either SQLite, PostgreSQL, or MySQL.
 * Optional, if some plugins require it: [http://www.clearsilver.net/ ClearSilver]

==== For SQLite ====
If you're using Python 2.5 or 2.6, you already have everything you need. If you're using Python 2.3 or 2.4 and need pysqlite, you can download the Windows installers or the tar.gz archive for building from source from [http://code.google.com/p/pysqlite/downloads/list google code]:
{{{
$ tar xvfz <pysqlite-archive>.tar.gz
$ cd <pysqlite-directory>
$ python setup.py build_static install
}}}
That way, the latest SQLite version will be downloaded and built into the bindings. If you're still using SQLite 2.x, you'll need pysqlite 1.0.x, although this package is not easy to find anymore. For SQLite 3.x, try not to use pysqlite 1.1.x, which has been deprecated in favor of pysqlite 2.x. See additional information in [trac:PySqlite PySqlite].

==== For PostgreSQL ====
 * [http://www.postgresql.org/ PostgreSQL]
 * [http://initd.org/projects/psycopg2 psycopg2]
 * See [trac:wiki:DatabaseBackend#Postgresql DatabaseBackend]

'''Warning''': PostgreSQL 8.3 uses a strict type checking mechanism. To use Trac with the 8.3 version of PostgreSQL, you will need [http://trac.edgewall.org/changeset/6512 trac-0.11] or later.

==== For MySQL ====
 * [http://mysql.com/ MySQL], version 4.1 or later ([http://askmonty.org/wiki/index.php/MariaDB MariaDB] might work as well)
 * [http://sf.net/projects/mysql-python MySQLdb], version 1.2.1 or later

See [trac:MySqlDb MySqlDb] for more detailed information. It is ''very'' important to read that page carefully before creating the database.

== Optional Requirements ==

==== Version Control System ====
'''Please note:''' if using Subversion, Trac must be installed on the '''same machine'''. Remote repositories are currently not supported (although Windows UNC paths such as {{{\\machine_name\path\to\svn}}} do work).

 * [http://subversion.tigris.org/ Subversion], version >= 1.0 (versions recommended: 1.2.4, 1.3.2 or 1.4.2) and the '''''corresponding''''' Python bindings. For troubleshooting, check [trac:TracSubversion TracSubversion].
   * Trac uses the [http://www.swig.org/ SWIG] bindings included in the Subversion distribution, '''not''' [http://pysvn.tigris.org/ PySVN] (which is sometimes confused with the standard SWIG bindings).
   * If Subversion was already installed without the SWIG bindings, on Unix you'll need to re-configure Subversion and make swig-py, make install-swig-py.
   * There are [http://subversion.tigris.org/servlets/ProjectDocumentList?folderID=91 pre-compiled bindings] available for win32.
 * Support for other version control systems is provided via third parties. See [trac:PluginList PluginList] and [trac:VersioningSystemBackend VersioningSystemBackend].

==== Web Server ====
 * A CGI-capable web server (see TracCgi), or
 * a [http://www.fastcgi.com/ FastCGI]-capable web server (see TracFastCgi), or
 * an [http://tomcat.apache.org/connectors-doc/ajp/ajpv13a.html AJP]-capable web server (see [trac:TracOnWindowsIisAjp TracOnWindowsIisAjp]), or
 * [http://httpd.apache.org/ Apache] with [http://code.google.com/p/modwsgi/ mod_wsgi] (see [wiki:TracModWSGI] or http://code.google.com/p/modwsgi/wiki/IntegrationWithTrac)
   * This should work with Apache 1.3, 2.0 or 2.2 and promises to deliver more performance than using mod_python. It is a little less mature than mod_python.
 * [http://httpd.apache.org/ Apache] with [http://www.modpython.org/ mod_python 3.1.3+] (see TracModPython)
   * When installing mod_python, the development versions of Python and Apache are required (actually the libraries and header files).

For those stuck with Apache 1.3, it is also possible to get Trac working with [http://www.modpython.org/ mod_python 2.7] (see [trac:wiki:TracModPython2.7 TracModPython2.7]). This guide hasn't been updated since 0.84, so it may or may not work.

==== Other Python Utilities ====
 * [http://peak.telecommunity.com/DevCenter/setuptools setuptools], version >= 0.5a13 for using plugins (see TracPlugins)
 * [http://docutils.sourceforge.net/ docutils], version >= 0.3.9 for WikiRestructuredText
 * [http://pygments.pocoo.org Pygments] for '''syntax highlighting''', although [http://silvercity.sourceforge.net/ SilverCity] >= 0.9.7 and/or [http://gnu.org/software/enscript/enscript.html GNU Enscript] are also possible. Refer to TracSyntaxColoring for details.
 * [http://pytz.sf.net pytz] to get a complete list of time zones; otherwise Trac will fall back on a shorter list from an internal time zone implementation.

'''Attention''': The various available versions of these dependencies are not necessarily interchangeable, so please pay attention to the version numbers above. If you are having trouble getting Trac to work, please double-check all the dependencies before asking for help on the [trac:MailingList MailingList] or [trac:IrcChannel IrcChannel].

Please refer to the documentation of these packages to find out how they are best installed. In addition, most of the [trac:TracInstallPlatforms platform-specific instructions] also describe the installation of the dependencies. Keep in mind, however, that the information there ''probably concerns older versions of Trac than the one you're installing'' (there are even some pages that are still talking about Trac 0.8!).

== Installing Trac ==
One way to install Trac is using setuptools. With setuptools you can install Trac from the Subversion repository; for example, to install release version 0.11 do:
{{{
easy_install http://svn.edgewall.org/repos/trac/tags/trac-0.11
}}}
But of course the Python-typical setup at the top of the source directory also works:
{{{
$ python ./setup.py install
}}}
''Note: you'll need root permissions or equivalent for this step.''

This will byte-compile the Python source code and install it as an .egg file or folder in the site-packages directory of your Python installation. The .egg will also contain all other resources needed by standard Trac, such as htdocs and templates.

The script will also install the [wiki:TracAdmin trac-admin] command-line tool, used to create and maintain [wiki:TracEnvironment project environments], as well as the [wiki:TracStandalone tracd] standalone server.

==== Advanced Options ====
To install Trac to a custom location, or find out about other advanced installation options, run:
{{{
easy_install --help
}}}
Also see [http://docs.python.org/inst/inst.html Installing Python Modules] for detailed information.

Specifically, you might be interested in:
{{{
easy_install --prefix=/path/to/installdir
}}}
or, if installing Trac on a Mac OS X system:
{{{
easy_install --prefix=/usr/local --install-dir=/Library/Python/2.5/site-packages
}}}
The above will place your tracd and trac-admin commands into /usr/local/bin and will install the Trac libraries and dependencies into /Library/Python/2.5/site-packages, which is Apple's preferred location for third-party Python application installations.

== Creating a Project Environment ==
A new environment is created using [wiki:TracAdmin trac-admin]:
{{{
$ trac-admin /path/to/myproject initenv
}}}
[wiki:TracAdmin trac-admin] will prompt you for the information it needs to create the environment, such as the name of the project, the type and the path to an existing [wiki:TracEnvironment#SourceCodeRepository source code repository], the [wiki:TracEnvironment#DatabaseConnectionStrings database connection string], and so on. If you're not sure what to specify for one of these options, just leave it blank to use the default value. The database connection string in particular will always work as long as you have SQLite installed. Leaving the path to the source code repository empty will disable any functionality related to version control, but you can always add that back when the basic system is running.

Also note that the values you specify here can be changed later by directly editing the [wiki:TracIni] configuration file.

''Note: The user account under which the web server runs will require write permissions to the environment directory and all the files inside. On Linux, with the web server running as user apache and group apache, enter:''
{{{
# chown -R apache.apache /path/to/myproject
}}}

== Running the Standalone Server ==
After having created a Trac environment, you can easily try the web interface by running the standalone server [wiki:TracStandalone tracd]:
{{{
$ tracd --port 8000 /path/to/myproject
}}}
Then, fire up a browser and visit http://localhost:8000/. You should get a simple listing of all environments that tracd knows about. Follow the link to the environment you just created, and you should see Trac in action.

If you only plan on managing a single project with Trac, you can have the standalone server skip the environment list by starting it like this:
{{{
$ tracd -s --port 8000 /path/to/myproject
}}}

== Running Trac on a Web Server ==
Trac provides three options for connecting to a "real" web server: [wiki:TracCgi CGI], [wiki:TracFastCgi FastCGI] and [wiki:TracModPython mod_python]. For decent performance, it is recommended that you use either FastCGI or mod_python.

If you're not afraid of running newer code, you can also try running Trac on [wiki:TracModWSGI mod_wsgi]. This should deliver even better performance than mod_python, but the module isn't as extensively tested as mod_python.

Trac also supports [trac:TracOnWindowsIisAjp AJP], which may be your choice if you want to connect to IIS.

==== Generating the Trac cgi-bin directory ====
In order for Trac to function properly with FastCGI or mod_python, you need to have a trac.cgi file. This is an executable which loads the appropriate Python code. It can be generated using the deploy option of [wiki:TracAdmin trac-admin].

There is, however, a bit of a chicken-and-egg problem. The [wiki:TracAdmin trac-admin] command requires an existing environment to function, but complains if the deploy directory already exists. This is a problem, because environments are often stored in a subdirectory of the deploy directory. The solution is to do something like this:
{{{
mkdir -p /usr/share/trac/projects/my-project
trac-admin /usr/share/trac/projects/my-project initenv
trac-admin /usr/share/trac/projects/my-project deploy /tmp/deploy
mv /tmp/deploy/* /usr/share/trac
}}}

==== Setting up the Plugin Cache ====
Some Python plugins need to be extracted to a cache directory. By default the cache resides in the home directory of the current user. When running Trac on a web server as a dedicated user (which is highly recommended) who has no home directory, this might prevent the plugins from starting. To override the cache location you can set the PYTHON_EGG_CACHE environment variable. Refer to your server documentation for detailed instructions.
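For example, with Apache you might point the cache at a directory writable by the web server user (a minimal sketch: the SetEnv directive requires mod_env, and the path shown is only an illustration, pick one that suits your setup):
{{{
SetEnv PYTHON_EGG_CACHE /var/trac/egg-cache
}}}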
== Configuring Authentication ==
The process of adding, removing, and configuring user accounts for authentication depends on the specific way you run Trac. The basic procedure is described in the [wiki:TracCgi#AddingAuthentication "Adding Authentication"] section on the TracCgi page. To learn how to set up authentication for the frontend you're using, please refer to one of the following pages:
 * TracStandalone if you use the standalone server, tracd.
 * TracModPython if you use the mod_python method.

== Automatic reference to the SVN changesets in Trac tickets ==
You can configure SVN to automatically add a reference to the changeset into the ticket comments whenever files are committed to the repository. The description of the commit needs to contain one of the following formulas:
 * '''Refs #123''' - to reference this changeset in ticket #123
 * '''Fixes #123''' - to reference this changeset and close ticket #123 with the default status ''fixed''

All you have to do is edit the ''post-commit'' hook in your SVN repository and make it execute the ''trac-post-commit-hook'' coming with Trac.

If you are editing the ''post-commit'' hook for the first time, you need to navigate to your SVN repository's hooks subfolder and rename the existing ''post-commit'' template:
{{{
$ cd /path/to/svn/repository/hooks
$ mv post-commit.tmpl post-commit
$ chmod 755 post-commit
}}}
Next, open it in any text editor and add a line with the path to the Trac environment connected with this SVN repository and another line executing the ''trac-post-commit-hook'' script:
{{{
REPOS="$1"
REV="$2"
TRAC_ENV="/path/to/your/trac/project"

/usr/bin/python /usr/local/bin/trac-post-commit-hook -p "$TRAC_ENV" -r "$REV"
}}}
Make sure that ''trac-post-commit-hook'' exists in the above path with execution permissions for the same user which SVN is running as. This script can be found in the contrib subfolder of your Trac distribution, and the latest version can always be downloaded from [source:trunk/contrib/trac-post-commit-hook].

== Platform-specific installations ==
 * See [trac:TracInstallPlatforms TracInstallPlatforms]

== Using Trac ==
Keep in mind that anonymous (not logged in) users can by default access most but not all of the features. You will need to configure authentication and grant additional [wiki:TracPermissions permissions] to authenticated users to see the full set of features.

''Enjoy!''

[trac:TracTeam The Trac Team]

----
See also: [trac:TracInstallPlatforms TracInstallPlatforms], TracGuide, TracCgi, TracFastCgi, TracModPython, [wiki:TracModWSGI], TracUpgrade, TracPermissions
|
# Delay statement and clock generation in Verilog
#### ds18s20
Hi everyone,
We know that FPGAs in general CANNOT generate a clock by themselves; instead, a clock must be fed into them from another source.
So why is there a #xx delay statement in Verilog? Or, to put it another way, why are there so many examples of how we can "generate" a clock with:
Code:
initial q = 1'b0;   // without an initial value, q stays at x and never toggles
always #10 q = ~q;
That never made sense to me. How would the FPGA know what the length of 10 time units is, and where do the time units come from in the first place? Is this some evil compiler thing, or does it exist PURELY SYNTHETICALLY, only within the software environment, for the purpose of testing and simulation?
And even if this is a synthetic concept, how does one tell the compiler what one time unit is, and where is the counting kept for this to work? Is it all transparent to the Quartus II user?
In my mind I see serious conceptual problems when I encounter “assign # xx” statements.
Thanks much
~B
#### Old Nick
It's necessary for a test bench though.
#### aajizattari
Well! This is indeed for test and simulation purposes, so that you can test your design yourself in the software and see if your design meets timing requirements.
This thing is not at all synthesizable.
As for the time units, you will have noticed that they are specified with the `timescale directive to the tool we are using (I use Xilinx ISE).
You are right in that it is purely within software. It would indeed be magic if a black box of hardware knew about seconds and nanoseconds by itself.
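To make this concrete, here is a minimal simulation-only sketch (the module and signal names are my own illustration, not from this thread). The `timescale directive tells the simulator what one time unit and the time precision mean, so the #10 below means 10 ns:
Code:
`timescale 1ns / 1ps   // one time unit = 1 ns, precision = 1 ps

module tb;
  reg clk;

  initial clk = 1'b0;      // start from a known value
  always #10 clk = ~clk;   // toggle every 10 ns: 20 ns period, i.e. a 50 MHz clock

  initial #200 $finish;    // run the simulation for 200 ns, then stop
endmodule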
#### pankajrangaree1
Your question is a nice one, but don't give a clock signal in the circuit; instead, introduce some combinational circuit which has delay.
#### manish12
Regarding delay statements and clock generation in HDL: I think it is there for the designer to get some hint about a signal and its delayed version due to path delay, so that he will at least think in that direction.
#### sp
Verification requires those delays a lot: when creating behavioral models and when writing testbenches.
But when you are an FPGA user, you won't use them, as those delay statements are unsynthesizable.
|
Sanjeev Tiwari
May 15, 2015
All alkalies are bases but all bases are not alkalies. Explain this statement
Swathi Ambati
A base is a substance that reacts with acids and neutralizes them. Some bases are soluble in water and some are insoluble. Soluble bases are called alkalies. For example, copper oxide does not dissolve in water; hence it is called a base but not an alkali.

Definition of an alkali: a base that is soluble in water is called an alkali. In general, hydroxides of alkali metals and alkaline earth metals are considered alkalies. Examples:

KOH(aq) → K⁺(aq) + OH⁻(aq)
Ca(OH)₂(aq) → Ca²⁺(aq) + 2OH⁻(aq)

Therefore, it is said that all alkalies are bases, but all bases are not alkalies.
|
1. ## Bijection
Hi,
Can you please give me an example of a bijective function between
{0, 1}^N and {0, 1, 2, 3}^N?
N is the set of natural numbers.
Thanks for any kind of help
2. Hi
hint: use the base-2 representation:
$00\leftrightarrow 0$
$01\leftrightarrow 1$
$10\leftrightarrow 2$
$11\leftrightarrow 3$
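To spell the hint out (this elaboration is mine, not the original poster's): pair up consecutive terms of a binary sequence and read each pair as a base-2 numeral in $\{0,1,2,3\}$:
$$f\colon \{0,1\}^{\mathbb{N}} \to \{0,1,2,3\}^{\mathbb{N}}, \qquad f\bigl((a_n)_{n\ge 1}\bigr) = \bigl(2a_{2n-1} + a_{2n}\bigr)_{n\ge 1}.$$
This is a bijection because every element of $\{0,1,2,3\}$ corresponds to exactly one pair of bits, so the inverse map simply writes each term back out as two bits.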
|
# The output of the given combination gates represents :
(1) XOR Gate (2) NAND Gate (3) NOR Gate (4) AND Gate
|
# If y=y(x) is the solution of the differential equation,
Question:
If $y=y(x)$ is the solution of the differential equation, $e^{y}\left(\frac{d y}{d x}-1\right)=e^{x}$ such that $y(0)=0$, then $y(1)$ is equal to:
1. (1) $1+\log _{e} 2$
2. (2) $2+\log _{e} 2$
3. (3) $2 e$
4. (4) $\log _{e} 2$
Correct Option: 1
Solution:
Let $e^{y}=t$
$e^{y} \frac{d y}{d x}=\frac{d t}{d x}$
$\therefore \quad \frac{d t}{d x}-t=e^{x}$ $\left[\because e^{y} \frac{d y}{d x}-e^{y}=e^{x}\right]$
I.F. $=e^{\int-1 . d x}=e^{-x}$
$t\left(e^{-x}\right)=\int e^{x} \cdot e^{-x} d x \Rightarrow e^{y-x}=x+c$
Put $x=0, y=0$, then we get $c=1$
$e^{y-x}=x+1$
$y=x+\log _{e}(x+1)$
Put $x=1 \quad \therefore \quad y=1+\log _{e} 2$
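As a quick sanity check (my addition, not part of the original solution): with $y = x + \log_e(x+1)$ we have $e^{y} = e^{x}(x+1)$ and $\frac{dy}{dx} - 1 = \frac{1}{x+1}$, so
$$e^{y}\left(\frac{dy}{dx} - 1\right) = e^{x}(x+1) \cdot \frac{1}{x+1} = e^{x}, \qquad y(0) = 0 + \log_e 1 = 0,$$
confirming that the solution satisfies both the differential equation and the initial condition.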
|
# Tighter bounds with assignment constraints
I have a binary linear programming problem with a maximization objective and a set of assignment constraints like a + b + c + d + e = 1; x + y + z + w + k = 1, and so on. All the variables are binary. I am trying to solve the problem using CPLEX, and the upper bound, i.e., the LP relaxation, is very weak. The LP solution takes the values a=b=c=d=e=1/5 and x=y=z=w=k=1/5, and so on. This makes the convergence of the branch and cut in CPLEX very bad. I was wondering if there is a way to tighten the bounds, or if there is a class of valid inequalities capable of doing so when we have such assignment constraints. Thanks in advance. asked 11 Jul '14 by pondy
2 Answers:
There do not exist cutting planes or valid inequalities which cut into the polytope defined by $\text{conv}(\{ x\in\{0,1\}^n : \sum_{i\in S_j}x_i=1, \forall j\in J \})$, where $(S_j)_{j\in J}$ is a partition of the set $S$ of variables. This is simply due to the polytope having all integral extreme points, wherefore the linear relaxation of the set of integer points is exactly the convex hull of integer points. This means that whenever you have a point satisfying the equalities, it is in the convex hull of integer solutions and cannot be cut off. Instead you should focus on other substructures of your program, or on combinations of the assignment constraints and other substructures. answered 12 Jul '14 by Sune
I can't offer any wisdom about tightening bounds. If you have any problem-specific information that hints at some variables being preferable to others in a solution, it might be worth telling CPLEX to treat those constraints as SOS1 instances. The key (besides being lucky) is to provide useful weights; if you create SOS1 constraints with all variables given equal (default) weights, it will not help. One possibility would be to weight variables according to the ratio of their objective coefficient to the number of constraints they cover. answered 12 Jul '14 by Paul Rubin
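A minimal Python sketch of the weighting heuristic described in this answer (the function name and data layout are my own illustration, not CPLEX API calls; you would pass the resulting weights to your solver's SOS1 interface):

```python
def sos1_weights(obj_coeffs, coverage):
    """Weight each variable by (objective coefficient) / (number of
    constraints covering it), as suggested above.

    obj_coeffs: dict var_name -> objective coefficient
    coverage:   dict var_name -> number of constraints containing the variable
    """
    return {v: obj_coeffs[v] / max(coverage.get(v, 1), 1)
            for v in obj_coeffs}

# Example with the variables from the question (coefficients are made up):
weights = sos1_weights(
    obj_coeffs={"a": 3.0, "b": 5.0, "c": 1.0, "d": 4.0, "e": 2.0},
    coverage={"a": 1, "b": 2, "c": 1, "d": 3, "e": 1},
)
print(weights)  # {'a': 3.0, 'b': 2.5, 'c': 1.0, 'd': 1.33..., 'e': 2.0}
```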
|
# Publications
2018
Wen M, Carr S, Fang S, Kaxiras E, Tadmor EB. Dihedral-angle-corrected registry-dependent interlayer potential for multilayer graphene structures. PHYSICAL REVIEW B. 2018;98 (23).Abstract
The structural relaxation of multilayer graphene is essential in describing the interesting electronic properties induced by intentional misalignment of successive layers, including the recently reported superconductivity in twisted bilayer graphene. This is difficult to accomplish without an accurate interatomic potential. Here, we present a new, registry-dependent Kolmogorov-Crespi-type interatomic potential to model interlayer interactions in multilayer graphene structures. It consists of two parts, representing attractive interaction due to dispersion and repulsive interaction due to anisotropic overlap of electronic orbitals. An important new feature is a dihedral-angle-dependent term that is added to the repulsive part to describe correctly several distinct stacking states that the original Kolmogorov-Crespi potential cannot distinguish. We refer to the new model as the dihedral-angle-corrected registry-dependent interlayer potential (DRIP). Computations for several test problems show that DRIP correctly reproduces the binding, sliding, and twisting energies and forces obtained from ab initio total-energy calculations based on density-functional theory. We use the new potential to study the structural properties of a twisted graphene bilayer and the exfoliation of graphene from graphite. Our potential is available through the OpenKIM interatomic potential repository at https://openkim.org.
Wang Z-T, Hoyt RA, El-Soda M, Madix RJ, Kaxiras E, Sykes ECH. Dry Dehydrogenation of Ethanol on Pt-Cu Single Atom Alloys. TOPICS IN CATALYSIS. 2018;61 (5-6, SI) :328-335.Abstract
The non-oxidative dehydrogenation of ethanol to acetaldehyde and hydrogen is an industrially relevant chemical conversion. Although Cu-based catalysts show high reactivity toward oxidative ethanol dehydrogenation, the flat Cu(111) surface is rather inactive for ethanol dehydrogenation in the absence of water, surface oxygen or defects. Herein we show, using experimental and theoretical studies of model systems, that adding 1% Pt into the surface of Cu(111) to form dilute Pt-Cu single atom alloys (SAAs) increases the activity of Cu(111) for ethanol dehydrogenation sixfold. The mechanism of ethanol dehydrogenation was investigated at the molecular level using scanning tunneling microscopy, temperature programmed experiments and density functional theory calculations. Our results demonstrate that Pt-Cu SAAs are much more active than Cu(111) for converting ethanol to acetaldehyde and hydrogen in the absence of surface oxygen and water. Specifically, the O-H bond of ethanol is activated at Pt sites below 160 K, followed by ethoxy spillover to Cu sites which results in a significant increase of the ethoxy intermediate yield. The C-H bond of ethoxy is then activated at 310 K, and the final product, acetaldehyde, desorbs from Cu(111) in a reaction rate limited process. Finally, we show that the Cu model surfaces exhibit stability with respect to poisoning as well as 100% selectivity in the alcohol dehydrogenation to acetaldehyde and hydrogen.
Fang S, Carr S, Cazalilla MA, Kaxiras E. Electronic structure theory of strained two-dimensional materials with hexagonal symmetry. PHYSICAL REVIEW B. 2018;98 (7).Abstract
We derive electronic tight-binding Hamiltonians for strained graphene, hexagonal boron nitride, and transition-metal dichalcogenides based on Wannier transformation of ab initio density functional theory calculations. Our microscopic models include strain effects to leading order that respect the hexagonal crystal symmetry and local crystal configuration and are beyond the central force approximation which assumes only pairwise distance dependence. Based on these models, we also derive and analyze the effective low-energy Hamiltonians. Our ab initio approaches complement the symmetry group representation construction for such effective low-energy Hamiltonians and provide the values of the coefficients for each symmetry-allowed term. These models are relevant for the design of electronic device applications since they provide the framework for describing the coupling of electrons to other degrees of freedom including phonons, spin, and the electromagnetic field. The models can also serve as the basis for exploring the physics of many-body systems of interesting quantum phases.
Mattheakis M, Tsironis GP, Kaxiras E. Emergence and dynamical properties of stochastic branching in the electronic flows of disordered Dirac solids. EPL. 2018;122 (2).Abstract
Graphene as well as more generally Dirac solids constitute two-dimensional materials where the electronic flow is ultra-relativistic. When a Dirac solid is deposited on a different substrate surface with roughness, a local random potential develops through an inhomogeneous charge impurity distribution. This external potential affects profoundly the charge flow and induces a chaotic pattern of current branches that develops through focusing and defocusing effects produced by the randomness of the surface. An additional bias voltage may be used to tune the branching pattern of the charge carrier currents. We employ analytical and numerical techniques in order to investigate the onset and the statistical properties of carrier branches in Dirac solids. We find a specific scaling-type relationship that connects the physical scale for the occurrence of branches with the characteristic medium properties, such as disorder and bias field. We use numerics to test and verify the theoretical prediction as well as a perturbative approach that gives a clear indication of the regime of validity of the approach. This work is relevant to device applications and may be tested experimentally.
Defo RK, Zhang X, Bracher D, Kim G, Hu E, Kaxiras E. Energetics and kinetics of vacancy defects in 4H-SiC. PHYSICAL REVIEW B. 2018;98 (10).Abstract
Defect engineering in wide-gap semiconductors is important in controlling the performance of single-photon emitter devices. The effective incorporation of defects depends strongly on the ability to control their formation and location, as well as to mitigate attendant damage to the material. In this study, we combine density functional theory, molecular dyamics (MD), and kinetic Monte Carlo (KMC) simulations to study the energetics and kinetics of the silicon monovacancy V-Si and related defects in 4H-SiC. We obtain the defect formation energy for V-Si in various charge states and use MD simulations to model the ion implantation process for creating defects. We also study the effects of high-temperature annealing on defect position and stability using KMC and analytical models. Using a larger (480-atom) supercell than previous studies, we obtain the temperature-dependent diffusivity of V-Si in various charge states and find significantly lower barriers to diffusion than previous estimates. In addition, we examine the recombination with interstitial Si and conversion of V-Si into CSiVC during annealing and propose methods for using strain to reduce changes in defect concentrations. Our results provide guidance for experimental efforts to control the position and density of V-Si defects within devices, helping to realize their potential as solid-state qubits.
Yang Y, Fang S, Fatemi V, Ruhman J, Navarro-Moratalla E, Watanabe K, Taniguchi T, Kaxiras E, Jarillo-Herrero P. Enhanced superconductivity upon weakening of charge density wave transport in 2H-TaS2 in the two-dimensional limit. PHYSICAL REVIEW B. 2018;98 (3).Abstract
Layered transition-metal dichalcogenides that host coexisting charge-density wave (CDW) and superconducting orders provide ideal systems for exploring the effects of dimensionality on correlated electronic phases. Dimensionality has a profound effect on both superconductivity and CDW instabilities. Here we report a substantial enhancement of the superconducting T-c to 3.4 K for 2H-TaS2 in the monolayer limit, compared to 0.8 K in the bulk. In addition, the transport signature of a CDW phase transition vanishes in the two-dimensional limit. In our analysis of electronic and vibrational properties of this material, we show that a reduction of the CDW amplitude results in a substantial increase of the density of states at the Fermi energy, which can boost T-c by an amount similar to that seen in experiment. Our results indicate competition between CDW order and superconductivity in ultrathin 2H-TaS2 down to the monolayer limit, providing insight toward understanding correlated electronic phases in reduced dimensions.
Xu Y, Chen W, Kaxiras E, Friend CM, Madix RJ. General Effect of van der Waals Interactions on the Stability of Alkoxy Intermediates on Metal Surfaces. JOURNAL OF PHYSICAL CHEMISTRY B. 2018;122 (2, SI) :555-560.Abstract
The critical role of noncovalent van der Waals (vdW) interactions in determining the relative thermodynamic stability of alkoxy intermediates has been demonstrated for the Cu(110) surface using a combination of experiment and theory. The results may be significant for the selectivity control of copper-based reactions of alcohols. Previous examination of this effect on Au(110) was also extended to include higher molecular weight alcohols; on Cu(110) and Au(110) the hierarchy for the strength of binding of the alkoxys was found to be the same within experimental accuracy, with alkoxy species of greater chain length being more stable. The equilibrium constants governing the competition of alcohol pairs for binding sites of the alkoxys are also similar on the two surfaces. These results reveal the generality of such vdW effects. This work expands the understanding of the role of vdW interactions on the binding efficacy of key reactive intermediates on metal surfaces, a key factor in the rational design of complex and selective catalytic processes.
Bediako DK, Rezaee M, Yoo H, Larson DT, Zhao SYF, Taniguchi T, Watanabe K, Brower-Thomas TL, Kaxiras E, Kim P. Heterointerface effects in the electrointercalation of van der Waals heterostructures. NATURE. 2018;558 (7710) :425+.Abstract
Molecular-scale manipulation of electronic and ionic charge accumulation in materials is the backbone of electrochemical energy storagel(1-4). Layered van der Waals (vdW) crystals are a diverse family of materials into which mobile ions can electrochemically intercalate into the interlamellar gaps of the host atomic lattice(5,6). The structural diversity of such materials enables the interfacial properties of composites to be optimized to improve ion intercalation for energy storage and electronic devices(7-12). However, the ability of heterolayers to modify intercalation reactions, and their role at the atomic level, are yet to be elucidated. Here we demonstrate the electrointercalation of lithium at the level of individual atomic interfaces of dissimilar vdW layers. Electrochemical devices based on vdW heterostructures(13) of stacked hexagonal boron nitride, graphene and molybdenum dichalcogenide (MoX2;X=S, Se) layers are constructed. We use transmission electron microscopy, in situ magnetoresistance and optical spectroscopy techniques, as well as low-temperature quantum magneto-oscillation measurements and ab initio calculations, to resolve the intermediate stages of lithium intercalation at heterointerfaces. The formation of vdW heterointerfaces between graphene and MoX2 results in a more than tenfold greater accumulation of charge in MoX2 when compared to MoX2/MoX2 homointerfaces, while enforcing a more negative intercalation potential than that of bulk MoX2 by at least 0.5 V. Beyond energy storage, our combined experimental and computational methodology for manipulating and characterizing the electrochemical behaviour of layered systems opens new pathways to control the charge density in two-dimensional electronic and optoelectronic devices.
O'Connor CR, Hiebel F, Chen W, Kaxiras E, Madix RJ, Friend CM. Identifying key descriptors in surface binding: interplay of surface anchoring and intermolecular interactions for carboxylates on Au(110). CHEMICAL SCIENCE. 2018;9 (15) :3759-3766.Abstract
The relative stability of carboxylates on Au(110) was investigated as part of a comprehensive study of adsorbate binding on Group IB metals that can be used to predict and understand how to control reactivity in heterogeneous catalysis. The binding efficacy of carboxylates is only weakly dependent on alkyl chain length for relatively short-chain molecules, as demonstrated using quantitative temperature-programmed reaction spectroscopy. Corresponding density functional theory (DFT) calculations demonstrated that the bidentate anchoring geometry is rigid and restricts the amount of additional stabilization through adsorbate-surface van der Waals (vdW) interactions which control stability for alkoxides. A combination of scanning tunneling microscopy (STM) and low-energy electron diffraction (LEED) shows that carboxylates form dense local islands on Au(110). Complementary DFT calculations demonstrate that adsorbate-adsorbate interactions provide additional stabilization that increases as a function of alkyl chain length for C-2 and C-3 carboxylates. Hence, overall stability is generally a function of the anchoring group to the surface and the inter-adsorbate interaction. This study demonstrates the importance of these two important factors in describing binding of key catalytic intermediates.
Onat B, Cubuk ED, Malone BD, Kaxiras E. Implanted neural network potentials: Application to Li-Si alloys. PHYSICAL REVIEW B. 2018;97 (9).Abstract
Modeling the behavior of materials composed of elements with different bonding and electronic structure character for large spatial and temporal scales and over a large compositional range is a challenging problem. Cases in point are amorphous alloys of Si, a prototypical covalent material, and Li, a prototypical metal, which are being considered as anodes for high-energy-density batteries. To address this challenge, we develop a methodology based on neural networks that extends the conventional training approach to incorporate pre-trained parts that capture the character of different components, into the overall network; we refer to this model as the "implanted neural network" method. We show that this approach works well for the Si-Li amorphous alloys for a wide range of compositions, giving good results for key quantities like the diffusion coefficients. The method is readily generalizable to more complicated situations that involve two or more different elements.
Larson DT, Fampiou I, Kim G, Kaxiras E. Lithium Intercalation in Graphene-MoS2 Heterostructures. JOURNAL OF PHYSICAL CHEMISTRY C. 2018;122 (43) :24535-24541.Abstract
Two-dimensional (2D) heterostructures are interesting candidates for efficient energy storage devices due to their high carrier capacity by reversible intercalation. We employ here density functional theory calculations to investigate the structural and electronic properties of lithium intercalated graphene/molybdenum disulfide (Gr/MoS2) heterostructures. We explore the extent to which Li intercalates at the interface formed between graphene (Gr) and molybdenum disulfide (MoS2) layers by considering the adsorption and diffusion of Li atoms, the energetic stability, and the changes in the structural morphology of MoS2. We investigate the corresponding electronic structure and charge distribution within the heterostructure at varying concentrations of Li. Our results indicate that the maximum energetically allowed ratio of Li to Mo (Li to C) is 1:1 (1:3) for both the 2H and 1T' phases of MoS2. This is double the Li concentration allowed in graphene bilayers. We find that there is 60% more charge transfer to MoS2 than to Gr in the bilayer heterostructure, which results in a maximum doping of Gr and MoS2 of n(C) = 3.6 x 10(14) cm(-2) and n(MoS2) = 6.0 x 10(14) cm(-2), respectively.
Carr S, Massatt D, Fang S, Cazeaux P, Luskin M, Kaxiras E. Modeling Electronic Properties of Twisted 2D Atomic Heterostructures, in COUPLED MATHEMATICAL MODELS FOR PHYSICAL AND BIOLOGICAL NANOSCALE SYSTEMS AND THEIR APPLICATIONS. Vol 232. Banff Int Res Stat ; 2018 :245-265.Abstract
We present a general method for the electronic characterization of aperiodic 2D materials using ab-initio tight binding models. Specifically studied is the subclass of twisted, stacked heterostructures, but the formalism provided can be implemented for any 2D system without long-range interactions. This new method provides a multi-scale approach for dealing with the ab-initio calculation of electronic transport properties in stacked nanomaterials, allowing for fast and efficient simulation of multi-layered stacks in the presence of twist angles, magnetic field, and defects. We calculate the electronic density of states in twisted bilayer systems of graphene and MX2 transition metal dichalcogenides (TMDCs). We comment on the interesting features of their density of states as a function of twist-angle and local configuration and how these features are experimentally observable. These results support the bilayer twist-angle as a new variable for controlling electronic properties in artificial nanomaterials (''Twistronics'').
Zhang J, Hong H, Zhang J, Fu H, You P, Lischner J, Liu K, Kaxiras E, Meng S. New Pathway for Hot Electron Relaxation in Two-Dimensional Heterostructures. NANO LETTERS. 2018;18 (9) :6057-6063.Abstract
Two-dimensional (2D) heterostructures composed of transition-metal dichalcogenide atomic layers are the new frontier for novel optoelectronic and photovoltaic device applications. Some key properties that make these materials appealing, yet are not well understood, are ultrafast hole/electron dynamics, interlayer energy transfer and the formation of interlayer hot excitons. Here, we study photoexcited electron/hole dynamics in a representative heterostructure, the MoS2/WSe2 interface, which exhibits type II band alignment. Employing time-dependent density functional theory in the time domain, we observe ultrafast charge dynamics with lifetimes of tens to hundreds of femtoseconds. Most importantly, we report the discovery of an interfacial pathway in 2D heterostructures for the relaxation of photoexcited hot electrons through interlayer hopping, which is significantly faster than intralayer relaxation. This finding is of particular importance for understanding many experimentally observed photoinduced processes, including charge and energy transfer at an ultrafast time scale (<1 ps).
Hoyt RA, Montemore MM, Kaxiras E. Nonadiabatic Hydrogen Dissociation on Copper Nanoclusters. JOURNAL OF PHYSICAL CHEMISTRY LETTERS. 2018;9 (18) :5339+.Abstract
Copper surfaces exhibit high catalytic selectivity but have poor hydrogen dissociation kinetics; therefore, we consider icosahedral Cu-13 nanoclusters to understand how nanoscale structure might improve catalytic prospects. We find that the spin state is a surprisingly important design consideration. Cu-13 clusters have large magnetic moments due to finite size and symmetry effects and exhibit magnetization-dependent catalytic behavior. The most favorable transition state for hydrogen dissociation has a lower activation energy than that on single-crystal copper surfaces but requires a magnetization switch from 5 to 3 mu(B). Without this switch, the activation energy is higher than that on single-crystal surfaces. Weak spin-orbit coupling hinders this switch, decreasing the kinetic rate of hydrogen dissociation by a factor of 16. We consider strategies to facilitate magnetization switches through optical excitations, substitution, charge states, and co-catalysts; these considerations demonstrate how control of magnetic properties could improve catalytic performance.
Carr S, Fang S, Jarillo-Herrero P, Kaxiras E. Pressure dependence of the magic twist angle in graphene superlattices. PHYSICAL REVIEW B. 2018;98 (8).Abstract
The recently demonstrated unconventional superconductivity [Cao et al., Nature (London) 556, 43 (2018)] in twisted bilayer graphene (tBLG) opens the possibility for interesting applications of two-dimensional layers that involve correlated electron states. Here we explore the possibility of modifying electronic correlations by the application of uniaxial pressure on the weakly interacting layers, which results in increased interlayer coupling and a modification of the magic angle value and associated density of states. Our findings are based on first-principles calculations that accurately describe the height-dependent interlayer coupling through the combined use of density functional theory and maximally localized Wannier functions. We obtain the relationship between twist angle and external pressure for the magic angle flat bands of tBLG. This may provide a convenient method to tune electron correlations by controlling the length scale of the superlattice.
Shirodkar SN, Mattheakis M, Cazeaux P, Narang P, Soljacic M, Kaxiras E. Quantum plasmons with optical-range frequencies in doped few-layer graphene. PHYSICAL REVIEW B. 2018;97 (19).Abstract
Although plasmon modes exist in doped graphene, the limited range of doping achieved by gating restricts the plasmon frequencies to a range that does not include the visible and infrared. Here we show, through the use of first-principles calculations, that the high levels of doping achieved by lithium intercalation in bilayer and trilayer graphene shift the plasmon frequencies into the visible range. To obtain physically meaningful results, we introduce a correction of the effect of plasmon interaction across the vacuum separating periodic images of the doped graphene layers, consisting of transparent boundary conditions in the direction perpendicular to the layers; this represents a significant improvement over the exact Coulomb cutoff technique employed in earlier works. The resulting plasmon modes are due to local field effects and the nonlocal response of the material to external electromagnetic fields, requiring a fully quantum mechanical treatment. We describe the features of these quantum plasmons, including the dispersion relation, losses, and field localization. Our findings point to a strategy for fine-tuning the plasmon frequencies in graphene and other two-dimensional materials.
Larson DT, Kaxiras E. Raman spectrum of CrI3: An ab initio study. PHYSICAL REVIEW B. 2018;98 (8).Abstract
We study the Raman spectrum of CrI3, a material that exhibits magnetism in a single layer. We employ first-principles calculations within density functional theory to determine the effects of polarization, strain, and incident angle on the phonon spectra of the three-dimensional bulk and the single-layer two-dimensional structure, for both the high- and low-temperature crystal structures. Our results are in good agreement with existing experimental measurements and serve as a guide for additional investigations to elucidate the physics of this interesting material.
Montemore MM, Hoyt R, Kolesov G, Kaxiras E. Reaction-Induced Excitations and Their Effect on Surface Chemistry. ACS CATALYSIS. 2018;8 (11) :10358-10363.Abstract
Despite intensive study of reactions on metals, it is unclear whether electronic excitations play an important role. Here, we show that nonadiabatic effects do indeed play a significant role in N-2 and H-2 dissociation on Ru nanoparticles. We employ nonadiabatic dynamical calculations based on realtime, time-dependent density functional theory to study energy dissipation during these exothermic reaction steps. We find that dissipation of the excess energy into excitation of electrons exceeds thermal dissipation into phonons. For isolated dissociation events, electronic friction can increase reaction barriers; furthermore, the excitations induced by a dissociation event can affect other reacting molecules. Our studies suggest that, for exothermic reactions, metal catalysts in reaction conditions may be constantly experiencing electronic excitations, and these excitations can significantly affect surface chemistry.
Carr S, Massatt D, Torrisi SB, Cazeaux P, Luskin M, Kaxiras E. Relaxation and domain formation in incommensurate two-dimensional heterostructures. PHYSICAL REVIEW B. 2018;98 (22).Abstract
We introduce configuration space as a natural representation for calculating the mechanical relaxation patterns of incommensurate two-dimensional (2D) bilayers. The approach can be applied to a wide variety of 2D materials through the use of a continuum model in combination with a generalized stacking fault energy for interlayer interactions. We present computational results for small-angle twisted bilayer graphene and molybdenum disulfide (MoS2), a representative material of the transition-metal dichalcogenide family of 2D semiconductors. We calculate accurate relaxations for MoS2 even at small twist-angle values, enabled by the fact that our approach does not rely on empirical atomistic potentials for interlayer coupling. The results demonstrate the efficiency of the configuration space method by computing relaxations with minimal computational cost. We also outline a general explanation of domain formation in 2D bilayers with nearly aligned lattices, taking advantage of the relationship between real space and configuration space. The configuration space approach also enables calculation of relaxations in incommensurate multilayer systems.
Cao Y, Fatemi V, Fang S, Watanabe K, Taniguchi T, Kaxiras E, Jarillo-Herrero P. Unconventional superconductivity in magic-angle graphene superlattices. NATURE. 2018;556 (7699) :43+.Abstract
The behaviour of strongly correlated materials, and in particular unconventional superconductors, has been studied extensively for decades, but is still not well understood. This lack of theoretical understanding has motivated the development of experimental techniques for studying such behaviour, such as using ultracold atom lattices to simulate quantum materials. Here we report the realization of intrinsic unconventional superconductivity-which cannot be explained by weak electron-phonon interactions-in a two-dimensional superlattice created by stacking two sheets of graphene that are twisted relative to each other by a small angle. For twist angles of about 1.1 degrees-the first "magic" angle-the electronic band structure of this "twisted bilayer graphene" exhibits flat bands near zero Fermi energy, resulting in correlated insulating states at half-filling. Upon electrostatic doping of the material away from these correlated insulating states, we observe tunable zero-resistance states with a critical temperature of up to 1.7 kelvin. The temperature-carrier-density phase diagram of twisted bilayer graphene is similar to that of copper oxides (or cuprates), and includes dome-shaped regions that correspond to superconductivity. Moreover, quantum oscillations in the longitudinal resistance of the material indicate the presence of small Fermi surfaces near the correlated insulating states, in analogy with underdoped cuprates. The relatively high superconducting critical temperature of twisted bilayer graphene, given such a small Fermi surface (which corresponds to a carrier density of about 10^11 per square centimetre), puts it among the superconductors with the strongest pairing strength between electrons. Twisted bilayer graphene is a precisely tunable, purely carbon-based, two-dimensional superconductor. It is therefore an ideal material for investigations of strongly correlated phenomena, which could lead to insights into the physics of high-critical-temperature superconductors and quantum spin liquids.
|
## Noncommutative Field Theory: Numerical Analysis with the Fuzzy Disc [PDF]
Fedele Lizzi, Bernardino Spisso
The fuzzy disc is a discretization of the algebra of functions on the two-dimensional disc using finite matrices, which preserves the action of the rotation group. We define a $\varphi^4$ scalar field theory on it and analyze it numerically in three different limits as the rank of the matrix goes to infinity. The numerical simulations reveal three different phases: the uniform and disordered phases already present in the commutative scalar field theory, and a nonuniform ordered phase as a noncommutative effect. We have computed the transition curves between phases and their scaling. This is in agreement with studies on the fuzzy sphere, although the speed of convergence for the disc seems to be better. We have also performed the three limits for the theory in the cases of the theory going to the commutative plane or the commutative disc. In these cases the theory behaves differently, showing the intimate relationship between the nonuniform phase and noncommutative geometry.
View original: http://arxiv.org/abs/1207.4998
|
# An operator inequality
I would be most thankful if you could help me prove the following operator inequality. Let $A$ be an arbitrary linear operator on a Hilbert space, satisfying $$\left\|AA^{\ast} - A^{\ast}A\right\|\leq 2a$$ where $A^{\ast}$ is the Hermitian adjoint and $a>0$ is a constant. Let $\varepsilon$ be equal to either $+1$ or $-1$. Then show that $$2\sqrt{A^{\ast}A + aI} - \varepsilon\left(A + A^{\ast}\right) \geq 0$$ Thank you!
In this answer we assume that $A$ is a bounded operator on a complex Hilbert space $H$.
Sketched proof and hints:
1. We may uniquely write $A=B+iC$, where $B$ and $C$ are selfadjoint operators. Define selfadjoint operator $D:=i[C,B]=D^{\dagger}$. Then OP's statement reads $$\tag{1} ||D|| ~\leq ~a \qquad \Rightarrow \qquad \sqrt{A^{\dagger}A+aI} ~\geq~ \pm B.$$
2. Show that (1) is a consequence of $$\tag{2}||D|| ~\leq ~a \qquad \Rightarrow \qquad\sqrt{A^{\dagger}A+aI}~\geq~ \sqrt{ B^2 }.$$
3. Show that (2) is a consequence of $$\tag{3}||D|| ~\leq ~a \qquad \Rightarrow \qquad\sqrt{A^{\dagger}A+aI}~\geq~ \sqrt{ B^2 +C^2 }.$$
4. Show that (3) is equivalent to $$\tag{4}||D|| ~\leq ~a \qquad \Rightarrow \qquad A^{\dagger}A+aI~\geq~ B^2 +C^2 .$$
5. Show that (4) is equivalent to $$\tag{5}||D|| ~\leq ~a \qquad \Rightarrow \qquad aI~\geq~ D .$$
6. Show (5).
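A remark on the final step (my addition, not part of the original answer): since $D$ is selfadjoint, the spectral theorem gives
$$\operatorname{spec}(D) \subseteq \bigl[-\|D\|, \|D\|\bigr],$$
so $\|D\| \leq a$ implies $\operatorname{spec}(aI - D) \subseteq [0, 2a]$, hence $aI - D \geq 0$, which is precisely (5).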
|
New Classical Optics | Components • Devices • Systems
## FREEFORM OPTICS
#### Contact: Henrik C. Pedersen
With freeform surfaces it is possible to shape light. Look at these two examples employing a freeform reflector and a freeform lens, respectively:
#### Realized LED street lamp prototype demonstrating the rectangular illumination pattern.
On the left is shown a freeform reflector that transforms a Lambertian light distribution from a laser-illuminated phosphor layer to a circular and uniform distribution of light 6 meters away from the reflector. In this case only a single freeform surface shapes the light.
On the right is shown a freeform, injection molded polymer lens that transforms a Lambertian light distribution from an LED to a rectangular, uniform illumination that is supposed to match a road stretch between two light poles. In this case two refractive surfaces shape the light.
How are these freeform surfaces designed? Let’s take a look at the reflector, as an example:
We build up the reflector using segments. Each reflector segment is responsible for illuminating a corresponding segment on the road, as illustrated above.
We assume that the LED has a Lambertian light distribution, i.e.:
$$I(\theta) = \frac{\cos\theta}{\pi},$$
where $I(\theta)$ is the radiant intensity (W/sr) and $\theta$ is the angle with respect to the optical axis of the LED.
If we integrate I(θ) over the entire hemisphere we get:
$$\int_{\text{hemisphere}} I(\theta)\, d\Omega = \int_{0}^{\pi/2} \frac{\cos\theta}{\pi}\, 2\pi \sin\theta \, d\theta = 1\ \mathrm{W},$$
where $\Omega$ is the solid angle. Hence, the total power is normalized to 1 W.
Consider the first reflector segment in the figure above. It spans the outermost 5 degrees of the hemisphere, from 85° to 90°. Thus, energetically, this segment distributes:
$$\int_{85^\circ}^{90^\circ} 2\cos\theta \sin\theta \, d\theta = 0.0076\ \mathrm{W}.$$
This implies that segment 1 needs to redirect its central light ray at the outer 0.76% of the area on the road. If the illuminated area is a uniform circle with, say, a radius of 1 m, this outer rim of light would span from radius $\sqrt{1-0.0076} = 0.996$ m to 1 m.
So, now that we know the target position on the road, it is a simple geometrical matter of tilting the first reflector segment according to this requirement. The second reflector segment is tilted using the same procedure and is then attached to the first segment to form a continuous shape. This procedure is continued until the whole hemisphere is covered.
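For readers who want to reproduce this bookkeeping, the sketch below (my own illustration; the 5-degree segmentation, the target radius, and all variable names are assumptions) computes, for each reflector segment, the annulus on a uniform circular target that receives its power:

```python
import numpy as np

R = 1.0  # target disc radius in metres (assumption)

# Segment edges from the rim of the hemisphere inward: 90, 85, ..., 0 degrees.
edges = np.radians(np.arange(90.0, -1.0, -5.0))

# Power fraction per segment of a Lambertian source:
# the integral of 2*cos(t)*sin(t) dt over [lo, hi] equals sin^2(hi) - sin^2(lo).
frac = np.sin(edges[:-1]) ** 2 - np.sin(edges[1:]) ** 2

# Equal-power mapping: cumulative power fills the disc from the rim inward.
cum = np.cumsum(frac)
outer = np.concatenate(([R], R * np.sqrt(np.clip(1.0 - cum[:-1], 0.0, None))))
inner = R * np.sqrt(np.clip(1.0 - cum, 0.0, None))

for k, (ri, ro) in enumerate(zip(inner, outer), start=1):
    print(f"segment {k:2d}: target annulus {ri:.3f} m .. {ro:.3f} m")
```

The first row reproduces the 0.996 m to 1 m rim quoted above; each reflector segment is then tilted to aim at the midpoint of its annulus.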
The technique can be used to generate any desired light distribution. Below are shown the results of three designed reflectors that generate light distributions in the shape of $\sin^2(r)$, $\sin^2(2r)$, and $\sin^2(3r)$, where $r$ is the normalized distance from the center:
#### Ray trace of light pattern achieved from a sin2(3r) reflector.
Freeform reflectors for generating light patterns without rotational symmetry are somewhat harder to design, even though the principle is very similar to the one outlined above. Freeform lenses (including non-rotationally symmetric TIR lenses) are also designed using a similar technique.
|
Mind Matters Natural and Artificial Intelligence News and Analysis
# Will the Real “Predatory Journal” Please Stand Up?
Large publishers serve themselves by painting "predatory journals" with a broad brush
The scientific publishing industry has been on a hunt for what it calls “predatory journals.” They want to make sure that all scientific publications occur in “legitimate” and “reputable” journals. Additionally, they encourage scholars to avoid “predatory” journals which are there merely to enrich themselves by having you pay for access.
While I agree with these ideas in principle, I’ve noticed more and more that the way that these principles are applied has been, well, incredibly self-serving for the journals.
To begin with, let’s look at a commentary on predatory journals published in 2019 in the journal Nature:
Predatory journals are a global threat. They accept articles for publication — along with authors’ fees — without performing promised quality checks for issues such as plagiarism or ethical approval. Naive readers are not the only victims. Many researchers have been duped into submitting to predatory journals, in which their work can be overlooked. One study that focused on 46,000 researchers based in Italy found that about 5% of them published in such outlets. A separate analysis suggests predatory publishers collect millions of dollars in publication fees that are ultimately paid out by funders such as the US National Institutes of Health (NIH).
Agnes Grudniewicz et al., “Predatory journals: no definition, no defence” at Nature
Now, to begin with, I hardly see how a journal could be a “global threat.” Bad papers that should never have seen the light of day are published in “legitimate” journals every day, but they are generally considered an annoyance, not a “global threat.”
However, what I find interesting here is that the authors use a study that says that 5% of researchers publish in such outlets. Supposedly, the study looks at legitimate researchers. If the researchers themselves are legitimate, then, unless the commentary's authors are impugning the work of those researchers, it seems the researchers themselves find these journals sufficiently scholarly to publish in. If the researchers find them scholarly, why are other people complaining? What matters is the research.
The goal of journals is to (a) check the quality of research and (b) distribute the research in a way that other researchers can benefit from. There are indeed some outlets which skip quality checks and have a fake peer review process. Those should be noted so that these journals aren’t considered for tenure reviews or cited without double checking.
However, the complaint that these journals “collect millions of dollars in publication fees that are ultimately paid out by funders such as the US National Institutes of Health (NIH)” seems a bit over the top, considering that the complaint was made in the journal Nature, which charges authors over $11,000 per article to be published. This amount is paid by the same funding sources. By comparison, the International Journal of Biology, which is published by a publisher listed in Beall’s list of predatory publishers, charges $300 per article. So, of all the complaints that a journal like Nature might have about predatory journals, it seems that complaining about misuse of funds is significantly misdirected. Publishing in Nature seems to be the misuse of public funding. Note that Nature has abnormally high article processing charges, but is relevant because they are the ones publishing this commentary. Typical article processing charges, at least in biology, seem to be around $3,000 per article, which is still much more expensive than the “predatory” journals.

So what is the actual definition of a predatory journal? In the commentary, the authors state the definition of a predatory journal as:

Predatory journals and publishers are entities that prioritize self-interest at the expense of scholarship and are characterized by false or misleading information, deviation from best editorial and publication practices, a lack of transparency, and/or the use of aggressive and indiscriminate solicitation practices.

Agnes Grudniewicz et al., “Predatory journals: no definition, no defence” at Nature

It’s always dangerous when people use the word “or” instead of “and,” especially when they include more innocuous things on their list. According to this definition, a new journal that advertises itself aggressively counts as a “predatory journal,” no matter how high-quality its review process is (remember, they used “or” in the definition). This is incredibly self-serving. It means that the existing journals are non-predatory for the simple reason that they have been around long enough that they don’t need to advertise. It means that any upcoming competition (which would, by definition, need to advertise) can simply be labeled as “predatory” because it is aggressively advertising.

Let’s say you wanted to cut out the fat of academic publishing, so you decide to start a journal. You have some overhead, but you cover that by charging $500 per article (that’s 1/20 of the price Nature is charging). No one knows about you, so you do an online advertising campaign. You don’t have direct connections to enough people in the field, so your advertising is more indiscriminate than you want, but better to reach too many than too few, right? Well, by giving people the option of a low-cost publishing alternative and bothering to tell them about it, you now qualify as a “predatory” journal, and Nature will complain about you misappropriating government funds.
But which journal is really being predatory here?
# Have fun with MPI in C
spagnuolocarmine
## Collective Communications Routines
Collective communication is defined as communication that involves a group or groups of processes. One of the key arguments in a call to a collective routine is a communicator that defines the group or groups of participating processes and provides a context for the operation. All processes in the group identified by the intracommunicator must call the collective routine.
The syntax and semantics of the collective operations are defined to be consistent with the syntax and semantics of the point-to-point operations. Thus, general datatypes are allowed and must match between sending and receiving processes. Several collective routines such as broadcast and gather have a single originating or receiving process. Such a process is called the root. Some arguments in the collective functions are specified as significant only at root, and are ignored for all participants except the root.
To understand how collective operations apply to intercommunicators, it is possible to view the MPI intracommunicator collective operations as fitting one of the following categories:
• All-To-One, such as gathering (see Figure) or reducing (see Figure) data onto one process.
• One-To-All, such as broadcasting (see Figure) data to all processors in a group.
• All-To-All, such as executing one collective operation with all processors in a group acting as root.
• Other
In the following, all the MPI collective communications will be described by example.
A fundamental collective operation is the explicit synchronization between processors in a group.
MPI_BARRIER(comm) If comm is an intracommunicator, MPI_BARRIER blocks the caller until all group members have called it. The call returns at any process only after all group members have entered the call.
int MPI_Barrier(MPI_Comm comm)
• IN comm, communicator (handle)
The following example uses 8 processes.
MPI BARRIER
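The embedded runnable snippet is not reproduced here; a minimal stand-alone version (my own sketch, run with `mpirun -np 8`) could look like this:

```c
#include <mpi.h>
#include <stdio.h>

int main(int argc, char *argv[]) {
    int rank;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    printf("Process %d: before the barrier\n", rank);
    /* No process continues past this point until all have reached it. */
    MPI_Barrier(MPI_COMM_WORLD);
    printf("Process %d: after the barrier\n", rank);

    MPI_Finalize();
    return 0;
}
```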
MPI_BCAST(buffer, count, datatype, root, comm) If comm is an intracommunicator, MPI_BCAST broadcasts a message from the process with rank root to all processes of the group, itself included. It is called by all members of the group using the same arguments for comm and root. On return, the content of root's buffer is copied to all other processes.
int MPI_Bcast(void* buffer, int count, MPI_Datatype datatype, int root,MPI_Comm comm)
• INOUT buffer, starting address of buffer (choice)
• IN count, number of entries in buffer (non-negative integer)
• IN datatype, data type of buffer (handle)
• IN root, rank of broadcast root (integer)
• IN comm, communicator (handle)
The following example uses 8 processes.
MPI BCAST
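Again, the embedded example is not shown; a minimal sketch of a broadcast of four integers from rank 0 might look like this:

```c
#include <mpi.h>
#include <stdio.h>

int main(int argc, char *argv[]) {
    int rank, data[4] = {0, 0, 0, 0};
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    if (rank == 0) {  /* only the root fills the buffer */
        data[0] = 1; data[1] = 2; data[2] = 3; data[3] = 4;
    }
    /* After the call, every process's buffer holds root's values. */
    MPI_Bcast(data, 4, MPI_INT, 0, MPI_COMM_WORLD);
    printf("Process %d received %d %d %d %d\n",
           rank, data[0], data[1], data[2], data[3]);

    MPI_Finalize();
    return 0;
}
```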
### Why should we use collective operations for group communications?
MPI collective operations exploit optimized solutions to realize communication between processors in a group. For instance, the broadcasting operation exploits a tree structure (as depicted in the Figure), which allows parallelizing the communications.
Obviously, the effect of this optimization scales with the number of processors involved in the communication. The following example presents a comparison between the MPI_BCAST operation and a version developed using MPI_Send and MPI_Recv. If you cannot see the advantage of using the broadcast operation, run this experiment on more processors.
MPI BCAST COMPARE
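A sketch of such a comparison (the helper `my_bcast` and the buffer size are my own choices, not MPI routines): the root sends the buffer to each rank one by one, and we time that against MPI_Bcast:

```c
#include <mpi.h>
#include <stdio.h>

#define N 1000000

/* Naive "broadcast": root sends the buffer to every other rank in turn. */
static void my_bcast(int *buf, int count, int root, MPI_Comm comm) {
    int rank, size;
    MPI_Comm_rank(comm, &rank);
    MPI_Comm_size(comm, &size);
    if (rank == root) {
        for (int i = 0; i < size; i++)
            if (i != root)
                MPI_Send(buf, count, MPI_INT, i, 0, comm);
    } else {
        MPI_Recv(buf, count, MPI_INT, root, 0, comm, MPI_STATUS_IGNORE);
    }
}

int main(int argc, char *argv[]) {
    static int buf[N];
    int rank;
    double t0, t1, t2;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    MPI_Barrier(MPI_COMM_WORLD);
    t0 = MPI_Wtime();
    my_bcast(buf, N, 0, MPI_COMM_WORLD);
    MPI_Barrier(MPI_COMM_WORLD);
    t1 = MPI_Wtime();
    MPI_Bcast(buf, N, MPI_INT, 0, MPI_COMM_WORLD);
    MPI_Barrier(MPI_COMM_WORLD);
    t2 = MPI_Wtime();

    if (rank == 0)
        printf("linear sends: %f s, MPI_Bcast: %f s\n", t1 - t0, t2 - t1);

    MPI_Finalize();
    return 0;
}
```

With a tree-structured broadcast the root performs O(log p) communication rounds instead of p − 1 sends, which is why the gap widens as the number of processes grows.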
## Gather
MPI_GATHER(sendbuf, sendcount, sendtype, recvbuf, recvcount, recvtype, root, comm) If comm is an intracommunicator, each process (root process included) sends the contents of its send buffer to the root process. The root process receives the messages and stores them in rank order. General, derived datatypes are allowed for both sendtype and recvtype. The type signature of sendcount, sendtype on each process must be equal to the type signature of recvcount, recvtype at the root. This implies that the amount of data sent must be equal to the amount of data received, pairwise between each process and the root. Distinct type maps between sender and receiver are still allowed.
All arguments to the function are significant on process root, while on other processes, only arguments sendbuf, sendcount, sendtype, root, and comm are significant. The arguments root and comm must have identical values on all processes. Note that the recvcount argument at the root indicates the number of items it receives from each process, not the total number of items it receives.
int MPI_Gather(const void* sendbuf, int sendcount, MPI_Datatype sendtype,void* recvbuf, int recvcount, MPI_Datatype recvtype, int root,MPI_Comm comm)
• IN sendbuf, starting address of send buffer (choice)
• IN sendcount, number of elements in send buffer (non-negative integer)
• IN sendtype, data type of send buffer elements (handle)
• OUT recvbuf, address of receive buffer (choice, significant only at root)
• IN recvcount, number of elements for any single receive (non-negative integer, significant only at root)
• IN recvtype, data type of recv buffer elements (significant only at root) (handle)
• IN root, rank of receiving process (integer)
• IN comm, communicator (handle)
The following example uses 3 processes.
MPI GATHER
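A minimal sketch of such a gather (the contributed values are my own choice; run with 3 processes to match the description):

```c
#include <mpi.h>
#include <stdio.h>

int main(int argc, char *argv[]) {
    int rank, size, sendval, recvbuf[3];
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size); /* run with 3 processes */

    sendval = rank * rank;  /* each process contributes one value */
    /* Root (rank 0) receives one int from every rank, stored in rank order. */
    MPI_Gather(&sendval, 1, MPI_INT, recvbuf, 1, MPI_INT, 0, MPI_COMM_WORLD);

    if (rank == 0)
        for (int i = 0; i < size; i++)
            printf("recvbuf[%d] = %d\n", i, recvbuf[i]);

    MPI_Finalize();
    return 0;
}
```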
MPI_GATHERV(sendbuf, sendcount, sendtype, recvbuf, recvcounts, displs, recvtype, root, comm) extends the functionality of MPI_GATHER by allowing a varying count of data from each process, since recvcounts is now an array. It also allows more flexibility as to where the data is placed on the root, by providing the new argument, displs. The data received from process j is placed into recvbuf of the root process beginning at offset displs[j] elements (in terms of the recvtype). The receive buffer is ignored for all non-root processes.
int MPI_Gatherv(const void* sendbuf, int sendcount, MPI_Datatype sendtype, void* recvbuf, const int recvcounts[], const int displs[], MPI_Datatype recvtype, int root, MPI_Comm comm)
• IN sendbuf, starting address of send buffer (choice)
• IN sendcount, number of elements in send buffer (non-negative integer)
• IN sendtype, data type of send buffer elements (handle)
• OUT recvbuf, address of receive buffer (choice, significant only at root)
• IN recvcounts, non-negative integer array (of length group size) containing the number of elements that are received from each process (significant only at root)
• IN displs, integer array (of length group size). Entry i specifies the displacement relative to recvbuf at which to place the - incoming data from process i (significant only at root)
• IN recvtype, data type of recv buffer elements (significant only at root) (handle)
• IN root, rank of receiving process (integer)
• IN comm, communicator (handle)
The following example uses 4 processes.
MPI GATHERV
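The embedded example is not shown; here is a sketch (buffer sizes, counts, and displacements are my own choices) in which rank r contributes r + 1 integers, run with 4 processes:

```c
#include <mpi.h>
#include <stdio.h>

int main(int argc, char *argv[]) {
    int rank, size;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size); /* run with 4 processes */

    /* Rank r contributes r+1 values: rank 0 sends 1 int, rank 3 sends 4. */
    int sendbuf[4], recvcounts[4], displs[4], recvbuf[10];
    for (int i = 0; i < rank + 1; i++) sendbuf[i] = rank;
    for (int i = 0; i < size; i++) {
        recvcounts[i] = i + 1;       /* how many items come from rank i */
        displs[i] = i * (i + 1) / 2; /* where rank i's items are placed */
    }

    MPI_Gatherv(sendbuf, rank + 1, MPI_INT,
                recvbuf, recvcounts, displs, MPI_INT, 0, MPI_COMM_WORLD);

    if (rank == 0) {  /* expected output: 0 1 1 2 2 2 3 3 3 3 */
        for (int i = 0; i < 10; i++) printf("%d ", recvbuf[i]);
        printf("\n");
    }
    MPI_Finalize();
    return 0;
}
```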
## Scatter
MPI_SCATTER(sendbuf, sendcount, sendtype, recvbuf, recvcount, recvtype, root, comm) takes an array of elements and distributes the elements in the order of process rank. An alternative description is that the root sends a message with MPI_Send(sendbuf, sendcount x n, sendtype, ...). This message is split into n equal segments, and the i-th segment is sent to the i-th process in the group, which receives it in recvbuf. The send buffer is ignored for all non-root processes.
int MPI_Scatter(const void* sendbuf, int sendcount, MPI_Datatype sendtype, void* recvbuf, int recvcount, MPI_Datatype recvtype, int root, MPI_Comm comm)
• IN sendbuf, address of send buffer (choice, significant only at root)
• IN sendcount, number of elements sent to each process (non-negative integer, significant only at root)
• IN sendtype, data type of send buffer elements (significant only at root) (handle)
• OUT recvbuf, address of receive buffer (choice)
• IN recvcount, number of elements in receive buffer (non-negative integer)
• IN recvtype, data type of receive buffer elements (handle)
• IN root, rank of sending process (integer)
• IN comm, communicator (handle)
The following example uses 3 processes.
MPI SCATTER
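A minimal sketch (the values in sendbuf are my own choice; run with 3 processes to match the description):

```c
#include <mpi.h>
#include <stdio.h>

int main(int argc, char *argv[]) {
    int rank, recvval;
    int sendbuf[3] = {10, 20, 30};  /* significant only at root */
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank); /* run with 3 processes */

    /* Root splits sendbuf into 3 segments of 1 int; rank i gets segment i. */
    MPI_Scatter(sendbuf, 1, MPI_INT, &recvval, 1, MPI_INT, 0, MPI_COMM_WORLD);
    printf("Process %d got %d\n", rank, recvval);

    MPI_Finalize();
    return 0;
}
```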
MPI_SCATTERV(sendbuf, sendcounts, displs, sendtype, recvbuf, recvcount, recvtype, root, comm) is the inverse operation to MPI_GATHERV. MPI_SCATTERV extends the functionality of MPI_SCATTER by allowing a varying count of data to be sent to each process, since sendcounts is now an array. It also allows more flexibility as to where the data is taken from on the root, by providing an additional argument, displs.
int MPI_Scatterv(const void* sendbuf, const int sendcounts[], const int displs[], MPI_Datatype sendtype, void* recvbuf, int recvcount, MPI_Datatype recvtype, int root, MPI_Comm comm)
• IN sendbuf, address of send buffer (choice, significant only at root)
• IN sendcounts, non-negative integer array (of length group size) specifying the number of elements to send to each rank
• IN displs, integer array (of length group size). Entry i specifies the displacement (relative to sendbuf) from which to take the outgoing data to process i
• IN sendtype, data type of send buffer elements (handle)
• OUT recvbuf, address of receive buffer (choice)
• IN recvcount, number of elements in receive buffer (non-negative integer)
• IN recvtype, data type of receive buffer elements (handle)
• IN root, rank of sending process (integer)
• IN comm, communicator (handle)
The following example uses 10 processes.
MPI SCATTERV
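The embedded example is not shown; as a sketch (counts and displacements are my own choices), rank r receives r + 1 integers out of a 55-element root buffer, run with 10 processes:

```c
#include <mpi.h>
#include <stdio.h>

int main(int argc, char *argv[]) {
    int rank, size, recvbuf[10], sendbuf[55], sendcounts[10], displs[10];
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size); /* run with 10 processes */

    /* Rank r receives r+1 ints; its segment starts at offset r(r+1)/2. */
    for (int i = 0; i < size; i++) {
        sendcounts[i] = i + 1;
        displs[i] = i * (i + 1) / 2;
    }
    for (int i = 0; i < 55; i++) sendbuf[i] = i; /* significant only at root */

    MPI_Scatterv(sendbuf, sendcounts, displs, MPI_INT,
                 recvbuf, rank + 1, MPI_INT, 0, MPI_COMM_WORLD);

    printf("Process %d received %d int(s), first = %d\n",
           rank, rank + 1, recvbuf[0]);

    MPI_Finalize();
    return 0;
}
```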
## Other collective operations
• MPI_ALLGATHER, MPI_ALLGATHERV: A variation on Gather where all members of a group receive the result.
• MPI_ALLTOALL, MPI_ALLTOALLV: Scatter/Gather data from all members to all members of a group (also called complete exchange).
• MPI_ALLREDUCE, MPI_REDUCE: Global reduction operations such as sum, max, min, or user-defined functions, where the result is returned to all members of a group and a variation where the result is returned to only one member.
## Nonblocking Collective Communication
As described in Nonblocking Communication, performance of many applications can be improved by overlapping communication and computation, and many systems enable this. Nonblocking collective operations combine the potential benefits of nonblocking point-to-point operations, to exploit overlap and to avoid synchronization, with the optimized implementation and message scheduling provided by collective operations. One way of doing this would be to perform a blocking collective operation in a separate thread. An alternative mechanism that often leads to better performance (e.g., avoids context switching, scheduler overheads, and thread management) is to use nonblocking collective communication.
The nonblocking collective communication model is similar to the model used for nonblocking point-to-point communication. A nonblocking call initiates a collective operation, which must be completed in a separate completion call. Once initiated, the operation may progress independently of any computation or other communication at participating processes. In this manner, nonblocking collective operations can mitigate possible synchronizing effects of collective operations by running them in the "background". In addition to enabling communication-computation overlap, nonblocking collective operations can perform collective operations on overlapping communicators, which would lead to deadlocks with blocking operations. Their semantic advantages can also be useful in combination with point-to-point communication.
As in the nonblocking point-to-point case, all calls are local and return immediately, irrespective of the status of other processes. The call initiates the operation, which indicates that the system may start to copy data out of the send buffer and into the receive buffer. Once initiated, all associated send buffers and buffers associated with input arguments (such as arrays of counts, displacements, or datatypes in the vector versions of the collectives) should not be modified, and all associated receive buffers should not be accessed, until the collective operation completes. The call returns a request handle, which must be passed to a completion call.
All completion calls (e.g., MPI_WAIT) are supported for nonblocking collective operations. Similarly to the blocking case, nonblocking collective operations are considered to be complete when the local part of the operation is finished, i.e., for the caller, the semantics of the operation are guaranteed and all buffers can be safely accessed and modified. Completion does not indicate that other processes have completed or even started the operation (unless otherwise implied by the description of the operation). Completion of a particular nonblocking collective operation also does not indicate completion of any other posted nonblocking collective (or send-receive) operations, whether they are posted before or after the completed operation.
These operations are named by adding the character I to the name of the corresponding blocking operation. For instance, the nonblocking broadcast operation is named MPI_Ibcast.
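A minimal sketch of the nonblocking broadcast (my own example; it requires an MPI-3 implementation):

```c
#include <mpi.h>
#include <stdio.h>

int main(int argc, char *argv[]) {
    int rank, data = 0;
    MPI_Request req;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    if (rank == 0) data = 42;
    /* Initiate the broadcast; the call returns immediately. */
    MPI_Ibcast(&data, 1, MPI_INT, 0, MPI_COMM_WORLD, &req);

    /* ... computation that does not touch 'data' can overlap here ... */

    MPI_Wait(&req, MPI_STATUS_IGNORE);  /* complete the collective */
    printf("Process %d has data = %d\n", rank, data);

    MPI_Finalize();
    return 0;
}
```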
# The Lumber Room
"Consign them to dust and damp by way of preserving them"
## A simple puzzle, with a foray into inequivalent expressions
[Needs cleanup… just dumping here for now.]
From the four numbers [6, 6, 5, 2], using only the binary operations [+, -, *, /], form the number 17.
When he tweeted the first time, I thought about it a little bit (while walking from my desk to the restroom or something like that), but forgot about it pretty soon and didn’t give it much further thought. When he posted again, I gave it another serious try, failed, and so gave up and wrote a computer program.
This is what I thought this time.
## Idea
Any expression is formed as a binary tree. For example, 28 = 6 + (2 * (5 + 6)) is formed as this binary tree (TODO make a proper diagram with DOT or something):
+
6 *
2 +
5 6
And 8 = (2 + 6) / (6 - 5) is this binary tree:
/
+ -
2 6 6 5
Alternatively, any expression is built up from the 4 given numbers [a, b, c, d] as follows:
Take any two of the numbers and perform any operation on them, and replace the two numbers with the result. Then repeat, until you have only one number, which is the final result.
Thus the above two expressions 28 = 6 + (2 * (5 + 6)) and 8 = (2 + 6) / (6 – 5) can be formed, respectively, as:
1. Start with [6, 6, 5, 2]. Replace (5, 6) with 5+6=11 to get [6, 11, 2]. Replace (11, 2) with 11*2=22 to get [6, 22]. Replace (6, 22) with 6+22=28, and that’s your result.
2. Start with [6, 6, 5, 2]. Replace (2, 6) with 2+6=8 to get [8, 6, 5]. Replace (6, 5) with 6-5=1 to get [8, 1]. Replace (8, 1) with 8/1=8 and that’s your result.
So my idea was to generate all possible such expressions out of [6, 6, 5, 2], and see if 17 was one of them. (I suspected it may be possible by doing divisions and going via non-integers, but couldn’t see how.)
(In hindsight it seems odd that my first attempt was to answer whether 17 could be generated, rather than how: I guess at this point, despite the author’s assurance that there are no underhanded tricks involved, I still wanted to test whether 17 could be generated in this usual way, if only to ensure that my understanding of the puzzle was correct.)
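The post stops before showing the program; a minimal search in the spirit described above (my own sketch, not the author's program, using exact rational arithmetic so that intermediate non-integers from division are handled precisely) might look like:

```c
#include <stdio.h>
#include <stdlib.h>

/* Exact rational arithmetic, so results like 6/(5-6) stay precise. */
typedef struct { long num, den; } Frac;

static long gcd(long a, long b) { return b ? gcd(b, a % b) : a; }

static Frac make(long n, long d) {
    long g = gcd(labs(n), labs(d));
    if (g) { n /= g; d /= g; }
    if (d < 0) { n = -n; d = -d; }
    Frac f = {n, d};
    return f;
}

/* Repeatedly replace any two of the remaining numbers with the result of
   one binary operation on them, as in the description above. */
static int search(Frac *xs, int n, Frac target) {
    if (n == 1)
        return xs[0].num == target.num && xs[0].den == target.den;
    for (int i = 0; i < n; i++)
        for (int j = 0; j < n; j++) {
            if (i == j) continue;
            Frac a = xs[i], b = xs[j], rest[4], cands[4];
            int m = 0, c = 0;
            for (int k = 0; k < n; k++)
                if (k != i && k != j) rest[m++] = xs[k];
            cands[c++] = make(a.num*b.den + b.num*a.den, a.den*b.den); /* a+b */
            cands[c++] = make(a.num*b.den - b.num*a.den, a.den*b.den); /* a-b */
            cands[c++] = make(a.num*b.num, a.den*b.den);               /* a*b */
            if (b.num != 0)
                cands[c++] = make(a.num*b.den, a.den*b.num);           /* a/b */
            for (int t = 0; t < c; t++) {
                rest[m] = cands[t];
                if (search(rest, m + 1, target)) return 1;
            }
        }
    return 0;
}

int main(void) {
    Frac xs[4] = { make(6,1), make(6,1), make(5,1), make(2,1) };
    printf("17 reachable: %s\n", search(xs, 4, make(17,1)) ? "yes" : "no");
    return 0;
}
```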
Written by S
Wed, 2016-07-20 at 00:38:51
Posted in mathematics, unfinished
## Multiple ways of understanding
In his wonderful On Proof and Progress in Mathematics, Thurston begins his second section “How do people understand mathematics?” as follows:
This is a very hard question. Understanding is an individual and internal matter that is hard to be fully aware of, hard to understand and often hard to communicate. We can only touch on it lightly here.
People have very different ways of understanding particular pieces of mathematics. To illustrate this, it is best to take an example that practicing mathematicians understand in multiple ways, but that we see our students struggling with. The derivative of a function fits well. The derivative can be thought of as:
1. Infinitesimal: the ratio of the infinitesimal change in the value of a function to the infinitesimal change in a function.
2. Symbolic: the derivative of $x^n$ is $nx^{n-1}$, the derivative of $\sin(x)$ is $\cos(x)$, the derivative of $f \circ g$ is $f' \circ g * g'$, etc.
3. Logical: $f'(x) = d$ if and only if for every $\epsilon$ there is a $\delta$ such that when $0 < |\Delta x| < \delta,$
$\left|\frac{f(x+\Delta x) - f(x)}{\Delta x} - d\right| < \epsilon.$
4. Geometric: the derivative is the slope of a line tangent to the graph of the function, if the graph has a tangent.
5. Rate: the instantaneous speed of $f(t)$, when $t$ is time.
6. Approximation: The derivative of a function is the best linear approximation to the function near a point.
7. Microscopic: The derivative of a function is the limit of what you get by looking at it under a microscope of higher and higher power.
This is a list of different ways of thinking about or conceiving of the derivative, rather than a list of different logical definitions. Unless great efforts are made to maintain the tone and flavor of the original human insights, the differences start to evaporate as soon as the mental concepts are translated into precise, formal and explicit definitions.
I can remember absorbing each of these concepts as something new and interesting, and spending a good deal of mental time and effort digesting and practicing with each, reconciling it with the others. I also remember coming back to revisit these different concepts later with added meaning and understanding.
The list continues; there is no reason for it ever to stop. A sample entry further down the list may help illustrate this. We may think we know all there is to say about a certain subject, but new insights are around the corner. Furthermore, one person’s clear mental image is another person’s intimidation:
37. The derivative of a real-valued function $f$ in a domain $D$ is the Lagrangian section of the cotangent bundle $T^{\ast}(D)$ that gives the connection form for the unique flat connection on the trivial $\mathbf{R}$-bundle $D \times \mathbf{R}$ for which the graph of $f$ is parallel.
These differences are not just a curiosity. Human thinking and understanding do not work on a single track, like a computer with a single central processing unit. Our brains and minds seem to be organized into a variety of separate, powerful facilities. These facilities work together loosely, “talking” to each other at high levels rather than at low levels of organization.
This has been extended on the MathOverflow question Different ways of thinking about the derivative where you can find even more ways of thinking about the derivative. (Two of the interesting pointers are to this discussion on the n-Category Café, and to the book Calculus Unlimited by Marsden and Weinstein, which does calculus using a “method of exhaustion” that does not involve limits. (Its definition of the derivative is also mentioned at the earlier link, as that notion of the derivative closest to [the idea of Eudoxus and Archimedes] of “the tangent line touches the curve, and in the space between the line and the curve, no other straight line can be interposed”, or “the line which touches the curve only once” — this counts as another important way of thinking about the derivative.)
It has also been best extended by Terence Tao, who in an October 2009 blog post on Grothendieck’s definition of a group gave several ways of thinking about a group:
In his wonderful article “On proof and progress in mathematics“, Bill Thurston describes (among many other topics) how one’s understanding of given concept in mathematics (such as that of the derivative) can be vastly enriched by viewing it simultaneously from many subtly different perspectives; in the case of the derivative, he gives seven standard such perspectives (infinitesimal, symbolic, logical, geometric, rate, approximation, microscopic) and then mentions a much later perspective in the sequence (as describing a flat connection for a graph).
One can of course do something similar for many other fundamental notions in mathematics. For instance, the notion of a group ${G}$ can be thought of in a number of (closely related) ways, such as the following:
1. Motivating examples: A group is an abstraction of the operations of addition/subtraction or multiplication/division in arithmetic or linear algebra, or of composition/inversion of transformations.
2. Universal algebraic: A group is a set ${G}$ with an identity element ${e}$, a unary inverse operation ${\cdot^{-1}: G \rightarrow G}$, and a binary multiplication operation ${\cdot: G \times G \rightarrow G}$ obeying the relations (or axioms) ${e \cdot x = x \cdot e = x}$, ${x \cdot x^{-1} = x^{-1} \cdot x = e}$, ${(x \cdot y) \cdot z = x \cdot (y \cdot z)}$ for all ${x,y,z \in G}$.
3. Symmetric: A group is all the ways in which one can transform a space ${V}$ to itself while preserving some object or structure ${O}$ on this space.
4. Representation theoretic: A group is identifiable with a collection of transformations on a space ${V}$ which is closed under composition and inverse, and contains the identity transformation.
5. Presentation theoretic: A group can be generated by a collection of generators subject to some number of relations.
6. Topological: A group is the fundamental group ${\pi_1(X)}$ of a connected topological space ${X}$.
7. Dynamic: A group represents the passage of time (or of some other variable(s) of motion or action) on a (reversible) dynamical system.
8. Category theoretic: A group is a category with one object, in which all morphisms have inverses.
9. Quantum: A group is the classical limit ${q \rightarrow 0}$ of a quantum group.
etc.
One can view a large part of group theory (and related subjects, such as representation theory) as exploring the interconnections between various of these perspectives. As one’s understanding of the subject matures, many of these formerly distinct perspectives slowly merge into a single unified perspective.
From a recent talk by Ezra Getzler, I learned a more sophisticated perspective on a group, somewhat analogous to Thurston’s example of a sophisticated perspective on a derivative (and coincidentally, flat connections play a central role in both):
1. Sheaf theoretic: A group is identifiable with a (set-valued) sheaf on the category of simplicial complexes such that the morphisms associated to collapses of ${d}$-simplices are bijective for ${d < 1}$ (and merely surjective for ${d \leq 1}$).
The rest of the post elaborates on this understanding.
Again in a Google Buzz post on Jun 9, 2010, Tao posted the following:
Bill Thurston’s “On proof and progress in mathematics” has many nice observations about the nature and practice of modern mathematics. One of them is that for any fundamental concept in mathematics, there is usually no “best” way to define or think about that concept, but instead there is often a family of interrelated and overlapping, but distinct, perspectives on that concept, each of which conveying its own useful intuition and generalisations; often, the combination of all of these perspectives is far greater than the sum of the parts. Thurston illustrates this with the concept of differentiation, to which he lists seven basic perspectives and one more advanced perspective, and hints at dozens more.
But even the most basic of mathematical concepts admit this multiplicity of interpretation and perspective. Consider for instance the operation of addition, that takes two numbers x and y and forms their sum x+y. There are many such ways to interpret this operation:
1. (Disjoint union) x+y is the “size” of the disjoint union X u Y of an object X of size x, and an object Y of size y. (Size is, of course, another concept with many different interpretations: cardinality, volume, mass, length, measure, etc.)
2. (Concatenation) x+y is the size of the object formed by concatenating an object X of size x with an object Y of size y (or by appending Y to X).
3. (Iteration) x+y is formed from x by incrementing it y times.
4. (Superposition) x+y is the “strength” of the superposition of a force (or field, intensity, etc.) of strength x with a force of strength y.
5. (Translation action) x+y is the translation of x by y.
5a. (Translation representation) x+y is the amount of translation or displacement incurred by composing a translation by x with a translation by y.
6. (Algebraic) + is a binary operation on numbers that give it the structure of an additive group (or monoid), with 0 being the additive identity and 1 being the generator of the natural numbers or integers.
7. (Logical) +, when combined with the other basic arithmetic operations, are a family of structures on numbers that obey a set of axioms such as the Peano axioms.
8. (Algorithmic) x+y is the output of the long addition algorithm that takes x and y as input.
9. etc.
These perspectives are all closely related to each other; this is why we are willing to give them all the common name of “addition”, and the common symbol of “+”. Nevertheless there are some slight differences between each perspective. For instance, addition of cardinals is based on perspective 1, while addition of ordinals is based on perspective 2. This distinction becomes apparent once one considers infinite cardinals or ordinals: for instance, in cardinal arithmetic, aleph_0 = 1 + aleph_0 = aleph_0 + 1 = aleph_0 + aleph_0, whereas in ordinal arithmetic, omega = 1 + omega < omega + 1 < omega + omega.
Transitioning from one perspective to another is often a necessary first conceptual step when the time comes to generalise the concept. As a child, addition of natural numbers is usually taught initially by using perspective 1 or 3, but to generalise to addition of integers, one must first switch to a perspective such as 4, 5, or 5a; similar conceptual shifts are needed when one then turns to addition of rationals, real numbers, complex numbers, residue classes, functions, matrices, elements of abstract additive groups, nonstandard number systems, etc. Eventually, one internalises all of the perspectives (and their inter-relationships) simultaneously, and then becomes comfortable with the addition concept in a very broad set of contexts; but it can be more of a struggle to do so when one has grasped only a subset of the possible ways of thinking about addition.
In many situations, the various perspectives of a concept are either completely equivalent to each other, or close enough to equivalent that one can safely “abuse notation” by identifying them together. But occasionally, one of the equivalences breaks down, and then it becomes useful to maintain a careful distinction between two perspectives that are almost, but not quite, compatible. Consider for instance the following ways of interpreting the operation of exponentiation x^y of two numbers x, y:
1. (Combinatorial) x^y is the number of ways to make y independent choices, each of which chooses from x alternatives.
2. (Set theoretic) x^y is the size of the space of functions from a set Y of size y to a set X of size x.
3. (Geometric) x^y is the volume (or measure) of a y-dimensional cube (or hypercube) whose sidelength is x.
4. (Iteration) x^y is the operation of starting at 1 and multiplying by x y times.
5. (Homomorphism) y → x^y is the continuous homomorphism from the domain of y (with the additive group structure) to the range of x^y (with the multiplicative structure) that maps 1 to x.
6. (Algebraic) ^ is the operation that obeys the laws of exponentiation in algebra.
7. (Log-exponential) x^y is exp( y log x ). (This raises the question of how to interpret exp and log, and again there are multiple perspectives for each…)
8. (Complex-analytic) Complex exponentiation is the analytic continuation of real exponentiation.
9. (Computational) x^y is whatever my calculator or computer outputs when it is asked to evaluate x^y.
10. etc.
Again, these interpretations are usually compatible with each other, but there are some key exceptions. For instance, the quantity 0^0 would be equal to zero [ed: I think this should be one —S] using some of these interpretations, but would be undefined in others. The quantity 4^{1/2} would be equal to 2 in some interpretations, be undefined in others, and be equal to the multivalued expression +-2 (or to depend on a choice of branch) in yet further interpretations. And quantities such as i^i are sufficiently problematic that it is usually best to try to avoid exponentiation of one arbitrary complex number by another arbitrary complex number unless one knows exactly what one is doing. In such situations, it is best not to think about a single, one-size-fits-all notion of a concept such as exponentiation, but instead be aware of the context one is in (e.g. is one raising a complex number to an integer power? A positive real to a complex power? A complex number to a fractional power? etc.) and to know which interpretations are most natural for that context, as this will help protect against making errors when manipulating expressions involving exponentiation.
It is also quite instructive to build one’s own list of interpretations for various basic concepts, analogously to those above (or Thurston’s example). Some good examples of concepts to try this on include “multiplication”, “integration”, “function”, “measure”, “solution”, “space”, “size”, “distance”, “curvature”, “number”, “convergence”, “probability” or “smoothness”. See also my blog post below in which the concept of a “group” is considered.
I plan to collect more such “different ways of thinking about the same (mathematical) thing” in this post, as I encounter them.
Written by S
Sat, 2016-03-26 at 10:05:09
Posted in mathematics, quotes
## The Pandit (काशीविद्यासुधानिधिः)
with one comment
The Pandit (काशीविद्यासुधानिधिः)
A Monthly Journal, of the Benares College, devoted to Sanskrit Literature
This was a journal that ran from 1866 to 1920, and some issues are available online. “The Benares College” in its title is what was the first college in the city (established 1791), later renamed the Government Sanskrit College, Varanasi, and now the Sampurnanand Sanskrit University.
There are some interesting things in there. From a cursory look, it’s mainly editions of Sanskrit works (Kavya, Mimamsa, Sankhya, Nyaya, Vedanta, Vyakarana, etc.) and translations of some, along with the occasional harsh review of a recent work (printed anonymously of course), but also contains, among other things, (partial?) translations into Sanskrit of John Locke’s An Essay Concerning Human Understanding and Bishop Berkeley’s A Treatise Concerning the Principles of Human Knowledge. Also some hilarious (and quite valid) complaints about miscommunication between English Orientalists and traditional pandits, with their different education systems and different notions of what topics are simple and what are advanced.
The journal’s motto:
श्रीमद्विजयिनीदेवीपाठशालोदयोदितः । प्राच्यप्रतीच्यवाक्पूर्वापरपक्षद्वयान्वितः ॥
अङ्करश्मिः स्फुटयतु काशीविद्यासुधानिधिः । प्राचीनार्यजनप्रज्ञाविलासकुमुदोत्करान् ॥
The metadata is terrible: there’s only an index of sorts at the end of the whole volume; each issue of the journal carries no table of contents (or if it did, they have been ripped out when binding each (June to May) year’s issues into volumes). Authorship information is scarce. Some translations have been abandoned. (I arrived at this journal looking at Volume 9 where an English translation of Kedārabhaṭṭa’s Vṛtta-ratnākara is begun, carried into three chapters (published in alternate issues), left with a “to be continued” as usual, except there’s no mention of it in succeeding issues.) Still, a lot of interesting stuff in there.
Among the British contributors/editors of the journal were Ralph T. H. Griffith (who translated the Ramayana into English verse: there are advertisements for the translation in these volumes) and James R. Ballantyne (previously encountered as the author of Iṅglaṇḍīya-bhāṣā-vyākaraṇam, a book on English grammar written in Sanskrit: he seems to have also been an ardent promoter of Christianity, but also an enthusiastic worker for more dialogue between the pandits and the Western scholars), each of whom served as the principal of the college. (Later principals of the college include Ganganath Jha and Gopinath Kaviraj.) Among the Indian contributors to the journal are Vitthala Shastri, who in 1852 appears to have written a Sanskrit commentary on Francis Bacon's Novum Organum (I think it's this, but see also the preface of this book for context), Bapudeva Sastri, and others: probably the contributors were all faculty of the college; consider the 1853 list of faculty here. (Also note the relative salaries!)
Had previously encountered a mention of this magazine in this book (post).
The issues I could find—and I searched quite thoroughly I think—are below. Preferably, someone needs to download from Google Books and re-upload to the Internet Archive, as books on Google Books have an occasional tendency to disappear (or get locked US-only).
https://books.google.com/books?id=Z71EAAAAcAAJ 1866 Vol 1 (1 – 12)
https://books.google.com/books?id=ESgJAAAAQAAJ 1866 vol 1 (1 – 12)
https://books.google.com/books?id=Sr8IAAAAQAAJ 1866 Vol 1 (1 – 12)
https://books.google.com/books?id=JAspAAAAYAAJ 1866 vol 1-3 (1 – 36)
https://books.google.com/books?id=Y78IAAAAQAAJ 1867 Vol 2 (13 – 24)
https://books.google.com/books?id=JigJAAAAQAAJ 1867 Vol 2 (13 – 24)
https://books.google.com/books?id=cL1EAAAAcAAJ 1867 Vol 2 (13 – 24)
https://books.google.com/books?id=g78IAAAAQAAJ 1868 Vol 3 (25 – 36)
https://books.google.com/books?id=eL1EAAAAcAAJ 1868 Vol 3 (25 – 36)
https://books.google.com/books?id=OSgJAAAAQAAJ 1868 Vol 3 (25 – 36)
https://books.google.com/books?id=m78IAAAAQAAJ 1869 vol 4 (37 – 48)
https://books.google.com/books?id=WygJAAAAQAAJ 1869 Vol 4 (37 – 48)
https://books.google.com/books?id=g71EAAAAcAAJ 1869 vol 4 (37 – 48)
https://books.google.com/books?id=vr8IAAAAQAAJ 1870 vol 5 (49 – 60)
https://books.google.com/books?id=eCgJAAAAQAAJ 1870 vol 5 (49 – 60)
https://books.google.com/books?id=24dSAAAAcAAJ 1870 vol 5 (49 – 60)
https://books.google.com/books?id=0b8IAAAAQAAJ 1871 Vol 6 (61 – 72)
https://books.google.com/books?id=nigJAAAAQAAJ 1871 vol 6 (61 – 72)
https://books.google.com/books?id=5YdSAAAAcAAJ 1871 vol 6 (61 – 72)
https://books.google.com/books?id=878IAAAAQAAJ 1872 Vol 7 (73 – 84)
https://books.google.com/books?id=uCgJAAAAQAAJ 1872 Vol 7 (73 – 84)
https://books.google.com/books?id=TrZUAAAAcAAJ 1872 vol 7 (73 – 84)
https://books.google.com/books?id=6ygJAAAAQAAJ 1873 vol 8 (85 – 96)
https://books.google.com/books?id=KMAIAAAAQAAJ 1874 vol 9 (97 – 108)
https://books.google.com/books?id=ICkJAAAAQAAJ 1875 Vol 10 (109 – 120)
https://books.google.com/books?id=CcAIAAAAQAAJ 1875 vol 10 (109 – 120)
[New series]
https://books.google.com/books?id=LNA9AQAAMAAJ 1911 Vol 33 Snippet View
https://books.google.com/books?id=ctA9AQAAMAAJ 1912 Vol 34 Snippet View
https://books.google.com/books?id=3dA9AQAAMAAJ 1913 Vol 35 Snippet View
https://books.google.com/books?id=a9E9AQAAMAAJ 1916 Vol 38 Snippet View
https://books.google.com/books?id=N9E9AQAAMAAJ 1916 Vol 37 Snippet View
Written by S
Tue, 2016-03-15 at 14:18:00
Posted in sanskrit
## The same in every country
(TODO: Learn and elaborate more on their respective histories and goals.)
The formula
$\frac{\pi}{4} = 1 - \frac13 + \frac15 - \frac17 + \frac19 - \frac1{11} + \dots$
(reminded via this post), a special case at $x=1$ of
$\arctan x = x - \frac{x^3}{3} + \frac{x^5}{5} - \frac{x^7}{7} + \dots,$
was found by Leibniz in 1673, while he was trying to find the area (“quadrature”) of a circle, and he had as prior work the ideas of Pascal on infinitesimal triangles, and that of Mercator on the area of the hyperbola $y(1+x) = 1$ with its infinite series for $\log(1+x)$. This was Leibniz’s first big mathematical work, before his more general ideas on calculus.
Leibniz did not know that this series had already been discovered earlier in 1671 by the short-lived mathematician James Gregory in Scotland. Gregory too had encountered Mercator’s infinite series $\log(1+x) = x - x^2/2 + x^3/3 + \dots$, and was working on different goals: he was trying to invert logarithmic and trigonometric functions.
Neither of them knew that the series had already been found two centuries earlier by Mādhava (1340–1425) in India (as known through the quotations of Nīlakaṇṭha c.1500), working in a completely different mathematical culture whose goals and practices were very different. The logarithm function doesn’t seem to have been known, let alone an infinite series for it, though a calculus of finite differences for interpolation for trigonometric functions seems to have been ahead of Europe by centuries (starting all the way back with Āryabhaṭa in c. 500 and more clearly stated by Bhāskara II in 1150). Using a different approach (based on the arc of a circle) and geometric series and sums-of-powers, Mādhava (or the mathematicians of the Kerala tradition) arrived at the same formula.
[The above is based on The Discovery of the Series Formula for π by Leibniz, Gregory and Nilakantha by Ranjan Roy (1991).]
This startling universality of mathematics across different cultures is what David Mumford remarks on, in Why I am a Platonist:
As Littlewood said to Hardy, the Greek mathematicians spoke a language modern mathematicians can understand, they were not clever schoolboys but were “fellows of a different college”. They were working and thinking the same way as Hardy and Littlewood. There is nothing whatsoever that needs to be adjusted to compensate for their living in a different time and place, in a different culture, with a different language and education from us. We are all understanding the same abstract mathematical set of ideas and seeing the same relationships.
The same thought was also expressed by Mean Girls:
Written by S
Tue, 2016-03-15 at 13:53:32
Posted in history, mathematics
## The generating function for Smirnov words (or: time until k consecutive results are the same)
1. Alphabet
Suppose we have an alphabet ${\mathcal{A}}$ of size ${m}$. Its generating function (using the variable ${z}$ to mark length) is simply ${A(z) = mz}$, as ${\mathcal{A}}$ contains ${m}$ elements of length ${1}$ each.
2. Words
Let ${\mathcal{W}}$ denote the class of all words over the alphabet ${\mathcal{A}}$. There are many ways to find the generating function ${W(z)}$ for ${\mathcal{W}}$.
2.1.
We have
$\displaystyle \mathcal{W} = \{\epsilon\} + \mathcal{A} + \mathcal{A}\mathcal{A} + \mathcal{A}\mathcal{A}\mathcal{A} + \dots$
so its generating function is
\displaystyle \begin{aligned} W(z) &= 1 + A(z) + A(z)^2 + A(z)^3 + \dots \\ &= 1 + mz + (mz)^2 + (mz)^3 + \dots \\ &= \frac{1}{1-mz} \end{aligned}
2.2.
To put it differently, in the symbolic framework, we have ${\mathcal{W} = \textsc{Seq}(\mathcal{A})}$, so the generating function for ${\mathcal{W}}$ is
$\displaystyle W(z) = \frac{1}{1 - A(z)} = \frac{1}{1-mz}.$
2.3.
We could have arrived at this with direct counting: the number of words of length ${n}$ is ${W_n = m^n}$ as there are ${m}$ choices for each of the ${n}$ letters, so the generating function is
$\displaystyle W(z) = \sum_{n \ge 0}W_n z^n = \sum_{n \ge 0} m^n z^n = \frac{1}{1-mz}.$
3. Smirnov words
Next, let ${\mathcal{S}}$ denote the class of Smirnov words over the alphabet ${\mathcal{A}}$, defined as words in which no two consecutive letters are identical. (That is, words ${w_1w_2 \dots w_n}$ in which ${w_i \in \mathcal{A}}$ for all ${i}$, and ${w_i \neq w_{i-1}}$ for any ${1 < i \le n}$.) Again, we can find the generating function for ${\mathcal{S}}$ in different ways.
3.1.
For any word in ${\mathcal{W}}$, by “collapsing” all runs of each letter, we get a Smirnov word. To put it differently, any word in ${\mathcal{W}}$ can be obtained from a Smirnov word ${w}$ by “expanding” each letter ${w_i}$ into a nonempty sequence of that letter. This observation (see Analytic Combinatorics, pp. 204–205) lets us relate the generating functions of ${\mathcal{W}}$ and ${\mathcal{S}}$ as
$\displaystyle W(z) = S(\frac{z}{1-z})$
which implicitly gives the generating function ${S(z)}$: we have
$\displaystyle S(z) = W(\frac{z}{1+z}) = \frac{1}{1-m\frac{z}{1+z}} = \frac{1+z}{1 - (m-1)z}.$
3.2.
Alternatively, consider in an arbitrary word the first occurrence of a pair of repeated letters. Either this doesn’t happen at all (the word is a Smirnov word), or else, if it happens at position ${i}$ so that ${w_i = w_{i+1}}$, then the part of the word up to position ${i}$ is a nonempty Smirnov word, the letter at position ${i+1}$ is the same as the previous letter, and everything after ${i+1}$ is an arbitrary word. This gives
$\displaystyle \mathcal{W} = \mathcal{S} + (\mathcal{S} \setminus \{ \epsilon \}) \cdot \mathcal{Z} \cdot \mathcal{W}$
or in terms of generating functions
$\displaystyle W(z) = S(z) + (S(z) - 1)zW(z)$
giving
$\displaystyle S(z) = \frac{W(z) (1 + z)}{1 + zW(z)} = \frac{1 + z}{(1-mz)(1 + \frac{z}{1-mz})} = \frac{1+z}{1 - (m-1)z}$
3.3.
A minor variant is to again pick an arbitrary word and consider its first pair of repeated letters, happening (if it does) at positions ${i}$ and ${i+1}$, but this time consider the prefix up to ${i -1}$: either it is empty, or the pair of letters is different from the last letter of the prefix, giving us the decomposition
$\displaystyle \mathcal{W} = \mathcal{S} + m\mathcal{Z}^2 \cdot \mathcal{W} + (\mathcal{S}\setminus \{ \epsilon \}) \cdot (m-1)\mathcal{Z}^2 \mathcal{W}$
and corresponding generating function
$\displaystyle W(z) = S(z) + mz^2W(z) + (S(z) - 1)(m-1)z^2W(z)$
so
$\displaystyle S(z) = \frac{W(z)(1-z^2)}{1 + (m-1)z^2W(z)} = \frac{1-z^2}{1 - mz + (m-1)z^2} = \frac{(1-z)(1+z)}{(1-z)(1 - (m-1)z)}$
which is the same as before after we cancel the ${(1-z)}$ factors.
3.4.
We could have arrived at this result with direct counting. For ${n \ge 1}$, for a Smirnov word of length ${n}$, we have ${m}$ choices for the first letter, and for each of the other ${(n-1)}$ letters, as they must not be the same as the previous letter, we have ${(m-1)}$ choices. This gives the number of Smirnov words of length ${n}$ as ${m (m-1)^{n-1}}$ for ${n \ge 1}$, and so the generating function for Smirnov words is
$\displaystyle S(z) = 1 + \sum_{n \ge 1} m (m-1)^{n-1} z^n = 1 + mz \sum_{n \ge 1} (m-1)^{n-1}z^{n-1} = 1 + \frac{mz}{1-(m-1)z}$
again giving
$\displaystyle S(z) = \frac{1 + z}{1 - (m-1)z}$
4. Words with bounded runs
We can now generalize. Let ${\mathcal{S}_k}$ denote the class of words in which no letter occurs more than ${k}$ times consecutively. (${\mathcal{S} = \mathcal{S}_1}$.) We can find the generating function for ${\mathcal{S}_k}$.
4.1.
To get a word in ${\mathcal{S}_k}$ we can take a Smirnov word and replace each letter with a nonempty sequence of up to ${k}$ occurrences of that letter. This gives:
$\displaystyle S_k(z) = S(z + z^2 + \dots + z^k) = S(z\frac{1-z^{k}}{1-z})$
so
$\displaystyle S_k(z) = \frac{1 + z\frac{1-z^{k}}{1-z}}{1 - (m-1)z\frac{1-z^{k}}{1-z}} = \frac{1 - z^{k+1}}{1 - mz + (m-1)z^{k+1}}.$
4.2.
Pick any arbitrary word, and consider its first occurrence of a run of ${k+1}$ letters. Either such a run does not exist (which means the word we picked is in ${\mathcal{S}_k}$), or it occurs right at the beginning (${m}$ possibilities, one for each letter in the alphabet), or, if it occurs starting at position ${i > 1}$, then the part of the word up to position ${i-1}$ (the “prefix”) is a nonempty Smirnov word, positions ${i}$ to ${i+k}$ are ${k+1}$ occurrences of any of the ${m-1}$ letters other than the last letter of the prefix, and what follows is an arbitrary word. This gives
$\displaystyle \mathcal{W} = \mathcal{S}_k + m\mathcal{Z}^{k+1} \cdot \mathcal{W} + (\mathcal{S}_k \setminus \{ \epsilon \}) \cdot (m-1)\mathcal{Z}^{k+1} \cdot \mathcal{W}$
or in terms of generating functions
$\displaystyle W(z) = S_k(z) + mz^{k+1}W(z) + (S_k(z) - 1)(m-1)z^{k+1}W(z)$
so
$\displaystyle W(z)(1 - z^{k+1}) = S_k(z) (1 + (m-1)z^{k+1} W(z))$
giving
$\displaystyle S_k(z) = \frac{W(z)(1-z^{k+1})}{1 + (m-1)z^{k+1}W(z)} = \frac{1-z^{k+1}}{1-mz + (m-1)z^{k+1}}$
4.3.
Arriving at this via direct counting seems hard.
5. Words that stop at a long run
Now consider words in which we “stop” as soon we see ${k}$ consecutive identical letters. Let the class of such words be denoted ${\mathcal{U}}$ (not writing ${\mathcal{U}_k}$ to keep the notation simple). As before, we can find its generating function in multiple ways.
5.1.
We get any word in ${\mathcal{U}}$ by either immediately seeing a run of length ${k}$ and stopping, or by starting with a nonempty prefix in ${\mathcal{S}_{k-1}}$, and then stopping with a run of ${k}$ identical letters different from the last letter of the prefix. Thus we have
$\displaystyle \mathcal{U} = m \mathcal{Z}^k + (\mathcal{S}_{k-1} \setminus \{\epsilon\}) \cdot (m-1)\mathcal{Z}^k$
and
$\displaystyle U(z) = m z^k + (S_{k-1}(z) - 1) (m-1) z^k$
which gives
$\displaystyle U(z) = z^k(1 + (m-1)S_{k-1}(z)) = z^k\left(1+(m-1)\frac{1-z^k}{1-mz+(m-1)z^k}\right) = \frac{m(1-z)z^k}{1 - mz + (m-1)z^k}$
5.2.
Alternatively, we can decompose any word by looking for its first run of ${k}$ identical letters. Either it doesn’t occur at all (the word we picked is in ${\mathcal{S}_{k-1}}$), or the part of the word until the end of the run belongs to ${\mathcal{U}}$ and the rest is an arbitrary word, so
$\displaystyle \mathcal{W} = \mathcal{S}_{k-1} + \mathcal{U} \cdot \mathcal{W}$
and
$\displaystyle W(z) = S_{k-1}(z) + U(z) W(z)$
so
$\displaystyle U(z) = 1 - \frac{S_{k-1}(z)}{W(z)} = 1 - \frac{(1-z^k)(1-mz)}{1-mz + (m-1)z^k} = \frac{m(1-z)z^k}{1 - mz + (m-1)z^k}$
6. Probability
Finally we arrive at the motivation: suppose we keep appending a random letter from the alphabet, until we encounter the same letter ${k}$ times consecutively. What can we say about the length ${X}$ of the word thus generated? As all sequences of letters are equally likely, the probability of seeing any string of length ${n}$ is ${\frac{1}{m^n}}$. So in the above generating function ${U(z) = \sum_{n} U_n z^n}$, the probability of our word having length ${n}$ is ${U_n / m^n}$, and the probability generating function ${P(z)}$ is therefore ${\sum_{n} U_n z^n / m^n}$. This ${P(z)}$ can be got by replacing ${z}$ with ${z/m}$ in the expression for ${U(z)}$: we have
$\displaystyle P(z) = U(z/m) = \frac{(m-z)z^k}{m^k(1-z) + (m-1)z^k}$
In principle, this probability generating function tells us everything about the distribution of the length of the word. For example, its expected length is
$\displaystyle \mathop{E}[X] = P'(1) = \frac{m^k - 1}{m - 1}$
(See this question on Quora for other powerful ways of finding this expected value directly.)
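As a quick numerical sanity check of this expectation (my own sketch; the choice m = 6, k = 4, like rolling a die until four equal rolls in a row, is arbitrary and not from the post):

```c
#include <stdio.h>
#include <stdlib.h>

/* Draw random letters from an alphabet of size m until the same letter
   appears k times in a row; return the length of the word generated. */
static int trial(int m, int k) {
    int len = 0, run = 0, prev = -1;
    while (run < k) {
        int c = rand() % m;
        len++;
        run = (c == prev) ? run + 1 : 1;
        prev = c;
    }
    return len;
}

int main(void) {
    int m = 6, k = 4;
    long trials = 1000000, total = 0;
    srand(12345);
    for (long t = 0; t < trials; t++)
        total += trial(m, k);
    /* Expected value from the PGF: (m^k - 1)/(m - 1) = (1296 - 1)/5 = 259 */
    printf("simulated mean length: %f (formula: 259)\n",
           (double)total / trials);
    return 0;
}
```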
We can also find its variance, as
$\displaystyle \mathop{Var}[X] = P''(1) + P'(1) - P'(1)^2 = \frac{m^{2k} - (2k-1)(m-1)m^k - m}{(m-1)^2}$
This variance is really too large to be useful, so what we would really like is the shape of the distribution… to be continued.
Written by S
Sun, 2016-01-03 at 03:06:23
Posted in mathematics
## Converting a data URL (aka data URI) to an image on the commandline (Mac OS X)
This is trivial, but was awfully hard to find via Google Search. Eventually had to give up and actually think about it. :-)
So, a data-URI looks something like the following:
data:image/png;base64,[and a stream of base64 characters here]
The part after the comma is literally the contents of the file (image or whatever), encoded in base64, so all you need to do is run base64 --decode on that part.
For example, with the whole data URL copied to the clipboard, I can do:
pbpaste | sed -e 's#data:image/png;base64,##' | base64 --decode > out.png
to get it into a png file.
Written by S
Sun, 2015-09-27 at 19:58:17
Posted in compknow
## Using Stellarium to make an animation / video
(I don’t have a solution yet.)
I just wanted to show what the sky looks like over the course of a week.
On a Mac with Stellarium installed, I ran the following
/Applications/Stellarium.app/Contents/MacOS/stellarium --startup-script stellarium.ssc
with the following stellarium.ssc:
```javascript
// -*- mode: javascript -*-
core.clear('natural'); // "atmosphere, landscape, no lines, labels or markers"
core.wait(5);
core.setObserverLocation('Ujjain, India');
core.setDate('1986-08-15T05:30:00', 'utc');
core.wait(5);
for (var i = 0; i < 2 * 24 * 7; i += 1) {
    core.setDate('+30 minutes');
    core.wait(0.5);
    core.screenshot('uj');
    core.wait(0.5);
}
core.wait(10);
core.quitStellarium();
```
It took a while (some 10–15 minutes) and created those 336 images in ~/Pictures/Stellarium/uj*, occupying a total size of about 550 MB. This seems like a start, but ImageMagick etc. seem to choke on creating a GIF from such large data.
Giving up for now; would like to come back in future and figure out something better, that results in a smaller GIF.
Written by S
Mon, 2015-09-14 at 20:10:10
# Angular distribution of Bremsstrahlung photons and of positrons for calculations of terrestrial gamma-ray flashes and positron beams
@article{Kohn2014AngularDO,
title={Angular distribution of Bremsstrahlung photons and of positrons for calculations of terrestrial gamma-ray flashes and positron beams},
author={Christoph Kohn and Ute Ebert},
journal={Atmospheric Research},
year={2014},
volume={135},
pages={432-465}
}
• Published 22 February 2012
• Physics
• Atmospheric Research
## Citations
The importance of electron-electron bremsstrahlung for terrestrial gamma-ray flashes, electron beams and electron-positron beams
• Physics
• 2014
Thunderstorms emit terrestrial gamma-ray flashes with photon energies of up to tens of MeV and electron-positron beams that are created by photons with energies above 1.022 MeV. These photons are
Energy resolved positron and hadron spectrum produced by a negative stepped lightning leader
• Physics
• 2014
Gamma-ray flashes with quantum energies up to 40 MeV and beams of electrons and positrons have been detected by satellites above thunderclouds. We here adopt the model of an upward moving negative
Calculation of beams of positrons, neutrons, and protons associated with terrestrial gamma ray flashes
• Physics
• 2015
Positron beams have been observed by the Fermi satellite to be correlated with lightning leaders, and neutron emissions have been attributed to lightning and to laboratory sparks as well. Here we
Production mechanisms of leptons, photons, and hadrons and their possible feedback close to lightning leaders
• Physics
Journal of geophysical research. Atmospheres : JGR
• 2017
The feedback mechanism together with the field enhancement by lightning leaders yields particle energies even above 40 MeV measurable at satellite altitudes; because of their high rest mass, hadrons are measurable on a longer time scale than leptons and photons.
The structure of ionization showers in air generated by electrons with 1 MeV energy or less
• Physics
• 2014
Ionization showers are created in the Earth's atmosphere by cosmic particles or by run-away electrons from pulsed discharges or by the decay of radioactive elements like radon and krypton. These
Electron-positron pairs and radioactive nuclei production by irradiation of high-Z target with γ-photon flash generated by an ultra-intense laser in the $\lambda^3$ regime
• Physics
• 2022
This paper studies the interaction of laser-driven γ-photons and high energy charged particles with high-Z targets through Monte-Carlo simulations. The interacting particles are taken from
The Emission of Terrestrial Gamma Ray Flashes From Encountering Streamer Coronae Associated to the Breakdown of Lightning Leaders
• Physics
Geophysical Research Letters
• 2020
Terrestrial gamma ray flashes (TGFs) are beams of high‐energy photons associated to lightning. These photons are the bremsstrahlung of energetic electrons whose origin is currently explained by two
Analyzing x-ray emissions from meter-scale negative discharges in ambient air
• Physics
• 2016
When voltage pulses of 1 MV drive meter long air discharges, short and intense bursts of x-rays are measured. Here we develop a model for electron acceleration and subsequent photon generation within
X-ray diagnostics of ECR ion sources-Techniques, results, and challenges.
• Physics
The Review of scientific instruments
• 2022
The high magnetic confinement provided by the minimum-B structure of electron cyclotron resonance ion sources (ECRIS) hosts a non-equilibrium plasma, composed of cold multi-charged ions and hot
|
NIL Protocol
# Impermanent Loss Value
## What is Impermanent Loss Value?
The concept of impermanent loss value (ILV) underpins the protocol mechanics of NIL.
ILV is defined as the exact value of tokens required to cover the IL incurred on an AMM token pair, given a set of starting and ending token prices and quantities. Let's look at an example scenario.
We start with an initial $10,000 LP position for the WETH-USDC pair on Uniswap V2:

| | WETH | USDC |
| --- | --- | --- |
| Initial Price | $1,000 | $1 |
| Initial Quantity | 5 | 5,000 |

Let's say the price of WETH moves +50% during the LP period:

| | WETH | USDC |
| --- | --- | --- |
| Initial Price | $1,000 | $1 |
| Initial Quantity | 5 | 5,000 |
| Ending Price | $1,500 | $1 |
| Ending Quantity | 4.082 | 6,123.72 |

By multiplying the ending token prices and quantities, we can calculate ILV as follows:

| | |
| --- | --- |
| Value If Held (No LP) | 5 WETH × $1,500 + 5,000 USDC × $1 = $12,500 |
| LP Value With IL | 4.082 WETH × $1,500 + 6,123.72 USDC × $1 = $12,247.45 |
| Impermanent Loss Value (ILV) | $252.55 |
## ILV Formula
The formula for impermanent loss value (ILV) is easily conceptualized as the value of the LP’s holdings at time T if they had just held their tokens, minus the value of their holdings after providing liquidity in the pool.
$ILV = V_{HODL} - V_{LP}$
Constant function AMMs use a formulaic approach to determine the price of an asset, and we can use these formulas to deterministically calculate the ILV incurred on an LP position. The input values required to make these ILV calculations are embedded in each NIL Contract.
For example, in constant product function AMMs (e.g. Uniswap V2 and forks), we can calculate the ILV of an AMM position if we know the starting and ending prices of Token A, and the starting value of the total LP position (in units of Token B).
$V_{HODL} = \frac{LP_{0}}{2}\left(\frac{P_{t}}{P_{0}} + 1\right) \qquad \text{and} \qquad V_{LP} = LP_{0}\,\sqrt{P_{t}/P_{0}}$

$ILV = \frac{LP_{0}}{2}\left(\frac{P_{t}}{P_{0}} + 1\right) - LP_{0}\,\sqrt{P_{t}/P_{0}}$
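As an illustration, here is a minimal sketch of these formulas in Python (the function name and structure are mine, not part of the NIL codebase), reproducing the WETH-USDC example above:

```python
import math

def impermanent_loss_value(lp0: float, p0: float, pt: float) -> float:
    """ILV for a constant-product AMM position.

    lp0: starting value of the total LP position, in units of token B
    p0:  starting price of token A (in token B)
    pt:  ending price of token A (in token B)
    """
    v_hodl = (lp0 / 2) * (pt / p0 + 1)   # value if the tokens had just been held
    v_lp = lp0 * math.sqrt(pt / p0)      # value of the LP position after the move
    return v_hodl - v_lp

# The WETH-USDC example above: $10,000 position, WETH moves $1,000 -> $1,500
print(round(impermanent_loss_value(10_000, 1_000, 1_500), 2))  # 252.55
```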
NIL Contracts are a new crypto derivative and DeFi primitive with a payout function that perfectly replicates the ILV incurred on a given AMM LP position. Participants can use NIL contracts to go long/short impermanent loss in order to generate returns or hedge their LP positions.
## Price Oracles
Because impermanent loss is incurred on movements of the exchange rate between two on-chain tokens, NIL uses AMM-specific TWAP price oracles (e.g. Uniswap V2 price oracle) to calculate ILV.
By using token price oracles and the data on a NIL Contract, the impermanent loss value incurred on an AMM LP position can be calculated at any point in time.
|
## 3.4 Ordinals
A set $T$ is transitive if $x\in T$ implies $x\subset T$. A set $\alpha$ is an ordinal if it is transitive and well-ordered by $\in$. In this case, we define $\alpha + 1 = \alpha \cup \{ \alpha \}$, which is another ordinal called the successor of $\alpha$. An ordinal $\alpha$ is called a successor ordinal if there exists an ordinal $\beta$ such that $\alpha = \beta + 1$. The smallest ordinal is $\emptyset$ which is also denoted $0$. If $\alpha$ is not $0$, and not a successor ordinal, then $\alpha$ is called a limit ordinal and we have
$\alpha = \bigcup \nolimits _{\gamma \in \alpha } \gamma .$
The first limit ordinal is $\omega$ and it is also the first infinite ordinal. The first uncountable ordinal $\omega _1$ is the set of all countable ordinals. The collection of all ordinals is a proper class. It is well-ordered by $\in$ in the following sense: any nonempty set (or even class) of ordinals has a least element. Given a set $A$ of ordinals, we define the supremum of $A$ to be $\sup _{\alpha \in A} \alpha = \bigcup _{\alpha \in A} \alpha$. It is the least ordinal greater than or equal to all $\alpha \in A$. Given any well-ordered set $(S, <)$, there is a unique ordinal $\alpha$ such that $(S, <) \cong (\alpha , \in )$; this is called the order type of the well-ordered set.
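To make the successor construction concrete, here is a small illustrative sketch (not part of the text above) that encodes the first few von Neumann ordinals as nested frozensets and checks transitivity:

```python
def successor(alpha: frozenset) -> frozenset:
    """alpha + 1 = alpha ∪ {alpha}"""
    return alpha | frozenset({alpha})

def is_transitive(t: frozenset) -> bool:
    """x ∈ T implies x ⊆ T"""
    return all(x <= t for x in t)

zero = frozenset()
one = successor(zero)    # {0}
two = successor(one)     # {0, 1}
three = successor(two)   # {0, 1, 2}

assert all(is_transitive(n) for n in (zero, one, two, three))
assert one in two and one <= two   # elements of an ordinal are also subsets of it
```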
Comment #971 by Fred Rohrer on
I suggest changing "well ordered" to "well-ordered" (or at least use the same spelling throughout). The other occurrences without a hyphen are in 00YP, 065T, 03C3, 09E0, 06RF and 06RG.
Comment #3567 by Christian Hildebrandt on
I would suggest changing $\geq$ to $\lt$ to keep the analogy with $\in$ .
|
## Non-linear feedbacks
I thought I would just briefly mention a recent paper by Bloch-Johnson, Pierrehumbert & Abbot called Feedback temperature dependence determines the risk of high warming. As I understand it, the basic idea is to consider what would happen if the feedback response has a temperature dependence. If the feedback response is linear, then you can estimate the feedback parameter, $\lambda$ (which sets the climate sensitivity), at any time using
$\lambda = -\dfrac{\Delta F}{\Delta T}.$
Credit : Figure 1 from Bloch-Johnson et al. (2015)
However, it is clear that feedbacks do have a temperature dependence. The Planck response ($dF = 4 \epsilon \sigma T^3 dT$) is clearly temperature dependent. What we don’t know – given all the feedbacks – is the strength of this possible non-linearity. What Bloch-Johnson et al. do is simply assume that the climate sensitivity has a non-linear term,
$-\Delta F = \lambda \Delta T + \alpha \Delta T^2.$
The figure to the right shows how different values of $\alpha$ influence the response to a change in forcing. The dashed line is the linear response. Negative $\alpha$ values reduce the response, while positive ones increase it. There are also combinations of $\alpha$ and $\lambda$ that could lead to runaway warming. Most GCMs, apparently, suggest that the linear approximation works well. There are some – as shown in the right-hand panel above – that do, however, indicate a non-negligible $\alpha$ value.
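To make the quadratic-feedback idea concrete, here is a minimal sketch (my own illustration, not code from the paper; the parameter values are made up) that solves $-\Delta F = \lambda \Delta T + \alpha \Delta T^2$ for the equilibrium warming and flags the runaway case:

```python
import math

def equilibrium_warming(forcing: float, lam: float, alpha: float) -> float:
    """Solve -dF = lam*dT + alpha*dT**2 for dT (lam < 0 is a stabilizing feedback).

    Raises if the forcing exceeds the fold (lam**2 < 4*alpha*forcing),
    i.e. no finite equilibrium exists.
    """
    if alpha == 0:
        return -forcing / lam                  # linear case
    disc = lam**2 - 4 * alpha * forcing
    if disc < 0:
        raise ValueError("no equilibrium: runaway response")
    # root that reduces to -forcing/lam as alpha -> 0
    return (-lam - math.sqrt(disc)) / (2 * alpha)

# Illustrative numbers only: lam = -1 W/m^2/K, dF = 3.7 W/m^2 (roughly 2xCO2)
for a in (-0.05, 0.0, 0.05):
    print(a, round(equilibrium_warming(3.7, -1.0, a), 2))
# negative alpha reduces the warming, positive alpha increases it
```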
Credit : Figure 3 from Bloch-Johnson et al. (2015)
As shown in the figure above, the paper also considers the impact of a possible non-linearity on observationally-based estimates of climate sensitivity. Negative values reduce both the climate sensitivity and its range, while positive values do the opposite. Something to bear in mind, though, is that most observationally-based analyses assume feedbacks are linear, and so – by definition – cannot be used to determine if they're not.
Anyway, that’s all I was going to say. As I understand it, the point of the paper is not to suggest that feedbacks will be non-linear, but to illustrate the impact of them being non-linear. Additionally it illustrates that such a non-linearity would not be evident in observationally-based studies. In a sense, it seems to be a Black swan type of argument. If feedbacks are non-negligibly non-linear, then this would not yet be evident, but could result in the probability of high climate sensitivity being much greater than we currently think. This is especially true if we do continue to follow a high emission pathway, which could ultimately much more than double atmospheric CO2.
Update : I had an email from Ray Pierrehumbert with some additional context. I’ve added it below. Bear in mind that the figure is simply illustrative of how a bifurcation might work, not from some kind of actual calculation.
Although the nonlinear term can be quite important even for mid-range IPCC type climate sensitivity, when you go out on the fat tail (say, 8C per doubling) then the nonlinearity becomes not just a modification of the story, but the WHOLE story — unless the world manages to limit radiative forcing to very small values. So, consideration of fat tails and nonlinearity/bifurcations are inseparable. Worse, when there is a bifurcation, the local analysis can tell you that you jump but it doesn’t tell you where you land — could be just a transition to a state a few degrees higher, but could be a Venus-type runaway (not that I think the latter is likely, but it can’t be settled based on the kind of local analysis people usually do). In other words, not just a black swan, but potentially a whole flock of black swans.
Figure from Ray Pierrehumbert
This sketch may be a useful visualization. The vertical axis represents the climate state (think global mean temperature); the horizontal represents the control knob (think CO2). The arrows represent two possible jumps, compatible with the same local behavior. As one gets close to the fold, the linear analysis becomes increasingly meaningless.
### 27 Responses to Non-linear feedbacks
1. There’s a subtlety to this whole non-linearity issue. This paper considers the impact of the feedbacks having an explicit temperature dependence. However, it is also likely (as pointed out in the Isaac Held post I link to here) that there will be regional variations that mean that a globally averaged feedback response could have an apparent temperature dependence. For example, if polar regions warm more rapidly as the system approaches equilibrium, then an observationally-based study that considers only the initial part of a warming period, will underestimate climate sensitivity even if the feedbacks aren’t actually non-negligibly non-linear. I hope I’ve explained that properly.
2. If I get you right, aTTP, what you’re saying is that, when compared with previous estimates, this paper suggests there’s the potential for more uncertainty with regards to eventual warming. This suggests that we need to be even more cautious than we aren’t being because we’re heading into a greater unknown.
Maybe our species will end up being named by some intelligent successor in future times ‘Homo Sapiens Inprudens’.
3. I just realized how closely my recent comment is related to the post of Isaac Held that you refer to.
I like indices that tell about temperatures in regions where most people live, and prefer excluding very sparsely populated regions that add disproportionately to the "random" variability in the calculated average temperatures, or that have other issues that make the value less well defined. The areas that I would exclude for those reasons include the Arctic, the Antarctic and parts of Siberia. (Thus I like HadCRUT better than the alternatives.) Problems that these regions have include the variable extent of ice cover and the extreme variability in surface temperatures caused by very common states of temperature inversion. Finland is northern enough to make it clear how unstable the surface temperature is during a wintertime inversion.
It's, of course, important to also study the areas that I would exclude from the temperature index, but the indicators used for those areas might be based on something other than the surface temperature. Any single index has its problems. In science there's no reason to restrict analysis to a single index (and it's seldom done).
4. John says:
Reblogged this on jpratt27.
5. John,
Yes, I think that’s roughly the situation. This is arguing that there could be fat tails, so it’s not just a high impact, low probability event, it could be a high impact event that is more likely than we currently think,
Pekka,
Any single index has it’s problems. In science there’s no reason to restrict analysis to a single index (and it’s seldom done).
Yes, I agree. However, there is always a balance between trying to present nice simple metrics that might not be ideal and introducing so much complexity that it’s hard to explain the overall picture.
6. Eli Rabett says:
Pekka, you are assuming that what happens in the sparsely populated areas has no effect on the heavily populated areas, also you are making an interesting argument wrt urban heat islands.
7. Arthur Smith says:
Just a note here – the Bloch-Johnson et al. figure 1 showing climate model nonlinear behavior is under the condition of an initial 32x CO2 forcing, so a very large forcing change – I expect almost any realistic model would be nonlinear at that level of change. Their figure 3 which you show is displaying climate model sensitivities under much less forcing – a 4x CO2 impulse (the bottom axis shows the delta T under 4x CO2 for a range of CMIP5 models).
8. Eli,
No I don’t, but the direct influence of the temperature is local. The indirect effects may be large, but they cannot be described well by the temperatures. I prefer using the temperature index for the temperature and discussing the other phenomena using measures that describe them like sea level of ocean acidification.
The other point that I have is that the index should be defined in a way that minimizes noise that may hide trends.
9. Arthur,
Yes, that’s a good point. One thing I think they were trying to also suggest is that currents GCMs probably can’t properly model such large changes anyway. They will probably be non-linear, but can they properly represent these non-linearities? As I understand it, a thrust of the paper is to argue that if we do increase anthropogenic forcings substantially, then these non-linearities will become non-negligible. Whether or not that means a small change from the linear expectation, or a large one, is what’s currently unknown.
10. “What Bloch-Johnson et al. do is simply assume that the climate sensitivity has a non-linear term,”
Of course they did.
Why not assume that term is alpha * T ^ 100 ???
What feedback physically is dependent on the square of temperature?
Soden and Held 2006 identify the major feedbacks as:
Water Vapor ( largest )
Clouds ( very uncertain )
Albedo ( smallest positive )
Lapse Rate ( negative )
Which ones are we suggesting are dependent on the square of temperature change?
What happens if that non-linearity is the square root rather than the square of temperature change?
11. TE,
Because if it’s non-linear, the next order is $\Delta T^2$, not $\Delta T^{100}$.
Consider the Taylor series of the Planck function.
$F(T + dT) = F(T) + \dfrac{dF}{dT} dT + \dfrac{1}{2} \dfrac{d^2F}{dT^2} dT^2 +....$
We know that $F(T) = \epsilon \sigma T^4$, so you can actually solve this to get $dF$
$dF = F(T + dT) - F(T) = 4 \epsilon \sigma T^3 dT + 6 \epsilon \sigma T^2 dT^2.$
So, if we want to keep the higher order terms in the Planck response, it has the form of $\alpha dT^2$. That would be true for any feedback. Of course, the coefficient of the higher order terms could be zero, but they do consider that possibility.
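A quick numerical check of this expansion (a sketch with illustrative values $\epsilon = 1$, $T = 288\,K$; not from the paper) shows the quadratic term recovering almost all of the exact Planck response:

```python
sigma, eps, T = 5.670e-8, 1.0, 288.0   # Stefan-Boltzmann constant, emissivity, K

def planck_flux(t):
    return eps * sigma * t**4

for dT in (1.0, 5.0, 10.0):
    exact = planck_flux(T + dT) - planck_flux(T)
    linear = 4 * eps * sigma * T**3 * dT
    quadratic = linear + 6 * eps * sigma * T**2 * dT**2
    print(f"dT={dT:>4}: exact={exact:.3f}  linear={linear:.3f}  +quadratic={quadratic:.3f}")
# at dT=10 the linear term alone misses ~3 W/m^2; adding the dT^2 term
# closes nearly all of the gap
```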
12. This is the non-linear impact of multiple doublings of CO2, inducing the water vapor rise along with it:
http://theoilconundrum.blogspot.com/2013/03/climate-sensitivity-and-33c-discrepancy.html
Here is an alternate differential form
http://theoilconundrum.blogspot.com/2012/03/co2-outgassing-model.html
13. John Hartz says:
14. Pekka.
I could not agree more. well put.
15. BBD says:
John H
The paper is about the impact of 'non-linear' feedbacks on climate sensitivity. If the strength of the feedback response is always directly proportional to the change in forcing, it is considered 'linear'. If instead it increases disproportionately because it is boosted by rising temperatures, then the feedback would be 'non-linear'. This is something that we would not see now, at lower temperatures. It is something that would not show up in so-called 'observational' estimates of sensitivity, biasing them low.
16. Yes, that’s a good point. One thing I think they were trying to also suggest is that currents GCMs probably can’t properly model such large changes anyway.
I’ve sometimes wondered at what point in climate forcing do non-linearities invalidate the extrapolations and calibrations I understand are sometimes done to make models calculable with reasonable mesh sizes and in reasonable times. In particular, when are historical calibrations of less use and ab initio physics all that’s left to be done, as incredibly difficult as that is. I don’t expect we are there now, but this may occur some time in the future. Did it occur during the Siberian coal-burning setup at the end of the Permian?
17. John Hartz says:
BBD: Your explanation is consistent with my understanding of the OP.
The unanswered question that I have is:
If temperature feedbacks are indeed nonlinear, will climate sensitivity remain constant or will it change over time?
18. Kevin O'Neill says:
“The areas that I would exclude for those reasons include Arctic, Antarctic and parts of Siberia.”
“The other point that I have is that the index should be defined in a way that minimizes noise that may hide trends.”
Given arctic amplification is *expected* – then wouldn’t excluding the arctic work to increase noise at the expense of the trend?
19. BBD says:
John H
If temperature feedbacks are indeed nonlinear, will climate sensitivity remain constant or will it change over time?
Sensitivity will change as it warms. It will increase.
20. Andrew Dodds says:
BBD –
Or it may be that the range of possible stable climate states is not continuous. Which would at least explain why arriving at a narrowly defined value for ECS is probably impossible.
21. BBD says:
Andrew
Yes, that’s certainly possible and climate behaviour across the Cenozoic could even be seen as suggestive that this is the case. Hyperthermals, the Oi-1 glaciation, late Oligocene warming, Mi-1 glaciation and the Plio-Pleistocene glacial cycles all hint at inherent instability and thresholds.
22. Brian Dodge says:
“The study of past warm climates may not narrow uncertainty in future climate projections in coming centuries because fast climate sensitivity may itself be state-dependent, but proxies and models are both consistent with significant increases in fast sensitivity with increasing temperature.”
State-dependent climate sensitivity in past warm climates and its implications for future climate projections; Rodrigo Caballero and Matthew Huber; http://www.pnas.org/content/110/35/14162.full
I also wonder about rate dependence; would the response curve of significant climate variables (surface & lower troposphere temperature, sea surface temperature, ice volumes/extents/seasonality, sea level) be nonlinearly different for different rates of CO2 increase, and how long would it take for the curves to converge? If you compared an instantaneous CO2 doubling to scenarios taking 100 or 500 years, would there be overshoots, lags, black swans that have policy implications? The paleoclimate proxies indicate thousands of years for collapse of major ice sheets. Larsen B in past high-CO2 regimes may have been slowly nibbled away as temperature rose, but the current rate of change supported surface ponding of meltwater and catastrophic collapse. If the rate of melting from a faster rise in seawater temperature at the edge of major ice sheets drives the calving front into areas where the ground slopes inland, can paleoclimate response preclude major nonlinearities? If the change in speed of ice flow towards the calving front lags the rate at which ice removal at the front is changing, then the front will retreat into the ice sheet. If the temperature rises slowly, the rate of calving, offset by an increase in the rate of ice flow, could maintain a constant calving front. If the temperatures rise quickly, the calving front could move into areas where the dynamics change. E.g.
“The collapse of the Western Antarctica ice sheet is already under way and is unstoppable, two separate teams of scientists said on Monday.”
“But the researchers said that even though such a rise could not be stopped, it is still several centuries off, and potentially up to 1,000 years away.”
What if it’s 100 years away?
http://www.sciencemag.org/content/348/6237/899.full
“Growing evidence has demonstrated the importance of ice shelf buttressing on the inland grounded ice, especially if it is resting on bedrock below sea level. Much of the Southern Antarctic Peninsula satisfies this condition and also possesses a bed slope that deepens inland. Such ice sheet geometry is potentially unstable. We use satellite altimetry and gravity observations to show that a major portion of the region has, since 2009, destabilized. Ice mass loss of the marine-terminating glaciers has rapidly accelerated from close to balance in the 2000s to a sustained rate of –56 ± 8 gigatons per year, constituting a major fraction of Antarctica’s contribution to rising sea level. The widespread, simultaneous nature of the acceleration, in the absence of a persistent atmospheric forcing, points to an oceanic driving mechanism.”
Would you bet your beach house that it doesn’t also point to a Larsen B style Black Swan (birds of black feather?)?
23. John Hartz says:
For a brief, plain-English summary of the recent paper by Bloch-Johnson, Pierrehumbert & Abbot paper and its implications see:
How the harm of climate change could explode exponentially down the road by Ryan Cooper, The Week, June 4, 2015
|
## Comparing Theories to more traditional testing
My old work colleague Tim has recently blogged about using NSpec to specify a stack.
NSpec has the same sort of functionality as a unit testing framework such as NUnit. The terminology has been changed to get over the roadblock that some people have in adopting tests.
Theories actually give something over and above normal unit testing, and that's what I'm going to look at in this blog post. I'll take Tim's example and show how using theories differs from his more traditional approach.
The stack interface for which the implementation was arrived at via speccing is as follows:
```csharp
public class Stack<T>
{
    public Stack();
    public void Clear();
    public bool Contains(T item);
    public T Peek();
    public T Pop();
    public void Push(T item);

    // Properties
    public int Count { get; }
}
```
The following tests were arrived at:
```csharp
namespace Stack.Specs
{
    [Context]
    public class WhenTheStackIsEmpty
    {
        Stack<int> _stack = new Stack<int>();

        [Specification]
        public void CountShouldBeZero()
        {
            Specify.That(_stack.Count).ShouldEqual(0);
        }

        [Specification]
        public void PeekShouldThrowException()
        {
            MethodThatThrows mtt = delegate() { _stack.Peek(); };
            Specify.ThrownBy(mtt).ShouldBeOfType(typeof(InvalidOperationException));
        }
    }
}
```
That’s ample for us to discuss the difference between theories and more normal testing.
For the PeekShouldThrowException test/specification, we can see from the naming of the context that the developer intends to show that for an empty stack, the Peek operation throws an exception. However, what the developer has actually shown is that calling Peek on a newly-created stack throws an exception.
Developers tend to think in fairly general terms, and express this generality by using more specific cases. However, some of this generality can get lost. Theories aim to keep more of that generality.
We can demonstrate this in a theory (don’t take much note of the syntax, just the concepts)
```csharp
[Theory]
public void PeekOnEmptyStackShouldThrow(Stack<int> stack)
{
    try
    {
        stack.Peek();
        Assert.Fail(ExpectedExceptionNotThrown);
    }
    catch (InvalidOperationException) { }
}
```
This states that calling Peek() on ANY stack should fail; we need to show that this is only true for an empty stack. We could do this by simply checking for it inside the theory:
```csharp
[Theory]
public void PeekOnEmptyStackShouldThrow(Stack<int> stack)
{
    try
    {
        if (stack.Count == 0)
        {
            stack.Peek();
            Assert.Fail(ExpectedExceptionNotThrown);
        }
    }
    catch (InvalidOperationException) { }
}
```
But as we’ll see in a bit, using assumptions gives us some extra feedback (again, don’t focus on the syntax).
```csharp
[Theory]
[Assumption("AssumeStackIsEmpty")]
public void PeekOnEmptyStackShouldThrow(Stack<int> stack)
{
    try
    {
        stack.Peek();
        Assert.Fail(ExpectedExceptionNotThrown);
    }
    catch (InvalidOperationException) { }
}

public bool AssumeStackIsEmpty(Stack<int> stack)
{
    return stack.Count == 0;
}
```
This is a much more general statement than the original specification/test: we're saying that Peek should fail for ANY empty stack.
We don’t care whether this is a newly-created stack, or it is a stack which has been manipulated via its public interface. Also, Liskov Substitution Principle states that we should be able to use any classes derived from Stack, and the theories should hold true.
We validate this theory with example data, in much the same way as when we’re doing test-driven development. The extra power comes from the generality in the way that the theory is written – we can imagine a tool that performs static code analysis on the Stack class to confirm that it obeys this.
However, the literature mentions that the most likely way to validate a theory is via an exploration phase, using a plug-in tool that will try various combinations of input data to look for anything that fails the theory.
It is prohibitively expensive to explore every possible combination of inputs: imagine all the possible values of a double, or, in our example, the infinite number of operation sequences that could have been applied to a stack before it is passed in.
This fits in nicely with the name 'theory', paralleling science: it's not feasible to prove a theory, but we can look for data to disprove it.
The example data is important for the red-green-refactor cycle. The exploration phase sits outside that – it finds which input data doesn't fit the theory, allowing the theory to be modified. There are exploration tools in Java, and I haven't looked too much into it, but it may be possible to use Microsoft's Pex as an exploration tool?
Before I forget, this is a possible way to specify the example data for our stack:
```csharp
[Theory]
[Assumption("AssumeStackIsEmpty")]
[InlineData("EmptyStack", new Stack<int>())]
[PropertyData("EmptiedStack")]
public void PeekOnEmptyStackShouldThrow(Stack<int> stack)
{
    try
    {
        stack.Peek();
        Assert.Fail(ExpectedExceptionNotThrown);
    }
    catch (InvalidOperationException) { }
}

public List<ExampleData> EmptiedStack
{
    get
    {
        List<ExampleData> data = new List<ExampleData>();
        Stack<int> stack = new Stack<int>();
        stack.Push(2);
        stack.Push(3);
        stack.Pop();
        stack.Pop();
        data.Add(new ExampleData(stack));
        return data;
    }
}
```
In my prototype extension, the assumptions are important and are validated, as they tell us something vital about the code. I think that all the information about the behaviour of the system is vital, and should be documented and validated, but there are varied opinions on the list. That’s why I’m blogging – give me your feedback 🙂
If the user changed the behaviour of Peek() such that it was valid on an empty stack (it may return a Null Object for certain generic types), then our assumption would not detect this if it was simply filtering the data – the assumption would say “Peek() fails, but only on empty stacks”, whereas Peek() would not fail on empty stacks. See my previous post for the behaviours I have implemented.
Notice in Tim’s implementation how his stack is hardcoded to have at most 10 items. When TDDing we may make slightly less obviously limited implementations to get our tests to pass, but forget to add the extra test cases to show this limitation (the process of progressively adding more and more general test cases is called triangulation).
When writing theories, the same process happens, but writing the theories as a more general statement means that a code reviewer or automated tool can see that the developer intended that we can push a new item onto ANY stack, not just a stack that contained 9 or fewer items.
Any thoughts? Have I got the wrong end of the stick? If anyone found this post useful, I might full flesh out the equivalent of Tim’s example.
## Sample Theory Implementation as NUnit Extension.
There’s been lots of comments bouncing around on the NUnit mailing list about what exactly constitutes a Theory, and what the desired features are, so I’ve created an NUnit extension with a sample Theory implementation – you can get it, Maslina version 1.0.0.0, from www.taumuon.co.uk/rakija
xUnit.Net implements theories but does not have any in-built Assumption mechanism (you can effectively filter out bad data, which is the same as a filtering assumption). JUnit 4.4, I think, only filters out data – it doesn’t tell us anything about the state of an assumption.
Anyway, from reading the literature on theories (see my previous blog posting), I quite like the idea of having assumptions tell us something about the code, that those assumptions are validated.
The syntax of my addin is quite poor, and there’s not really enough validation of user input, but I’m aiming to try to do some theory-driven development (theorizing?) using it, to see what feels good and what grates.
Any feedback gratefully received (especially – is it valid to say that this is an implementation of a Theory, are validation of assumptions useful or unnecessary fluff?)
Here is the syntax of my extension.
```csharp
[TestFixture]
public class TheorySampleFixture
{
    [Theory]
    [PropertyData("MyTestMethodData")]
    [InlineData("Parity", new object[] { 1.0, 1.0, 1.0 })]
    [InlineData("Parity 2", new object[] { 2.0, 2.0, 1.0 })]
    [InlineData("Double Euros", new object[] { 2.0, 1.0, 2.0 })]
    // This does not match the assumption, and will cause this specific
    // theory Assert to fail, in which case we will get a pass overall.
    // If the unit under test were changed to somehow handle zero exchange
    // rate, the body of the theory method would pass, but the assumption
    // would still not be met and overall we will register a failure.
    [InlineData("ExchangeRate Assumption Check", new object[] { 2.0, 1.0, 0.0 })]
    // This case will fail: there is an assumption that the dollar value is
    // not three, but passing in a value of 3 doesn't cause a failure in the
    // code, demonstrating that the assumption serves no purpose.
    [InlineData("This should fail, assumption met but no failure in method", new object[] { 3.0, 1.0, 3.0 })]
    [Assumption("ConvertToEurosAndBackExchangeRateIsNotZero")]
    [Assumption("DollarsNotThree")]
    public void MyTheoryCanConvertToFromEuros(double amountDollars, double amountEuros, double exchangeRateDollarsPerEuro)
    {
        // Should check the values are equivalent within a tolerance.
        Assert.AreEqual(amountDollars,
            Converter.ConvertEurosToDollars(
                Converter.ConvertDollarsToEuros(amountDollars, exchangeRateDollarsPerEuro),
                exchangeRateDollarsPerEuro));
    }

    // Assumption: the exchange rate is not zero.
    public bool ConvertToEurosAndBackExchangeRateIsNotZero(double amountDollars, double amountEuros, double exchangeRateDollarsPerEuro)
    {
        // Should have a tolerance on this.
        return exchangeRateDollarsPerEuro != 0.0;
    }

    // Assume that the dollar value is not equal to three.
    // This is just to demonstrate that an invalid assumption results in a failure.
    public bool DollarsNotThree(double amountDollars, double amountEuros, double exchangeRateDollarsPerEuro)
    {
        return amountDollars != 3.0;
    }

    /// Returns the data for MyTestMethod.
    public IList MyTestMethodData
    {
        get
        {
            List<TheoryExampleDataDetail> details = new List<TheoryExampleDataDetail>();
            details.Add(new TheoryExampleDataDetail("Some other case should pass", new object[] { 2.0, 20.0, 5.0 }));
            return details;
        }
    }
}

public static class Converter
{
    public static double ConvertEurosToDollars(double amountEuros, double dollarsPerEuro)
    {
        return amountEuros * dollarsPerEuro;
    }

    public static double ConvertDollarsToEuros(double amountDollars, double dollarsPerEuro)
    {
        return amountDollars / dollarsPerEuro;
    }
}
```
A nicer syntax/api would be to have the assumptions inline:
```csharp
public void CanConvertToEurosAndBack(double amountDollars, double amountEuros, double exchangeRateDollarsPerEuro)
{
    Assume.That(exchangeRateDollarsPerEuro != 0.0);
    Assume.That(amountDollars != 0.0);

    // Checks the values are equivalent within a tolerance.
    Assert.AreEqual(amountDollars,
        Converter.ConvertEurosToDollars(
            Converter.ConvertDollarsToEuros(amountDollars, exchangeRateDollarsPerEuro),
            exchangeRateDollarsPerEuro));
}
```
Here’s the rules of my Theory Implementation
If there is no example data, the theory passes (we may want to change this in the future).
If there are no assumptions for a theory, then each set of example data is executed against the theory, each producing its own pass or fail.
If assumptions exist, then each set of data is first validated against the assumptions – if it meets them, the test proceeds and any test failure is flagged as an error.
If the example data does not meet the assumptions, then a passing test indicates that the assumption is invalid, and that case is marked as a failure with the specific message "AssumptionFailed". Any assertion failures or exceptions in the actual theory code are treated as passes. (In the future, would we want to mark the specific exception expected in the test method if an assumption is not met?)
NOTE: we may want to mark as a failure any theory for which ALL example data fails the assumptions, as a check that the actual body of the theory is actually being executed. I've not done this for now as it would be trickier with the current NUnit implementation.
Similarly, I was thinking of failing if any of the assumptions weren’t actually executed, but again, this is tricky in the current NUnit implementation (and may not give us much).
Automated exploration would not follow the last two suggested rules. The automation API would need to generate its data and execute it as if it were inline data. It may be helpful for the automated tool to be able to retrieve the user-supplied example data, so it doesn’t report a failure for any known case, but this is probably not necessary.
Feedback on these rules would be most welcome. If you want to change the behaviour of the assumptions (i.e. have assumptions only filter and nothing more), then the behaviour can be changed in TheoryMethod.RunTestMethod()
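To make these rules easier to follow, here is a small pseudocode-style sketch (written in Python for brevity – the extension itself is an NUnit/C# addin) of the assumption-validation logic described above:

```python
def run_theory(theory, assumptions, example_sets):
    """Hypothetical runner illustrating the assumption-validation rules above."""
    results = []
    for name, args in example_sets:
        meets_assumptions = all(assume(*args) for assume in assumptions)
        try:
            theory(*args)
            body_passed = True
        except AssertionError:
            body_passed = False

        if meets_assumptions:
            # Data satisfies the assumptions: the theory body must pass.
            results.append((name, "pass" if body_passed else "fail"))
        else:
            # Data violates an assumption: the body is expected to fail.
            # If it passes anyway, the assumption told us nothing useful.
            results.append((name, "fail: AssumptionFailed" if body_passed else "pass"))
    return results
```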
Here’s the output of the above theory:
## Theories
I’ve just released a slightly updated version of my NUnit extension for data-driven unit testing.
There’s been a lot of discussion on the NUnit developer list recently regarding Theories – something new in JUnit and xUnit.Net, and it’s taken a while to discover why they’re so powerful (they’re superficially very similar to data-driven unit tests, and a lot of the differences are semantics).
First, there’s some good background on theories written by David Saff:
http://shareandenjoy.saff.net/tdd-specifications.pdf
http://shareandenjoy.saff.net/2007/04/popper-and-junitfactory.html
http://dspace.mit.edu/bitstream/1721.1/40090/1/MIT-CSAIL-TR-2008-002.pdf
Theories on first glance look like a data-driven unit test, but I think that the most important difference is, is that:
Theories are, in theory (excuse the pun), supposed to pass for ANY POSSIBLE parameters, whereas data-driven tests only express the behaviour examples that the developer has provided (they are nothing new in unit testing – just a way for a developer to more clearly group parameters together, or get the parameterized data from an external data source without recompiling tests).
Theories are a generalized statement of how the program should run, whereas in TDDing, a very explicit statement of intent is made, which can be made to pass by coding that specific case in the implementation; the program is then made to work by triangulation – expressing the generalization by giving more inputs. However, the theory literature points out that since we haven't passed in many data points, we can't be sure whether we've actually expressed what we meant.
Theories, by forcing us to write our tests such that they take any inputs, are much more powerful a statement, and allow for the possible inputs to be explored with external tools.
As an aside, one question I posted to the NUnit developer list regarding theories: "One thing that comes to mind, is that theories are written such that all possible inputs should pass. Apart from using a tool such as Agitator, is there a way to test that the tests are written in a general way (I mean, if you had a theory that took parameters, but it totally ignored those parameters and worked as a vanilla unit test – i.e. created its own input), then it's not really a valid theory – is there a way to detect these cases? Probably not, but I was just idly wondering." Answers on a postcard to… well, I'd prefer a reply comment 😉
|
# In the two-wattmeter method of 3ϕ power measurement, if the phase sequence of the supply is reversed:
## Options:
1. one of the meters will show a negative reading
2. the meters will not read
3. there won't be a change in meter readings
4. the reading of wattmeters will be interchanged
### Correct Answer: Option 4 (Solution Below)
This question was previously asked in
DMRC JE EE 2018 Official Paper 3
## Solution:
Two wattmeter method:
The connection diagram for the two-wattmeter method is shown below.
$$W_1 = I_R V_{RB} \cos\left(I_R \wedge V_{RB}\right)$$

$$W_2 = I_Y V_{YB} \cos\left(I_Y \wedge V_{YB}\right)$$

From the phasor diagram:

$$I_R \wedge V_{RB} = 30^\circ - \phi \qquad I_Y \wedge V_{YB} = 30^\circ + \phi$$

Therefore:

$$W_1 = I_R V_{RB} \cos\left(30^\circ - \phi\right) = V_L I_L \cos\left(30^\circ - \phi\right)$$

$$W_2 = I_Y V_{YB} \cos\left(30^\circ + \phi\right) = V_L I_L \cos\left(30^\circ + \phi\right)$$
If the phase sequence of the supply is reversed, the two phase angles are swapped ($30^\circ - \phi \leftrightarrow 30^\circ + \phi$), so the readings of the wattmeters will be interchanged.
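A quick numerical sketch (illustrative values only: $V_L = 400$ V, $I_L = 10$ A, $\phi = 30^\circ$) makes the interchange visible:

```python
import math

def wattmeter_readings(VL, IL, phi_deg, reversed_sequence=False):
    """Two-wattmeter readings for a balanced three-phase load.

    W1 = VL*IL*cos(30 - phi), W2 = VL*IL*cos(30 + phi); reversing the
    phase sequence swaps the two angles, so the readings interchange.
    """
    a, b = 30 - phi_deg, 30 + phi_deg
    if reversed_sequence:
        a, b = b, a
    w1 = VL * IL * math.cos(math.radians(a))
    w2 = VL * IL * math.cos(math.radians(b))
    return w1, w2

print(wattmeter_readings(400, 10, 30))        # ~(4000.0, 2000.0) -> total 6000 W
print(wattmeter_readings(400, 10, 30, True))  # ~(2000.0, 4000.0): interchanged
```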
|
## The present single-ionization cross section of Sn<sup>8 +</sup> compared to the results of the present CADW calculations
Posted 2013-08-07.

**Figure 7.** The present single-ionization cross section of Sn$^{8+}$ compared to the results of the present CADW calculations. Same notation as in figure 2. One set of brackets with arrows denotes energy ranges where REDA processes involving 3d-subshell excitations are to be expected; the other set denotes energy ranges where REDA processes involving both 3d- and 3p-subshell excitations are to be expected. The dark-shaded area at the bottom of the graph represents the total ionization contribution of the 4d$^5$4f excited-ion-beam component with an estimated fraction of 0.6%.

**Abstract.** Electron-impact single-ionization cross sections of Sn$^{q+}$ ions in charge states $q$ = 4–13 with $4d^{10-(q-4)}$ outer-shell configurations have been studied in the energy range from the corresponding thresholds up to 1000 eV. Absolute cross sections and fine-step energy-scan data have been measured employing the crossed-beams technique. Contributions of different ionization mechanisms have been analysed by comparing the experimental data with calculations employing the configuration-averaged distorted-wave approximation. Ionization plasma rate coefficients inferred from the experimental data are also presented.
|
# What am I doing when I separate the variables of a differential equation?
I see an equation like this:

$\frac{dy}{dx} = g(x)\,h(y)$

and solve it by "separating variables" like this:

$\int \frac{dy}{h(y)} = \int g(x)\,dx.$

What am I doing when I solve an equation this way? Because $\frac{dy}{dx}$ actually means

$\lim_{\Delta x \to 0} \frac{\Delta y}{\Delta x},$

the $dy$ and $dx$ are not really separate entities I can multiply around algebraically.
I can check the solution when I’m done this procedure, and I’ve never run into problems with it. Nonetheless, what is the justification behind it?
What I thought of doing in this particular case is to write

$\int \frac{1}{h(y(x))}\,\frac{dy}{dx}\,dx = \int g(x)\,dx,$

then by the fundamental theorem of calculus the left-hand side equals $\int \frac{dy}{h(y)}$.

Is this correct? Will such a procedure work every time I can find a way to separate variables?
The basic justification is that integration by substitution works, which in turn is justified by the chain rule and the fundamental theorem of calculus.
More specifically, suppose you have:

$\frac{dy}{dx} = g(x)\,h(y).$

Rewrite as:

$\frac{1}{h(y)}\,\frac{dy}{dx} = g(x).$

Add the implicit dependency of $y$ on $x$ to obtain

$\frac{1}{h(y(x))}\,\frac{dy}{dx} = g(x).$

Now, integrate both sides with respect to $x$:

$\int \frac{1}{h(y(x))}\,\frac{dy}{dx}\,dx = \int g(x)\,dx.$

If we do a variable substitution of $y$ for $x$ on the left-hand side (i.e., use the integration by substitution technique), we replace $\frac{dy}{dx}\,dx$ with $dy$. Thus we have

$\int \frac{dy}{h(y)} = \int g(x)\,dx,$

which is the separation of variables formula.
So if you believe integration by substitution, then separation of variables is valid.
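As a quick sanity check (a sketch using SymPy; the concrete right-hand side $g(x)h(y) = xy$ is just an example), symbolic solving matches what separation of variables gives:

```python
import sympy as sp

x = sp.symbols('x')
y = sp.Function('y')

# Example separable equation: dy/dx = x*y, i.e. g(x) = x and h(y) = y
ode = sp.Eq(y(x).diff(x), x * y(x))
print(sp.dsolve(ode))  # Eq(y(x), C1*exp(x**2/2))

# Separation gives the same result: ∫ dy/y = ∫ x dx  =>  ln|y| = x**2/2 + C
```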
|
## Near-rings with identity
Abbreviation: NRng$_1$
### Definition
A near-ring with identity is a structure $\mathbf{N}=\langle N,+,-,0,\cdot,1 \rangle$ of type $\langle 2,1,0,2,0\rangle$ such that
$\langle N,+,-,0,\cdot\rangle$ is a near-ring
$1$ is a multiplicative identity: $x\cdot 1=x$ and $1\cdot x=x$
##### Morphisms
Let $\mathbf{M}$ and $\mathbf{N}$ be near-rings with identity. A morphism from $\mathbf{M}$ to $\mathbf{N}$ is a function $h:M\rightarrow N$ that is a homomorphism:
$h(x+y)=h(x)+h(y)$, $h(x\cdot y)=h(x)\cdot h(y)$, $h(1)=1$
Remark: It follows that $h(0)=0$ and $h(-x)=-h(x)$.
### Examples
Example 1: $\langle\mathbb{R}^{\mathbb{R}},+,-,0,\cdot,1\rangle$, the near-ring of functions on the real numbers with pointwise addition, subtraction, zero, composition, and the identity function.
### Basic results
$0$ is a one-sided zero for $\cdot$: $0\cdot x=0$ always holds, but $x\cdot 0=0$ can fail in a general near-ring (in Example 1, $f\cdot 0 = f\circ 0$ is the constant function with value $f(0)$, which need not be $0$).
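A small numerical sketch (illustrative, using Example 1's composition near-ring) shows right distributivity holding while $x \cdot 0 = 0$ fails:

```python
# Functions R -> R under pointwise + and composition form a near-ring.
f = lambda t: t + 1.0
g = lambda t: 2.0 * t
h = lambda t: t * t
zero = lambda t: 0.0

add = lambda p, q: (lambda t: p(t) + q(t))
mul = lambda p, q: (lambda t: p(q(t)))   # multiplication = composition

t = 3.0
# right distributivity (f+g)·h = f·h + g·h holds:
assert mul(add(f, g), h)(t) == add(mul(f, h), mul(g, h))(t)
# 0·x = 0 holds, but x·0 = 0 fails when x does not fix 0:
assert mul(zero, f)(t) == 0.0
print(mul(f, zero)(t))   # 1.0, not 0.0
```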
### Properties
| Property | Value |
| --- | --- |
| Classtype | variety |
| Equational theory | decidable |
| Locally finite | no |
| Residual size | unbounded |
| Congruence distributive | no |
| Congruence modular | yes |
| Congruence $n$-permutable | yes, $n=2$ |
| Congruence regular | yes |
| Congruence uniform | yes |
### Finite members
$\begin{array}{lr} f(1)= &1\\ f(2)= &\\ f(3)= &\\ f(4)= &\\ f(5)= &\\ f(6)= &\\ \end{array}$
|
# Vertex Formula
In geometry, a vertex is a point where two or more curves, lines, or edges meet. As a consequence of this definition, the point where two lines meet to form an angle and the corners of polygons and polyhedra are vertices.
For example, a square has four corners, each corner is called a vertex. The plural form of the vertex is vertices. The word vertex is most commonly used to denote the corners of a polygon.
When two lines meet at a vertex, they form an included angle. For polygons, the included angle at each vertex is an interior angle of the polygon. Vertex is also sometimes used to indicate the ‘top’ or high point of something, such as the vertex of an isosceles triangle, which is the ‘top’ corner opposite to its base, but this is not its strict mathematical definition.
The Vertex Formula for a parabola $y = ax^{2} + bx + c$ is given as,
$\large Vertex=\left(h,\:k\right)=\left(\frac{-b}{2a},c-\frac{b^{2}}{4a}\right)$
### Solved Examples of Vertex
Example: Find the vertex of the parabola: $y = 3x^{2} + 12x – 12$
Solution:
Given,
a = 3
b = 12
c = -12
So, the x-coordinate of the vertex is:

$x = -\frac{b}{2a} = -\frac{12}{2(3)} = -2$

The y-coordinate is:

$y = \frac{4ac - b^{2}}{4a} = \frac{4(3)(-12) - (12)^{2}}{4(3)} = \frac{-144 - 144}{12} = \frac{-288}{12} = -24$
Alternatively, substitute x = -2 in the original equation to get the y-coordinate:

$y = 3(-2)^{2} + 12(-2) - 12 = 12 - 24 - 12 = -24$
So, the vertex of the parabola is at (-2, -24).
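A minimal sketch of the vertex formula as code (the function name is illustrative), reproducing the worked example:

```python
def vertex(a: float, b: float, c: float) -> tuple:
    """Vertex (h, k) of the parabola y = ax^2 + bx + c."""
    h = -b / (2 * a)
    k = c - b**2 / (4 * a)
    return h, k

print(vertex(3, 12, -12))   # (-2.0, -24.0)
```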
|
Post Details
area of semicircle calculator
The semicircle is half of the circle and hence the area of semicircle is half the area of circle.Let us learn here how to find the area of a semicircle. The area of the shaded section is 113.076mm^2. The tool works as semicircle perimeter calculator as well - e.g., if you want to braid the rug, you can calculate how much lace you'll need. Program to find area of a Trapezoid. If you know the length of the radius, you can calculate its perimeter using the following formula: Enter the radius and total degrees of the rotation (arc) of the semi circle to calculate the area. Last modified by . And the areas? The area is the number of square units enclosed by the sides of the shape. To find the area of the circle use A=π×r 2. The degrees of rotation is the angle between end line/points. The area of a semicircle is half of the area of the circle. So, the formula for the area of a semicircle is A = pi * r ^2/2. KurtHeckman. Area of largest semicircle that can be drawn inside a square. A semi-circle is half of a circle. This free area calculator determines the area of a number of common shapes using both metric units and US customary units of length, including rectangle, triangle, trapezoid, circle, sector, ellipse, and parallelogram. That formula is . Now we know that area of a semi-circle formula is pi times the radius squared then we’re going to divide by 2. The area of a semicircle is half the area of a circle. BYJU’S online semicircle calculator tool performs the calculation faster, and it displays the area in a fraction of seconds. This is because, a semi-circle is just the half of a circle and hence the area of a semi-circle is the area of a circle divided by 2. This shape is 200 ft wide and an average of 20 feet deep. Calculate Reset. Home. Therefore, the area of the semicircle is given by. The value of π is 3.14 or 22/7. Question 1: For a vehicle having wheels of radius 24cm find the distance covered by it in one complete revolution of wheels. The area of a semicircle is the space contained by the circle. Calculations at a semi-ellipse. It doesn’t matter if there is a triangle in the middle of it. If this is not the case, then you must find the angle by measuring it.For this example we will assume it is in fact a semi circle, so the angle is 180. he next step is to measure or calculate the radius.For this example we are going to assume the diameter of the semi circle is given at 2 meters (m). And note the value. Mar 4, 2019, 11:46:05 PM (r) Radius of Circle (r) Radius of Circle . The formula for the area, A, of a circle is built around its radius. Area of a Semi-Circle: The first step is to ensure that the area you are looking to calculate is in fact a semi circle. Use this calculator to easily calculate the area of a circle, given its radius in any metric: mm, cm, meters, km, inches, feet, yards, miles, etc. Of course, you'll get the same result when using sector area formula. Quadrant area: πr² / 4 To find the area of a semi-circle, you need to know the formula for the area of a circle. Using the formula we find the area of this semi circle is 1.570 meters squared. Task 2: Find the area of a circle given its diameter is 12 cm. Circle Area Calculator; Circle Circumference Calculator; Diameter Calculator. By using this website, you agree to our Cookie Policy. Example 2 : Find the area of the semicircle whose radius is 3.5 cm. 
So, the formula for the area of a semicircle is:Area=πR22where: R is the radius of the semicircle π is Pi, approximately 3.142 We know that the … We’ll use 3.14 as an approximation of pi. As the area of a circle is πr 2. We can find the perimeter of a semicircle with the help of this below formula: where, R = Radius of the semicircle Use our below online perimeter of a semicircle calculator to find the perimeter. Area of a semicircle. Area of SemiCircle. Sign-Up Today! Area of Semicircle. Area of SemiCircle. Area of Semi-Circle. Henceforth, this formula will be used in the upcoming Java programs to find the area of a semicircle. (A stands for the area of the circle, π is Pi and r is … r = d/2 = 10/2 = 5 Now we will plug the radius into the formula. The area of a semicircle can be calculated using the following formula: if you know the length of the radius. For more on this seeVolume of a horizontal cylindrical segment. P = \frac{1}{2} πd + d. where d is the diameter of the semi circle. The perimeter of a semicircle can be calculated using the following formula: if you know the length of the radius. Area of Half Circle Calculator. Radius and diameter refer to the original circle, which was bisected through its center.Enter one value and choose the number of decimal places. Area of a Semi-Circle Formula: A = (1/2) * π * r 2 Where, A = Area of Semicircle r = Radius Related Calculator: Area of a Semicircle Calculator; The area of the semicircle can be derived from the area of the circle. For this semicircle: $A = \frac{1}{2} \pi r^{2}$ $= \frac{1}{2}\times \pi \times 4 \times 4$ Since we know that a semicircle is half of a circle, we can simply divide that equation by two to calculate the area of a semicircle. $\begingroup$ Just to check - do you perhaps mean the diameter of the semi-circle is 2.5, and that it is located 3.9 units above the horizontal? To find the radius we must divide the diameter by 2. Example diagram. The area of a shape is the amount of space that it encloses. Vector Calculator (3D) Cost per Round (ammunition) Microeconomics Calculator; Weapon Physical DPS Calculator; Midpoint Method for Price Elasticity of Demand; Torispherical Head - Volume; Signal-to-Noise Ratio(dB) Percent by Mass (Weight Percent) Semicircle - Volume Area and is denoted by A symbol. So, the area of a semicircle is 1/2(πr 2 ), where r is the radius. Task 1: Given the radius of a cricle, find its area. Hence the area of a semi-circle is just the half of the area of a circle. The area of a semicircle is the space contained by the circle. So, the area of a semicircle is 1/2(πr 2 ), where r is the radius. As the area of a complete circle is πR 2 then going by the unitary method the area of a semi-circle will be πR 2 /2. It doesn’t matter if there is a triangle in the middle of it. In our case, the perimeter equals 10.28 ft. The area of a semicircle is half of the area of the circle. The formula for the volume of a semicircle is: V = ½•π•r²•h . We know the formula to calculate area of a circle is πr^2, by dividing this by 2 we will get the area of a semicircle. he final step is the plug all of the information into the formula, or simply use the calculator. A different formula is required to work out the perimeter of a semi circle (a circle cut in half), because it consists of a curved edge as well as a straight edge. Use our free online area of a semicircle calculator to find the area of a semi-circle using its radius. 
Let’s say we were given this as an example and let’s say they said that the radius for this was 10. If you draw two opposing line segments from the circle's origin to the edge, you just drew the diameter. acf3eb11-3ed7-11e9-8682-bc764e2038f2. $\endgroup$ – Peter Woolfitt Jul 25 '15 at 3:05 $\begingroup$ Yes, diameter is 2.5, and it is 3.9 above the horizontal. Additional Information A semi-circle (or semicircle) is simply one-half of circle. How to Find the Area of a Semicircle Formula Knowing the radius. b.… Example: find the area of a circle. Given any one variable A, C, r or d of a circle you can calculate the other three unknowns. 03, Dec 17. Solution : Area of Semicircle = (1/2) π r 2. Remember Pi Constant; Apply the number pi (π) to the formula. "Our mission is to provide the construction industry with a tool to save time and money through simplicity when dealing with complex formulas and calculations in the field." Calculate Reset. So, the formula for the area of a semicircle is: where: R is the radius of the semicircle π is Pi, approximately 3.142 Perimeter of a semicircle. Understand the concept of the Unitary method here. Tags. Let’s say we were given this as an example and let’s say they said that the radius for this was 10. What is a Circle's Diameter? The formula for finding the area of a full circle is … Of course, you'll get the same result when using sector area formula. (For a semi circle this is always 180 degrees). To find the area using the circumference, or the distance around the circle, use the formula area = c^2/4π, where c is the circumference. Improve your math knowledge with free questions in "Semicircles: calculate area, perimeter, radius, and diameter" and thousands of other math skills. Half of the circle is semicircle. Created by . Half of the circle is semicircle. Area result. Find the perimeter of a semicircle with a diameter of 10. A semi-circle is half of a circle. Area of a semicircle, A = (½)πr 2 square units. How to calculate Area of a Semicircle when diameter is given using this online calculator? Solved Examples for You. The value of π is 3.14 or 22/7. P = r(π + 2) P = 5(π + 2) = 25.708 The perimeter is 25.708. Semi-Ellipse Calculator. Area of a Semicircle Calculator A semicircle is nothing but half of the circle. Find the area of this semicircle. Area of Semi-Circle. Comments; Attachments; Stats; History; No comments. In order to solve for area of a semicircle you have to take the radius, which in this case is 10 substitute it in for r and then solve based on your formula. 2D - Plane 2D Geometry Verified. Then, you would multiply 4 by π and get 12.57. The area of a semicircle is always expressed in square units, based on the units used for the radius of a circle. A semicircle is half of a circle. Apply the second equation to get π x (12 / 2) 2 = 3.14159 x 36 = 113.1 cm 2 (square centimeters). The area of a semicircle is half the area of a circle. KurtHeckman. Problem 2: Find the perimeter of a semicircle with a radius of 8. To find the area of a semicircle first you need to find the area of the whole circle and divide your answer by 2, as a semicircle takes up half the space of a circle. The area of a semi-circle with radius r, is (πr2)/2. How large can the height be? So the area of a semicircle is when you cut a full circle into half you get the area of a semicircle. If you know the length of the radius, you can calculate its area using the following formula: $$"area" = (pi * r^2) / 2$$ Radius Length . on . 
Therefore the area of a semi-circle will be: area of a semi-circle = 1/2 r 2. 23, Oct 18. Recall that the area of a circle is πR 2, where R is the radius. Area of a Semicircle Formula Semi Circle Area = (1/2) × π × 52 A =76.9 cm 2. Hence, half of the area of the circle gives area of the semicircle. The formula for area of the semicircle is: A_S=(pir_S^2)/2, where A_S=Area of semi-circle, and r_S^2=radius of semicircle, given as 12mm (since 24mm is the diameter which is twice the radius). This calculation is useful as part of the calculation of the volume of liquid in a partially-filled cylindrical tank. Area of a semicircle. Calculate the area of any semi circle. Semicircle Volume (V): The calculator returns the volume in cubic meters. Hence, half of the area of the circle gives area of the semicircle. … Area of semi-circle formula is derived from the formula of a circle. A semicircle is demonstrated in the image here: Using Standard Method. (See Area of a circle). In order to solve for area of a semicircle you have to take the radius, which in … Area of a Semicircle Formula. A = (½)(3.14)(7)(7) A = 153.86/2. eval(ez_write_tag([[728,90],'calculator_academy-medrectangle-3','ezslot_3',169,'0','0'])); The formula the calculator uses is as follows: Area = PI (3.141) * r ^2 * a/360eval(ez_write_tag([[300,250],'calculator_academy-banner-1','ezslot_10',192,'0','0']));eval(ez_write_tag([[300,250],'calculator_academy-banner-1','ezslot_11',192,'0','1']));eval(ez_write_tag([[300,250],'calculator_academy-banner-1','ezslot_12',192,'0','2'])); The area of a semi circle is defined as the total area occupied by half of the area of the circle enclosed by the given radius. It can also be thought of as a sector with an angle of 180 degrees. To calculate the area of the shaded area, we calculate the difference between the area of the semicircle and the area of the circle. Also, explore the surface area or volume calculators, as well as hundreds of other math, finance, fitness, and health calculators. UUID. Area of a circle formula The formula for the area of a circle is π x radius2, but the diameter of the circle is d = 2 x r 2, so another way to write it is π x (diameter / 2)2. Finally, you would divide 1,764 by 12.57 and get 140.4. Perimeter result. The perimeter of a semicircle can be calculated using the following formula: if you know the length of the radius. Given the equation of the semicircle y = v36 – x² . Consider a triangle inscribed in a semicircle with a radius of R. What are the possible perimeters for the triangle? Many thanks! If the radius value is known, it is easy to find the area of a semicircle. The area of a semicircle is always expressed in square units, based on the units used for the radius of a circle. Now we know that area of a semi-circle formula is pi times the radius squared then we’re going to divide by 2. If you know the length of the radius, you can calculate its perimeter using the following formula: $$"perimeter" = r(2 + pi)$$ Radius Length. Find the area of the full circle and divide it by two. The area of a semicircle when the diameter is given is the area enclosed by a semicircle of diameter d is calculated using Area=(pi*(Diameter )^2)/8.To calculate Area of a Semicircle when diameter is given, you need Diameter (d).With our tool, you need to enter the respective value for Diameter and hit the calculate … Do More with Your Free Account. Solution for 3. Semicircle Calculator. 
Viewing the semicircle as a sector makes the general rule clear: a sector with central angle a degrees has area πr² × a/360, which reduces to the semicircle area πr²/2 when a = 180 and to the quadrant (quarter-circle) area πr²/4 when a = 90. A complete worked example using the circumference: if a circle's circumference is 42 inches, square 42 to get 1,764, multiply 4 by π to get ≈ 12.57, then divide: 1,764 / 12.57 ≈ 140.4. The full circle is therefore about 140.4 square inches, and the semicircle about 70.2 square inches. Always analyze the result and check that it makes logical sense for the size of the figure. The same formulas handle composite figures that contain circles and semicircles, such as a semicircle attached to the end of a rectangle, by computing each piece separately and adding; they also underpin classic inscribed-figure problems, for example finding the possible perimeters of a triangle inscribed in a semicircle of radius R, where the diameter 2R forms one side.
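The sector view also generalizes neatly to code; a short sketch (again with assumed names, angle in degrees):

```python
import math

def sector_area(r: float, angle_deg: float) -> float:
    """Area of a circular sector: pi * r^2 * (angle / 360)."""
    return math.pi * r * r * angle_deg / 360

print(sector_area(8, 180))  # semicircle of radius 8: ~100.53
print(sector_area(8, 90))   # quadrant of radius 8:   ~50.27
```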
Worked example: find the area of the semicircle whose radius is 7 cm. A = (1/2)πr² = (1/2) × (22/7) × 7 × 7 = 11 × 7 = 77 cm². A related one-step problem: a vehicle wheel of radius 24 cm covers one full circumference, 2π × 24 ≈ 150.8 cm, in each complete revolution. The halving idea extends to volumes through the general rule for uniform solids, Volume = cross-sectional area × height: a solid with a semicircular cross-section of radius r and length h (a trough, or a horizontal cylindrical tank filled to exactly half its depth) has volume V = (1/2)πr²h, half the volume of the full cylinder. With r and h in metres, the result is in cubic metres; checking units this way is a key step in confirming the accuracy of your calculations, since areas come out in square units and volumes in cubic units.
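For the half-cylinder volume, a minimal sketch under the same assumptions:

```python
import math

def half_cylinder_volume(r: float, h: float) -> float:
    """Semicircular cross-section times length: (1/2) * pi * r^2 * h."""
    return 0.5 * math.pi * r * r * h

# A trough with a semicircular cross-section of radius 1 m, 3 m long:
print(half_cylinder_volume(1, 3))  # ~4.71 cubic metres
```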
To summarise, for a radius r and diameter d = 2r, with π ≈ 3.1415926535898:

- Area of a semicircle: A = πr²/2
- Area of a quadrant (quarter circle): A = πr²/4
- Perimeter of a semicircle: P = πr + d = r(π + 2)
- Volume of a half cylinder of length h: V = (1/2)πr²h

Given any one of the radius, diameter, circumference, or area of the original circle, the remaining quantities can be calculated from these relationships.
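Because each formula involves only one unknown besides r, they invert cleanly. A sketch (assumed names) of recovering the radius from a known semicircle area or perimeter:

```python
import math

def radius_from_semicircle_area(a: float) -> float:
    """Invert A = pi * r^2 / 2 to get r = sqrt(2A / pi)."""
    return math.sqrt(2 * a / math.pi)

def radius_from_semicircle_perimeter(p: float) -> float:
    """Invert P = r * (pi + 2)."""
    return p / (math.pi + 2)

print(radius_from_semicircle_area(77))           # ~7.0
print(radius_from_semicircle_perimeter(25.708))  # ~5.0
```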
# Pair me up
Find the number of ordered pairs of integers $$(x,y)$$ such that $$x^2-3xy+2y^2=27.$$
Details and assumptions
For an ordered pair of integers $$(a,b)$$, the order of the integers matters. The ordered pair $$(1, 2)$$ is different from the ordered pair $$(2,1)$$.
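Not part of the original problem, but the count is easy to sanity-check by brute force. The left-hand side factors as $$(x-y)(x-2y)=27,$$ so each factor is a divisor of 27; this bounds $$|y|\le 54$$ and $$|x|\le 81$$, making a search over a ±100 window exhaustive:

```python
# Brute-force count of integer solutions to x^2 - 3xy + 2y^2 = 27.
# The window is exhaustive: (x - y)(x - 2y) = 27 forces |x - y| <= 27
# and |x - 2y| <= 27, hence |y| <= 54 and |x| <= 81.
count = 0
for x in range(-100, 101):
    for y in range(-100, 101):
        if x * x - 3 * x * y + 2 * y * y == 27:
            count += 1
print(count)  # 8
```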