| markdown | code | output | license | path | repo_name |
|---|---|---|---|---|---|
| stringlengths 0-1.02M | stringlengths 0-832k | stringlengths 0-1.02M | stringlengths 3-36 | stringlengths 6-265 | stringlengths 6-127 |
Microsoft ML Twitter Sentiment Analysis Tutorial

1. [Overview](Overview)
2. [Fitting Models That Identify Sentiment](TwitterSentiment)
3. [What's Next?](Next)

1. OVERVIEW

Microsoft Machine Learning, or Microsoft ML for short, is an R package within Microsoft R Services that includes powerful machine learning algorithms and associated tools. This tutorial is an introduction to Microsoft ML for data scientists who want to take advantage of its unique capabilities. It is intended primarily for those who are comfortable using Microsoft R Services for data science and want to see an end-to-end example that uses Microsoft ML to carry out common data science tasks.

2. FITTING MODELS THAT IDENTIFY SENTIMENT

The tutorial steps through the fitting of a model that identifies sentiment in Twitter text. Identifying how people feel about a product or an event is important to sales. We will focus on identifying which tweets express happiness or sadness.

The tutorial begins with data imported from a Twitter database. The features are automatically extracted from the text using the featurizeText Microsoft ML transform. Then, a model is fit by multiple learning algorithms, and the performance of these fit models is compared to select the best one. The initial and final steps in this process will be familiar to Microsoft R Services users, while the model fitting and performance evaluation steps will involve new Microsoft ML commands.

2.1. LOADING THE PACKAGES

The tutorial is broken into steps, the first being loading the Microsoft ML package. When you execute the first step, there should be no output. | #-----------------------------------------------------------------------
# 1. Load packages.
#-----------------------------------------------------------------------
if (!suppressPackageStartupMessages(require("MicrosoftML",
quietly = TRUE,
warn.conflicts = FALSE))) {
stop("The MicrosoftML package does not seem to be installed, so this\n",
"script cannot be run. If Microsoft R Server with MML is installed,\n",
"you may need to switch the R engine option. In R Tools for Visual\n",
"Studio, this option is under:\n",
"\tR Tools -> Options -> R Engine.\n",
"If Microsoft R Server with MML is not installed, you can download it\n",
"from https://microsoft.sharepoint.com/teams/TLC/SitePages/MicrosoftML.aspx\n")
} | _____no_output_____ | MIT | microsoft-ml/Microsoft ML Tutorial/Microsoft ML Tutorial Notebooks/Microsoft ML Twitter Sentiment Analysis Tutorial.ipynb | Microsoft/microsoft-r |
2.2. IMPORT DATA

The second step consists of importing the data we will use to fit a model. There is only one table of data: the HappyOrSad table. This section imports that table into an Xdf. Xdfs are an efficient way of working with large amounts of data. They are files in which the rows are grouped in blocks whose size is specified by the parameter rowsPerBlock. | #-----------------------------------------------------------------------
# 2. Import data.
#-----------------------------------------------------------------------
# The directory containing data files.
dataDir <- file.path("Data")
# Verify that the data file exists.
if (!file.exists(file.path(dataDir, "HappyOrSad.csv"))) {
stop("The data files needed for running this script cannot be found.\n",
"You may need to set R's working directory to the location of the Data\n",
"directory.")
}
# The data chunk size.
rowsPerBlock <- 1000000 | _____no_output_____ | MIT | microsoft-ml/Microsoft ML Tutorial/Microsoft ML Tutorial Notebooks/Microsoft ML Twitter Sentiment Analysis Tutorial.ipynb | Microsoft/microsoft-r |
The HappyOrSad table has one row per tweet, and three columns: id_nfpu, features, and label. Because the id_nfpu column uniquely identifies each tweet, it is ignored. The two remaining columns are text, and they are respectively renamed Text and sentiment. As part of the importing process, we create Label, a logical variable that is TRUE when the sentiment is happiness, and FALSE otherwise. | # The data source has three columns. Keep the tweet text and sentiment.
datasetSource <-
RxTextData(file.path(dataDir, "HappyOrSad.csv"),
varsToKeep = c("features", "label"),
colInfo = list(features = list(type = "character",
newName = "Text"),
label = list(type = "character",
newName = "sentiment")),
quotedDelimiters = TRUE)
# Import the data. Define Label.
dataset <-
rxImport(datasetSource,
transforms = list(Label = sentiment == "happiness"),
outFile = tempfile(fileext = ".xdf"),
rowsPerRead = rowsPerBlock) | Rows Read: 10362, Total Rows Processed: 10362, Total Chunk Time: 0.152 seconds
| MIT | microsoft-ml/Microsoft ML Tutorial/Microsoft ML Tutorial Notebooks/Microsoft ML Twitter Sentiment Analysis Tutorial.ipynb | Microsoft/microsoft-r |
We can see from the output that the HappyOrSad table has 10,362 rows, and its first few rows are | head(dataset) | _____no_output_____ | MIT | microsoft-ml/Microsoft ML Tutorial/Microsoft ML Tutorial Notebooks/Microsoft ML Twitter Sentiment Analysis Tutorial.ipynb | Microsoft/microsoft-r |
2.3. SPLIT THE DATASET INTO TRAIN AND TEST

To create train and test sets, the data are randomly split by tweet into two datasets. The training data tweets will be used by the learners to fit models, while the test data tweets will be used as a fair measure of the performance of the fit models. Because the split is randomized, we first set the random seed used by the randomizer to guarantee we will be able to reproduce our results at a later date. | #-----------------------------------------------------------------------
# 3. Split the dataset into train and test data.
#-----------------------------------------------------------------------
# Set the random seed for reproducibility of randomness.
set.seed(2345, "L'Ecuyer-CMRG")
# Randomly split the data 80-20 between train and test sets.
dataProb <- c(Train = 0.8, Test = 0.2)
dataSplit <-
rxSplit(dataset,
splitByFactor = "splitVar",
transforms = list(splitVar =
sample(dataFactor,
size = .rxNumRows,
replace = TRUE,
prob = dataProb)),
transformObjects =
list(dataProb = dataProb,
dataFactor = factor(names(dataProb),
levels = names(dataProb))),
outFilesBase = tempfile())
# Name the train and test datasets.
dataTrain <- dataSplit[[1]]
dataTest <- dataSplit[[2]] | Rows Read: 10362, Total Rows Processed: 10362Rows Read: 8279, Total Rows Processed: 8279, Total Chunk Time: 0.020 seconds
Rows Read: 2083, Total Rows Processed: 2083, Total Chunk Time: 0.006 seconds
, Total Chunk Time: 0.219 seconds
| MIT | microsoft-ml/Microsoft ML Tutorial/Microsoft ML Tutorial Notebooks/Microsoft ML Twitter Sentiment Analysis Tutorial.ipynb | Microsoft/microsoft-r |
We can explore the distribution of "Label" in the train and test sets. | rxSummary(~ Label, dataTrain)$sDataFrame
rxSummary(~ Label, dataTest)$sDataFrame | Rows Read: 8279, Total Rows Processed: 8279, Total Chunk Time: Less than .001 seconds
Computation time: 0.002 seconds.
| MIT | microsoft-ml/Microsoft ML Tutorial/Microsoft ML Tutorial Notebooks/Microsoft ML Twitter Sentiment Analysis Tutorial.ipynb | Microsoft/microsoft-r |
We read from the output that train has 8,279 rows while test has 2,083 rows. Because Label is a boolean, its mean shows the proportion of happy tweets in the data. We see that train has more than 50% happy tweets and that test has almost 49% happy tweets, which is a reasonable split.

2.4. DEFINE THE MODEL

The model is a formula that describes which column has the label, and which columns are to be used to predict the label. We will be using as predictors features that will be automatically obtained from the Text column. Then we create a formula that says that Label is to be predicted by these features. | #-----------------------------------------------------------------------
# 4. Define the model to be fit.
#-----------------------------------------------------------------------
# The model is a formula that says that sentiments are to be identified
# using Features, a stand-in for variables created on-the-fly from text
# by the text transform.
(model <- Label ~ Features) | _____no_output_____ | MIT | microsoft-ml/Microsoft ML Tutorial/Microsoft ML Tutorial Notebooks/Microsoft ML Twitter Sentiment Analysis Tutorial.ipynb | Microsoft/microsoft-r |
The left-hand side of the formula is the label, while the right-hand side lists the source of the predictors.

2.5. FIT THE MODEL

The model will be fit by learners that can predict class data: rxLogisticRegression, rxFastLinear, rxFastTrees, rxFastForest, and rxNeuralNet. In the next section, each fit will be used to score the test data. The comments in this section give a glimpse of the kind of work done by each learner.

Each command has two mlTransforms. The featurizeText transform automatically creates text-based features from the tweets, while the selectFeatures transform sorts through the created features to include in the model those that are most informative about each tweet's label. | #-----------------------------------------------------------------------
# 5. Fit the model using different learners.
#-----------------------------------------------------------------------
# Fit the model with logistic regression. This finds the variable
# weights that are most useful for predicting sentiment. The
# rxLogisticRegression learner automatically adjusts the weights to select
# those variables that are most useful for making predictions.
rxLogisticRegressionFit <-
rxLogisticRegression(model, data = dataTrain,
mlTransforms =
list(featurizeText(vars = c(Features = "Text")),
selectFeatures(model, mutualInformation())))
#-----------------------------------------------------------------------
# Fit the model with linear regression. This finds the variable
# weights that are most useful for predicting sentiment. The
# rxFastLinear learner automatically adjusts the weights to select
# those variables that are most useful for making predictions.
rxFastLinearFit <-
rxFastLinear(model, data = dataTrain,
mlTransforms =
list(featurizeText(vars = c(Features = "Text")),
selectFeatures(model, mutualInformation())))
#-----------------------------------------------------------------------
# Fit the model with boosted trees. This finds the combinations of
# variables and threshold values that are useful for predicting sentiment.
# The rxFastTrees learner automatically builds a sequence of trees so that
# trees later in the sequence repair errors made by trees earlier in the
# sequence.
rxFastTreesFit <-
rxFastTrees(model, data = dataTrain,
mlTransforms =
list(featurizeText(vars = c(Features = "Text")),
selectFeatures(model, mutualInformation())),
randomSeed = 23648)
#-----------------------------------------------------------------------
# Fit the model with random forest. This finds the combinations of
# variables and threshold values that are useful for predicting sentiment.
# The rxFastForest learner automatically builds a set of trees whose
# combined predictions are better than the predictions of any one of the
# trees.
rxFastForestFit <-
rxFastForest(model, data = dataTrain,
mlTransforms =
list(featurizeText(vars = c(Features = "Text")),
selectFeatures(model, mutualInformation())),
randomSeed = 23648)
#-----------------------------------------------------------------------
# Fit the model with neural net. This finds the variable weights that
# are most useful for predicting sentiment. Neural net can excel when
# dealing with non-linear relationships between the variables.
rxNeuralNetFit <-
rxNeuralNet(model, data = dataTrain,
mlTransforms =
list(featurizeText(vars = c(Features = "Text")),
selectFeatures(model, mutualInformation()))) | Beginning read for block: 1
Rows Read: 8279, Read Time: 0.007, Transform Time: 0
Beginning read for block: 2
No rows remaining. Finished reading data set.
Beginning read for block: 1
Rows Read: 8279, Read Time: 0.005, Transform Time: 0
Beginning read for block: 2
No rows remaining. Finished reading data set.
Computing mutual information
Beginning read for block: 1
Rows Read: 8279, Read Time: 0.009, Transform Time: 0
Beginning read for block: 2
No rows remaining. Finished reading data set.
Beginning read for block: 1
Rows Read: 8279, Read Time: 0.008, Transform Time: 0
Beginning read for block: 2
No rows remaining. Finished reading data set.
Beginning read for block: 1
Rows Read: 8279, Read Time: 0.006, Transform Time: 0
Beginning read for block: 2
No rows remaining. Finished reading data set.
Finished mutual information computation in 00:00:01.2677761
Selecting features to drop
Selected 1000 slots out of 15588 in column 'Features'
Total number of slots selected: 1000
Not adding a normalizer.
Beginning read for block: 1
Rows Read: 8279, Read Time: 0.005, Transform Time: 0
Beginning read for block: 2
No rows remaining. Finished reading data set.
Beginning read for block: 1
Rows Read: 8279, Read Time: 0.006, Transform Time: 0
Beginning read for block: 2
No rows remaining. Finished reading data set.
Wrote 8279 rows across 2 columns in 00:00:00.1013217
Using: SSE Math
***** Net definition *****
input Data [1000];
hidden H [100] sigmoid { // Depth 1
from Data all;
}
output Result [1] sigmoid { // Depth 0
from H all;
}
***** End net definition *****
Input count: 1000
Output count: 1
Output Function: Sigmoid
Loss Function: CrossEntropy
PreTrainer: NoPreTrainer
___________________________________________________________________
Starting training...
Learning rate: 0.001000
Momentum: 0.000000
InitWtsDiameter: 0.100000
___________________________________________________________________
Initializing 1 Hidden Layers, 100201 Weights...
Estimated Pre-training MeanError = 0.698938
Iter:1/100, MeanErr=0.694743(-0.60%%), 15065.45M WeightUpdates/sec
Iter:2/100, MeanErr=0.694840(0.01%%), 18442.02M WeightUpdates/sec
Iter:3/100, MeanErr=0.694486(-0.05%%), 17963.72M WeightUpdates/sec
Iter:4/100, MeanErr=0.694071(-0.06%%), 18966.84M WeightUpdates/sec
Iter:5/100, MeanErr=0.694328(0.04%%), 14103.88M WeightUpdates/sec
Iter:6/100, MeanErr=0.694218(-0.02%%), 17415.57M WeightUpdates/sec
Iter:7/100, MeanErr=0.693784(-0.06%%), 18259.03M WeightUpdates/sec
Iter:8/100, MeanErr=0.693539(-0.04%%), 19027.81M WeightUpdates/sec
Iter:9/100, MeanErr=0.693908(0.05%%), 18918.35M WeightUpdates/sec
Iter:10/100, MeanErr=0.693858(-0.01%%), 17816.30M WeightUpdates/sec
Iter:11/100, MeanErr=0.693686(-0.02%%), 18904.39M WeightUpdates/sec
Iter:12/100, MeanErr=0.692638(-0.15%%), 18432.16M WeightUpdates/sec
Iter:13/100, MeanErr=0.693043(0.06%%), 14960.45M WeightUpdates/sec
Iter:14/100, MeanErr=0.692614(-0.06%%), 18872.47M WeightUpdates/sec
Iter:15/100, MeanErr=0.692331(-0.04%%), 18877.39M WeightUpdates/sec
Iter:16/100, MeanErr=0.691354(-0.14%%), 18753.40M WeightUpdates/sec
Iter:17/100, MeanErr=0.690383(-0.14%%), 18039.59M WeightUpdates/sec
Iter:18/100, MeanErr=0.690110(-0.04%%), 18911.54M WeightUpdates/sec
Iter:19/100, MeanErr=0.690243(0.02%%), 18956.40M WeightUpdates/sec
Iter:20/100, MeanErr=0.689222(-0.15%%), 16790.78M WeightUpdates/sec
Iter:21/100, MeanErr=0.688592(-0.09%%), 18898.61M WeightUpdates/sec
Iter:22/100, MeanErr=0.687258(-0.19%%), 18102.53M WeightUpdates/sec
Iter:23/100, MeanErr=0.686780(-0.07%%), 18572.74M WeightUpdates/sec
Iter:24/100, MeanErr=0.685722(-0.15%%), 17177.13M WeightUpdates/sec
Iter:25/100, MeanErr=0.684179(-0.23%%), 19023.85M WeightUpdates/sec
Iter:26/100, MeanErr=0.683205(-0.14%%), 18716.99M WeightUpdates/sec
Iter:27/100, MeanErr=0.680451(-0.40%%), 16627.90M WeightUpdates/sec
Iter:28/100, MeanErr=0.679039(-0.21%%), 19057.83M WeightUpdates/sec
Iter:29/100, MeanErr=0.677644(-0.21%%), 19033.67M WeightUpdates/sec
Iter:30/100, MeanErr=0.675449(-0.32%%), 17345.71M WeightUpdates/sec
Iter:31/100, MeanErr=0.673144(-0.34%%), 18844.55M WeightUpdates/sec
Iter:32/100, MeanErr=0.671218(-0.29%%), 18907.28M WeightUpdates/sec
Iter:33/100, MeanErr=0.668102(-0.46%%), 18898.61M WeightUpdates/sec
Iter:34/100, MeanErr=0.665372(-0.41%%), 15378.79M WeightUpdates/sec
Iter:35/100, MeanErr=0.662624(-0.41%%), 19117.47M WeightUpdates/sec
Iter:36/100, MeanErr=0.659697(-0.44%%), 17391.65M WeightUpdates/sec
Iter:37/100, MeanErr=0.656194(-0.53%%), 17776.21M WeightUpdates/sec
Iter:38/100, MeanErr=0.653975(-0.34%%), 19070.63M WeightUpdates/sec
Iter:39/100, MeanErr=0.650510(-0.53%%), 19058.01M WeightUpdates/sec
Iter:40/100, MeanErr=0.647118(-0.52%%), 19036.25M WeightUpdates/sec
Iter:41/100, MeanErr=0.644360(-0.43%%), 18140.19M WeightUpdates/sec
Iter:42/100, MeanErr=0.641092(-0.51%%), 18903.37M WeightUpdates/sec
Iter:43/100, MeanErr=0.638272(-0.44%%), 18605.30M WeightUpdates/sec
Iter:44/100, MeanErr=0.634489(-0.59%%), 15621.61M WeightUpdates/sec
Iter:45/100, MeanErr=0.630816(-0.58%%), 18980.20M WeightUpdates/sec
Iter:46/100, MeanErr=0.629236(-0.25%%), 19120.26M WeightUpdates/sec
Iter:47/100, MeanErr=0.626434(-0.45%%), 16547.63M WeightUpdates/sec
Iter:48/100, MeanErr=0.623489(-0.47%%), 18871.97M WeightUpdates/sec
Iter:49/100, MeanErr=0.620743(-0.44%%), 18923.29M WeightUpdates/sec
Iter:50/100, MeanErr=0.618342(-0.39%%), 19074.44M WeightUpdates/sec
Iter:51/100, MeanErr=0.615784(-0.41%%), 17540.29M WeightUpdates/sec
Iter:52/100, MeanErr=0.613617(-0.35%%), 18947.35M WeightUpdates/sec
Iter:53/100, MeanErr=0.610573(-0.50%%), 18574.22M WeightUpdates/sec
Iter:54/100, MeanErr=0.608551(-0.33%%), 17783.28M WeightUpdates/sec
Iter:55/100, MeanErr=0.606447(-0.35%%), 18357.44M WeightUpdates/sec
Iter:56/100, MeanErr=0.604080(-0.39%%), 18889.78M WeightUpdates/sec
Iter:57/100, MeanErr=0.601244(-0.47%%), 18894.53M WeightUpdates/sec
Iter:58/100, MeanErr=0.599538(-0.28%%), 14938.23M WeightUpdates/sec
Iter:59/100, MeanErr=0.597053(-0.41%%), 17991.71M WeightUpdates/sec
Iter:60/100, MeanErr=0.594561(-0.42%%), 18999.59M WeightUpdates/sec
Iter:61/100, MeanErr=0.592751(-0.30%%), 17431.61M WeightUpdates/sec
Iter:62/100, MeanErr=0.590446(-0.39%%), 18461.63M WeightUpdates/sec
Iter:63/100, MeanErr=0.588331(-0.36%%), 18935.56M WeightUpdates/sec
Iter:64/100, MeanErr=0.585737(-0.44%%), 19117.82M WeightUpdates/sec
Iter:65/100, MeanErr=0.583760(-0.34%%), 17786.29M WeightUpdates/sec
Iter:66/100, MeanErr=0.581722(-0.35%%), 18881.12M WeightUpdates/sec
Iter:67/100, MeanErr=0.579230(-0.43%%), 18102.06M WeightUpdates/sec
Iter:68/100, MeanErr=0.576833(-0.41%%), 15406.07M WeightUpdates/sec
Iter:69/100, MeanErr=0.574782(-0.36%%), 18947.00M WeightUpdates/sec
Iter:70/100, MeanErr=0.572724(-0.36%%), 17429.73M WeightUpdates/sec
Iter:71/100, MeanErr=0.570328(-0.42%%), 17659.24M WeightUpdates/sec
Iter:72/100, MeanErr=0.568012(-0.41%%), 18887.06M WeightUpdates/sec
Iter:73/100, MeanErr=0.565969(-0.36%%), 19215.89M WeightUpdates/sec
Iter:74/100, MeanErr=0.563822(-0.38%%), 19148.83M WeightUpdates/sec
Iter:75/100, MeanErr=0.561578(-0.40%%), 18060.21M WeightUpdates/sec
Iter:76/100, MeanErr=0.558682(-0.52%%), 18741.03M WeightUpdates/sec
Iter:77/100, MeanErr=0.557239(-0.26%%), 18885.36M WeightUpdates/sec
Iter:78/100, MeanErr=0.554676(-0.46%%), 15681.76M WeightUpdates/sec
Iter:79/100, MeanErr=0.552723(-0.35%%), 18717.49M WeightUpdates/sec
Iter:80/100, MeanErr=0.549682(-0.55%%), 18302.29M WeightUpdates/sec
Iter:81/100, MeanErr=0.547993(-0.31%%), 18738.52M WeightUpdates/sec
Iter:82/100, MeanErr=0.545911(-0.38%%), 16833.96M WeightUpdates/sec
Iter:83/100, MeanErr=0.542927(-0.55%%), 18391.18M WeightUpdates/sec
Iter:84/100, MeanErr=0.540912(-0.37%%), 18538.83M WeightUpdates/sec
Iter:85/100, MeanErr=0.538768(-0.40%%), 18926.36M WeightUpdates/sec
Iter:86/100, MeanErr=0.537217(-0.29%%), 17828.54M WeightUpdates/sec
Iter:87/100, MeanErr=0.534463(-0.51%%), 18861.63M WeightUpdates/sec
Iter:88/100, MeanErr=0.532809(-0.31%%), 18692.18M WeightUpdates/sec
Iter:89/100, MeanErr=0.530901(-0.36%%), 18734.68M WeightUpdates/sec
Iter:90/100, MeanErr=0.528513(-0.45%%), 17563.02M WeightUpdates/sec
Iter:91/100, MeanErr=0.526226(-0.43%%), 18897.25M WeightUpdates/sec
Iter:92/100, MeanErr=0.524161(-0.39%%), 18779.38M WeightUpdates/sec
Iter:93/100, MeanErr=0.522293(-0.36%%), 16593.89M WeightUpdates/sec
Iter:94/100, MeanErr=0.520065(-0.43%%), 18697.51M WeightUpdates/sec
Iter:95/100, MeanErr=0.518003(-0.40%%), 17215.55M WeightUpdates/sec
Iter:96/100, MeanErr=0.515826(-0.42%%), 18950.25M WeightUpdates/sec
Iter:97/100, MeanErr=0.513923(-0.37%%), 18572.91M WeightUpdates/sec
Iter:98/100, MeanErr=0.511994(-0.38%%), 17661.77M WeightUpdates/sec
Iter:99/100, MeanErr=0.510253(-0.34%%), 19049.54M WeightUpdates/sec
Iter:100/100, MeanErr=0.508042(-0.43%%), 18888.93M WeightUpdates/sec
Done!
Estimated Post-training MeanError = 0.505545
___________________________________________________________________
Not training a calibrator because it is not needed.
Elapsed time: 00:00:06.6045457
| MIT | microsoft-ml/Microsoft ML Tutorial/Microsoft ML Tutorial Notebooks/Microsoft ML Twitter Sentiment Analysis Tutorial.ipynb | Microsoft/microsoft-r |
2.6. SCORE THE TEST DATA

Each fit will be used to score the test data. In order to plot each fit's performance together for convenient side-by-side comparison, we append each prediction column to the test dataset. This will also conveniently include the Label column with the predictions, so that the prediction performance can be computed. When the test data are huge, scoring in this manner may not be possible. In that case, each prediction set will have to be computed separately, and then merged into one data table. | #-----------------------------------------------------------------------
# 6. Score the held-aside test data with the fit models.
#-----------------------------------------------------------------------
# The scores are each test record's probability of being a sentiment.
# This combines each fit model's predictions and the label into one
# table for side-by-side plotting and comparison.
fitScores <-
rxPredict(rxLogisticRegressionFit, dataTest, suffix = ".rxLogisticRegression",
extraVarsToWrite = names(dataTest),
outData = tempfile(fileext = ".xdf"))
fitScores <-
rxPredict(rxFastLinearFit, fitScores, suffix = ".rxFastLinear",
extraVarsToWrite = names(fitScores),
outData = tempfile(fileext = ".xdf"))
fitScores <-
rxPredict(rxFastTreesFit, fitScores, suffix = ".rxFastTrees",
extraVarsToWrite = names(fitScores),
outData = tempfile(fileext = ".xdf"))
fitScores <-
rxPredict(rxFastForestFit, fitScores, suffix = ".rxFastForest",
extraVarsToWrite = names(fitScores),
outData = tempfile(fileext = ".xdf"))
fitScores <-
rxPredict(rxNeuralNetFit, fitScores, suffix = ".rxNeuralNet",
extraVarsToWrite = names(fitScores),
outData = tempfile(fileext = ".xdf")) | Beginning read for block: 1
Rows Read: 2083, Read Time: 0.002, Transform Time: 0
Beginning read for block: 2
No rows remaining. Finished reading data set.
Elapsed time: 00:00:00.3493465
Finished writing 2083 rows.
Writing completed.
Beginning read for block: 1
Rows Read: 2083, Read Time: 0.002, Transform Time: 0
Beginning read for block: 2
No rows remaining. Finished reading data set.
Elapsed time: 00:00:00.2835029
Finished writing 2083 rows.
Writing completed.
Beginning read for block: 1
Rows Read: 2083, Read Time: 0.002, Transform Time: 0
Beginning read for block: 2
No rows remaining. Finished reading data set.
Elapsed time: 00:00:00.4074182
Finished writing 2083 rows.
Writing completed.
Beginning read for block: 1
Rows Read: 2083, Read Time: 0.002, Transform Time: 0
Beginning read for block: 2
No rows remaining. Finished reading data set.
Elapsed time: 00:00:00.3653417
Finished writing 2083 rows.
Writing completed.
Beginning read for block: 1
Rows Read: 2083, Read Time: 0.002, Transform Time: 0
Beginning read for block: 2
No rows remaining. Finished reading data set.
Elapsed time: 00:00:00.3158979
Finished writing 2083 rows.
Writing completed.
| MIT | microsoft-ml/Microsoft ML Tutorial/Microsoft ML Tutorial Notebooks/Microsoft ML Twitter Sentiment Analysis Tutorial.ipynb | Microsoft/microsoft-r |
We see in the output of the command that the number of rows in the results is the same as the number of rows in the test data.

2.7. COMPARE THE FIT MODEL PERFORMANCE

For each fit model, its predictions and the Label are used to compute an ROC curve for that fit. The curves will then be plotted side-by-side in a graph. | #-----------------------------------------------------------------------
# 7. Compare the performance of fit models.
#-----------------------------------------------------------------------
# Compute the fit models' ROC curves.
fitRoc <-
rxRoc("Label",
paste("Probability",
c("rxLogisticRegression", "rxFastLinear", "rxFastTrees",
"rxFastForest", "rxNeuralNet"),
sep = "."),
fitScores)
# Plot the ROC curves and report their AUCs.
plot(fitRoc) | _____no_output_____ | MIT | microsoft-ml/Microsoft ML Tutorial/Microsoft ML Tutorial Notebooks/Microsoft ML Twitter Sentiment Analysis Tutorial.ipynb | Microsoft/microsoft-r |
The ROC curves are then used to compute the fit model AUCs, and these are used to select the best model. | # Create a named list of the fit models.
fitList <-
list(rxLogisticRegression = rxLogisticRegressionFit,
rxFastLinear = rxFastLinearFit,
rxFastTrees = rxFastTreesFit,
rxFastForest = rxFastForestFit,
rxNeuralNet = rxNeuralNetFit)
# Compute the fit models' AUCs.
fitAuc <- rxAuc(fitRoc)
names(fitAuc) <- substring(names(fitAuc), nchar("Probability.") + 1)
# Find the name of the fit with the largest AUC.
bestFitName <- names(which.max(fitAuc))
# Select the fit model with the largest AUC.
bestFit <- fitList[[bestFitName]]
# Report the fit AUCs.
cat("Fit model AUCs:\n")
print(fitAuc, digits = 2)
# Report the best fit.
cat(paste0("Best fit model with ", bestFitName,
", AUC = ", signif(fitAuc[[bestFitName]], digits = 2),
".\n")) | Fit model AUCs:
rxFastForest rxFastLinear rxFastTrees
0.80 0.87 0.88
rxLogisticRegression rxNeuralNet
0.85 0.82
Best fit model with rxFastTrees, AUC = 0.88.
| MIT | microsoft-ml/Microsoft ML Tutorial/Microsoft ML Tutorial Notebooks/Microsoft ML Twitter Sentiment Analysis Tutorial.ipynb | Microsoft/microsoft-r |
Direction reconstruction (DL2)

**Recommended datasample(s):** Datasets of fully-analyzed showers used to obtain Instrument Response Functions, which in the default pipeline workflow are called ``gamma3``, ``proton2`` and ``electron``.

**Data level(s):** DL2 (shower geometry + estimated energy + estimated particle classification)

**Description:** This notebook contains benchmarks for the _protopipe_ pipeline regarding the shower geometry of events which have been completely analyzed.

**Requirements and steps to reproduce:**

- get a DL2 file generated using ``protopipe-DL2`` or the equivalent command from the DIRAC Grid interface
- execute the notebook with ``protopipe-BENCHMARK``, ``protopipe-BENCHMARK launch --config_file configs/benchmarks.yaml -n DL2/benchmarks_DL2_direction-reconstruction``

To obtain the list of all available parameters add ``--help-notebook``.

**Development and testing:** As with any other part of _protopipe_ and being part of the official repository, this notebook can be further developed by any interested contributor. The execution of this notebook is not currently automatic; it must be done locally by the user _before_ pushing a pull-request. Please, strip the output before pushing.

**TODO:**
* ...

Table of contents

- [Energy-dependent offset distribution](Energy-dependent-offset-distribution)
- [Angular resolution as a function of telescope multiplicity](Angular-resolution-as-a-function-of-telescope-multiplicity)
- [Angular resolution for different containment radii and fixed signal efficiency](Angular-resolution-for-different-containment-radii-and-fixed-signal-efficiency)
- [PSF asymmetry](PSF-asymmetry)
- [True energy distributions](True-energy-distributions)

Imports | import os
from pathlib import Path
%matplotlib inline
import matplotlib.pyplot as plt
from matplotlib.colors import LogNorm
from matplotlib.pyplot import rc
import matplotlib.style as style
from cycler import cycler
import numpy as np
import pandas as pd
import astropy.units as u
from astropy.coordinates import SkyCoord
from ctapipe.coordinates import NominalFrame
from pyirf.binning import (
add_overflow_bins,
create_bins_per_decade
)
from protopipe.benchmarks.utils import get_fig_size, string_to_boolean
from protopipe.benchmarks.operations import compute_psf
from protopipe.benchmarks.plot import plot_psf | _____no_output_____ | CECILL-B | protopipe/benchmarks/notebooks/DL2/benchmarks_DL2_direction-reconstruction.ipynb | Iburelli/protopipe |
Load data | # Parametrized cell
analyses_directory = None
output_directory = Path.cwd() # default output directory for plots
analysis_name = None
analysis_name_2 = None
gammas_infile_name = "DL2_tail_gamma_merged.h5"
protons_infile_name = "DL2_tail_proton_merged.h5"
electrons_infile_name = "DL2_tail_electron_merged.h5"
efficiency_cut = 0.9
export_data = False
superimpose_analysis_2 = False
use_seaborn = True
plots_scale = None
# Handle boolean variables (papermill reads them as strings)
[use_seaborn,
export_data,
superimpose_analysis_2] = string_to_boolean([use_seaborn,
export_data,
superimpose_analysis_2])
input_directory = Path(analyses_directory) / analysis_name / Path("data/DL2")
gammas = pd.read_hdf(os.path.join(input_directory, "gamma", gammas_infile_name), "/reco_events")
protons = pd.read_hdf(os.path.join(input_directory, "proton", protons_infile_name), "/reco_events")
electrons = pd.read_hdf(os.path.join(input_directory, "electron", electrons_infile_name), "/reco_events")
basic_selection_cut = (gammas["is_valid"]==True) & (gammas["NTels_reco"] >= 2)
selected_gammaness = gammas[basic_selection_cut]["gammaness"]
gammaness_cut = np.quantile(selected_gammaness, efficiency_cut)
selected_gammas = gammas[basic_selection_cut & (gammas["gammaness"] >= gammaness_cut)]
#selected_gammas = gammas[(gammas["is_valid"]==True) & (gammas["NTels_reco"] >= 2) & (gammas["gammaness"] >= 0.90)]
# First we check if a _plots_ folder exists already.
# If not, we create it.
plots_folder = Path(output_directory) / "plots"
plots_folder.mkdir(parents=True, exist_ok=True)
# Next we check if a _data_ folder exists already.
# If not, we create it.
data_folder = Path(output_directory) / "data"
data_folder.mkdir(parents=True, exist_ok=True)
input_directory_data_2 = Path(analyses_directory) / analysis_name_2/ "benchmarks_results/TRAINING"
# Plot aesthetics settings
scale = matplotlib_settings["scale"] if plots_scale is None else float(plots_scale)
style.use(matplotlib_settings["style"])
cmap = matplotlib_settings["cmap"]
if matplotlib_settings["style"] == "seaborn-colorblind":
colors_order = ['#0072B2', '#D55E00', '#F0E442', '#009E73', '#CC79A7', '#56B4E9']
rc('axes', prop_cycle=cycler(color=colors_order))
if use_seaborn:
import seaborn as sns
sns.set_theme(context=seaborn_settings["theme"]["context"] if "context" in seaborn_settings["theme"] else "talk",
style=seaborn_settings["theme"]["style"] if "style" in seaborn_settings["theme"] else "whitegrid",
palette=seaborn_settings["theme"]["palette"] if "palette" in seaborn_settings["theme"] else None,
font=seaborn_settings["theme"]["font"] if "font" in seaborn_settings["theme"] else "Fira Sans",
font_scale=seaborn_settings["theme"]["font_scale"] if "font_scale" in seaborn_settings["theme"] else 1.0,
color_codes=seaborn_settings["theme"]["color_codes"] if "color_codes" in seaborn_settings["theme"] else True
)
sns.set_style(seaborn_settings["theme"]["style"], rc=seaborn_settings["rc_style"])
sns.set_context(seaborn_settings["theme"]["context"],
font_scale=seaborn_settings["theme"]["font_scale"] if "font_scale" in seaborn_settings["theme"] else 1.0) | _____no_output_____ | CECILL-B | protopipe/benchmarks/notebooks/DL2/benchmarks_DL2_direction-reconstruction.ipynb | Iburelli/protopipe |
Benchmarks

Here we use events with the following cuts (a compact query sketch is shown after this cell):
- valid reconstructed events
- at least 2 reconstructed images, regardless of the camera (on top of any other hardware trigger)
- gammaness > 0.75 (mostly a conservative choice) | min_true_energy = 0.006
max_true_energy = 660 | _____no_output_____ | CECILL-B | protopipe/benchmarks/notebooks/DL2/benchmarks_DL2_direction-reconstruction.ipynb | Iburelli/protopipe |
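A minimal, hedged sketch (not part of the original notebook) of what the cuts listed above look like as a single pandas query on the DL2 gamma table. Note that the notebook itself derives its gammaness threshold from a fixed signal efficiency in the parametrized cell above rather than hard-coding 0.75:

```python
# Illustrative only: is_valid, NTels_reco and gammaness are the columns used elsewhere in this notebook
selected_gammas_sketch = gammas.query("is_valid == True and NTels_reco >= 2 and gammaness > 0.75")
```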
Energy-dependent offset distribution[back to top](Table-of-contents) | n_bins = 4
true_energy_bin_edges = np.logspace(np.log10(min_true_energy),
np.log10(max_true_energy), n_bins + 1)
plt.figure(figsize=get_fig_size(ratio=4./3., scale=scale))
plt.xlabel("Offset [deg]")
plt.ylabel("Number of events")
for i in range(len(true_energy_bin_edges)-1):
low_E = true_energy_bin_edges[i]
high_E = true_energy_bin_edges[i+1]
selected_events = selected_gammas[(selected_gammas["true_energy"]>low_E) & (selected_gammas["true_energy"]<high_E)]
plt.hist(selected_events["offset"],
bins=100,
#range = [0,10],
label=f"{low_E:.2f} < E_true [TeV] < {high_E:.2f}",
histtype="step",
linewidth=2)
plt.yscale("log")
plt.legend(loc="best")
plt.grid(which="both")
plt.savefig(plots_folder / f"DL2_offsets_{analysis_name}.png")
plt.show()
min_true_energy = [0.02, 0.2, 2, 20]
max_true_energy = [0.2, 2, 20, 200]
plt.figure(figsize=(10,5))
plt.xlabel("Offset [deg]")
plt.ylabel("Number of events")
for low_E, high_E in zip(min_true_energy, max_true_energy):
selected_events = selected_gammas[(selected_gammas["true_energy"]>low_E) & (selected_gammas["true_energy"]<high_E)]
plt.hist(selected_events["offset"],
bins=100,
range = [0,10],
label=f"{low_E} < E_true [TeV] < {high_E}",
histtype="step",
linewidth=2)
plt.yscale("log")
plt.legend(loc="best")
plt.grid(which="both")
plt.savefig(plots_folder / f"DL2_offsets_{analysis_name}.png")
plt.show() | _____no_output_____ | CECILL-B | protopipe/benchmarks/notebooks/DL2/benchmarks_DL2_direction-reconstruction.ipynb | Iburelli/protopipe |
Angular resolution as a function of telescope multiplicity[back to top](Table-of-contents) Here we compare how the multiplicity influences the performance of reconstructed events with a 90% gamma efficiency within a 68% containment radius. | r_containment = 68
min_true_energy = 0.003
max_true_energy = 330
n_true_energy_bins = 21
true_energy_bin_edges = np.logspace(np.log10(min_true_energy),
np.log10(max_true_energy),
n_true_energy_bins)
true_energy_bin_centers = 0.5 * (true_energy_bin_edges[:-1]+true_energy_bin_edges[1:])
multiplicity_cuts = ['NTels_reco == 2',
'NTels_reco == 3',
'NTels_reco == 4',
'NTels_reco >= 2']
fig, axes = plt.subplots(nrows=1, ncols=2, figsize=(16, 8))
axes = axes.flatten()
for cut_idx, cut in enumerate(multiplicity_cuts):
data_mult = selected_gammas.query(cut)
psf, err_psf = compute_psf(data_mult, true_energy_bin_edges, 68)
plot_psf(axes[0], true_energy_bin_centers, psf, err_psf, label=multiplicity_cuts[cut_idx])
y, tmp = np.histogram(data_mult['true_energy'], bins=true_energy_bin_edges)
weights = np.ones_like(y)
#weights = weights / float(np.sum(y))
yerr = np.sqrt(y) * weights
width = np.diff(true_energy_bin_edges)
axes[1].bar(true_energy_bin_centers, y * weights, width=width, yerr=yerr, **{'label': multiplicity_cuts[cut_idx], 'lw': 2, 'fill': False})
axes[1].set_ylabel('Number of events')
for ax in axes:
#ax.set_xlim(limit)
ax.set_xscale('log')
ax.legend(loc='best')
ax.grid(which='both', visible=True)
ax.set_xlabel('True energy [TeV]')
plt.tight_layout()
fig.savefig(plots_folder / f"DL2_PSF_{analysis_name}.png") | _____no_output_____ | CECILL-B | protopipe/benchmarks/notebooks/DL2/benchmarks_DL2_direction-reconstruction.ipynb | Iburelli/protopipe |
Angular resolution for different containment radii and fixed signal efficiency[back to top](Table-of-contents) Apply a fixed signal efficiency cut (requires a well defined ML separator and ML training). Calculate the angular resolution for 68%, 80%, and 95% containment radius. | scale=0.75
plt.figure(figsize=(16*scale,9*scale))
true_energy_bins = create_bins_per_decade(10**-1.9 * u.TeV, 10**2.31 * u.TeV, 5).value
gamma_efficiency = 0.9
reconstructed_gammas = gammas.query("is_valid == True")
gammaness = reconstructed_gammas["gammaness"]
gammaness_cut = np.quantile(gammaness, gamma_efficiency)
selected_events = reconstructed_gammas.query(f"gammaness > {gammaness_cut}")
ax = plt.gca()
def angular_resolution_vs_true_energy(ax, events, true_energy_bins, containment):
ang_res = []
for i in range(len(true_energy_bins)-1):
true_energy_mask = f"true_energy > {true_energy_bins[i]} & true_energy < {true_energy_bins[i+1]}"
selected_offsets = events.query(true_energy_mask)["offset"]
if len(selected_offsets)==0:
ang_res.append(np.nan)
else:
ang_res.append(np.quantile(selected_offsets, containment/100.))
ax.errorbar(
0.5 * (true_energy_bins[:-1] + true_energy_bins[1:]),
ang_res,
xerr=0.5 * (true_energy_bins[:-1] - true_energy_bins[1:]),
label=f'{containment}% containment radius',
fmt='o',
)
return ax
angular_resolution_vs_true_energy(ax, selected_events, true_energy_bins, 68)
angular_resolution_vs_true_energy(ax, selected_events, true_energy_bins, 80)
angular_resolution_vs_true_energy(ax, selected_events, true_energy_bins, 95)
plt.xlabel("True energy [TeV]")
plt.xscale("log")
plt.ylabel("Angular resolution [deg]")
plt.legend()
plt.title(f"Reconstructed gammas with {gamma_efficiency*100}% signal efficiency")
plt.grid() | _____no_output_____ | CECILL-B | protopipe/benchmarks/notebooks/DL2/benchmarks_DL2_direction-reconstruction.ipynb | Iburelli/protopipe |
H_max as a function of energy for gammas and protons Fixed gamma efficiency at 90% | reconstructed_gammas = gammas.query("is_valid == True")
reconstructed_protons = protons.query("is_valid == True")
plt.figure(figsize=(12,6))
mask_gammaness = f"gammaness > 0.9"
plt.subplot(1, 2, 1)
hist_opt = {"bins":[100,100],
"range": [[0.003, 300],[1,8]],
"norm": LogNorm(vmin=1,vmax=1.e6),
"cmap": cmap}
plt.hist2d(reconstructed_gammas.query(mask_gammaness)["reco_energy"],
np.log10(reconstructed_gammas.query(mask_gammaness)["h_max"]),
**hist_opt
)
plt.xlabel("Reconstructed energy [TeV]")
plt.ylabel("log10(H max)")
plt.colorbar()
plt.title("DL2 gammas")
plt.subplot(1, 2, 2)
plt.hist2d(reconstructed_protons.query(mask_gammaness)["reco_energy"],
np.log10(reconstructed_protons.query(mask_gammaness)["h_max"]),
**hist_opt
)
plt.xlabel("Reconstructed energy [TeV]")
plt.ylabel("log10(H max)")
plt.colorbar()
plt.title("DL2 protons")
None | _____no_output_____ | CECILL-B | protopipe/benchmarks/notebooks/DL2/benchmarks_DL2_direction-reconstruction.ipynb | Iburelli/protopipe |
PSF asymmetry[back to top](Table-of-contents) | reco_alt = selected_gammas.reco_alt
reco_az = selected_gammas.reco_az
# right now all reco_az for a 180° deg simualtion turn out to be all around -180°
#if ~np.count_nonzero(np.sign(reco_az) + 1):
reco_az = np.abs(reco_az)
# this is needed for projecting the angle onto the sky
reco_az_corr = reco_az * np.cos(np.deg2rad(selected_gammas.reco_alt))
true_alt = selected_gammas.iloc[0].true_alt
true_az = selected_gammas.iloc[0].true_az
daz = reco_az - true_az
daz_corr = daz * np.cos(np.deg2rad(reco_alt))
dalt = reco_alt - true_alt
plt.figure(figsize=(5, 5))
plt.xlabel("Mis-recontruction [deg]")
plt.ylabel("Number of events")
plt.hist(daz_corr, bins=100, alpha=0.5, label = "azimuth")
plt.hist(dalt, bins=100, alpha=0.5, label = "altitude")
plt.legend()
plt.yscale("log")
plt.grid()
print("Mean and STDs of sky-projected mis-reconstruction axes")
print('daz = {:.4f} +/- {:.4f} deg'.format(daz_corr.mean(), daz_corr.std()))
print('dalt = {:.4f} +/- {:.4f} deg'.format(dalt.mean(), dalt.std()))
plt.show() | _____no_output_____ | CECILL-B | protopipe/benchmarks/notebooks/DL2/benchmarks_DL2_direction-reconstruction.ipynb | Iburelli/protopipe |
2D representation with **orange** events being those with **offset < 1 deg and true energy > 20 TeV** | angcut = (selected_gammas['offset'] < 1) & (selected_gammas['true_energy'] > 20)
plt.figure(figsize=(5,5))
ax = plt.gca()
FOV_size = 2.5 # deg
ax.scatter(daz_corr, dalt, alpha=0.1, s=1, label='no angular cut')
ax.scatter(daz_corr[angcut], dalt[angcut], alpha=0.05, s=1, label='offset < 1 deg & E_true > 20 TeV')
ax.set_aspect('equal')
ax.set_xlabel('cent. Az [deg]')
ax.set_ylabel('cent. Alt [deg]')
ax.set_xlim(-FOV_size,FOV_size)
ax.set_ylim(-FOV_size,FOV_size)
plt.tight_layout()
plt.grid(which="both")
plt.savefig(plots_folder / f"PSFasymmetry_2D_altaz_{analysis_name}.png") | _____no_output_____ | CECILL-B | protopipe/benchmarks/notebooks/DL2/benchmarks_DL2_direction-reconstruction.ipynb | Iburelli/protopipe |
True energy distributions[back to top](Table-of-contents) | #min_true_energy = 0.003
#max_true_energy = 330
true_energy_bins_edges = np.logspace(np.log10(min_true_energy), np.log10(max_true_energy), 6 + 1)
if len(np.unique(gammas["true_az"]))==1:
true_az = np.unique(gammas["true_az"]) * u.deg
true_alt = np.unique(gammas["true_alt"]) * u.deg
else:
print("WARNING: diffuse simulations not yet supported.")
print(f"true AZ = {true_az}")
print(f"true ALT = {true_alt}")
get_fig_size(ratio=(9/16), scale=None)
plt.subplots_adjust(wspace=0.5, hspace=0.3)
center = SkyCoord(az=true_az, alt=true_alt, frame="altaz")
nominal_frame = NominalFrame(origin=center)
for i in range(len(true_energy_bins_edges)-1):
plt.subplot(3,2,i+1)
ax = plt.gca()
ax.set_aspect("equal")
reconstruction_mask = "is_valid == True and "
true_energy_mask = f"true_energy > {true_energy_bins_edges[i]} and true_energy < {true_energy_bins_edges[i+1]}"
selected_gammas = gammas.query(reconstruction_mask + true_energy_mask)
reconstructed_coordinates = SkyCoord(az=selected_gammas.reco_az.values * u.degree,
alt=selected_gammas.reco_alt.values * u.degree,
frame="altaz")
reconstructed_coordinates_nominal_frame = reconstructed_coordinates.transform_to(nominal_frame)
hist_opt = {"bins":[100,100],
"range":[[-10, 10], [-10, 10]],
"norm":LogNorm(),
"cmap":cmap}
plt.hist2d(reconstructed_coordinates_nominal_frame.fov_lon.value,
reconstructed_coordinates_nominal_frame.fov_lat.value,
**hist_opt)
plt.plot(0, 0, "*", markersize=15, color='#D55E00')
plt.colorbar()
plt.xlabel("FOV Longitude [deg]")
plt.ylabel("FOV Latitude [deg]")
plt.title(f"{true_energy_bins_edges[i]:.2f} TeV < True energy < {true_energy_bins_edges[i+1]:.2f} TeV")
None | _____no_output_____ | CECILL-B | protopipe/benchmarks/notebooks/DL2/benchmarks_DL2_direction-reconstruction.ipynb | Iburelli/protopipe |
Same, but with a fixed gamma efficiency cut of 90% | plt.figure(figsize=(15,15))
plt.subplots_adjust(wspace=0.3, hspace=0.3)
center = SkyCoord(az=true_az, alt=true_alt, frame="altaz")
nominal_frame = NominalFrame(origin=center)
for i in range(len(true_energy_bins_edges)-1):
plt.subplot(3,2,i+1)
ax = plt.gca()
ax.set_aspect("equal")
reconstruction_mask = "is_valid == True and "
true_energy_mask = f"true_energy > {true_energy_bins_edges[i]} and true_energy < {true_energy_bins_edges[i+1]}"
reconstructed_gammas_per_true_energy = gammas.query(reconstruction_mask + true_energy_mask)
gammaness = reconstructed_gammas_per_true_energy["gammaness"]
gammaness_cut = np.quantile(gammaness, gamma_efficiency)
selected_gammas = reconstructed_gammas_per_true_energy.query(f"gammaness > {gammaness_cut}")
selected_gammas
reconstructed_coordinates = SkyCoord(az=selected_gammas.reco_az.values * u.degree,
alt=selected_gammas.reco_alt.values * u.degree,
frame="altaz")
reconstructed_coordinates_nominal_frame = reconstructed_coordinates.transform_to(nominal_frame)
hist_opt = {"bins":[100,100],
"range":[[-10, 10], [-10, 10]],
"norm":LogNorm(),
"cmap":cmap}
plt.hist2d(reconstructed_coordinates_nominal_frame.fov_lon.value,
reconstructed_coordinates_nominal_frame.fov_lat.value,
**hist_opt)
plt.plot(0, 0, "*", markersize=20, color='#D55E00')
plt.colorbar()
plt.xlabel("FOV Longitude [deg]")
plt.ylabel("FOV Latitude [deg]")
plt.title(f"{true_energy_bins_edges[i]:.2f} TeV < True energy < {true_energy_bins_edges[i+1]:.2f} TeV")
None | _____no_output_____ | CECILL-B | protopipe/benchmarks/notebooks/DL2/benchmarks_DL2_direction-reconstruction.ipynb | Iburelli/protopipe |
Xarray-I: Data Structure

* [**Sign up to the JupyterHub**](https://www.phenocube.org/) to run this notebook interactively from your browser
* **Compatibility:** Notebook currently compatible with the Open Data Cube environments of the University of Wuerzburg
* **Prerequisites**: There is no prerequisite learning required.

Background

In the previous notebook, we experienced that the data we want to access are loaded in a form called **`xarray.dataset`**. This is the form in which earth observation data are usually stored in a datacube. **`xarray`** is an open source project and Python package which offers a toolkit for working with ***multi-dimensional arrays*** of data. **`xarray.dataset`** is an in-memory representation of a netCDF (network Common Data Form) file. Understanding the structure of a **`xarray.dataset`** is the key to enabling us to work with these data. Thus, in this notebook, we are mainly dedicated to helping users of our datacube understand its data structure.

First, let's come to the end stage of the previous notebook, where we have loaded a data product. The data product "s2_l2a_bavaria" is used as the example in this notebook.

Description

The following topics are covered in this notebook:
* **What is inside a `xarray.dataset` (the structure)?**
* **(Basic) Subset Dataset / DataArray**
* **Reshape a Dataset** | import datacube
import pandas as pd
from odc.ui import DcViewer
from odc.ui import with_ui_cbk
import xarray as xr
import matplotlib.pyplot as plt
# Set config for displaying tables nicely
# !! USEFUL !! otherwise parts of longer infos won't be displayed in tables
pd.set_option("display.max_colwidth", 200)
pd.set_option("display.max_rows", None)
# Connect to DataCube
# argument "app" --- user defined name for a session (e.g. choose one matching the purpose of this notebook)
dc = datacube.Datacube(app = "nb_understand_ndArrays")
# Load Data Product
ds = dc.load(product= "s2_l2a",
x= [12.94 ,13.05],
y= [53.88,53.94],
output_crs = "EPSG:32632",
time = ("2020-10-01", "2020-12-31"),
measurements= ["blue", "green", "red","nir"],
resolution = (-10,10),
group_by = "solar_day",
progress_cbk=with_ui_cbk())
ds
#da = ds.to_array().rename({"variable":"band"})
#print(da)
#ds2 = da.to_dataset(dim="time")
#ds2 | _____no_output_____ | MIT | 3_Data_Cube_Basics/notebooks/02_xarrayI_data_structure.ipynb | eo2cube/datacube_eagles |
**What is inside a `xarray.dataset`?**

The figure below is a diagram depicting the structure of the **`xarray.dataset`** we've just loaded. Combined with the diagram, we hope you may better interpret the text below explaining the data structure of a **`xarray.dataset`**.

As read from the output block, this dataset has three ***Data Variables***, "blue", "green" and "red" (shown with colors in the diagram), each referring to an individual spectral band. Each data variable can be regarded as a **multi-dimensional *Data Array*** of the same structure; in this case, it is a **three-dimensional array** (shown as a 3D cube in the diagram) where `time`, `x` and `y` are its ***Dimensions*** (shown as axes along each cube in the diagram).

In this dataset, there are 35 ***Coordinates*** under the `time` dimension, which means there are 35 time steps along the `time` axis. There are 164 coordinates under the `x` dimension and 82 coordinates under the `y` dimension, indicating that there are 164 pixels along the `x` axis and 82 pixels along the `y` axis.

As for the term ***Dataset***, it is like a *container* holding all the multi-dimensional arrays of the same structure (shown as the red-lined box holding all 3D cubes in the diagram). So this instance dataset has a spatial extent of 164 by 82 pixels at given lon/lat locations, and spans 35 time stamps and 3 spectral bands.

**In summary, *`xarray.dataset`* is substantially a container for high-dimensional *`DataArray`* with common attributes (e.g. crs) attached**:
* **Data Variables (`values`)**: **it's generally the first/highest dimension to subset from a high dimensional array.** Each `data variable` contains a multi-dimensional array of all other dimensions.
* **Dimensions (`dims`)**: other dimensions arranged in hierarchical order *(e.g. 'time', 'y', 'x')*.
* **Coordinates (`coords`)**: coordinates along each `Dimension` *(e.g. timesteps along the 'time' dimension, latitudes along the 'y' dimension, longitudes along the 'x' dimension)*
* **Attributes (`attrs`)**: a dictionary (`dict`) containing metadata.

Now let's deconstruct the dataset we have just loaded a bit further to have things more clarified! :D

* **To check existing dimensions of a dataset** | ds.dims | _____no_output_____ | MIT | 3_Data_Cube_Basics/notebooks/02_xarrayI_data_structure.ipynb | eo2cube/datacube_eagles |
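To make this structure concrete, here is a minimal, self-contained sketch (not taken from the datacube itself; the sizes, values and band names are made up for illustration) that builds a tiny `xarray.Dataset` with the same kind of layout: several data variables sharing the `time`/`y`/`x` dimensions, coordinates along each dimension, and a common attribute.

```python
import numpy as np
import pandas as pd
import xarray as xr

# Toy layout: 3 time steps, 4 pixels along y, 5 pixels along x
times = pd.date_range("2020-10-01", periods=3)
values = np.random.rand(3, 4, 5)

toy = xr.Dataset(
    data_vars={
        "blue": (("time", "y", "x"), values),      # one 3-D DataArray per band
        "red": (("time", "y", "x"), values * 0.5),
    },
    coords={"time": times, "y": np.arange(4), "x": np.arange(5)},
    attrs={"crs": "EPSG:32632"},                   # shared metadata
)

print(toy.dims)       # sizes of the shared dimensions
print(toy.blue.ndim)  # each data variable is a 3-dimensional array
```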
* **To check the coordinates of a dataset** | ds.coords#['time'] | _____no_output_____ | MIT | 3_Data_Cube_Basics/notebooks/02_xarrayI_data_structure.ipynb | eo2cube/datacube_eagles |
* **To check all coordinates along a specific dimension** | ds.time
# OR
#ds.coords['time'] | _____no_output_____ | MIT | 3_Data_Cube_Basics/notebooks/02_xarrayI_data_structure.ipynb | eo2cube/datacube_eagles |
* **To check attributes of the dataset** | ds.attrs | _____no_output_____ | MIT | 3_Data_Cube_Basics/notebooks/02_xarrayI_data_structure.ipynb | eo2cube/datacube_eagles |
**Subset Dataset / DataArray**

* **To select all data of "blue" band** | ds.blue
# OR
#ds['blue']
# Only print pixel values
ds.blue.values | _____no_output_____ | MIT | 3_Data_Cube_Basics/notebooks/02_xarrayI_data_structure.ipynb | eo2cube/datacube_eagles |
* **To select blue band data at the first time stamp** | ds.blue[0] | _____no_output_____ | MIT | 3_Data_Cube_Basics/notebooks/02_xarrayI_data_structure.ipynb | eo2cube/datacube_eagles |
* **To select blue band data at the first time stamp while the latitude is the largest in the defined spatial extent** | ds.blue[0][0] | _____no_output_____ | MIT | 3_Data_Cube_Basics/notebooks/02_xarrayI_data_structure.ipynb | eo2cube/datacube_eagles |
* **To select the upper-left corner pixel** | ds.blue[0][0][0] | _____no_output_____ | MIT | 3_Data_Cube_Basics/notebooks/02_xarrayI_data_structure.ipynb | eo2cube/datacube_eagles |
**subset dataset with `isel` vs. `sel`**
* Use `isel` when subsetting with **index**
* Use `sel` when subsetting with **labels**

* **To select data of all spectral bands at the first time stamp** | ds.isel(time=[0]) | _____no_output_____ | MIT | 3_Data_Cube_Basics/notebooks/02_xarrayI_data_structure.ipynb | eo2cube/datacube_eagles |
* **To select data of all spectral bands of December 2020** | ds.sel(time='2020-12')
#print(ds.sel(time='2019')) | _____no_output_____ | MIT | 3_Data_Cube_Basics/notebooks/02_xarrayI_data_structure.ipynb | eo2cube/datacube_eagles |
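A few additional hedged examples of the two styles (the specific dates and pixel ranges below are only illustrative and depend on what was actually loaded):

```python
# isel: subset by integer position along a dimension
ds.isel(time=0)                              # first time step
ds.isel(x=slice(0, 10), y=slice(0, 10))      # a 10 x 10 pixel window

# sel: subset by coordinate labels
ds.sel(time="2020-11")                       # all acquisitions of November 2020
ds.sel(time="2020-10-15", method="nearest")  # the time step closest to a given date
```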
***Tip: More about indexing and subsetting a Dataset or DataArray is presented in [Notebook_05](https://github.com/eo2cube/eo2cube_notebooks/blob/main/get_started/intro_to_eo2cube/05_xarrayII.ipynb).***

**Reshape Dataset**

* **Convert the Dataset (subset to December 2020) to a *4-dimension* DataArray** | da = ds.sel(time='2020-12').to_array().rename({"variable":"band"})
da | _____no_output_____ | MIT | 3_Data_Cube_Basics/notebooks/02_xarrayI_data_structure.ipynb | eo2cube/datacube_eagles |
* **Convert the *4-dimension* DataArray back to a Dataset by setting the "time" as DataVariable (reshaped)** | ds_reshp = da.to_dataset(dim="time")
print(ds_reshp) | _____no_output_____ | MIT | 3_Data_Cube_Basics/notebooks/02_xarrayI_data_structure.ipynb | eo2cube/datacube_eagles |
Seaborn Exercise - Task

Time to apply our newly learned Seaborn skills! Try to recreate the diagrams shown yourself. The colour scheme is not decisive; what matters is the content.

The Data

We will use a famous dataset about the Titanic for this. Later, in the Machine Learning part of the course, we will come back to this dataset to compute survival probabilities. | import seaborn as sns
import matplotlib.pyplot as plt
%matplotlib inline
titanic = sns.load_dataset('titanic')
titanic.head() | _____no_output_____ | BSD-3-Clause | 3-Visualization/2-Seaborn/7-Seaborn Uebung - Aufgabe.ipynb | Klaynie/Jupyter-Test |
Exercise

Try to recreate the diagrams shown yourself. The colour scheme is not decisive; what matters is the content. Also make sure not to use the cell directly above the diagram. That way you can prevent the diagram from being lost. We have added an extra cell for coding. | # Write your own code here
# Do not use the next cell
# Write your own code here
# Do not use the next cell
# Write your own code here
# Do not use the next cell
# Write your own code here
# Do not use the next cell
sns.swarmplot(x='class', y='age', data=titanic, palette='Set2')
# Write your own code here
# Do not use the next cell
# Write your own code here
# Do not use the next cell
# Write your own code here
# Do not use the next cell
g = sns.FacetGrid(data=titanic, col='sex')
g.map(sns.distplot,'age') | _____no_output_____ | BSD-3-Clause | 3-Visualization/2-Seaborn/7-Seaborn Uebung - Aufgabe.ipynb | Klaynie/Jupyter-Test |
NumPy Fundamentals

> So far we have only talked about the types (classes) of variables and functions that come with Python by default.
> However, one of the best things about Python (especially if you are, or are training to become, a data scientist) is the large number of high-level libraries that are available.
> Some of these libraries are part of the standard library, that is, they can be found wherever Python is. Other libraries can be added easily.
> The first external library we will cover in this course is NumPy (Numerical Python).

References:
- https://www.numpy.org/
- https://towardsdatascience.com/first-step-in-data-science-with-python-numpy-5e99d6821953

___

0. Motivation

Do you remember some linear algebra? For example:
- vectors;
- vector addition;
- multiplication by a scalar ...

How do you think we could handle the above in Python? | # Create two vectors
x = [4, 5, 8, -2, 3]
y = [3, 1, -7, -9, 5]
# Vector addition
x + y
# maybe with loops?
sum_ = [x[i] + y[i] for i in range(len(x))]
sum_
# Product by a scalar
3 * x
# maybe with loops?
prod_ = [3 * x[i] for i in range(len(x))]
prod_ | _____no_output_____ | MIT | Semana3/Clase4_NumpyFundamentos.ipynb | mr1023/prope_programacion_2021 |
Solution: NumPy

NumPy is the fundamental library for scientific computing with Python. It contains, among other things:
- a very powerful N-dimensional array object class;
- sophisticated mathematical functions;
- useful mathematical tools for linear algebra, Fourier transforms and random numbers.

Apart from its scientific uses, NumPy can be used as an efficient multidimensional data container, which gives NumPy an impressive ability to integrate with databases.

Moreover, almost all Python libraries related to data science and machine learning, such as SciPy (Scientific Python), Matplotlib (plotting library) and Scikit-learn, depend heavily on NumPy.

Fortunately for us, NumPy already comes installed by default with the Anaconda installation. So if we want to start using it, all we have to do is import it: | # Import numpy
import numpy as np
np.sin(np.pi / 2) | _____no_output_____ | MIT | Semana3/Clase4_NumpyFundamentos.ipynb | mr1023/prope_programacion_2021 |
What we just did is the generic procedure for importing libraries:
- start with the keyword `import`;
- then the name of the library, in this case `numpy`;
- optionally, include an `as` clause and an abbreviation of the library name. In the case of NumPy, the community commonly uses the abbreviation `np`.

Now, let's try to do the same as before, but using the N-dimensional array that NumPy provides as the vector: | # Help on the N-dimensional array
help(np.array)
a = [5, 6]
a
import copy
b = copy.copy(a)
b
a[1] = 8
b
# Create two vectors
x = np.array([4, 5, 8, -2, 3])
y = np.array([3, 1, -7, -9, 5])
x, y
# Type
type(x)
# Vector addition
x + y
# Product by a scalar
7 * x
x.dtype
np.array([5], dtype="float64")**np.array([28], dtype="float64") # == 5**28
5**28 | _____no_output_____ | MIT | Semana3/Clase4_NumpyFundamentos.ipynb | mr1023/prope_programacion_2021 |
Fundamental differences between Python lists and NumPy arraysWhile lists and arrays share some similarities (both are ordered collections of values), there are major differences between these data structures:- Unlike lists, all elements in a NumPy array must be of the same data type (that is, all integers, or floats, or strings, etc.).- Because of this, NumPy arrays support arithmetic operations and other mathematical functions that are applied to every element of the array. Lists do not support these computations.- NumPy arrays have dimensionality. A short sketch below illustrates the first two points. | np.array([6, 'hola', help])
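A minimal sketch of the first two differences: mixed numeric types are promoted to a common dtype, and arithmetic is applied element by element:
mixed = np.array([1, 2.5, 3])   # the integers are promoted to float64
mixed.dtype                     # -> dtype('float64')
np.array([1, 2, 3]) + 10        # -> array([11, 12, 13])  (element-wise)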
1. What can we do with NumPy?We have already seen how to create basic arrays in NumPy with the `np.array()` command. | x
What is the type of these arrays? | type(x)
len(x)
x.shape
x.size
x.ndim | _____no_output_____ | MIT | Semana3/Clase4_NumpyFundamentos.ipynb | mr1023/prope_programacion_2021 |
We can also create multidimensional arrays: | # 4x5 matrix
A = np.array([[1, 2, 0, 5, -2],
[9, -7, 5, 3, 0],
[2, 1, 1, 1, -3],
[4, 8, -3, 2, 1]])
len([[1, 2, 0, 5, -2],
[9, -7, 5, 3, 0],
[2, 1, 1, 1, -3],
[4, 8, -3, 2, 1]])
# Type
type(A)
# Attributes
A.shape
A.size
A.ndim
len(A) | _____no_output_____ | MIT | Semana3/Clase4_NumpyFundamentos.ipynb | mr1023/prope_programacion_2021 |
1.1 NumPy functions We will continue our introduction to NumPy by solving the following problem: Problem 1> Given five (5) cylindrical containers with different radii and heights that can vary between 5 and 25 cm, find:> 1. The volume of water each container can hold;> 2. The total volume of water all the containers can hold together;> 3. Which container can hold the most volume, and how much;> 4. Which container can hold the least volume, and how much;> 5. The mean, the median, and the standard deviation of the volumes of water that the containers can hold. First of all, let us define the variables we are given: | # Define the number of containers and the minimum and maximum measurements
n_contenedores = 5
medida_min = 5
medida_max = 25 | _____no_output_____ | MIT | Semana3/Clase4_NumpyFundamentos.ipynb | mr1023/prope_programacion_2021 |
Next, we will generate an array of random integers between 5 and 25 cm that will represent the radii and heights of the cylinders: | # Help on np.random.randint()
help(np.random.randint)
help(np.random.seed)
import numpy as np
# Random numbers that represent the radii and heights.
# Initialize the seed
np.random.seed(1001)
medidas = np.random.randint(medida_min, medida_max, size=(10,))
# Show the values
medidas
help(medidas.reshape)
# array.reshape
medidas = medidas.reshape((2, 5))
medidas | _____no_output_____ | MIT | Semana3/Clase4_NumpyFundamentos.ipynb | mr1023/prope_programacion_2021 |
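reshape can also infer one dimension for you if you pass -1; a minimal sketch (these calls return reshaped results and do not modify medidas):
medidas.reshape((5, -1))   # -1 lets NumPy infer the second dimension (here 2)
medidas.reshape(-1)        # back to a flat 1-D array of 10 elements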
From the generated numbers, let us separate those corresponding to the radii from those corresponding to the heights: | # Radii
radios = medidas[0, :]
radios
medidas[:, 3:5]
medidas[:, ::2]
# Heights
alturas = medidas[1, :]
alturas | _____no_output_____ | MIT | Semana3/Clase4_NumpyFundamentos.ipynb | mr1023/prope_programacion_2021 |
1. With the above, let us compute each of the volumes: | radios
radios**2
alturas
# Volumes of the containers
volumenes = (np.pi * radios**2) * alturas
volumenes | _____no_output_____ | MIT | Semana3/Clase4_NumpyFundamentos.ipynb | mr1023/prope_programacion_2021 |
Excellent!With this single, simple line of code we obtained all the volumes of our containers in one go.This is the power that NumPy offers: we can operate on arrays quickly, simply, and very efficiently. 2. Now, the total volume | # Total volume
volumenes.sum() | _____no_output_____ | MIT | Semana3/Clase4_NumpyFundamentos.ipynb | mr1023/prope_programacion_2021 |
3. Which container can hold the most volume? How much? | volumenes
# Container that can hold the most volume
volumenes.argmax()
# Maximum volume
volumenes.max()
# This also works, but it is not recommended. See the timing comparison below
max(volumenes)
random_vector = np.random.randint(0, 1000, size=(1000,))
%timeit random_vector.max()
%timeit np.max(random_vector)
%timeit max(random_vector) | 181 µs ± 4.44 µs per loop (mean ± std. dev. of 7 runs, 1000 loops each)
| MIT | Semana3/Clase4_NumpyFundamentos.ipynb | mr1023/prope_programacion_2021 |
4. Which container can hold the least volume? How much? | # Container that can hold the least volume
volumenes.argmin()
# Minimum volume
volumenes.min()
5. Mean, median, and standard deviation of the volumes | # Mean, median, and standard deviation
volumenes.mean(), volumenes.std()
np.median(volumenes)
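One detail worth keeping in mind: by default `std` uses the population formula (ddof=0); for the sample standard deviation you can pass ddof=1. A minimal sketch:
volumenes.std(ddof=0)   # population standard deviation (the default)
volumenes.std(ddof=1)   # sample standard deviation (divides by n - 1)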
# The shape and dtype attributes
volumenes.shape
volumenes.dtype
A
A.shape
A.size | _____no_output_____ | MIT | Semana3/Clase4_NumpyFundamentos.ipynb | mr1023/prope_programacion_2021 |
1.2 Working with matrices Problem 2> 25 cards numbered 1 to 25 are distributed randomly and in equal parts among 5 people. Find the sum for each person such that:> - for the first person, the sum is the value of their first card minus the sum of their remaining cards;> - for the second person, the sum is the value of their second card minus the sum of their remaining cards;> - and so on ...> The person with the largest sum is the winner. Find the winner. The first step is to generate the numbers from 1 to 25. How can we do this? np.arange works like np.array(range(...)), building the sequence directly as an array. | # Help on the np.arange function
help(np.arange)
# Numbers from 1 to 25
cartas = np.arange(1, 26)
cartas | _____no_output_____ | MIT | Semana3/Clase4_NumpyFundamentos.ipynb | mr1023/prope_programacion_2021 |
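`np.arange` also accepts a step, and `np.linspace` is a common alternative when you want a fixed number of evenly spaced points; a minimal sketch:
np.arange(1, 26, 5)    # -> array([ 1,  6, 11, 16, 21])
np.linspace(0, 1, 5)   # -> array([0.  , 0.25, 0.5 , 0.75, 1.  ])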
Then, just as in a card game, we should shuffle the numbers before dealing them out: | # Help on the np.random.shuffle function
help(np.random.shuffle)
# Shuffle
np.random.shuffle(cartas)
# Show the values
cartas
Good. Now we should deal out the cards. We can picture the deal as a 5x5 matrix: | # Deal the cards
cartas = cartas.reshape((5, 5))
# Show the values
cartas | _____no_output_____ | MIT | Semana3/Clase4_NumpyFundamentos.ipynb | mr1023/prope_programacion_2021 |
So we have 5 cards for each of the 5 people, visualized as a 5x5 matrix.All that remains is to find the sum for each person, that is, add the entry on the main diagonal and subtract the remaining entries of the row (or column).How do we do this? | # Help on the np.eye function
help(np.eye)
# Matrix with the main diagonal
I5 = np.eye(5)
I5
I5 * cartas
# Help on the np.ones function
help(np.ones)
# Matrix that picks out the off-diagonal elements (these will enter with a negative sign)
complement = np.ones((5, 5)) - I5
complement
complement * cartas
# Full matrix
matriz_para_suma = I5 * cartas - complement * cartas
matriz_para_suma
# Add the entries along axis 0 (one total per column)
suma = matriz_para_suma.sum(axis=0)
suma | _____no_output_____ | MIT | Semana3/Clase4_NumpyFundamentos.ipynb | mr1023/prope_programacion_2021 |
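Here `axis=0` adds down each column, while `axis=1` would add across each row; a minimal sketch of the difference with an illustrative matrix:
m = np.array([[1, 2],
              [3, 4]])
m.sum(axis=0)   # -> array([4, 6])  (one total per column)
m.sum(axis=1)   # -> array([3, 7])  (one total per row)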
Who is the winner? | suma.argmax()
2. Some linear algebra with NumPyWell, we have already used NumPy to solve a few toy problems. Through these problems, we introduced the kind of objects we can manipulate with NumPy, along with several of its features.These objects are perfectly suited to represent vectors and matrices with real or complex entries... yes, the ones we studied at some point in linear algebra.Even better, NumPy offers a linear algebra module to perform the basic operations we might need. Consider the following matrix: | A = np.array([[1, 0, 1],
[-1, 2, 4],
[2, 1, 1]])
A | _____no_output_____ | MIT | Semana3/Clase4_NumpyFundamentos.ipynb | mr1023/prope_programacion_2021 |
We can obtain several useful computations involving the matrix A: | # Rank of the matrix A
np.linalg.matrix_rank(A)
# Determinant of the matrix A
np.linalg.det(A)
# Inverse of the matrix A
np.linalg.inv(A)
A.dot(np.linalg.inv(A))
np.linalg.inv(A).dot(A)
np.dot(A, np.linalg.inv(A))
np.dot(np.linalg.inv(A), A)
# Power of the matrix A
# A.dot(A).dot(A).dot(A).dot(A)
np.linalg.matrix_power(A, 5)
A.dot(A).dot(A).dot(A).dot(A)
A**5
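# Note: A**5 raises each entry to the 5th power (element-wise); it is not the matrix power computed by np.linalg.matrix_power above.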
A
# Eigenvalues and eigenvectors of the matrix A
l, v = np.linalg.eig(A)
l
v | _____no_output_____ | MIT | Semana3/Clase4_NumpyFundamentos.ipynb | mr1023/prope_programacion_2021 |
On the other hand, if we have two vectors: | x, y
we can compute their inner product (dot product) | x.dot(y)
(x * y).sum()
x[:3], y[:3]
np.cross(x[:3], y[:3]) | _____no_output_____ | MIT | Semana3/Clase4_NumpyFundamentos.ipynb | mr1023/prope_programacion_2021 |
In the same way, we can compute the product of the matrix A with a vector | A
z = np.array([1, 0, 1])
A.dot(z) | _____no_output_____ | MIT | Semana3/Clase4_NumpyFundamentos.ipynb | mr1023/prope_programacion_2021 |
$$A x = z$$ | np.linalg.inv(A).dot(z)
np.linalg.solve(A, z)
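In practice, `np.linalg.solve` is preferred over multiplying by the inverse; we can also check that the solution really satisfies A x = z (a minimal sketch, x_sol is an illustrative name):
x_sol = np.linalg.solve(A, z)
np.allclose(A.dot(x_sol), z)   # -> True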
help(np.cross)
help(np.linalg.svd)
a = np.arange(25).reshape(5,5)
a
np.einsum('ii', a)
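# np.einsum('ii', a) sums the diagonal entries, i.e. it is equivalent to np.trace(a).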
help(a) | Help on ndarray object:
class ndarray(builtins.object)
 |  ... (the remainder of the help(numpy.ndarray) output is omitted: it documents the ndarray constructor, attributes such as shape, dtype, ndim, size and T, and methods such as reshape, sum, mean, std, argmax, argmin, sort, transpose, copy and view)
|
| data
| Python buffer object pointing to the start of the array's data.
|
| dtype
| Data-type of the array's elements.
|
| Parameters
| ----------
| None
|
| Returns
| -------
| d : numpy dtype object
|
| See Also
| --------
| numpy.dtype
|
| Examples
| --------
| >>> x
| array([[0, 1],
| [2, 3]])
| >>> x.dtype
| dtype('int32')
| >>> type(x.dtype)
| <type 'numpy.dtype'>
|
| flags
| Information about the memory layout of the array.
|
| Attributes
| ----------
| C_CONTIGUOUS (C)
| The data is in a single, C-style contiguous segment.
| F_CONTIGUOUS (F)
| The data is in a single, Fortran-style contiguous segment.
| OWNDATA (O)
| The array owns the memory it uses or borrows it from another object.
| WRITEABLE (W)
| The data area can be written to. Setting this to False locks
| the data, making it read-only. A view (slice, etc.) inherits WRITEABLE
| from its base array at creation time, but a view of a writeable
| array may be subsequently locked while the base array remains writeable.
| (The opposite is not true, in that a view of a locked array may not
| be made writeable. However, currently, locking a base object does not
| lock any views that already reference it, so under that circumstance it
| is possible to alter the contents of a locked array via a previously
| created writeable view onto it.) Attempting to change a non-writeable
| array raises a RuntimeError exception.
| ALIGNED (A)
| The data and all elements are aligned appropriately for the hardware.
| WRITEBACKIFCOPY (X)
| This array is a copy of some other array. The C-API function
| PyArray_ResolveWritebackIfCopy must be called before deallocating this
| array; at that point the base array will be updated with the contents of this array.
| UPDATEIFCOPY (U)
| (Deprecated, use WRITEBACKIFCOPY) This array is a copy of some other array.
| When this array is
| deallocated, the base array will be updated with the contents of
| this array.
| FNC
| F_CONTIGUOUS and not C_CONTIGUOUS.
| FORC
| F_CONTIGUOUS or C_CONTIGUOUS (one-segment test).
| BEHAVED (B)
| ALIGNED and WRITEABLE.
| CARRAY (CA)
| BEHAVED and C_CONTIGUOUS.
| FARRAY (FA)
| BEHAVED and F_CONTIGUOUS and not C_CONTIGUOUS.
|
| Notes
| -----
| The `flags` object can be accessed dictionary-like (as in ``a.flags['WRITEABLE']``),
| or by using lowercased attribute names (as in ``a.flags.writeable``). Short flag
| names are only supported in dictionary access.
|
| Only the WRITEBACKIFCOPY, UPDATEIFCOPY, WRITEABLE, and ALIGNED flags can be
| changed by the user, via direct assignment to the attribute or dictionary
| entry, or by calling `ndarray.setflags`.
|
| The array flags cannot be set arbitrarily:
|
| - UPDATEIFCOPY can only be set ``False``.
| - WRITEBACKIFCOPY can only be set ``False``.
| - ALIGNED can only be set ``True`` if the data is truly aligned.
| - WRITEABLE can only be set ``True`` if the array owns its own memory
| or the ultimate owner of the memory exposes a writeable buffer
| interface or is a string.
|
| Arrays can be both C-style and Fortran-style contiguous simultaneously.
| This is clear for 1-dimensional arrays, but can also be true for higher
| dimensional arrays.
|
| Even for contiguous arrays a stride for a given dimension
| ``arr.strides[dim]`` may be *arbitrary* if ``arr.shape[dim] == 1``
| or the array has no elements.
| It does *not* generally hold that ``self.strides[-1] == self.itemsize``
| for C-style contiguous arrays or ``self.strides[0] == self.itemsize`` for
| Fortran-style contiguous arrays.
|
| flat
| A 1-D iterator over the array.
|
| This is a `numpy.flatiter` instance, which acts similarly to, but is not
| a subclass of, Python's built-in iterator object.
|
| See Also
| --------
| flatten : Return a copy of the array collapsed into one dimension.
|
| flatiter
|
| Examples
| --------
| >>> x = np.arange(1, 7).reshape(2, 3)
| >>> x
| array([[1, 2, 3],
| [4, 5, 6]])
| >>> x.flat[3]
| 4
| >>> x.T
| array([[1, 4],
| [2, 5],
| [3, 6]])
| >>> x.T.flat[3]
| 5
| >>> type(x.flat)
| <class 'numpy.flatiter'>
|
| An assignment example:
|
| >>> x.flat = 3; x
| array([[3, 3, 3],
| [3, 3, 3]])
| >>> x.flat[[1,4]] = 1; x
| array([[3, 1, 3],
| [3, 1, 3]])
|
| imag
| The imaginary part of the array.
|
| Examples
| --------
| >>> x = np.sqrt([1+0j, 0+1j])
| >>> x.imag
| array([ 0. , 0.70710678])
| >>> x.imag.dtype
| dtype('float64')
|
| itemsize
| Length of one array element in bytes.
|
| Examples
| --------
| >>> x = np.array([1,2,3], dtype=np.float64)
| >>> x.itemsize
| 8
| >>> x = np.array([1,2,3], dtype=np.complex128)
| >>> x.itemsize
| 16
|
| nbytes
| Total bytes consumed by the elements of the array.
|
| Notes
| -----
| Does not include memory consumed by non-element attributes of the
| array object.
|
| Examples
| --------
| >>> x = np.zeros((3,5,2), dtype=np.complex128)
| >>> x.nbytes
| 480
| >>> np.prod(x.shape) * x.itemsize
| 480
|
| ndim
| Number of array dimensions.
|
| Examples
| --------
| >>> x = np.array([1, 2, 3])
| >>> x.ndim
| 1
| >>> y = np.zeros((2, 3, 4))
| >>> y.ndim
| 3
|
| real
| The real part of the array.
|
| Examples
| --------
| >>> x = np.sqrt([1+0j, 0+1j])
| >>> x.real
| array([ 1. , 0.70710678])
| >>> x.real.dtype
| dtype('float64')
|
| See Also
| --------
| numpy.real : equivalent function
|
| shape
| Tuple of array dimensions.
|
| The shape property is usually used to get the current shape of an array,
| but may also be used to reshape the array in-place by assigning a tuple of
| array dimensions to it. As with `numpy.reshape`, one of the new shape
| dimensions can be -1, in which case its value is inferred from the size of
| the array and the remaining dimensions. Reshaping an array in-place will
| fail if a copy is required.
|
| Examples
| --------
| >>> x = np.array([1, 2, 3, 4])
| >>> x.shape
| (4,)
| >>> y = np.zeros((2, 3, 4))
| >>> y.shape
| (2, 3, 4)
| >>> y.shape = (3, 8)
| >>> y
| array([[ 0., 0., 0., 0., 0., 0., 0., 0.],
| [ 0., 0., 0., 0., 0., 0., 0., 0.],
| [ 0., 0., 0., 0., 0., 0., 0., 0.]])
| >>> y.shape = (3, 6)
| Traceback (most recent call last):
| File "<stdin>", line 1, in <module>
| ValueError: total size of new array must be unchanged
| >>> np.zeros((4,2))[::2].shape = (-1,)
| Traceback (most recent call last):
| File "<stdin>", line 1, in <module>
| AttributeError: incompatible shape for a non-contiguous array
|
| See Also
| --------
| numpy.reshape : similar function
| ndarray.reshape : similar method
|
| size
| Number of elements in the array.
|
| Equal to ``np.prod(a.shape)``, i.e., the product of the array's
| dimensions.
|
| Notes
| -----
| `a.size` returns a standard arbitrary precision Python integer. This
| may not be the case with other methods of obtaining the same value
| (like the suggested ``np.prod(a.shape)``, which returns an instance
| of ``np.int_``), and may be relevant if the value is used further in
| calculations that may overflow a fixed size integer type.
|
| Examples
| --------
| >>> x = np.zeros((3, 5, 2), dtype=np.complex128)
| >>> x.size
| 30
| >>> np.prod(x.shape)
| 30
|
| strides
| Tuple of bytes to step in each dimension when traversing an array.
|
| The byte offset of element ``(i[0], i[1], ..., i[n])`` in an array `a`
| is::
|
| offset = sum(np.array(i) * a.strides)
|
| A more detailed explanation of strides can be found in the
| "ndarray.rst" file in the NumPy reference guide.
|
| Notes
| -----
| Imagine an array of 32-bit integers (each 4 bytes)::
|
| x = np.array([[0, 1, 2, 3, 4],
| [5, 6, 7, 8, 9]], dtype=np.int32)
|
| This array is stored in memory as 40 bytes, one after the other
| (known as a contiguous block of memory). The strides of an array tell
| us how many bytes we have to skip in memory to move to the next position
| along a certain axis. For example, we have to skip 4 bytes (1 value) to
| move to the next column, but 20 bytes (5 values) to get to the same
| position in the next row. As such, the strides for the array `x` will be
| ``(20, 4)``.
|
| See Also
| --------
| numpy.lib.stride_tricks.as_strided
|
| Examples
| --------
| >>> y = np.reshape(np.arange(2*3*4), (2,3,4))
| >>> y
| array([[[ 0, 1, 2, 3],
| [ 4, 5, 6, 7],
| [ 8, 9, 10, 11]],
| [[12, 13, 14, 15],
| [16, 17, 18, 19],
| [20, 21, 22, 23]]])
| >>> y.strides
| (48, 16, 4)
| >>> y[1,1,1]
| 17
| >>> offset=sum(y.strides * np.array((1,1,1)))
| >>> offset/y.itemsize
| 17
|
| >>> x = np.reshape(np.arange(5*6*7*8), (5,6,7,8)).transpose(2,3,1,0)
| >>> x.strides
| (32, 4, 224, 1344)
| >>> i = np.array([3,5,2,2])
| >>> offset = sum(i * x.strides)
| >>> x[3,5,2,2]
| 813
| >>> offset / x.itemsize
| 813
|
| ----------------------------------------------------------------------
| Data and other attributes defined here:
|
| __hash__ = None
| MIT | Semana3/Clase4_NumpyFundamentos.ipynb | mr1023/prope_programacion_2021 |
Reducing the problem sizeI reduced the number of qubits for my simulation as follows:- I froze the core electrons that do not contribute significantly to the chemistry and considered only the valence electrons. Qiskit already implements this functionality, so I inspected the different transformers in `qiskit_nature.transformers` and found the one that performs the freeze-core approximation. For further optimization, I also removed two orbitals (indexed 3 and 4) that contribute little in this case.- I used `ParityMapper` with `two_qubit_reduction=True` to eliminate 2 more qubits; the motive was to reduce the number of qubits used- There were no symmetries left to exploit after implementing the above. | from qiskit_nature.drivers import PySCFDriver
from qiskit_nature.transformers import FreezeCoreTransformer, ActiveSpaceTransformer
from qiskit.visualization import array_to_latex
molecule = 'Li 0.0 0.0 0.0; H 0.0 0.0 1.5474'
freezeCoreTransfomer = FreezeCoreTransformer(True, [3,4])
driver = PySCFDriver(atom=molecule)
qmolecule = driver.run()
qmolecule = freezeCoreTransfomer.transform(qmolecule)
print(qmolecule.energy_shift)
array_to_latex(qmolecule.mo_coeff)
# WRITE YOUR CODE BETWEEN THESE LINES - START
print("Total number of electrons is {}".format(qmolecule.num_alpha + qmolecule.num_beta))
print("Total number of molecular orbitals is {}".format(qmolecule.num_molecular_orbitals))
print("Total number of spin orbitals is {}".format(2 * qmolecule.num_molecular_orbitals))
print("qubits you need to simulate this molecule with Jordan-Wigner mapping is {}".format(2 * qmolecule.num_molecular_orbitals))
print("The value of the nuclear repulsion energy is {}".format(qmolecule.nuclear_repulsion_energy))
# WRITE YOUR CODE BETWEEN THESE LINES - END | Total number of electrons is 2
Total number of molecular orbitals is 3
Total number of spin orbitals is 6
qubits you need to simulate this molecule with Jordan-Wigner mapping is 6
The value of the nuclear repulsion energy is 1.0259348796432726
| Apache-2.0 | solutions by participants/ex5/ex5-AnurananDas-3cnot-2.339767mHa-32params.ipynb | fazliberkordek/ibm-quantum-challenge-2021 |
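As a cross-check (my addition, not part of the original submission), the `ActiveSpaceTransformer` imported above could achieve a similar reduction by declaring the active space explicitly; the argument names below are my assumption for the qiskit-nature version used here. | # sketch: restrict LiH to 2 electrons in 3 spatial orbitals
active_space = ActiveSpaceTransformer(num_electrons=2, num_molecular_orbitals=3)
qmolecule_active = active_space.transform(driver.run())
print("Spin orbitals in the active space:", 2 * qmolecule_active.num_molecular_orbitals)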
2. Electronic structure problemCreated an `ElectronicStructureProblem` that can produce the list of fermionic operators before mapping them to qubits (Pauli strings), included the 'freezecore' parameter. | from qiskit_nature.problems.second_quantization.electronic import ElectronicStructureProblem
problem = ElectronicStructureProblem(driver, [freezeCoreTransfomer])
# Generate the second-quantized operators
second_q_ops = problem.second_q_ops()
# Hamiltonian
main_op = second_q_ops[0]
print(problem.__dict__) | {'driver': <qiskit_nature.drivers.pyscfd.pyscfdriver.PySCFDriver object at 0x7fe3a68f8e50>, 'transformers': [<qiskit_nature.transformers.freeze_core_transformer.FreezeCoreTransformer object at 0x7fe3a2965850>], '_molecule_data': <qiskit_nature.drivers.qmolecule.QMolecule object at 0x7fe398925250>, '_molecule_data_transformed': <qiskit_nature.drivers.qmolecule.QMolecule object at 0x7fe39d6db9d0>}
| Apache-2.0 | solutions by participants/ex5/ex5-AnurananDas-3cnot-2.339767mHa-32params.ipynb | fazliberkordek/ibm-quantum-challenge-2021 |
3. QubitConverterMapping defined as `ParityMapper` with ``two_qubit_reduction=True`` | from qiskit_nature.mappers.second_quantization import ParityMapper, BravyiKitaevMapper, JordanWignerMapper
from qiskit_nature.converters.second_quantization.qubit_converter import QubitConverter
# Setup the mapper and qubit converter
mapper_type = 'ParityMapper'
if mapper_type == 'ParityMapper':
mapper = ParityMapper()
elif mapper_type == 'JordanWignerMapper':
mapper = JordanWignerMapper()
elif mapper_type == 'BravyiKitaevMapper':
mapper = BravyiKitaevMapper()
converter = QubitConverter(mapper=mapper, two_qubit_reduction=True, z2symmetry_reduction=None)
# The fermionic operators are mapped to qubit operators
num_particles = (problem.molecule_data_transformed.num_alpha,
problem.molecule_data_transformed.num_beta)
qubit_op = converter.convert(main_op, num_particles=num_particles)
print(converter.z2symmetries) | Z2 symmetries:
Symmetries:
Single-Qubit Pauli X:
Cliffords:
Qubit index:
[]
Tapering values:
- Possible values: []
| Apache-2.0 | solutions by participants/ex5/ex5-AnurananDas-3cnot-2.339767mHa-32params.ipynb | fazliberkordek/ibm-quantum-challenge-2021 |
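To see what the parity mapping buys here, a small sketch I added compares the qubit counts of the two mappings on the operators defined above; with 6 spin orbitals, Jordan-Wigner should need 6 qubits versus 4 after the two-qubit reduction. | # sketch: compare qubit counts of the two mappings
jw_op = QubitConverter(mapper=JordanWignerMapper()).convert(main_op, num_particles=num_particles)
print("Jordan-Wigner qubits:", jw_op.num_qubits)
print("Parity + two-qubit reduction:", qubit_op.num_qubits)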
4. Initial stateAs we described in the Theory section, a good initial state in chemistry is the HF state (i.e. $|\Psi_{HF} \rangle = |0101 \rangle$). I initialize it as follows: | from qiskit_nature.circuit.library import HartreeFock
num_particles = (problem.molecule_data_transformed.num_alpha,
problem.molecule_data_transformed.num_beta)
num_spin_orbitals = 2 * problem.molecule_data_transformed.num_molecular_orbitals
init_state = HartreeFock(num_spin_orbitals, num_particles, converter)
print(init_state) | ┌───┐
q_0: ┤ X ├
├───┤
q_1: ┤ X ├
└───┘
q_2: ─────
q_3: ─────
| Apache-2.0 | solutions by participants/ex5/ex5-AnurananDas-3cnot-2.339767mHa-32params.ipynb | fazliberkordek/ibm-quantum-challenge-2021 |
5. AnsatzThe ansatz used was `TwoLocal`, with rotation layers `['ry', 'rx', 'ry', 'rx']`, `cx` as the only entangling gate, `linear` entanglement, and `repetitions` set to `1`. The idea was to get maximum entanglement with minimum circuit depth, all the while satisfying the cost constraints. | from qiskit.circuit.library import TwoLocal
from qiskit_nature.circuit.library import UCCSD, PUCCD, SUCCD
# Choose the ansatz
ansatz_type = "TwoLocal"
# Parameters for q-UCC antatze
num_particles = (problem.molecule_data_transformed.num_alpha,
problem.molecule_data_transformed.num_beta)
num_spin_orbitals = 2 * problem.molecule_data_transformed.num_molecular_orbitals
# Put arguments for twolocal
if ansatz_type == "TwoLocal":
# Single qubit rotations that are placed on all qubits with independent parameters
rotation_blocks = ['ry', 'rx', 'ry', 'rx']
# Entangling gates
entanglement_blocks = ['cx']
# How the qubits are entangled
entanglement = "linear"
# Repetitions of rotation_blocks + entanglement_blocks with independent parameters
repetitions = 1
# Skip the final rotation_blocks layer
skip_final_rotation_layer = False
ansatz = TwoLocal(qubit_op.num_qubits, rotation_blocks, entanglement_blocks, reps=repetitions,
entanglement=entanglement, skip_final_rotation_layer=skip_final_rotation_layer)
# Add the initial state
ansatz.compose(init_state, front=True, inplace=True)
elif ansatz_type == "UCCSD":
ansatz = UCCSD(converter,num_particles,num_spin_orbitals,initial_state = init_state)
elif ansatz_type == "PUCCD":
ansatz = PUCCD(converter,num_particles,num_spin_orbitals,initial_state = init_state)
elif ansatz_type == "SUCCD":
ansatz = SUCCD(converter,num_particles,num_spin_orbitals,initial_state = init_state)
elif ansatz_type == "Custom":
# Example of how to write your own circuit
from qiskit.circuit import Parameter, QuantumCircuit, QuantumRegister
# Define the variational parameter
thetas = []
n = qubit_op.num_qubits
# Make an empty quantum circuit
qc = QuantumCircuit(qubit_op.num_qubits)
qubit_label = 0
# Place a Hadamard gate
# qc.h(qubit_label)
# qc.rz(theta_z, range(n))
qc.rx(theta_x, range(n))
qc.ry(theta_y, range(n))
# Place a CNOT ladder
for i in range(n-1):
qc.cx(i, i+1)
# Visual separator
# qc.barrier()
# rz rotations on all qubits
# qc.rz(theta_z, range(n))
# qc.ry(theta_x, range(n))
# qc.rx(theta_y, range(n))
ansatz = qc
ansatz.compose(init_state, front=True, inplace=True)
print(ansatz)
from qiskit import Aer
backend = Aer.get_backend('statevector_simulator') | _____no_output_____ | Apache-2.0 | solutions by participants/ex5/ex5-AnurananDas-3cnot-2.339767mHa-32params.ipynb | fazliberkordek/ibm-quantum-challenge-2021 |
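Before running VQE it is worth checking how expensive this ansatz is; below is a small sketch I added that uses the circuit built above (the expected counts in the comments are assumptions based on the final results table). | # sketch: inspect the ansatz cost
print("Variational parameters:", ansatz.num_parameters)  # expect 32: 4 qubits x 4 rotations x 2 layers
print("Gate counts:", ansatz.decompose().count_ops())  # expect 3 'cx' for linear entanglement with reps=1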
7. OptimizerThe optimizer guides the evolution of the ansatz parameters, so it is very important to investigate the energy convergence, as it determines the number of measurements that have to be performed on the QPU. Here it was set to `COBYLA` with a maximum number of iterations large enough to allow convergence. | from qiskit.algorithms.optimizers import COBYLA, L_BFGS_B, SPSA, SLSQP
optimizer_type = 'COBYLA'
# You may want to tune the parameters
# of each optimizer, here the defaults are used
if optimizer_type == 'COBYLA':
optimizer = COBYLA(maxiter=30000)
elif optimizer_type == 'L_BFGS_B':
optimizer = L_BFGS_B(maxfun=60000)
elif optimizer_type == 'SPSA':
optimizer = SPSA(maxiter=50000)
elif optimizer_type == 'SLSQP':
optimizer = SLSQP(maxiter=3000) | _____no_output_____ | Apache-2.0 | solutions by participants/ex5/ex5-AnurananDas-3cnot-2.339767mHa-32params.ipynb | fazliberkordek/ibm-quantum-challenge-2021 |
8. Exact eigensolver | from qiskit_nature.algorithms.ground_state_solvers.minimum_eigensolver_factories import NumPyMinimumEigensolverFactory
from qiskit_nature.algorithms.ground_state_solvers import GroundStateEigensolver
import numpy as np
def exact_diagonalizer(problem, converter):
solver = NumPyMinimumEigensolverFactory()
calc = GroundStateEigensolver(converter, solver)
result = calc.solve(problem)
return result
result_exact = exact_diagonalizer(problem, converter)
exact_energy = np.real(result_exact.eigenenergies[0])
print("Exact electronic energy after freezing core, for the valence electrons is", exact_energy)
# print(result_exact) | Exact electronic energy after freezing core, for the valence electrons is -1.0887060157347412
| Apache-2.0 | solutions by participants/ex5/ex5-AnurananDas-3cnot-2.339767mHa-32params.ipynb | fazliberkordek/ibm-quantum-challenge-2021 |
9. VQE and initial parameters for the ansatzNow we can import the VQE class and run the algorithm. | from qiskit.algorithms import VQE
from IPython.display import display, clear_output
# Print and save the data in lists
def callback(eval_count, parameters, mean, std):
# Overwrites the same line when printing
display("Evaluation: {}, Energy: {}, Std: {}".format(eval_count, mean, std))
clear_output(wait=True)
counts.append(eval_count)
values.append(mean)
params.append(parameters)
deviation.append(std)
counts = []
values = []
params = []
deviation = []
# Set initial parameters of the ansatz
# We choose a fixed small displacement
# So all participants start from similar starting point
try:
initial_point = [0.01] * len(ansatz.ordered_parameters)
except:
initial_point = [0.01] * ansatz.num_parameters
algorithm = VQE(ansatz,
optimizer=optimizer,
quantum_instance=backend,
callback=callback,
initial_point=initial_point)
result = algorithm.compute_minimum_eigenvalue(qubit_op)
print(result) | { 'aux_operator_eigenvalues': None,
'cost_function_evals': 9660,
'eigenstate': array([ 1.42330541e-04+1.43431962e-03j, -4.23490958e-04-4.89065970e-03j,
2.23678404e-03+2.63113429e-02j, -8.30964184e-02-9.87905676e-01j,
-3.50684854e-03-5.38647331e-02j, -1.74375790e-05-3.17583535e-04j,
4.88898409e-05+9.08366879e-04j, -1.18573222e-03-2.40722890e-02j,
-3.32766414e-04-2.74166551e-03j, -5.66367067e-07-8.21282984e-06j,
-1.45660908e-06+3.46419632e-06j, 1.14985281e-04+3.80322133e-04j,
9.77945785e-03+1.13200984e-01j, 3.25326673e-05+4.16855646e-04j,
-4.70472273e-05-5.61752304e-04j, -1.69759079e-06+1.89927731e-05j]),
'eigenvalue': -1.08636624859473,
'optimal_parameters': { ParameterVectorElement(θ[16]): 0.6023040974600175,
ParameterVectorElement(θ[2]): 0.2754752513533806,
ParameterVectorElement(θ[11]): 0.08498065538345674,
ParameterVectorElement(θ[12]): 0.49405851441676,
ParameterVectorElement(θ[14]): -0.13158130545342372,
ParameterVectorElement(θ[6]): 0.1367791221567639,
ParameterVectorElement(θ[13]): 0.17919625235084016,
ParameterVectorElement(θ[31]): -0.4430577995088451,
ParameterVectorElement(θ[3]): -0.5287436933434927,
ParameterVectorElement(θ[0]): 0.9194205453662906,
ParameterVectorElement(θ[10]): -0.2779671919120382,
ParameterVectorElement(θ[15]): 0.27088658149510303,
ParameterVectorElement(θ[5]): -0.1821021597121472,
ParameterVectorElement(θ[4]): -0.6817850823310849,
ParameterVectorElement(θ[25]): 0.026278322889119826,
ParameterVectorElement(θ[26]): 0.173200889411082,
ParameterVectorElement(θ[24]): -0.6514974023548069,
ParameterVectorElement(θ[27]): -0.70010160265166,
ParameterVectorElement(θ[23]): 0.18593894311708528,
ParameterVectorElement(θ[30]): 0.3535595958735285,
ParameterVectorElement(θ[1]): 0.18118039040665185,
ParameterVectorElement(θ[9]): -0.18998201653013846,
ParameterVectorElement(θ[18]): -0.11380267575601392,
ParameterVectorElement(θ[19]): 1.1198256842268843,
ParameterVectorElement(θ[21]): 1.998490447687628,
ParameterVectorElement(θ[20]): 0.6493468545437504,
ParameterVectorElement(θ[28]): -0.5224124263847214,
ParameterVectorElement(θ[29]): 1.1427592303144563,
ParameterVectorElement(θ[17]): 0.020817230460885735,
ParameterVectorElement(θ[8]): -0.7806273236070762,
ParameterVectorElement(θ[22]): -0.35721805043525556,
ParameterVectorElement(θ[7]): 0.06268198714828736},
'optimal_point': array([ 0.91942055, -0.27796719, 0.08498066, 0.49405851, 0.17919625,
-0.13158131, 0.27088658, 0.6023041 , 0.02081723, -0.11380268,
1.11982568, 0.18118039, 0.64934685, 1.99849045, -0.35721805,
0.18593894, -0.6514974 , 0.02627832, 0.17320089, -0.7001016 ,
-0.52241243, 1.14275923, 0.27547525, 0.3535596 , -0.4430578 ,
-0.52874369, -0.68178508, -0.18210216, 0.13677912, 0.06268199,
-0.78062732, -0.18998202]),
'optimal_value': -1.08636624859473,
'optimizer_evals': 9660,
'optimizer_time': 75.38813042640686}
| Apache-2.0 | solutions by participants/ex5/ex5-AnurananDas-3cnot-2.339767mHa-32params.ipynb | fazliberkordek/ibm-quantum-challenge-2021 |
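A quick sanity check I added against the exact diagonalization above (both values are electronic energies of the reduced problem): | # sketch: error relative to the exact result, in mHa
error_mha = (result.optimal_value - exact_energy) * 1000
print(f"VQE: {result.optimal_value:.6f} Ha, exact: {exact_energy:.6f} Ha, error: {error_mha:.3f} mHa")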
9. Scoring function The following was the simple scoring function:$$ score = N_{CNOT}$$where $N_{CNOT}$ is the number of CNOTs. We had to reach the chemical accuracy which is $\delta E_{chem} = 0.004$ Ha $= 4$ mHa.The lower the score the better! | # Store results in a dictionary
from qiskit.transpiler import PassManager
from qiskit.transpiler.passes import Unroller
# Unroller transpile your circuit into CNOTs and U gates
pass_ = Unroller(['u', 'cx'])
pm = PassManager(pass_)
ansatz_tp = pm.run(ansatz)
cnots = ansatz_tp.count_ops()['cx']
score = cnots
accuracy_threshold = 4.0 # in mHa
energy = result.optimal_value
if ansatz_type == "TwoLocal":
result_dict = {
'optimizer': optimizer.__class__.__name__,
'mapping': converter.mapper.__class__.__name__,
'ansatz': ansatz.__class__.__name__,
'rotation blocks': rotation_blocks,
'entanglement_blocks': entanglement_blocks,
'entanglement': entanglement,
'repetitions': repetitions,
'skip_final_rotation_layer': skip_final_rotation_layer,
'energy (Ha)': energy,
'error (mHa)': (energy-exact_energy)*1000,
'pass': (energy-exact_energy)*1000 <= accuracy_threshold,
'# of parameters': len(result.optimal_point),
'final parameters': result.optimal_point,
'# of evaluations': result.optimizer_evals,
'optimizer time': result.optimizer_time,
'# of qubits': int(qubit_op.num_qubits),
'# of CNOTs': cnots,
'score': score}
else:
result_dict = {
'optimizer': optimizer.__class__.__name__,
'mapping': converter.mapper.__class__.__name__,
'ansatz': ansatz.__class__.__name__,
'rotation blocks': None,
'entanglement_blocks': None,
'entanglement': None,
'repetitions': None,
'skip_final_rotation_layer': None,
'energy (Ha)': energy,
'error (mHa)': (energy-exact_energy)*1000,
'pass': (energy-exact_energy)*1000 <= accuracy_threshold,
'# of parameters': len(result.optimal_point),
'final parameters': result.optimal_point,
'# of evaluations': result.optimizer_evals,
'optimizer time': result.optimizer_time,
'# of qubits': int(qubit_op.num_qubits),
'# of CNOTs': cnots,
'score': score}
# Plot the results
import matplotlib.pyplot as plt
fig, ax = plt.subplots(1, 1)
ax.set_xlabel('Iterations')
ax.set_ylabel('Energy')
ax.grid()
fig.text(0.7, 0.75, f'Energy: {result.optimal_value:.3f}\nScore: {score:.0f}')
plt.title(f"{result_dict['optimizer']}-{result_dict['mapping']}\n{result_dict['ansatz']}")
ax.plot(counts, values)
ax.axhline(exact_energy, linestyle='--')
fig_title = f"\
{result_dict['optimizer']}-\
{result_dict['mapping']}-\
{result_dict['ansatz']}-\
Energy({result_dict['energy (Ha)']:.3f})-\
Score({result_dict['score']:.0f})\
.png"
fig.savefig(fig_title, dpi=300)
# Display and save the data
import pandas as pd
import os.path
filename = 'results_h2.csv'
if os.path.isfile(filename):
result_df = pd.read_csv(filename)
result_df = result_df.append([result_dict])
else:
result_df = pd.DataFrame.from_dict([result_dict])
result_df.to_csv(filename)
result_df[['optimizer','ansatz', '# of qubits', '# of parameters','rotation blocks', 'entanglement_blocks',
'entanglement', 'repetitions', 'error (mHa)', 'pass', 'score']] | _____no_output_____ | Apache-2.0 | solutions by participants/ex5/ex5-AnurananDas-3cnot-2.339767mHa-32params.ipynb | fazliberkordek/ibm-quantum-challenge-2021 |
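Since every run appends a row to `results_h2.csv`, repeated experiments can be compared afterwards; a small sketch I added: | # sketch: compare saved runs by score and error
summary = pd.read_csv('results_h2.csv')
print(summary.sort_values(['score', 'error (mHa)'])[['ansatz', 'optimizer', '# of CNOTs', 'error (mHa)', 'pass']])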
Starbucks Capstone Challenge IntroductionThis data set contains simulated data that mimics customer behavior on the Starbucks rewards mobile app. Once every few days, Starbucks sends out an offer to users of the mobile app. An offer can be merely an advertisement for a drink or an actual offer such as a discount or BOGO (buy one get one free). Some users might not receive any offer during certain weeks. Not all users receive the same offer, and that is the challenge to solve with this data set.Your task is to combine transaction, demographic and offer data to determine which demographic groups respond best to which offer type. This data set is a simplified version of the real Starbucks app because the underlying simulator only has one product whereas Starbucks actually sells dozens of products.Every offer has a validity period before the offer expires. As an example, a BOGO offer might be valid for only 5 days. You'll see in the data set that informational offers have a validity period even though these ads are merely providing information about a product; for example, if an informational offer has 7 days of validity, you can assume the customer is feeling the influence of the offer for 7 days after receiving the advertisement.You'll be given transactional data showing user purchases made on the app including the timestamp of purchase and the amount of money spent on a purchase. This transactional data also has a record for each offer that a user receives as well as a record for when a user actually views the offer. There are also records for when a user completes an offer. Keep in mind as well that someone using the app might make a purchase through the app without having received an offer or seen an offer. ExampleTo give an example, a user could receive a discount offer buy 10 dollars get 2 off on Monday. The offer is valid for 10 days from receipt. If the customer accumulates at least 10 dollars in purchases during the validity period, the customer completes the offer.However, there are a few things to watch out for in this data set. Customers do not opt into the offers that they receive; in other words, a user can receive an offer, never actually view the offer, and still complete the offer. For example, a user might receive the "buy 10 dollars get 2 dollars off offer", but the user never opens the offer during the 10 day validity period. The customer spends 15 dollars during those ten days. There will be an offer completion record in the data set; however, the customer was not influenced by the offer because the customer never viewed the offer. CleaningThis makes data cleaning especially important and tricky.You'll also want to take into account that some demographic groups will make purchases even if they don't receive an offer. From a business perspective, if a customer is going to make a 10 dollar purchase without an offer anyway, you wouldn't want to send a buy 10 dollars get 2 dollars off offer. You'll want to try to assess what a certain demographic group will buy when not receiving any offers. Final AdviceBecause this is a capstone project, you are free to analyze the data any way you see fit. For example, you could build a machine learning model that predicts how much someone will spend based on demographics and offer type. Or you could build a model that predicts whether or not someone will respond to an offer. Or, you don't need to build a machine learning model at all. 
You could develop a set of heuristics that determine what offer you should send to each customer (i.e., 75 percent of women customers who were 35 years old responded to offer A vs 40 percent from the same demographic to offer B, so send offer A). Data SetsThe data is contained in three files:* portfolio.json - containing offer ids and meta data about each offer (duration, type, etc.)* profile.json - demographic data for each customer* transcript.json - records for transactions, offers received, offers viewed, and offers completedHere is the schema and explanation of each variable in the files:**portfolio.json*** id (string) - offer id* offer_type (string) - type of offer ie BOGO, discount, informational* difficulty (int) - minimum required spend to complete an offer* reward (int) - reward given for completing an offer* duration (int) - time for offer to be open, in days* channels (list of strings)**profile.json*** age (int) - age of the customer * became_member_on (int) - date when customer created an app account* gender (str) - gender of the customer (note some entries contain 'O' for other rather than M or F)* id (str) - customer id* income (float) - customer's income**transcript.json*** event (str) - record description (ie transaction, offer received, offer viewed, etc.)* person (str) - customer id* time (int) - time in hours since start of test. The data begins at time t=0* value - (dict of strings) - either an offer id or transaction amount depending on the record**Note:** If you are using the workspace, you will need to go to the terminal and run the command `conda update pandas` before reading in the files. This is because the version of pandas in the workspace cannot read in the transcript.json file correctly, but the newest version of pandas can. You can access the terminal from the orange icon in the top left of this notebook. Problem statement and Approach In scope of this program and with the data presented, we will look at the association of promotion types, customer demographics and transaction records. The main goals are:**1. Which promotion would return the best outcome?**The "best" outcome is measured by the reach to the audience and the amount spent on each promotion. This is a comparison of one promotion versus the others over the sample of customers (all customers/users in the dataset)**2. Which group of customers would enjoy the promotion, or which promotion is preferred?**This is measured by comparative data from other groups exposed to the same promotion. This is subset data, in which one group of users is compared to another, or to the rest of the users, for a certain promotion or offer**3. Could we recommend a user a certain promotion, knowing that there is similarity between this user and other similar users in the dataset?**Essentially, for #3 we can build a recommendation system with the following concepts:- if the user is in the database, we can check the performance on received offers of the most similar n users. We then recommend the top-ranked offer that the group of n users responded to in the dataset. The performance could be measured by completion records or by actual dollars purchased- if we don't have any information about the user, we could send the most successful promotion from step 1, measured by completion response or by dollars purchased. Data Exploration | import warnings
warnings.filterwarnings('ignore') # make the output cleaner
import seaborn as sns
import matplotlib.pyplot as plt
%matplotlib inline
plt.style.use('seaborn-white')
plt.rcParams['figure.figsize'] = (10,6)
plt.rcParams['font.size'] = 13
plt.rcParams['font.sans-serif'] = 'Open Sans'
plt.rcParams['font.family'] = 'sans-serif'
plt.rcParams['text.color'] = '#4c4c4c'
plt.rcParams['axes.labelcolor']= '#4c4c4c'
plt.rcParams['xtick.color'] = '#4c4c4c'
plt.rcParams['ytick.color'] = '#4c4c4c'
import pandas as pd
import numpy as np
import math
import json
# one analysis could run for 20 minutes; the LOAD_JSON option here lets us load results from a json file
# otherwise the actual iteration will take place
LOAD_JSON = True | _____no_output_____ | MIT | Starbucks_Capstone_notebook.ipynb | binh-bk/Starbucks-Promotion-Recommender |
Portfolio | # read in the json files
portfolio = pd.read_json('data/portfolio.json', orient='records', lines=True)
portfolio | _____no_output_____ | MIT | Starbucks_Capstone_notebook.ipynb | binh-bk/Starbucks-Promotion-Recommender |
three types of promotion:- **BOGO (Buy One Get One free)**: a customer who receives this offer can get two similar drinks and pay for one, essentially 50% off except the customer now has two drinks; it works better for a customer who has a friend or a colleague to join for that drink- **Discount**: after a certain amount is purchased, for example \\$10, a reward of \\$5 is added to the buyer's Starbucks account- **Informational**: a notice or email introducing a new product or a new type of service. A customer is considered under *influence* during the valid period if the customer **viewed** the offerThere are four types of BOGO, four types of discounts, and two types of informational offers included in this campaign. We will come back to this **portfolio** data after seeing more data in **transcript** Profile | # load data and show the first 5 lines
profile = pd.read_json('data/profile.json', orient='records', lines=True)
profile.head()
# shape: rows by columns
profile.shape
profile.info(verbose=True)
# genders
profile.gender.value_counts()
profile.gender.value_counts().plot(kind='bar');
# age distribution, we saw some discrepancy with 118 (years old)
profile.age.value_counts()
# abnormal with 118 value, setting by NaN
profile.loc[profile['age']==118, 'age'] = np.NaN
# all genders
profile.age.plot.hist();
# distribution of age by gender
profile.query('gender=="M"')['age'].hist()
profile.query('gender=="F"')['age'].hist(); | _____no_output_____ | MIT | Starbucks_Capstone_notebook.ipynb | binh-bk/Starbucks-Promotion-Recommender |
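One detail worth noting before modeling (my addition): the rows that carried the placeholder age of 118, now set to NaN, typically also lack gender and income, so they look like missing demographics rather than real outliers. | # sketch: fraction of missing gender/income among the NaN-age rows
profile[profile.age.isna()][['gender', 'income']].isnull().mean()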
- clearly more young male members than female members in the dataset. It makes sense since Starbucks sells coffee as its main drink | # group genders and age
gender = profile.groupby(['age', 'gender']).count()['id'].unstack()
gender
gender['M/F'] = gender['M']/gender['F']
plt.title('Ratio of Male/Female of Starbucks customers over age')
plt.ylabel('Male/Female')
plt.xlabel('age')
plt.grid()
plt.plot(gender.index, gender['M/F']); | _____no_output_____ | MIT | Starbucks_Capstone_notebook.ipynb | binh-bk/Starbucks-Promotion-Recommender |
- more male customers under 40, and the share of female customers increases proportionally with age- over 80 years old, more females than males use Starbucks products (or are Starbucks customers) | # explore when members joined
profile['became_member_on'] = pd.to_datetime(profile.became_member_on, format='%Y%m%d')
latest_ts = profile.became_member_on.max()
profile.became_member_on.apply(lambda x: (latest_ts - x).total_seconds()/
(3600*24*365)).plot(kind='hist', bins=20);
profile['membership_age'] = profile.became_member_on.apply(lambda x: int((latest_ts - x).total_seconds()/
(3600*24*365)))
# portion with nans in incomes
profile.income.isna().sum()/profile.shape[0]
profile.income.plot(kind='hist', bins=20);
profile.head()
# make a combined graph
fig, axs = plt.subplots(2,2, figsize=(14,8))
fig.suptitle('Customer demographics', fontsize=20)
df_ = profile.gender.value_counts()
axs[0,0].set_title('Gender')
axs[0,0].bar(x = df_.index, height=df_.values)
axs[0,1].set_title('Age')
axs[0,1].hist(profile.age)
axs[1,1].set_title('Income, $')
axs[1,1].hist(profile.income)
df_ = profile.membership_age.value_counts()
axs[1,0].set_title('Membership in year')
axs[1,0].bar(x=df_.index, height=df_.values)
fig.tight_layout()
fig.savefig('img/profile.png', optimize=True, dpi=200);
fig, axs = plt.subplots(1,2, figsize=(12,5))
fig.subplots_adjust(wspace=0.5)
fig.suptitle('Customers\' gender distribution over age', fontsize=20)
m = profile.query('gender=="M"')['age']
fm = profile.query('gender=="F"')['age']
axs[0,].set_title('Age distribution by gender')
axs[0,].hist(m.values, alpha=0.5, label='M')
axs[0,].hist(fm.values, alpha=0.5, label='F')
axs[0].legend()
axs[1,].set_title('M/F ratio over age')
axs[1,].set_ylabel('M/F')
axs[1,].plot(gender.index, gender['M/F']);
axs[1,].grid()
# fig.legend()
fig.tight_layout()
fig.savefig('img/age_dist.png', optimize=True, dpi=200); | _____no_output_____ | MIT | Starbucks_Capstone_notebook.ipynb | binh-bk/Starbucks-Promotion-Recommender |
encoded profile | # for columns with a few categorical values, such as gender and became_member_on, we can encode them directly
# others need to be cut into larger bins and then encoded with pd.get_dummies
# will we find a similar customer based on characteristics from profile
profile.columns
# membership age
profile['membership_age'] = profile.became_member_on.apply(lambda x: int((latest_ts - x).total_seconds()/
(3600*24*365)))
df_m_age = pd.get_dummies(profile.membership_age, prefix="m_age", dummy_na=True)
# genders
df_gender = pd.get_dummies(profile.gender, prefix="gender", dummy_na=True)
# for the age column, we have 84 distinct values, which is too large to encode year by year
profile.age.value_counts()
# instead we gather ages into bins of 10 years
min_, max_ = profile.age.describe().loc['min'], profile.age.describe().loc['max']
min_, max_
age_bins = np.arange(min_, max_+1, 10)
age_labels = [int((age_bins[i]+age_bins[i+1])/2) for i in range(0, len(age_bins)-1)]
age_labels # average values
df_age = pd.get_dummies(pd.cut(profile.age, bins=age_bins,
labels = age_labels), prefix="age", dummy_na=True)
# and finally income
profile.income.describe()
# similar to age, we bin income into smaller groups
min_, max_ = profile.income.describe().loc['min'], profile.income.describe().loc['max']
income_bins = np.arange(min_, max_+1, 10_000)
income_labels = [int((income_bins[i]+income_bins[i+1])/2) for i in range(0, len(income_bins)-1) ]
income_labels # average values
df_income = pd.get_dummies(pd.cut(profile.income, bins=income_bins,
labels = income_labels), prefix="income", dummy_na=True)
# then we concat all users with encoded columns
profile_encoded = pd.concat([profile.id, df_gender, df_age, df_income, df_m_age], axis=1)
profile_encoded.set_index('id', inplace=True)
profile_encoded.shape
# see what the np.dot product looks like
np.dot(profile_encoded.iloc[1], profile_encoded.iloc[1])
def encoding_profile(df=None):
'''encode values of columns in user profile.
INPUT: user profile dataframe
OUTPUT: a dataframe with value encoded
'''
# membership age
df['membership_age'] = df.became_member_on.apply(
lambda x: int((latest_ts - x).total_seconds()/(3600*24*365)))
df_m_age = pd.get_dummies(df.membership_age, prefix="m_age", dummy_na=True)
# gender
df_gender = pd.get_dummies(df.gender, prefix="gender", dummy_na=True)
min_, max_ = df.age.describe().loc['min'], df.age.describe().loc['max']
# user age
age_bins = np.arange(min_, max_+1, 10)
age_labels = [int((age_bins[i]+age_bins[i+1])/2) for i in range(0, len(age_bins)-1)]
df_age = pd.get_dummies(pd.cut(df.age, bins=age_bins,
labels = age_labels), prefix="age", dummy_na=True)
# user income
min_, max_ = df.income.describe().loc['min'], df.income.describe().loc['max']
income_bins = np.arange(min_, max_+1, 10_000)
income_labels = [int((income_bins[i]+income_bins[i+1])/2) for i in range(0, len(income_bins)-1)]
df_income = pd.get_dummies(pd.cut(df.income, bins=income_bins,
labels = income_labels), prefix="income", dummy_na=True)
# concatenate
profile_encoded = pd.concat([df.id, df_gender, df_age, df_income, df_m_age], axis=1)
profile_encoded.set_index('id', inplace=True)
return profile_encoded
def find_similar_users(user_id, df=None, n_top=100):
'''find n_top similars to user_id based np.dot product
INPUT:
user_id: a select user id
df: a dataframe contains encoded columns characterize each user
n_top: number of top users would be return
OUTPUT:
a dictionary contain a list of user_id and similar score
'''
# select all users except the user_id
users = df.index.drop(user_id)
# find similarity
scores = [{'user': user, 'score':np.dot(df.loc[user_id],
df.loc[user])} for user in users]
# sort from top score
scores = sorted(scores, key=lambda k: k['score'], reverse=True)
return scores[:n_top]
# select a user based on index
user_id = profile_encoded.index[100]
user_id
profile_encoded = encoding_profile(df=profile)
profile_encoded.head()
find_similar_users(user_id, df=profile_encoded) | _____no_output_____ | MIT | Starbucks_Capstone_notebook.ipynb | binh-bk/Starbucks-Promotion-Recommender |
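With `find_similar_users` in place, the neighborhood-based recommendation sketched in the problem statement could look like the hypothetical helper below (my addition, not in the original notebook); it assumes a user-by-offer rating matrix such as the one built later in this analysis. | def recommend_offers(user_id, profile_encoded, ratings, n_top=100):
    '''rank offers by the average rating of the n_top most similar users (sketch)'''
    neighbors = [d['user'] for d in find_similar_users(user_id, df=profile_encoded, n_top=n_top)]
    neighbors = [u for u in neighbors if u in ratings.index]
    return ratings.loc[neighbors].mean().sort_values(ascending=False)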
Transcript | # transaction record
transcript = pd.read_json('data/transcript.json', orient='records', lines=True)
transcript.head()
transcript.info()
# long table
transcript.shape
# average traffic per user
transcript.shape[0]/transcript.person.nunique()
# transaction by timestamp (hours)
transcript.time.plot(kind='hist');
# transaction categories
transcript.event.value_counts()
transcript.event.value_counts().iloc[1:].plot(kind='bar')
plt.xticks(rotation=0);
plt.ylabel('Count')
plt.title('Overall transaction summary of dataset');
| _____no_output_____ | MIT | Starbucks_Capstone_notebook.ipynb | binh-bk/Starbucks-Promotion-Recommender |
- over all transactions, about 44% of promotions received get completed- 75% of promotions received get viewed- 58% of promotions viewed get completed- the two informational promotions have no "completed" record user-offer-matrix | transcript.head()
# get content of value columns, unpack
transcript['amount'] = transcript['value'].apply(lambda x: list(x.values())[0]
if 'amount' in list(x.keys())[0] else np.NaN)
transcript['offer'] = transcript['value'].apply(lambda x: list(x.values())[0]
if 'offer' in list(x.keys())[0] else np.NaN)
transcript.drop(columns='value', inplace=True)
transcript.shape
# let see total purchase in $M
transcript.amount.sum()/1_000_000
# on average, a purchase is $12.77, and 50% of purchases are $8.90 or more
transcript.amount.describe()
transcript.amount.plot(kind='hist', bins=100, logy=True);
# let see who make a $100 or more
transcript[transcript.amount > 100]
# looks like we have some office parties; let's see if we can filter out transactions outside 1.5*IQR
irq = transcript.amount.describe().loc['75%'] - transcript.amount.describe().loc['25%']
irq
upper_limit = transcript.amount.describe().loc['75%'] + 1.5*irq
upper_limit
transcript.shape
# it seems we would lose more than 50% of transactions if we filter them out
# we will proceed WITHOUT filtering,
transcript[transcript.amount < upper_limit].shape
# moving on to unique users
transcript.person.nunique()
# first let tackle the promo with BOGO or discount
discount_bogo_ids = portfolio[portfolio['offer_type'] != 'informational'].id.values
discount_bogo_ids
# informational offer id
info_ids = portfolio[portfolio['offer_type'] == 'informational'].id.values
info_ids | _____no_output_____ | MIT | Starbucks_Capstone_notebook.ipynb | binh-bk/Starbucks-Promotion-Recommender |
- We need to rate whether a promotion is a success or not. For a discount or BOGO, a **success case** should be: `offer received >> offer viewed >> offer completed`- A **failed** promotion is: `offer received >> no view on offer >> offer completed`: basically, an offer was made, the customer did not know about the offer but still purchased goods up to the minimum amount required by the offer. From the pretext of the problem, this is not desirable since the marketing campaign had no influence on the customer's purchase- Another failed case: `offer received >> offer viewed >> offer not completed` before time expired- and another case: `offer received >> no view on offer >> offer not completed` before time expiredThe last one represents a more complicated case: we do not know whether the offer was not completed because the customer was not aware of the promotion or because the promotion was not appealing enough to get completed.Let's tackle the problem with a simpler approach: rating a success/fail offer by looking at the `viewed, completed` records. This approach is only applicable to the 8 discount or BOGO promotions. For informational ones, we could simplify by looking only at the `viewed` records Simple rating based on records of viewed, completed | def rate_offer_discount_bogo(offer_id, df=None):
'''
rate a offer based on average number of viewed to number of completed
and total offer received.
score = number of completed / number of received (promotion)
For example:
- if a customer received two offers, viewed two and completed two, the score is 1
- if a customer received two, one viewed, completed 2, score is 2/2 = 1
- if a customer received two, one viewed, completed 0, score is 0/2 = 0
INPUT:
offer_id: id of offer of discount or BOGO
df: dataframe with promoting events (filter out transacion)
OUTPUT: a dataframe with index as the users received the offer and the column name as the offer id
'''
df_group = df.query(f'offer==@offer_id').groupby(
['person', 'event']).count()['time'].unstack(fill_value=0)
df_group = df_group.apply(lambda row:
row['offer completed']/row['offer received']
if row['offer received']>0 else np.NaN, axis=1)
return df_group.rename(offer_id).to_frame()
# for informational events, it is harder to evaluate influence of seeing the offer and the follow up transaction
# for a simple case, I will rate them based on viewed/received ratio
def rate_offer_info(offer_id, df=None):
'''rate informational offer based number of viewed and received.
rate = number of viewed/ number of received
For example:
- if all offers were viewed, the rate = 1
- if none of the offers were viewed, the rate = 0
INPUT: offer_id - id for the offer
OUTPUT: a dataframe with promoting events (filter out transacion)
'''
df_group = df.query('offer==@offer_id').groupby(
['person', 'event']).count()['time'].unstack(fill_value=0)
df_group = df_group.apply(lambda row: row['offer viewed'] /row['offer received']
if row['offer received']>0 else np.NaN, axis=1)
return df_group.rename(offer_id).to_frame()
dfs = list()
for offer_id in discount_bogo_ids:
dft = rate_offer_discount_bogo(offer_id, df=transcript)
dfs.append(dft)
# append informational offers to the list of dataframes from the previous step
for offer_id in info_ids:
dft = rate_offer_info(offer_id, df=transcript)
dfs.append(dft)
for df in dfs:
# print(df.info())
print(df.shape)
# check set of users received all offers
common_users = set(dfs[0].index)
for df in dfs[1:]:
users = df.index
common_users = common_users.intersection(users)
print(len(common_users))
# no user received all offers
# pd.concat is easier to apply, but only along columns or along rows, which is less flexible
df = dfs[0]
for df_ in dfs[1:]:
df = df.merge(df_, on='person', how='outer')
print(df.shape)
# there are 6 persons missing between transcript and this df
transcript.person.nunique()
df.shape
# let's see which ids do not have any transaction record
no_record_users = set(np.setdiff1d(transcript.person.to_list(), df.index.to_list()))
no_record_users
# look normal, I want to make sure the data is still there
profile.set_index('id').loc[no_record_users] | _____no_output_____ | MIT | Starbucks_Capstone_notebook.ipynb | binh-bk/Starbucks-Promotion-Recommender |
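Before factorizing, the column means of this user-offer matrix already answer part of goal 1: they give the average response rate per offer across the users who received it (a quick sketch I added). | # sketch: average response per offer, best first
df.mean().sort_values(ascending=False)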
Apply FunkSVD | # about 2/3 of the matrix is null values
sum(df.isnull().sum())/(df.shape[0]*df.shape[1])
# is there any user with no promotion available?
# each user got at least 4 offers and at most 9 offers
df.isnull().sum(axis=1).sort_values(ascending=False).value_counts().sort_values()
# adopted from Udacity's exercise
def FunkSVD(user_offer_mat, latent_features=10, learning_rate=0.0001, iters=100):
'''
This function performs matrix factorization using a basic form of FunkSVD with no regularization
INPUT:
ratings_mat - (numpy array) a matrix with users as rows, promotion_id as columns, and ratings as values
latent_features - (int) the number of latent features used
learning_rate - (float) the learning rate
iters - (int) the number of iterations
OUTPUT:
user_mat - (numpy array) a user by latent feature matrix
movie_mat - (numpy array) a latent feature by promotion_id matrix
'''
# Set up useful values to be used through the rest of the function
n_users = user_offer_mat.shape[0]
n_offers = user_offer_mat.shape[1]
num_ratings = np.count_nonzero(~np.isnan(user_offer_mat))
# initialize the user and promotion matrices with random values
user_mat = np.random.rand(n_users, latent_features)
offer_mat = np.random.rand(latent_features, n_offers)
# initialize sse at 0 for first iteration
sse_accum = 0
# header for running results
print("Optimizaiton Statistics")
print("Iterations | Mean Squared Error ")
# for each iteration
for iteration in range(iters):
# update our sse
old_sse = sse_accum
sse_accum = 0
# For each user-promotion pair
for i in range(n_users):
for j in range(n_offers):
# if the rating exists
if user_offer_mat[i, j] > 0:
# compute the error as the actual minus the dot product of the user and promotion latent features
diff = user_offer_mat[i, j] - np.dot(user_mat[i, :], offer_mat[:, j])
# Keep track of the sum of squared errors for the matrix
sse_accum += diff**2
# update the values in each matrix in the direction of the gradient
for k in range(latent_features):
user_mat[i, k] += learning_rate * (2*diff*offer_mat[k, j])
offer_mat[k, j] += learning_rate * (2*diff*user_mat[i, k])
# print results for iteration
print("%d \t\t %f" % (iteration+1, sse_accum / num_ratings))
return user_mat, offer_mat
# this is a sparse matrix; let's see how the FunkSVD algorithm performs on this set
df_ = df.to_numpy()
user_mat, offer_mat = FunkSVD(df_, latent_features=10, learning_rate=0.005, iters=10) | Optimizaiton Statistics
Iterations | Mean Squared Error
1 0.042280
2 0.028292
3 0.027818
4 0.027364
5 0.026920
6 0.026486
7 0.026063
8 0.025649
9 0.025245
10 0.024850
| MIT | Starbucks_Capstone_notebook.ipynb | binh-bk/Starbucks-Promotion-Recommender |
- small error, but is this a good approximation? | # reconstruct user-item matrix based on decomposed matrices
pred_mat = np.dot(user_mat, offer_mat)
# check the average rating per user (row means)
df.mean(axis=1)
# a quick check on the mean value is not promissing, we are looking value between -1 to 1
pred_mat.mean(axis=1) | _____no_output_____ | MIT | Starbucks_Capstone_notebook.ipynb | binh-bk/Starbucks-Promotion-Recommender |
- this approximation using FunkSVD does NOT seem to be working well with our dataset How does FunkSVD do with a denser matrix? | # select rows with 5 or fewer null cells
df2 = df[df.isnull().sum(axis=1) <= 5]
df_ = df2.to_numpy()
user_mat, offer_mat = FunkSVD(df_, latent_features=20, learning_rate=0.005, iters=10)
pred_mat = np.dot(user_mat, offer_mat)
# still. something is not working right
pred_mat.mean(axis=1)
pred_mat.max(axis=1)
diff_sqr = np.nansum((pred_mat - df_)**2)
# root mean squared error. It is almost guesswork.
np.sqrt(diff_sqr/(pred_mat.shape[0]*pred_mat.shape[1])) | _____no_output_____ | MIT | Starbucks_Capstone_notebook.ipynb | binh-bk/Starbucks-Promotion-Recommender |
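To put that RMSE in context, a trivial baseline that predicts the overall mean rating for every observed user-offer pair can be computed; this is a sketch I added and was not re-run here. | # sketch: predict-the-mean baseline over the observed entries
baseline = np.nanmean(df_)
baseline_rmse = np.sqrt(np.nanmean((df_ - baseline)**2))
print(f"Baseline RMSE: {baseline_rmse:.3f}")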
Surprise SVD | # let try out another FunkSVD
# https://surprise.readthedocs.io/en/stable/getting_started.html
from surprise import SVD
from surprise import Reader
from surprise import Dataset
from surprise.model_selection import cross_validate
from surprise.model_selection import KFold
from surprise import accuracy
reader = Reader(rating_scale=(0, 1))
# transfer from wide table to long table
df3 = pd.melt(df.reset_index(), id_vars=['person'], value_name='rating')
df3.columns = ['user', 'offer', 'rating']
print(df3.shape)
df3.head()
# load data from dataframe
data = Dataset.load_from_df(df3[['user', 'offer', 'rating']], reader)
kf = KFold(n_splits=3)
algo = SVD()
for trainset, testset in kf.split(data):
# train and test algorithm.
algo.fit(trainset)
predictions = algo.test(testset)
# Compute and print Root Mean Squared Error
accuracy.rmse(predictions, verbose=True) | RMSE: nan
RMSE: nan
RMSE: nan
| MIT | Starbucks_Capstone_notebook.ipynb | binh-bk/Starbucks-Promotion-Recommender |
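A likely reason for the NaN RMSE above is that the melted long table still contains NaN ratings for user-offer pairs that never occurred; dropping them before loading usually gives Surprise a valid training set (a sketch I added, not re-run here). | # sketch: drop missing ratings before loading into Surprise
df3_clean = df3.dropna(subset=['rating'])
data_clean = Dataset.load_from_df(df3_clean[['user', 'offer', 'rating']], reader)
for trainset, testset in KFold(n_splits=3).split(data_clean):
    algo.fit(trainset)
    accuracy.rmse(algo.test(testset), verbose=True)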
- it appeared that SVD algorithm is not converged. It is consistent with the RMSE error of 0.5 as above for value between 0 and zero Deeper dive into `Transcript` Rate transaction by completion | # let see transaction records again
transcript.head()
# and make a wide table by counting records
transcript.groupby(['offer', 'event']).count()['person'].unstack()
df_offer = transcript.groupby(['offer', 'event']).count()['person'].unstack()
df_offer.index.set_names(names='id', inplace=True)
# and calculate some ratios
df_offer['view/receive'] = df_offer.apply(lambda row: row['offer viewed']/row['offer received'], axis=1)
df_offer['comp/receive'] = df_offer.apply(lambda row: row['offer completed']/row['offer received'], axis=1)
df_offer['comp/view'] = df_offer.apply(lambda row: row['offer completed']/row['offer viewed'], axis=1)
df_offer
# checking column on portfolio dataset
portfolio.columns
# join promotion profile and promotion records
portfolio_extra = portfolio.merge(df_offer, how='right', left_on='id', right_on='id')
# list comprehension over the 2D array, cast to a set to get the unique elements
channels = set([channel for row in portfolio_extra.channels.values for channel in row])
# encode channels
for channel in channels:
portfolio_extra[channel] = portfolio_extra.channels.apply(lambda cell: 1 if channel in cell else 0)
portfolio_extra.head()
# or a cleaner view
portfolio_extra[['id', 'channels', 'difficulty', 'duration', 'offer_type',
'view/receive', 'comp/receive', 'comp/view']]
# ratio of views and advertising channels
plt.figure(figsize=(8,5))
plt.plot(portfolio_extra[['email', 'mobile', 'web', 'social']].sum(axis=1),
portfolio_extra['view/receive'],
ls='', marker='o', markersize=20)
plt.title('Marketing channels vs. view rate')
plt.ylabel('#view/#receive per promotion')
plt.xlabel('#channel')
plt.tight_layout()
plt.savefig('img/view_rate.png', optimize=True, dpi=120) | _____no_output_____ | MIT | Starbucks_Capstone_notebook.ipynb | binh-bk/Starbucks-Promotion-Recommender |
summary:- more marketing channels resulted in a higher view rate- every promotion has email as one of its channels- the view rate is not linear in the number of channels: - without social marketing, the view rate is about 0.54 (i.e. 54% of customers saw their promotion) - without social plus mobile, the view rate drops to 0.35 - without web marketing, the view rate is 0.87, so web is the least influential channel - all promotions were sent via email, so we cannot draw any conclusion about this channel (a hedged numeric check follows the bar chart below) | # let's see all the rates for each promotion id
categories = ['view/receive','comp/receive', 'comp/view']
portfolio_extra[categories].plot(kind='bar', subplots=True); | _____no_output_____ | MIT | Starbucks_Capstone_notebook.ipynb | binh-bk/Starbucks-Promotion-Recommender |
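A hedged numeric check of the summary above (not in the original notebook): compare the mean view rate with and without each non-email channel. | # mean view rate without (0) and with (1) each channel
for channel in ['social', 'mobile', 'web']:
    print(channel, portfolio_extra.groupby(channel)['view/receive'].mean().round(2).to_dict())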
**Success rate of completion**- we will define a term to quantify a successful transaction by: s_rate = 1 - (completed/received)*(completed/viewed) for each transaction, or over all users in this dataset. - in a perfect case, an offer was received, viewed, then completed: `s_rate = 0`- if a customer received an offer, did NOT see the offer, but still completed it: `s_rate = -infinity`- if a customer received an offer, saw the offer, but did not complete it: `s_rate = 1`- to find the most successful transactions we can use the absolute value `abs(s_rate)`, where an s_rate close to zero indicates more success than one far from zero (a small worked sketch of these edge cases follows the next code cell). | # for a simpler approach, I will use the ratio of (completed/received)*(completed/viewed)
# as one rating for how successful the promotion is. This only applies to discount and BOGO offers
categories = ['view/receive','comp/receive', 'comp/view']
portfolio_extra['s_rate'] = 1 - portfolio_extra['comp/receive']*portfolio_extra['comp/view']
portfolio_extra.columns
portfolio_extra[['id', 'offer_type', 'difficulty', 'duration', 's_rate']].\
sort_values(by='s_rate', ascending=True)
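To make the edge cases listed above concrete, here is a small hedged helper (an illustration only, not part of the original pipeline) that evaluates s_rate for a single offer's counts. | # hedged illustration of the s_rate definition above
import numpy as np

def s_rate_example(received, viewed, completed):
    received, viewed, completed = map(np.float64, (received, viewed, completed))
    with np.errstate(divide='ignore', invalid='ignore'):
        return 1 - (completed/received)*(completed/viewed)

print(s_rate_example(1, 1, 1))  # received, viewed and completed -> 0.0
print(s_rate_example(1, 0, 1))  # completed without viewing      -> -inf
print(s_rate_example(1, 1, 0))  # viewed but not completed       -> 1.0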
def find_pop_offers(transcript, portfolio):
'''find the offers with the highest success rate
INPUT:
transcript: dataframe of transaction records of customers with the promotions they were offered
portfolio: information on each promotion, including type, duration and difficulty
OUTPUT:
a dataframe of offers sorted by success rate `s_rate`, where
s_rate = abs(1 - (number completed/number received)*(number completed/number viewed))
for each promotion
'''
df = transcript.groupby(['offer', 'event']).count()['person'].unstack()
df.index.set_names(names='id', inplace=True)
df['view/receive'] = df.apply(lambda row: row['offer viewed']/row['offer received'], axis=1)
df['comp/receive'] = df.apply(lambda row: row['offer completed']/row['offer received'], axis=1)
df['comp/view'] = df.apply(lambda row: row['offer completed']/row['offer viewed'], axis=1)
df = portfolio.merge(df, how='right', left_on='id', right_on='id')
# define an `s_rate` as a success rate based on completion
df['s_rate'] = abs(1-df['comp/receive']*df['comp/view'])
offers = df[['id', 'offer_type', 'difficulty', 'duration',
's_rate']].sort_values(by='s_rate', ascending=True)
return offers
ranked_offers = find_pop_offers(transcript, portfolio)
ranked_offers
labels = ranked_offers[['id', 'offer_type']].apply(
lambda row: row['offer_type']+'_' + row['id'][-5:], axis=1).values
labels
plt.figure(figsize=(12,6))
plt.bar(x=labels, height=ranked_offers['s_rate'], width=0.4)
plt.xticks(rotation=0)
plt.title('Successful rate by promotion id (lower is more completion)')
plt.tight_layout()
plt.grid()
plt.savefig('img/s_rate.png', optimize=True, dpi=120);
transcript.amount.describe()
plt.figure(figsize=(12,8))
# plt.subplot_adjust(h_space=0.5)
plt.suptitle('Transaction records')
ax1 = plt.subplot(221)
ax1.set_title('Overall ')
df_ = transcript.event.value_counts()
short_labels = [label.split(' ')[-1] for label in df_.index]
ax1.bar(x=short_labels, height=df_.values, width=0.3)
ax2 = plt.subplot(222)
ax2.set_title('Amount per transaction, $')
ax2.hist(transcript.amount, log=True, bins=21)
ax3 = plt.subplot(212)
ax3.set_title('Number of events by promotion id')
df_ = transcript.groupby(['offer', 'event']).count()['time'].unstack()
sort_labels = [label[-5:] for label in df_.index]
df_[['offer received', 'offer viewed', 'offer completed']].plot(kind='bar', ax=ax3)
ax3.set_xticklabels(sort_labels, rotation=0)
ax3.legend(loc='best', ncol=3, bbox_to_anchor=(0.2, 0., 0.5, -0.15))
ax3.set_xlabel('')
# # axs[1,0].bar(df_.index, df_.values)
plt.tight_layout()
plt.savefig('img/transaction.png', optimize=True, dpi=120) | _____no_output_____ | MIT | Starbucks_Capstone_notebook.ipynb | binh-bk/Starbucks-Promotion-Recommender |
Grouping transactions by user | def analyze_df(df, offer_id, info_offer=False):
'''summarize one offer window: whether the offer was viewed, whether another offer was viewed, and the amount spent.
INPUT:
df: a dataframe containing transaction records
offer_id: the promotion id of the offer
OUTPUT:
a dictionary keyed by the starting index with the summary as its value
'''
# print(df)
start_idx = df.head(1).index[0]
result = {'viewed': False,
'completed': False,
'other offer viewed': False,
'amount': 0}
count_viewed_offers = df.query('event=="offer viewed"')['offer'].value_counts().index
if offer_id in count_viewed_offers:
result['viewed'] = True
if len(count_viewed_offers) > 1:
result['other offer viewed'] = True
amount = df['amount'].sum()
result['amount'] = amount
if info_offer:
# for an informational offer, if we see at least one transaction we mark the offer as completed
if (df['offer'] == 'transaction').any():
result['completed'] = True
else:
# for discount for BOGO offers, we look for "offer completed" in event columns
if offer_id in df.query('event=="offer completed"')['offer'].values:
result['completed'] = True
return {start_idx: result}
def slice_df(df, start_point, valid_hours):
'''slice a dataframe based on the duration of an offer since it was received.
INPUT:
df: a larger dataframe containing all transaction records
start_point: starting index, usually where the offer is received
valid_hours: the number of hours the offer stays valid
OUTPUT:
a sliced dataframe
'''
time_track = df.loc[start_point]['time']
time_expired = time_track + valid_hours
#
for idx, row in df[['offer', 'event', 'time', 'amount']].loc[start_point:].iterrows():
    if row['time'] > time_expired:
        # stop at the first record past the expiry time
        return df.loc[start_point:idx]
# no record exceeded the expiry time, so return everything from the start point
return df.loc[start_point:]
portfolio.query('offer_type=="informational"').id.values
transcript.groupby('event').count()
transcript.head()
def info_offer(user, transcript, portfolio):
'''summarize the offers for one user by iterating through each promotion and counting the numbers
of received, viewed, and completed transactions.
INPUT:
user: the id associated with a user
transcript: a dataframe of all transaction records
portfolio: a dataframe containing promotion ids and valid durations
OUTPUT:
a dictionary keyed by promotion id with the summary stats as values
'''
# slice the main dataset to each user/person with 4 columns
df = transcript.query('person==@user')[['time', 'offer', 'event', 'amount']]
res = dict()
for offer in portfolio['id'].values:
# if informational offer, we count one transaction for completion
if offer in portfolio.query('offer_type=="informational"').id.values:
info_offer = True
else:
info_offer = False
num_offer = df.query(f'offer=="{offer}" & event=="offer received"').index
valid_hours = portfolio.query(f'id=="{offer}"')['duration'].values[0]*24
offer_stats = []
for segment in num_offer:
dft = slice_df(df, segment, valid_hours)
result = analyze_df(dft, offer, info_offer=info_offer)
offer_stats.append(result)
# summary over each offer
result = {
'viewed': 0,
'completed': 0,
'other offer viewed': 0,
'amount': 0}
# counting all offer for each promotion id
result['received'] = len(num_offer)
result['info'] = info_offer
for stat in offer_stats:
stat_v = list(stat.values())[0]
result['viewed'] += stat_v['viewed']
result['completed'] += stat_v['completed']
result['other offer viewed'] += stat_v['other offer viewed']
result['amount'] += stat_v['amount']
res[offer] = result
return res
# iterate through all users and make a summary of their transactions by offer id
# this would take about 20 minutes to complete
def user_transaction(transcript=None, portfolio=None, save_file=True):
info_res = dict()
max_value = transcript['person'].nunique()
i= 0
with progressbar.ProgressBar(max_value=max_value) as bar:
for person in transcript['person'].unique():
person_ = info_offer(person, transcript, portfolio)
info_res[person] = person_
i +=1
bar.update(i)
df_info = pd.DataFrame.from_dict(data=info_res, orient='index')
if save_file:
# save to file, and save 20 minutes if we need to load them
df_info.to_json('data/offer_summary.json')
return df_info | _____no_output_____ | MIT | Starbucks_Capstone_notebook.ipynb | binh-bk/Starbucks-Promotion-Recommender |
If you want to test grouping the transactions per person, this is a screenshot of the run; to run it yourself, change `LOAD_JSON` from `True` to `False` at the beginning of the file | # LOAD_JSON = False
if not LOAD_JSON:
import progressbar
df_info = user_transaction(transcript=transcript, portfolio=portfolio)
df_info
else:
df_info = pd.read_json('data/offer_summary.json')
# make a short label from portfolio dataframe
labels = portfolio[['id', 'offer_type']].apply(
lambda row: row['offer_type'][:4]+'_' + row['id'][-5:], axis=1).values
labels
# compare the total dollars spent for each transaction
df_amount = pd.DataFrame()
amounts = dict()
df_info.columns = labels
for offer in df_info.columns:
df_amount[offer] = df_info[offer].apply(lambda x: x['amount']/x['received'] if x['received']>0 else np.NaN)
# save average values to a dictionary
amounts['received'] = df_amount.describe().loc['mean'].to_dict()
# for viewed event
df_amount = pd.DataFrame()
# average spending per transaction if offer NOT viewed
for offer in df_info.columns:
df_amount[offer] = df_info[offer].apply(lambda x: x['amount'] if x['viewed']==0 else np.NaN)
amounts['not_viewed'] = df_amount.describe().loc['mean'].to_dict()
# if offer was viewed
for offer in df_info.columns:
df_amount[offer] = df_info[offer].apply(lambda x: x['amount']/x['viewed'] if x['viewed']>0 else np.NaN)
amounts['viewed'] = df_amount.describe().loc['mean'].to_dict()
# for transaction not completed
df_amount = pd.DataFrame()
for offer in df_info.columns:
df_amount[offer] = df_info[offer].apply(lambda x: x['amount'] if x['completed']==0 else np.NaN)
amounts['not_completed'] = df_amount.describe().loc['mean'].to_dict()
# and completed transaction
for offer in df_info.columns:
df_amount[offer] = df_info[offer].apply(lambda x: x['amount']/x['completed'] if x['completed']>0 else np.NaN)
amounts['completed'] = df_amount.describe().loc['mean'].to_dict()
df_amount = pd.DataFrame().from_dict(amounts, orient='index')
df_amount
fig, ax = plt.subplots(figsize=(12,6))
df_amount.transpose().plot(kind='bar', ax=ax)
ax.legend(ncol=5)
ax.set_title('Amount purchased by offer id')
ax.set_ylabel('Average amount per offer, $')
ax.set_xticklabels(labels=df_amount.columns, rotation=0)
ax.grid()
fig.tight_layout()
fig.savefig('img/count_event.png', optimize=True, dpi=120)
df_amount.loc['viewed']/df_amount.loc['not_viewed'] | _____no_output_____ | MIT | Starbucks_Capstone_notebook.ipynb | binh-bk/Starbucks-Promotion-Recommender |
remarks: - marking an informational offer as completed after a single transaction is not sufficient to distinguish anything: under this minimal assumption essentially every received informational offer counts as completed. - when a BOGO or discount offer was viewed, the total spending while the offer was valid differs hugely from the case where the offer was not viewed; a similar observation holds for the completed offers. - the total dollars spent were 7 to 100 times higher for offers the user saw. The caveat is that this is confounded with how active the users are: users who buy Starbucks products more often will tend to check the app and the promotions more often and make more purchases in general. Recommendation system Approach- if a user is in the database and has completed a few offers, combine promotions from the user's own purchase history with those of similar users- if a user is totally new to the app, recommend the top promotions for the overall population | # load the grouped transactions from the json file
df_info = pd.read_json('data/offer_summary.json')
encoded_profile = encoding_profile(df=profile)
def evaluate_similar_users(user_id, profile_df=None,
info_df=None, sort_amount=False, n_top=100):
'''evaluate how users similar to user_id responded to the promotions
INPUT: the user id, a profile dataframe used to find similar users,
and the per-user offer summary dataframe
OUTPUT: a dict mapping promotion id to average purchase amount, ordered by the
view/completion success rate (or by amount when sort_amount is True)
'''
users = find_similar_users(user_id, df=profile_df, n_top=n_top)
users = [item['user'] for item in users]
df_users = info_df.loc[users]
cols = df_users.columns
ranking = dict()
for col in cols:
receives = df_users[col].apply(lambda x: x['received']).sum()
completes = df_users[col].apply(lambda x: x['completed']).sum()
views = df_users[col].apply(lambda x: x['viewed']).sum()
amount_avg = df_users[col].apply(lambda x: x['amount']).mean()
# rank metrics
ranking[col] = {'rank': abs(1-completes**2/(views*receives)),
'amount': amount_avg}
ranks = pd.DataFrame().from_dict(ranking,orient='index')
if not sort_amount: # sort by completion rate first then amount
ranks = ranks.sort_values(['rank', 'amount'], ascending=[True, False])
else: # by amount first, then rank
ranks = ranks.sort_values(['amount', 'rank'], ascending=[False, True])
ranks = {k:round(v,2) for k,v in ranks['amount'].to_dict().items()}
return ranks
def evaluate_user_history(user_id, df=None):
'''evaluate user history preference by evaluating number of view and completion
on each promotion
INPUT: a user_id - a string encoded for each user
df: a dataframe encoded user demographic
OUTPUT: a ranking dictionary of promotion ids and the dollar amount
purchased, for offers whose number of views equals the number
of completions
'''
dft = df.loc[user_id]
user_hist = dict()
for promotion in dft.index:
dft_ = dft.loc[promotion]
if dft_['viewed'] == dft_['completed'] > 0:
user_hist[promotion] = dft_['amount']
# print(dft.loc[promotion])
user_hist = {k:v for k,v in sorted(user_hist.items(),
key=lambda item: item[1],
reverse=True)}
return user_hist
def recommend_offers(user_id, profile_df=encoded_profile, info_df=df_info, sort_amount=False):
'''recommend a few promotions for a user based on the user's own history,
group similarity, or overall popularity
INPUT: user_id (str) - user identification in the dataset
OUTPUT: a dictionary of promotions with the expected dollars spent (existing users)
or the completion-based success rate (new users)
'''
# existing users:
if user_id in profile_df.index:
offers = evaluate_similar_users(user_id, profile_df=profile_df,
info_df = info_df, sort_amount=sort_amount)
user_pref = evaluate_user_history(user_id, df=info_df)
# if the user's own spend on an offer exceeds the group value, the group preference is overridden
for k,v in user_pref.items():
if v > offers[k]:
offers[k] = v
if sort_amount:
offers = {k:v for k,v in sorted(offers.items(),
key=lambda item: item[1],
reverse=True)}
else: # new user
offers = find_pop_offers(transcript, portfolio).set_index('id')['s_rate'].to_dict()
offers = {k: round(v,2) for k,v in offers.items()}
return offers | _____no_output_____ | MIT | Starbucks_Capstone_notebook.ipynb | binh-bk/Starbucks-Promotion-Recommender |
Test recommendation | import random
random.seed(2021)
max_idx = len(profile)
user_idx = random.randint(0, max_idx)
user_id = profile.iloc[user_idx].id
user_id
# '3713b8ef49c541beaa07ed83ed0136d5'
# group preference
group = evaluate_similar_users(user_id, profile_df=encoded_profile,
info_df=df_info, sort_amount=True)
group
# user preference
user = evaluate_user_history(user_id, df=df_info)
user
# based on both the user and similar users preference
recommend_offers(user_id=user_id, profile_df=encoded_profile,
info_df=df_info, sort_amount=False)
recommend_offers(user_id=user_id, profile_df=encoded_profile,
info_df=df_info, sort_amount=True)
# new user: this shows the completion-based success rate for each offer
recommend_offers(user_id='new_user') | _____no_output_____ | MIT | Starbucks_Capstone_notebook.ipynb | binh-bk/Starbucks-Promotion-Recommender |
Amazon web services (AWS)
- [Loading data into S3 buckets](Loading-data-into-S3-buckets) - via Console, CLI, Boto3
- [Setting up an EC2 reserved instance](Setting-up-a-reserved-instance) - via Console, CLI, Boto3
- [Spin up containers via Docker Machine](Spin-up-containers-via-Docker-Machine)
- [Instance types](Instance-types)
- [ECS clusters and Docker Cloud](ECS-clusters-and-Docker-Cloud)

TODO:
- (Make task) Getting Spark, Python and Jupyter notebook running on Amazon EC2
- https://medium.com/@josemarcialportilla/getting-spark-python-and-jupyter-notebook-running-on-amazon-ec2-dec599e1c297

Introduction

There is no special reason to study AWS compared to Google Cloud, Azure, Digital Ocean etc. Amazon Cloud is probably the most popular today, so it offers a nice parallel to Python. AWS [will deploy a region in Sweden](https://aws.amazon.com/blogs/aws/coming-in-2018-new-aws-region-in-sweden/) this year, which will make it interesting for genomics research, especially since it will be made GDPR compliant. Currently no Swedish patient data can be processed on premises outside of Sweden, but the cloud is a player in general non-clinical research.

[AWS](https://aws.amazon.com/) is an umbrella for a large number of computing resources, starting from storage and ending with the management of the remote computing infrastructure. To be practical, our focus is on loading data into a bucket, setting up a cloud instance, and later using Docker to remotely spin up cloud instances. We will also learn how to manage these resources via Python.

Loading data into S3 buckets

Let us start with loading data. This is a common operation when you want to share your research results with someone, but it can also be useful for yourself as a way to back up your data. Clouds use the concept of 'buckets' to hold data. The 'objects' stored in a bucket can have any encoding, from text to film. There used to be severe penalties on loading super massive objects. Today however, the maximum size for an object is 5TB (on AWS). We will learn how to do this via the web console, via the command line interface and via Python. Note that even though these options seem separate, they all use the same API.

Web Console

Task:
- Use the console to load a test file onto an S3 bucket
- Follow this doc link: https://docs.aws.amazon.com/AmazonS3/latest/user-guide/upload-objects.html
- Use the following shell command to generate some test data, or use your own:
```
$ for i in {1..5}; do echo "l$i">f$i.txt && gzip f$i.txt; done && \
zcat f*.txt.gz| gzip > f.gz
```
- Figure out how much your bucket would cost (tip: it is free up to a threshold)!

Amazon CLI

Now let's repeat those steps using the command line interface. But first, we must install it.

Links:
- https://docs.aws.amazon.com/cli/latest/userguide/using-s3-commands.html
- https://aws.amazon.com/getting-started/tutorials/backup-to-s3-cli/
```
$ sudo apt install awscli
$ aws configure
AWS Access Key ID [None]:
AWS Secret Access Key [None]:
(also used eu-central-1 for region, and json as format)
```
The above command needs SSL certificates.
To generate the aws keys:
- https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/set-up-ami-tools.html?icmpid=docs_iam_consoleami-tools-managing-certs
```
$ openssl genrsa 2048 > aws-private.pem
$ openssl req -new -x509 -nodes -sha256 -days 365 -key aws-private.pem -outform PEM -out aws-certificate.pem

if in dire need for security use:
$ sudo apt-get install xclip
$ xclip -sel clip < ~/.ssh/aws-private.pem
```
Now that you installed the CLI, here are the main bucket related activities:
```
aws s3 mb s3://my-first-backup-bucket
upload:
aws s3 cp "C:\users\my first backup.bak" s3://my-first-backup-bucket/
download:
aws s3 cp s3://my-first-backup-bucket/my-first-backup.bak ./
delete:
aws s3 rm s3://my-first-backup-bucket/my-first-backup.bak
```
Data can also be streamed towards a bucket. This can be useful to avoid unnecessary space waste on the local cloud or PC, but it can be just as useful when it comes to using bucket data without storing all that data locally. It can be done via piping, or process substitution:
```
$ aws s3 mb s3://siofuysni78
$ zcat f*.txt.gz| gzip | aws s3 cp - s3://siofuysni78/f.gz
$ aws s3 rm s3://siofuysni78/f.gz
$ aws s3 rb s3://siofuysni78 --force
```
Why did I use such a weird name? It is because Amazon indexes all buckets by their name, thus a name such as "test123" will never fly. Here is how to stream from S3 to your computing resource (it can be a cloud instance, your local machine or a remote server):
```
$ aws s3 mb s3://siofuysni78
$ zcat f*.txt.gz| gzip | aws s3 cp - s3://siofuysni78/f.gz
$ aws s3 cp s3://siofuysni78/f.gz - | gunzip | grep 1l1
```
Boto3

Links:
- http://boto3.readthedocs.io/en/latest/guide/migration.htmlinstallation-configuration
- https://boto3.readthedocs.io/en/latest/guide/s3-example-creating-buckets.html
- http://boto3.readthedocs.io/en/latest/reference/services/s3.html
```
conda install -c anaconda boto3
pip install boto3
``` | import boto3
# initialize the S3 service
s3 = boto3.client('s3')
# create a test bucket (tip: use a different name!)
s3.create_bucket(Bucket='jo8a7fn8sfn8', CreateBucketConfiguration={'LocationConstraint': 'eu-central-1'})
# Call S3 to list current buckets
response = s3.list_buckets()
# Get a list of all bucket names from the response
buckets = [bucket['Name'] for bucket in response['Buckets']]
# Print out the bucket list
print("Bucket List: %s" % buckets)
import boto3
# Create an S3 client
s3 = boto3.client('s3')
filename = '/path/to/test/file'
bucket_name = 'jo8a7fn8sfn8'
# Uploads the given file using a managed uploader, which will split up large
# files automatically and upload parts in parallel.
s3.upload_file(filename, bucket_name, filename)
# or
# s3.Object('mybucket', 'hello.txt').put(Body=open('/tmp/hello.txt', 'rb'))
# https://boto3.readthedocs.io/en/latest/guide/migrations3.html#deleting-a-bucket
import boto3
import botocore
s3 = boto3.resource('s3')
bucket = s3.Bucket('jo8a7fn8sfn8')
for key in bucket.objects.all():
key.delete()
bucket.delete() | _____no_output_____ | CC0-1.0 | day3/aws.ipynb | NBISweden/workshop-advanced-python |
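The CLI piping shown earlier also has a boto3 counterpart. The following is a hedged sketch (the bucket name is just the example used above; re-create a bucket first, since the cell above deletes it) of pushing and pulling gzipped data without touching the local disk. | # hedged sketch: stream gzipped data to and from S3 entirely in memory
import gzip
import boto3

s3 = boto3.client('s3')
bucket_name = 'jo8a7fn8sfn8'  # any bucket you own

# upload: gzip an in-memory payload and push it straight to the bucket
s3.put_object(Bucket=bucket_name, Key='f.gz', Body=gzip.compress(b"l1\nl2\nl3\nl4\nl5\n"))

# download: read the object back and decompress in memory
obj = s3.get_object(Bucket=bucket_name, Key='f.gz')
print(gzip.decompress(obj['Body'].read()).decode().splitlines())

# clean up the test object
s3.delete_object(Bucket=bucket_name, Key='f.gz')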
Now I want to test using the bucket without local file storage (the streaming sketch above shows one way to do this).

Setting up a reserved instance

Amazon names their most popular instances Elastic Compute Cloud (EC2).
- https://aws.amazon.com/ec2/
- https://docs.aws.amazon.com/efs/latest/ug/gs-step-one-create-ec2-resources.html
- https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/get-set-up-for-amazon-ec2.html

Probably the most basic level of access to the Amazon computing infrastructure is setting up a free tier reserved instance.

Web Console

Task:
- Set up an AWS instance using the Free Tier (don't forget to close it!).
- [https://aws.amazon.com/console/](https://aws.amazon.com/console/)

Amazon CLI
```
aws ec2 run-instances --image-id ami-xxxxxxxx --count 1 --instance-type t1.micro --key-name MyKeyPair --security-groups my-sg
```

Boto3
- http://boto3.readthedocs.io/en/latest/reference/services/ec2.htmlinstance
- http://boto3.readthedocs.io/en/latest/guide/migrationec2.htmllaunching-new-instances

Task:
- A larger task is to create an instance with Boto3, install an SSH client such as Paramiko and run commands on the remote client.

Helpful code: |
import boto3
import botocore
import paramiko

ec2 = boto3.resource('ec2')
instance = ec2.Instance('id')
instances = ec2.create_instances(ImageId='<ami-image-id>', MinCount=1, MaxCount=5)

key = paramiko.RSAKey.from_private_key_file('path/to/mykey.pem')
client = paramiko.SSHClient()
client.set_missing_host_key_policy(paramiko.AutoAddPolicy())

# Connect/ssh to an instance
try:
    # Here 'ubuntu' is the user name and 'instance_ip' is the public IP of the EC2 instance
    client.connect(hostname=instance_ip, username="ubuntu", pkey=key)
    # Execute a command (cmd) after connecting/ssh to an instance
    stdin, stdout, stderr = client.exec_command(cmd)
    print(stdout.read())
    # close the client connection once the job is done
    client.close()
except Exception as e:
    print(e) | _____no_output_____ | CC0-1.0 | day3/aws.ipynb | NBISweden/workshop-advanced-python
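A hedged addition that is not part of the original tutorial: wait until the instance actually reaches the running state and refresh its metadata before attempting the SSH connection (`instances` is assumed to be the list returned by `ec2.create_instances` above). | # wait for the new instance and grab its public IP
instance = instances[0]
instance.wait_until_running()  # boto3 waiter: blocks until the EC2 state is 'running'
instance.reload()              # refresh attributes so the public IP is populated
instance_ip = instance.public_ip_address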
import pandas as pd
!ls ./drive/MyDrive/Test01
!ls ./smtphP
df = pd.read_csv('./drive/MyDrive/Test01/smtph_total.csv')
df.head(5)
posts = df['Description']
posts
!python -m pip install konlpy
!python -m pip install eunjeon
from konlpy.tag import Mecab
tagger = Mecab()
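A hedged usage sketch, assuming the Mecab backend installed correctly: tokenize one of the loaded descriptions. | # tokenize a sample description with the tagger created above
sample = str(posts.iloc[0])
print(tagger.morphs(sample))  # all morphemes
print(tagger.nouns(sample))   # nouns only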
| _____no_output_____ | Apache-2.0 | NLTK_korean.ipynb | creamcheesesteak/test_deeplearning |
|
Creating two graphs including the trends of TFR and FLPR. One graph includes 3 countries with the highest current HDI and the other graph includes 3 with the lowest current HDI. **Importing Libraries and Uploading Files** | import matplotlib.pyplot as plt
import pandas as pd
import pylab
%matplotlib inline
pylab.rcParams['figure.figsize'] = (10., 8.)
from google.colab import files
uploaded = files.upload()
#upload Africa_FULL_MERGE.csv file
data = 'Africa_FULL_MERGE.csv'
africa = pd.read_csv(data)
africa
africa['HDI'] = pd.to_numeric(africa['HDI'], errors = 'coerce')
africa['HDI Rank (2018)'] = pd.to_numeric(africa['HDI Rank (2018)'], errors = 'coerce') | _____no_output_____ | MIT | Python Analysis/Visualisations/Code/Visualisation_TFR_and_FLPR_of_Highest_and_Lowest_HDI_countries.ipynb | ChangHuaHua/QM2-Group-12 |
**3 Highest and Lowest HDI** | africa_2018 = africa[africa['Year'] == 2018]
africa_2018 = africa_2018.reset_index(drop=True)
africa_2018
africa_string = africa_2018['HDI'].astype(float)
africa_string.nlargest(3)
# corresponding countries to index numbers 43, 35, and 3 are:
# 43 Seychelles 0.801
# 35 Mauritius 0.796
# 3 Algeria 0.759
africa_string.nsmallest(3)
# corresponding countries to index numbers 38, 13, and 14 are:
# 38 Niger 0.377
# 13 Central African Republic 0.381
# 14 Chad 0.401 | _____no_output_____ | MIT | Python Analysis/Visualisations/Code/Visualisation_TFR_and_FLPR_of_Highest_and_Lowest_HDI_countries.ipynb | ChangHuaHua/QM2-Group-12 |
**Segregating Dataframe** | africa_seychelles = africa[africa['Country'] == 'Seychelles']
africa_mauritius = africa[africa['Country'] == 'Mauritius']
africa_algeria = africa[africa['Country'] == 'Algeria']
africa_niger = africa[africa['Country'] == 'Niger']
africa_CAR = africa[africa['Country'] == 'Central African Republic']
africa_chad = africa[africa['Country'] == 'Chad']
frames = [africa_seychelles, africa_mauritius, africa_algeria]
africa_high = pd.concat(frames)
africa_high.reset_index(drop=True)
frames_2 = [africa_niger, africa_CAR, africa_chad]
africa_low = pd.concat(frames_2)
africa_low.reset_index(drop=True) | _____no_output_____ | MIT | Python Analysis/Visualisations/Code/Visualisation_TFR_and_FLPR_of_Highest_and_Lowest_HDI_countries.ipynb | ChangHuaHua/QM2-Group-12 |
**Getting Values for Analysis** | africa_algeria[africa_algeria['Year']==1960]
africa_algeria[africa_algeria['Year']==1990]
africa_algeria[africa_algeria['Year']==2001]
africa_algeria[africa_algeria['Year']==2002]
africa_algeria[africa_algeria['Year']==2003]
africa_algeria[africa_algeria['Year']==2017]
africa_algeria[africa_algeria['Year']==2018]
africa_mauritius[africa_mauritius['Year']==1960]
africa_mauritius[africa_mauritius['Year']==1990]
africa_mauritius[africa_mauritius['Year']==1985]
africa_mauritius[africa_mauritius['Year']==1986]
africa_mauritius[africa_mauritius['Year']==1987]
africa_mauritius[africa_mauritius['Year']==1989]
africa_mauritius[africa_mauritius['Year']==2017]
africa_mauritius[africa_mauritius['Year']==2018]
africa_seychelles[africa_seychelles['Year']==1990]
africa_seychelles[africa_seychelles['Year']==2018]
africa_CAR[africa_CAR['Year']==1990]
africa_CAR[africa_CAR['Year']==1998]
africa_CAR[africa_CAR['Year']==1999]
africa_CAR[africa_CAR['Year']==2000]
africa_CAR[africa_CAR['Year']==2018]
africa_chad[africa_chad['Year']==1990]
africa_chad[africa_chad['Year']==1995]
africa_chad[africa_chad['Year']==1996]
africa_chad[africa_chad['Year']==1997]
africa_chad[africa_chad['Year']==1998]
africa_chad[africa_chad['Year']==1999]
africa_chad[africa_chad['Year']==2000]
africa_chad[africa_chad['Year']==2018]
africa_niger[africa_niger['Year']==1990]
africa_niger[africa_niger['Year']==2018] | _____no_output_____ | MIT | Python Analysis/Visualisations/Code/Visualisation_TFR_and_FLPR_of_Highest_and_Lowest_HDI_countries.ipynb | ChangHuaHua/QM2-Group-12 |
**Creating Visualisations**
The darker the colour, the higher the HDI
Red = highest HDI countries
Blue = lowest HDI countries 3 Highest HDI Countries | ax = africa_high.loc[africa_high['Country']=='Seychelles'].plot(x='Year', y= 'TFR', kind='scatter', c = 'maroon', label = 'Seychelles')
africa_high.loc[africa_high['Country']=='Mauritius'].plot(x='Year', y= 'TFR', kind='scatter', c = 'red', ax=ax, label = 'Mauritius')
africa_high.loc[africa_high['Country']=='Algeria'].plot(x='Year', y= 'TFR', kind='scatter', c = 'lightcoral', ax=ax, label = 'Algeria')
ax.legend(loc='upper right')
ax.set_title('3 Highest HDI Countries: TFR 1960-2018')
ax.figure.savefig('Scatter Plot 3 Highest HDI Countries: TFR 1960-2018.png')
#TFR of 3 highest HDI countries
ax = africa_high.loc[africa_high['Country']=='Seychelles'].plot(x='Year', y='TFR', kind='line', c = 'maroon', label = 'Seychelles')
africa_high.loc[africa_high['Country']=='Mauritius'].plot(x='Year', y='TFR', kind='line', c = 'red', ax=ax, label = 'Mauritius')
africa_high.loc[africa_high['Country']=='Algeria'].plot(x='Year', y='TFR', kind='line', c = 'lightcoral', ax=ax, label = 'Algeria')
ax.set_ylabel('TFR')
ax.legend(loc='upper right')
ax.set_title('3 Highest HDI Countries: TFR 1960-2020')
ax.figure.savefig('Line Plot 3 Highest HDI Countries: TFR 1960-2018.png')
#TFR of 3 highest HDI countries
ax = africa_high.loc[africa_high['Country']=='Seychelles'].plot(x='Year', y= 'Labor force participation rate, female (% of female population ages 15+) (modeled ILO estimate)', kind='scatter', c = 'maroon', label = 'Seychelles')
africa_high.loc[africa_high['Country']=='Mauritius'].plot(x='Year', y= 'Labor force participation rate, female (% of female population ages 15+) (modeled ILO estimate)', kind='scatter', c = 'red', ax=ax, label = 'Mauritius')
africa_high.loc[africa_high['Country']=='Algeria'].plot(x='Year', y= 'Labor force participation rate, female (% of female population ages 15+) (modeled ILO estimate)', kind='scatter', c = 'lightcoral', ax=ax, label = 'Algeria')
ax.legend(loc='best')
ax.set_title('3 Highest HDI Countries: FLPR 1990-2017')
ax.figure.savefig('Scatter Plot 3 Highest HDI Countries: FLPR 1990-2017.png')
#FLPR of 3 highest HDI countries
ax = africa_high.loc[africa_high['Country']=='Seychelles'].plot(x='Year', y= 'Labor force participation rate, female (% of female population ages 15+) (modeled ILO estimate)', kind='line', c = 'maroon', label = 'Seychelles')
africa_high.loc[africa_high['Country']=='Mauritius'].plot(x='Year', y= 'Labor force participation rate, female (% of female population ages 15+) (modeled ILO estimate)', kind='line', c = 'red', ax=ax, label = 'Mauritius')
africa_high.loc[africa_high['Country']=='Algeria'].plot(x='Year', y= 'Labor force participation rate, female (% of female population ages 15+) (modeled ILO estimate)', kind='line', c = 'lightcoral', ax=ax, label = 'Algeria')
ax.set_ylabel('Labor force participation rate, female (% of female population ages 15+) (modeled ILO estimate)')
ax.legend(loc='best')
ax.set_title('3 Highest HDI Countries: FLPR 1990-2017')
ax.figure.savefig('Line Plot 3 Highest HDI Countries: FLPR 1990-2017.png')
#FLPR of 3 highest HDI countries
ax = africa_high.loc[africa_high['Country']=='Seychelles'].plot(x='Year', y= 'HDI', kind='scatter', c = 'maroon', label = 'Seychelles')
africa_high.loc[africa_high['Country']=='Mauritius'].plot(x='Year', y= 'HDI', kind='scatter', c = 'red', ax=ax, label = 'Mauritius')
africa_high.loc[africa_high['Country']=='Algeria'].plot(x='Year', y= 'HDI', kind='scatter', c = 'lightcoral', ax=ax, label = 'Algeria')
ax.legend(loc='best')
ax.set_title('3 Highest HDI Countries: HDI 1990-2018')
ax.figure.savefig('Scatter Plot 3 Highest HDI Countries: HDI 1990-2018.png')
#HDI of 3 highest HDI countries
ax = africa_high.loc[africa_high['Country']=='Seychelles'].plot(x='Year', y= 'HDI', kind='line', c = 'maroon', label = 'Seychelles')
africa_high.loc[africa_high['Country']=='Mauritius'].plot(x='Year', y= 'HDI', kind='line', c = 'red', ax=ax, label = 'Mauritius')
africa_high.loc[africa_high['Country']=='Algeria'].plot(x='Year', y= 'HDI', kind='line', c = 'lightcoral', ax=ax, label = 'Algeria')
ax.set_ylabel('HDI')
ax.legend(loc='best')
ax.set_title('3 Highest HDI Countries: HDI 1990-2018')
ax.figure.savefig('Line Plot 3 Highest HDI Countries: HDI 1990-2018.png')
#HDI of 3 highest HDI countries
ax = africa_high.loc[africa_high['Country']=='Seychelles'].plot(x='HDI', y= 'TFR', kind='scatter', c = 'maroon', label = 'Seychelles')
africa_high.loc[africa_high['Country']=='Mauritius'].plot(x='HDI', y= 'TFR', kind='scatter', c = 'red', ax=ax, label = 'Mauritius')
africa_high.loc[africa_high['Country']=='Algeria'].plot(x='HDI', y= 'TFR', kind='scatter', c = 'lightcoral', ax=ax, label = 'Algeria')
ax.legend(loc='best')
ax.set_title('3 Highest HDI Countries: HDI vs TFR')
ax.figure.savefig('Scatter Plot 3 Highest HDI Countries: HDI vs TFR.png')
#HDI vs TFR of 3 highest HDI countries
ax = africa_high.loc[africa_high['Country']=='Seychelles'].plot(x='HDI', y= 'TFR', kind='line', c = 'maroon', label = 'Seychelles')
africa_high.loc[africa_high['Country']=='Mauritius'].plot(x='HDI', y= 'TFR', kind='line', c = 'red', ax=ax, label = 'Mauritius')
africa_high.loc[africa_high['Country']=='Algeria'].plot(x='HDI', y= 'TFR', kind='line', c = 'lightcoral', ax=ax, label = 'Algeria')
ax.set_ylabel('TFR')
ax.legend(loc='best')
ax.set_title('3 Highest HDI Countries: HDI vs TFR')
ax.figure.savefig('Line Plot 3 Highest HDI Countries: HDI vs TFR.png')
#HDI vs TFR of 3 highest HDI countries
ax = africa_high.loc[africa_high['Country']=='Seychelles'].plot(x='HDI', y= 'Labor force participation rate, female (% of female population ages 15+) (modeled ILO estimate)', kind='scatter', c = 'maroon', label = 'Seychelles')
africa_high.loc[africa_high['Country']=='Mauritius'].plot(x='HDI', y= 'Labor force participation rate, female (% of female population ages 15+) (modeled ILO estimate)', kind='scatter', c = 'red', ax=ax, label = 'Mauritius')
africa_high.loc[africa_high['Country']=='Algeria'].plot(x='HDI', y= 'Labor force participation rate, female (% of female population ages 15+) (modeled ILO estimate)', kind='scatter', c = 'lightcoral', ax=ax, label = 'Algeria')
ax.legend(loc='best')
ax.set_title('3 Highest HDI Countries: HDI vs FLPR')
ax.figure.savefig('Scatter Plot 3 Highest HDI Countries: HDI vs FLPR.png')
#HDI vs FLPR of 3 highest HDI countries
ax = africa_high.loc[africa_high['Country']=='Seychelles'].plot(x='HDI', y= 'Labor force participation rate, female (% of female population ages 15+) (modeled ILO estimate)', kind='line', c = 'maroon', label = 'Seychelles')
africa_high.loc[africa_high['Country']=='Mauritius'].plot(x='HDI', y= 'Labor force participation rate, female (% of female population ages 15+) (modeled ILO estimate)', kind='line', c = 'red', ax=ax, label = 'Mauritius')
africa_high.loc[africa_high['Country']=='Algeria'].plot(x='HDI', y= 'Labor force participation rate, female (% of female population ages 15+) (modeled ILO estimate)', kind='line', c = 'lightcoral', ax=ax, label = 'Algeria')
ax.set_ylabel('Labor force participation rate, female (% of female population ages 15+) (modeled ILO estimate)')
ax.legend(loc='best')
ax.set_title('3 Highest HDI Countries: HDI vs FLPR')
ax.figure.savefig('Line Plot 3 Highest HDI Countries: HDI vs FLPR.png')
#HDI vs FLPR of 3 highest HDI countries
ax = africa_high.loc[africa_high['Country']=='Seychelles'].plot(x='TFR', y= 'Labor force participation rate, female (% of female population ages 15+) (modeled ILO estimate)', kind='scatter', c = 'maroon', label = 'Seychelles')
africa_high.loc[africa_high['Country']=='Mauritius'].plot(x='TFR', y= 'Labor force participation rate, female (% of female population ages 15+) (modeled ILO estimate)', kind='scatter', c = 'red', ax=ax, label = 'Mauritius')
africa_high.loc[africa_high['Country']=='Algeria'].plot(x='TFR', y= 'Labor force participation rate, female (% of female population ages 15+) (modeled ILO estimate)', kind='scatter', c = 'lightcoral', ax=ax, label = 'Algeria')
ax.legend(loc='best')
ax.set_title('3 Highest HDI Countries: TFR vs FLPR')
ax.figure.savefig('Scatter Plot 3 Highest HDI Countries: TFR vs FLPR.png')
#TFR vs FLPR of 3 highest HDI countries
ax = africa_high.loc[africa_high['Country']=='Seychelles'].plot(x='TFR', y= 'Labor force participation rate, female (% of female population ages 15+) (modeled ILO estimate)', kind='line', c = 'maroon', label = 'Seychelles')
africa_high.loc[africa_high['Country']=='Mauritius'].plot(x='TFR', y= 'Labor force participation rate, female (% of female population ages 15+) (modeled ILO estimate)', kind='line', c = 'red', ax=ax, label = 'Mauritius')
africa_high.loc[africa_high['Country']=='Algeria'].plot(x='TFR', y= 'Labor force participation rate, female (% of female population ages 15+) (modeled ILO estimate)', kind='line', c = 'lightcoral', ax=ax, label = 'Algeria')
ax.set_ylabel('Labor force participation rate, female (% of female population ages 15+) (modeled ILO estimate)')
ax.legend(loc='best')
ax.set_title('3 Highest HDI Countries: TFR vs FLPR')
ax.figure.savefig('Line Plot 3 Highest HDI Countries: TFR vs FLPR.png')
#TFR vs FLPR of 3 highest HDI countries | _____no_output_____ | MIT | Python Analysis/Visualisations/Code/Visualisation_TFR_and_FLPR_of_Highest_and_Lowest_HDI_countries.ipynb | ChangHuaHua/QM2-Group-12 |
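As a closing note, a hedged refactor sketch that is not part of the original notebook: the six near-identical plotting cells above could be collapsed into one helper that also encodes the colour convention stated earlier (a darker shade means a higher HDI within the group). | # hedged helper: plot one metric for a group of three countries with a shared colour scheme
def plot_triplet(df, x, y, countries, colors, kind='line', title=None, fname=None):
    ax = None
    for country, color in zip(countries, colors):
        ax = df.loc[df['Country'] == country].plot(x=x, y=y, kind=kind, c=color, ax=ax, label=country)
    ax.set_ylabel(y)
    ax.legend(loc='best')
    if title:
        ax.set_title(title)
    if fname:
        ax.figure.savefig(fname, dpi=120)
    return ax

# example call reproducing the TFR line plot above
plot_triplet(africa_high, 'Year', 'TFR', ['Seychelles', 'Mauritius', 'Algeria'],
             ['maroon', 'red', 'lightcoral'], title='3 Highest HDI Countries: TFR 1960-2020')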