repo_name (string, 6-77 chars) | path (string, 8-215 chars) | license (15 classes) | cells (sequence) | types (sequence)
---|---|---|---|---|
MLIME/12aMostra | src/Keras Tutorial.ipynb | gpl-3.0 | [
"Keras Tutorial\nhttp://keras.io\nEsse tutorial é uma versão simplificada do tutorial disponível em: https://github.com/MLIME/Frameworks/tree/master/Keras\nO que é Keras?\n\nKeras is a high-level neural networks API, written in Python and capable of running on top of either TensorFlow or Theano. It was developed with a focus on enabling fast experimentation. Being able to go from idea to result with the least possible delay is key to doing good research.\n\nEsse tutorial é dividido em três partes\n\nFuncionamento Básico do Keras\nExemplo de Deep Feedforward Network\nExemplo de Convolutional Neural Network\n\n1. Funcionamento básico do Keras\nBackends\n\nTheano ou TensorFlow (CPU ou GPU)\n\nTipos de Layers\n\nCore layers: Dense, Activation, Dropout, Flatten\nConvolutional layers: ConvXD, CroppingXD, UpSamplingXD\nPooling Layers: MaxPoolingXD, AveragePoolingXD\nCustom layers can be created\n\nFunções de perda\n\ncategorical_crossentropy\nsparse_categorical_crossentropy\nbinary_crossentropy\nmean_squared_error\nmean_absolute_error\n\nOtimizadores\n\nSGD\nRMSprop\nAdagrad\nAdadelta\nAdam\nAdamax\n\nAtivações\n\nsoftmax\nelu\nrelu\ntanh\nsigmoid\nhard_sigmoid\nlinear\n\nInicializadores\n\nZeros\nRandomNormal\nRandomUniform\nTruncatedNormal\nVarianceScaling\nOrthogonal\nIdentity\nlecun_uniform\nglorot_normal\nglorot_uniform\nhe_normal\nhe_uniform\n\nInicialização\nImportamos bibliotecas e carregamos os dados",
"import util\nimport numpy as np\nimport keras\nfrom keras.utils import np_utils\n\nX_train, y_train, X_test, y_test = util.load_mnist_dataset()\ny_train_labels = np.array(util.get_label_names(y_train))\n\n# Converte em one-hot para treino\ny_train = np_utils.to_categorical(y_train, 10)\ny_test = np_utils.to_categorical(y_test, 10)\n\n#Mostra algumas imagens\nexamples = np.random.randint(0, X_train.shape[0] - 9, 9)\nimage_shape = (X_train.shape[2], X_train.shape[3])\nutil.plot9images(X_train[examples], y_train_labels[examples], image_shape)",
"2. Construindo DFNs com Keras\nReshaping MNIST data",
"#Achatamos imagem em um vetor\nX_train = X_train.reshape(X_train.shape[0], np.prod(X_train.shape[1:]))\nX_test = X_test.reshape(X_test.shape[0], np.prod(X_test.shape[1:]))\n\n#Sequential é a API que permite construirmos um modelo ao adicionar incrementalmente layers\nfrom keras.models import Sequential\nfrom keras.layers import Dense, Flatten\nfrom keras.optimizers import SGD\n\nDFN = Sequential()\nDFN.add(Dense(128, input_shape=(28*28,), activation='relu'))\nDFN.add(Dense(128, activation='relu'))\nDFN.add(Dense(128, activation='relu'))\nDFN.add(Dense(10, activation='softmax'))\n\n#optim = SGD(lr=0.01 ) - pode construir o otimizador por fora para definir parametros\n\nDFN.compile(loss='categorical_crossentropy', \n optimizer='sgd', #ou usar os parâmetros padrão\n metrics=['accuracy'])\n\nDFN.fit(X_train, y_train, batch_size=32, epochs=2,\n validation_split=0.2, \n verbose=1)\n\nprint('\\nAccuracy: %.2f' % DFN.evaluate(X_test, y_test, verbose=1)[1])",
"3. Construindo CNNs com Keras\nReshaping MNIST data",
"X_train = X_train.reshape(X_train.shape[0], 28, 28, 1)\nX_test = X_test.reshape(X_test.shape[0], 28, 28, 1)",
"Compilando e ajustando CNN",
"from keras.models import Sequential\nfrom keras.layers import Dense, Dropout, Flatten\nfrom keras.layers import MaxPooling2D\nfrom keras.layers.convolutional import Conv2D\n\nCNN = Sequential()\nCNN.add(Conv2D(32, (3, 3), padding='same', activation='relu',\n input_shape=(28, 28, 1),))\nCNN.add(MaxPooling2D(pool_size=(2, 2)))\nCNN.add(Conv2D(32, (3, 3), padding='same', activation='relu'))\nCNN.add(MaxPooling2D(pool_size=(2, 2)))\nCNN.add(Dropout(0.25))\nCNN.add(Flatten())\nCNN.add(Dense(256, activation='relu'))\nCNN.add(Dropout(0.5))\nCNN.add(Dense(10, activation='softmax'))\n\nCNN.compile(loss='categorical_crossentropy',\n optimizer='sgd', \n metrics=['accuracy'])\n\nCNN.fit(X_train, y_train, batch_size=32, epochs=2,\n validation_split=0.2, \n verbose=1)\n\nprint('\\nAccuracy: %.2f' % CNN.evaluate(X_test, y_test, verbose=1)[1])",
"Comparamos resultados:",
"cnn_pred = CNN.predict(X_test, verbose=1)\ndfn_pred = DFN.predict(X_test.reshape((X_test.shape[0], np.prod(X_test.shape[1:]))), verbose=1)\n\ncnn_pred = np.array(list(map(np.argmax, cnn_pred)))\ndfn_pred = np.array(list(map(np.argmax, dfn_pred)))\ny_pred = np.array(list(map(np.argmax, y_test)))\n\n\nutil.plotconfusion(util.get_label_names(y_pred), util.get_label_names(dfn_pred))\n\nutil.plotconfusion(util.get_label_names(y_pred), util.get_label_names(cnn_pred))",
"Vamos observar alguns exemplos mal classificados:",
"cnn_missed = cnn_pred != y_pred\ndfn_missed = dfn_pred != y_pred\ncnn_and_dfn_missed = np.logical_and(dfn_missed, cnn_missed)\n\nutil.plot_missed_examples(X_test, y_pred, dfn_missed, dfn_pred)\n\nutil.plot_missed_examples(X_test, y_pred, cnn_missed, cnn_pred)\n\nutil.plot_missed_examples(X_test, y_pred, cnn_and_dfn_missed)"
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
Z0m6ie/Zombie_Code | Data_Science_Course/Michigan Data Analysis Course/0 Introduction to Data Science in Python/Week2/Assignment+2 (1).ipynb | mit | [
"You are currently looking at version 1.2 of this notebook. To download notebooks and datafiles, as well as get help on Jupyter notebooks in the Coursera platform, visit the Jupyter Notebook FAQ course resource.\n\nAssignment 2 - Pandas Introduction\nAll questions are weighted the same in this assignment.\nPart 1\nThe following code loads the olympics dataset (olympics.csv), which was derrived from the Wikipedia entry on All Time Olympic Games Medals, and does some basic data cleaning. \nThe columns are organized as # of Summer games, Summer medals, # of Winter games, Winter medals, total # number of games, total # of medals. Use this dataset to answer the questions below.",
"import pandas as pd\n\ndf = pd.read_csv('olympics.csv', index_col=0, skiprows=1)\n\nfor col in df.columns:\n if col[:2]=='01':\n df.rename(columns={col:'Gold'+col[4:]}, inplace=True)\n if col[:2]=='02':\n df.rename(columns={col:'Silver'+col[4:]}, inplace=True)\n if col[:2]=='03':\n df.rename(columns={col:'Bronze'+col[4:]}, inplace=True)\n if col[:1]=='№':\n df.rename(columns={col:'#'+col[1:]}, inplace=True)\n\nnames_ids = df.index.str.split('\\s\\(') # split the index by '('\n\ndf.index = names_ids.str[0] # the [0] element is the country name (new index) \ndf['ID'] = names_ids.str[1].str[:3] # the [1] element is the abbreviation or ID (take first 3 characters from that)\n\ndf = df.drop('Totals')\ndf.head()",
"Question 0 (Example)\nWhat is the first country in df?\nThis function should return a Series.",
"# You should write your whole answer within the function provided. The autograder will call\n# this function and compare the return value against the correct solution value\ndef answer_zero():\n # This function returns the row for Afghanistan, which is a Series object. The assignment\n # question description will tell you the general format the autograder is expecting\n return df.iloc[0]\n\n# You can examine what your function returns by calling it in the cell. If you have questions\n# about the assignment formats, check out the discussion forums for any FAQs\nanswer_zero() ",
"Question 1\nWhich country has won the most gold medals in summer games?\nThis function should return a single string value.",
"def answer_one():\n answer = df.sort(['Gold'], ascending = False)\n return answer.index[0]\n\n\nanswer_one()",
"Question 2\nWhich country had the biggest difference between their summer and winter gold medal counts?\nThis function should return a single string value.",
"def answer_two():\n df['Gold_difference'] = df['Gold'] - df['Gold.1']\n answer = df.sort(['Gold_difference'], ascending = False)\n return answer.index[0]\n\n\nanswer_two()",
"Question 3\nWhich country has the biggest difference between their summer gold medal counts and winter gold medal counts relative to their total gold medal count? \n$$\\frac{Summer~Gold - Winter~Gold}{Total~Gold}$$\nOnly include countries that have won at least 1 gold in both summer and winter.\nThis function should return a single string value.",
"def answer_three():\n only_gold = df[(df['Gold.1'] > 0) & (df['Gold'] > 0)]\n only_gold['big_difference'] = (only_gold['Gold'] - only_gold['Gold.1']) / only_gold['Gold.2']\n answer = only_gold.sort(['big_difference'], ascending = False)\n return answer.index[0]\n\nanswer_three()",
"Question 4\nWrite a function that creates a Series called \"Points\" which is a weighted value where each gold medal (Gold.2) counts for 3 points, silver medals (Silver.2) for 2 points, and bronze medals (Bronze.2) for 1 point. The function should return only the column (a Series object) which you created.\nThis function should return a Series named Points of length 146",
"def answer_four():\n df['Points'] = (df['Gold.2'] * 3) + (df['Silver.2'] * 2) + (df['Bronze.2'] * 1)\n return df['Points']\n\n\nanswer_four()",
"Part 2\nFor the next set of questions, we will be using census data from the United States Census Bureau. Counties are political and geographic subdivisions of states in the United States. This dataset contains population data for counties and states in the US from 2010 to 2015. See this document for a description of the variable names.\nThe census dataset (census.csv) should be loaded as census_df. Answer questions using this as appropriate.\nQuestion 5\nWhich state has the most counties in it? (hint: consider the sumlevel key carefully! You'll need this for future questions too...)\nThis function should return a single string value.",
"census_df = pd.read_csv('census.csv')\ncensus_df\n\ndef answer_five():\n newdf = census_df.groupby(['STNAME']).count()\n answer = newdf.sort(['CTYNAME'], ascending = False)\n return answer.index[0]\n\n\nanswer_five()",
"Question 6\nOnly looking at the three most populous counties for each state, what are the three most populous states (in order of highest population to lowest population)? Use CENSUS2010POP.\nThis function should return a list of string values.",
"def answer_six():\n ctydf = census_df[census_df['SUMLEV'] == 50]\n ctydfs = ctydf.sort(['CENSUS2010POP'], ascending = False)\n ctydfst3 = ctydfs.groupby('STNAME').head(3)\n ldf = ctydfst3.groupby(['STNAME']).sum()\n answer = ldf.sort(['CENSUS2010POP'], ascending = False)\n return answer.index[0:3].tolist()\n\n\nanswer_six()",
"Question 7\nWhich county has had the largest absolute change in population within the period 2010-2015? (Hint: population values are stored in columns POPESTIMATE2010 through POPESTIMATE2015, you need to consider all six columns.)\ne.g. If County Population in the 5 year period is 100, 120, 80, 105, 100, 130, then its largest change in the period would be |130-80| = 50.\nThis function should return a single string value.",
"def answer_seven():\n return \"YOUR ANSWER HERE\"",
"Question 8\nIn this datafile, the United States is broken up into four regions using the \"REGION\" column. \nCreate a query that finds the counties that belong to regions 1 or 2, whose name starts with 'Washington', and whose POPESTIMATE2015 was greater than their POPESTIMATE 2014.\nThis function should return a 5x2 DataFrame with the columns = ['STNAME', 'CTYNAME'] and the same index ID as the census_df (sorted ascending by index).",
"def answer_eight():\n ctydf = census_df[(census_df['SUMLEV'] == 50) & (census_df['REGION'] < 3)]\n ctydfwas = ctydf.loc[(ctydf['CTYNAME'] == 'Washington County') & (ctydf['POPESTIMATE2015'] > ctydf['POPESTIMATE2014'])]\n columns_to_keep = ['STNAME',\n 'CTYNAME']\n ansdf = ctydfwas[columns_to_keep]\n return ansdf\n\n\nanswer_eight()"
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
LSSTC-DSFP/LSSTC-DSFP-Sessions | Sessions/Session14/Day2/DeeplearningSolutions.ipynb | mit | [
"Classification with a Multi-layer Perceptron (MLP)\nAuthor: V. Ashley Villar\nIn this problem set, we will not be implementing neural networks from scratch. Yesterday, you built a perceptron in Python. Multi-layer perceptrons (MLPs) are, as discussed in the lecture, several layers of these perceptrons stacked. Here, we will learn how to use one of the most common modules for building neural networks: Pytorch",
"!pip install astronn\nimport torch\nimport matplotlib.pyplot as plt\nimport numpy as np\nfrom sklearn.metrics import accuracy_score, confusion_matrix, ConfusionMatrixDisplay",
"Problem 1: Understanding the Data\nFor this problem set, we will use the Galaxy10 dataset made available via the astroNN module. This dataset is made up of 17736 images of galaxies which have been labelled by hand. See this link for more information. \nFirst we will visualize our data.\nProblem 1a Show one example of each class as an image.",
"from astroNN.datasets import load_galaxy10\nimages, labels_original = load_galaxy10()\nfrom astroNN.datasets.galaxy10 import galaxy10cls_lookup\n%matplotlib inline\n\n# Plot an example image from each class\n\n# First, find an example of each class\nuclasses, counts = np.unique(labels_original,return_counts=True)\nprint(len(labels_original))\nfor i, uclass in enumerate(uclasses):\n print(uclass,counts[i])\n first_example = np.where(labels_original==uclass)[0][0]\n plt.imshow(images[first_example])\n plt.title(galaxy10cls_lookup(uclass))\n plt.show()",
"Problem 2b Make a histogram showing the fraction of each class\nKeep only the top two classes (i.e., the classes with the most galaxies)",
"plt.hist(labels_original)\nplt.xlabel('Class Label')\nplt.show()\n\n\n#Only work with 1 and 2\ngind = np.where((labels_original==1) | (labels_original==2))\nimages_top_two = images[gind]\nlabels_top_two = labels_original[gind]",
"This next block of code converts the data to a format which is more compatible with our neural network.",
"import torch.nn.functional as F\ntorch.set_default_dtype(torch.float)\nlabels_top_two_one_hot = F.one_hot(torch.tensor(labels_top_two - np.min(labels_top_two)).long(), num_classes=2)\nimages_top_two = torch.tensor(images_top_two).float()\nlabels_top_two_one_hot = labels_top_two_one_hot.float()\n# we're going to flatten the images for our MLP\nimages_top_two_flat = images_top_two.reshape(len(images_top_two),-1)\n\n#Normalize the flux of the images here\nimages_top_two_flat = (images_top_two_flat - torch.mean(images_top_two_flat))/torch.std(images_top_two_flat)\n",
"Problem 2c Split the data into a training and test set (66/33 split) using the train_test_split function from sklearn",
"from sklearn.model_selection import train_test_split\nimages_train, images_test, labels_train, labels_test = train_test_split(\n images_top_two_flat, labels_top_two_one_hot, test_size=0.33, random_state=42)\n\n\n\nnp.shape(images_train)",
"The next cell will outline how one can make a MLP with pytorch. \nProblem 3a Talk to a partner about how this code works, line by line. Add another hidden layer which is the same size as the first hidden layer.",
"class MLP(torch.nn.Module):\n # this defines the model\n def __init__(self, input_size, hidden_size):\n super(MLP, self).__init__()\n print(input_size,hidden_size)\n self.input_size = input_size\n self.hidden_size = hidden_size\n self.hiddenlayer = torch.nn.Linear(self.input_size, self.hidden_size)\n self.outputlayer = torch.nn.Linear(self.hidden_size, 2)\n self.sigmoid = torch.nn.Sigmoid()\n self.softmax = torch.nn.Softmax()\n def forward(self, x):\n layer1 = self.hiddenlayer(x)\n activation = self.sigmoid(layer1)\n layer2 = self.outputlayer(activation)\n activation2 = self.sigmoid(layer1)\n layer3 = self.outputlayer(activation2)\n output = self.softmax(layer3)\n return output",
"The next block of code will show how one can train the model for 100 epochs. Note that we use the binary cross-entropy as our objective function and stochastic gradient descent as our optimization method.\nProblem 3b Edit the code so that the function plots the loss for the training and test loss for each epoch.",
"# train the model\ndef train_model(training_data,training_labels, test_data,test_labels, model):\n # define the optimization\n criterion = torch.nn.BCELoss()\n optimizer = torch.optim.SGD(model.parameters(), lr=0.007,momentum=0.9)\n for epoch in range(100):\n # clear the gradient\n optimizer.zero_grad()\n # compute the model output\n myoutput = model(training_data)\n # calculate loss\n loss = criterion(myoutput, training_labels)\n # credit assignment\n loss.backward()\n # update model weights\n optimizer.step()\n\n # STUDENTS ADD THIS PART\n output_test = model(test_data)\n loss_test = criterion(output_test, test_labels)\n plt.plot(epoch,loss.detach().numpy(),'ko')\n plt.plot(epoch,loss_test.detach().numpy(),'ro')\n print(epoch,loss.detach().numpy())\n plt.show() \n",
"The next block trains the code, assuming a hidden layer size of 100 neurons.\nProblem 3c Change the learning rate lr to minimize the cross entropy score",
"model = MLP(np.shape(images_train[0])[0],50)\ntrain_model(images_train, labels_train, images_test, labels_test, model)\n",
"Write a function called evaluate_model which takes the image data, labels and model as input, and the accuracy as output. you can use the accuracy_score function.",
"# evaluate the model\ndef evaluate_model(data,labels, model):\n yhat = model(data)\n yhat = yhat.detach().numpy()\n best_class = np.argmax(yhat,axis=1)\n acc = accuracy_score(best_class,np.argmax(labels,axis=1))\n return(acc)\n# evaluate the model\nacc = evaluate_model(images_test,labels_test, model)\nprint('Accuracy: %.3f' % acc)",
"Problem 3d make a confusion matrix for the test set",
"yhat = model(images_test)\nyhat = yhat.detach().numpy()\nbest_class = np.argmax(yhat,axis=1)\ntruth = np.argmax(labels_test,axis=1)\ncm = confusion_matrix(truth,best_class)\ndisp = ConfusionMatrixDisplay(confusion_matrix=cm)\ndisp.plot()\nplt.show()",
"Challenge Problem Add a third class to your classifier and begin accounting for uneven classes. There are several steps to this:\n\nEdit the neural network to output 3 classes\nChange the criterion to a custom criterion function, such that the entropy of each class is weighted by the inverse fraction of each class size (e.g., if the galaxy class breakdowns are 1:2:3, the weights would be 6:3:2).",
"class MLP_new(torch.nn.Module):\n # this defines the model\n def __init__(self, input_size, hidden_size):\n super(MLP_new, self).__init__()\n print(input_size,hidden_size)\n self.input_size = input_size\n self.hidden_size = hidden_size\n self.hiddenlayer = torch.nn.Linear(self.input_size, self.hidden_size)\n self.outputlayer = torch.nn.Linear(self.hidden_size, 3)\n self.sigmoid = torch.nn.Sigmoid()\n self.softmax = torch.nn.Softmax()\n def forward(self, x):\n layer1 = self.hiddenlayer(x)\n activation = self.sigmoid(layer1)\n layer2 = self.outputlayer(activation)\n activation2 = self.sigmoid(layer1)\n layer3 = self.outputlayer(activation2)\n output = self.softmax(layer3)\n return output\n\n\n#Only work with 0,1,2\ngind = np.where((labels_original==0) | (labels_original==1) | (labels_original==2))\nimages_top_three = images[gind]\nlabels_top_three = labels_original[gind]\n\n\nx,counts = np.unique(labels_top_three,return_counts=True)\nprint(counts)\n\n\ntorch.set_default_dtype(torch.float)\nlabels_top_three_one_hot = F.one_hot(torch.tensor(labels_top_three - np.min(labels_top_three)).long(), num_classes=3)\nimages_top_three = torch.tensor(images_top_three).float()\nlabels_top_three_one_hot = labels_top_three_one_hot.float()\n# we're going to flatten the images for our MLP\nimages_top_three_flat = images_top_three.reshape(len(images_top_three),-1)\n\n#Normalize the flux of the images here\nimages_top_three_flat = (images_top_three_flat - torch.mean(images_top_three_flat))/torch.std(images_top_three_flat)\nimages_train, images_test, labels_train, labels_test = train_test_split(\n images_top_three_flat, labels_top_three_one_hot, test_size=0.33, random_state=42)\n\n\n# train the model\ndef train_model(training_data,training_labels, test_data,test_labels, model):\n # define the optimization\n criterion = torch.nn.CrossEntropyLoss(weight=torch.Tensor(np.sum(counts)/counts))\n optimizer = torch.optim.SGD(model.parameters(), lr=0.005,momentum=0.9)\n for epoch in range(100):\n # clear the gradient\n optimizer.zero_grad()\n # compute the model output\n myoutput = model(training_data)\n # calculate loss\n loss = criterion(myoutput, training_labels)\n # credit assignment\n loss.backward()\n # update model weights\n optimizer.step()\n\n # STUDENTS ADD THIS PART\n output_test = model(test_data)\n loss_test = criterion(output_test, test_labels)\n plt.plot(epoch,loss.detach().numpy(),'ko')\n plt.plot(epoch,loss_test.detach().numpy(),'ro')\n print(epoch,loss.detach().numpy())\n plt.show() \n\nmodel = MLP_new(np.shape(images_train[0])[0],50)\ntrain_model(images_train, labels_train, images_test, labels_test, model)\n\n\n# evaluate the model\ndef evaluate_model(data,labels, model):\n yhat = model(data)\n yhat = yhat.detach().numpy()\n best_class = np.argmax(yhat,axis=1)\n acc = accuracy_score(best_class,np.argmax(labels,axis=1))\n return(acc)\n# evaluate the model\nacc = evaluate_model(images_test,labels_test, model)\nprint('Accuracy: %.3f' % acc)\n\nyhat = model(images_test)\nyhat = yhat.detach().numpy()\nbest_class = np.argmax(yhat,axis=1)\ntruth = np.argmax(labels_test,axis=1)\ncm = confusion_matrix(truth,best_class)\ndisp = ConfusionMatrixDisplay(confusion_matrix=cm)\ndisp.plot()\nplt.show()"
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
saashimi/code_guild | wk0/notebooks/challenges/compress/.ipynb_checkpoints/compress_challenge-checkpoint.ipynb | mit | [
"<small><i>This notebook was prepared by Donne Martin. Source and license info is on GitHub.</i></small>\nChallenge Notebook\nProblem: Compress a string such that 'AAABCCDDDD' becomes 'A3B1C2D4'. Only compress the string if it saves space.\n\nConstraints\nTest Cases\nAlgorithm\nCode\nUnit Test\nSolution Notebook\n\nConstraints\n\nCan we assume the string is ASCII?\nYes\nNote: Unicode strings could require special handling depending on your language\n\n\nCan you use additional data structures? \nYes\n\n\nIs this case sensitive?\nYes\n\n\n\nTest Cases\n\nNone -> None\n'' -> ''\n'AABBCC' -> 'AABBCC'\n'AAABCCDDDD' -> 'A3B1C2D4'\n\nAlgorithm\nRefer to the Solution Notebook. If you are stuck and need a hint, the solution notebook's algorithm discussion might be a good place to start.\nCode",
"def compress_string(string):\n # TODO: Implement me\n string \"!\"\n pass",
"Unit Test\nThe following unit test is expected to fail until you solve the challenge.",
"# %load test_compress.py\nfrom nose.tools import assert_equal\n\n\nclass TestCompress(object):\n\n def test_compress(self, func):\n assert_equal(func(None), None)\n assert_equal(func(''), '')\n assert_equal(func('AABBCC'), 'AABBCC')\n assert_equal(func('AAABCCDDDD'), 'A3B1C2D4')\n print('Success: test_compress')\n\n\ndef main():\n test = TestCompress()\n test.test_compress(compress_string)\n\n\nif __name__ == '__main__':\n main()",
"Solution Notebook\nReview the Solution Notebook for a discussion on algorithms and code solutions."
] | [
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
tensorflow/docs-l10n | site/ja/probability/examples/Understanding_TensorFlow_Distributions_Shapes.ipynb | apache-2.0 | [
"Copyright 2018 The TensorFlow Probability Authors.\nLicensed under the Apache License, Version 2.0 (the \"License\");",
"#@title Licensed under the Apache License, Version 2.0 (the \"License\"); { display-mode: \"form\" }\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# https://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.",
"TensorFlow Distributions の形状を理解する\n<table class=\"tfo-notebook-buttons\" align=\"left\">\n <td><a target=\"_blank\" href=\"https://www.tensorflow.org/probability/examples/Understanding_TensorFlow_Distributions_Shapes\"><img src=\"https://www.tensorflow.org/images/tf_logo_32px.png\">TensorFlow.org で表示</a></td>\n <td><a target=\"_blank\" href=\"https://colab.research.google.com/github/tensorflow/docs-l10n/blob/master/site/ja/probability/examples/Understanding_TensorFlow_Distributions_Shapes.ipynb\"><img src=\"https://www.tensorflow.org/images/colab_logo_32px.png\">Google Colab で実行</a></td>\n <td><a target=\"_blank\" href=\"https://github.com/tensorflow/docs-l10n/blob/master/site/ja/probability/examples/Understanding_TensorFlow_Distributions_Shapes.ipynb\"><img src=\"https://www.tensorflow.org/images/GitHub-Mark-32px.png\">GitHub でソースを表示</a></td>\n <td><a href=\"https://storage.googleapis.com/tensorflow_docs/docs-l10n/site/ja/probability/examples/Understanding_TensorFlow_Distributions_Shapes.ipynb\"><img src=\"https://www.tensorflow.org/images/download_logo_32px.png\">ノートブックをダウンロード</a></td>\n</table>",
"import collections\n\nimport tensorflow as tf\ntf.compat.v2.enable_v2_behavior()\n\nimport tensorflow_probability as tfp\ntfd = tfp.distributions\ntfb = tfp.bijectors",
"基礎\nTensorFlow Distributions の形状には関連する 3 つの重要な概念があります。\n\nイベントの形状は、分布からの 1 つの抽出の形状を表します。抽出は次元間で依存する場合があります。スカラー分布の場合、イベントの形状は [] です。5 次元の MultivariateNormal の場合、イベントの形状は [5] です。\nバッチの形状は、独立した、同一に分布されていない抽出である「バッチ」の分布を表します。\nサンプルの形状は、 分布ファミリからの独立した、同一に分布されたバッチの抽出を表します。\n\nイベントの形状とバッチの形状は Distribution オブジェクトのプロパティですが、サンプルの形状は sample または log_prob への特定の呼び出しに関連付けられています。\nこのノートブックでは、例を使ってこれらの概念を説明していくので、すぐに分からなくても、心配する必要はありません。\nまた、これらの概念の概要については、このブログ記事を参照してください。\nTensorFlow Eager に関する注意\nこのノートブックは、すべて TensorFlow Eager を使用して記述されています。提示された概念は Eager に依存していませんが、Eager では、Distribution オブジェクトが Python で作成されるときに、分布バッチとイベントの形状が評価されます(したがって既知です)。一方、グラフ(非 Eager モード)では、グラフが実行されるまでイベントとバッチの形状が決定されていない分布を定義することができます。\nスカラー分布\n上記のように、Distribution オブジェクトではイベントとバッチの形状が定義されています。まず、分布を説明するユーティリティから始めます。",
"def describe_distributions(distributions):\n print('\\n'.join([str(d) for d in distributions]))",
"このセクションでは、スカラー分布(イベントの形状が [] の分布)について説明します。典型的な例は、rate で指定されたポアソン分布です。",
"poisson_distributions = [\n tfd.Poisson(rate=1., name='One Poisson Scalar Batch'),\n tfd.Poisson(rate=[1., 10., 100.], name='Three Poissons'),\n tfd.Poisson(rate=[[1., 10., 100.,], [2., 20., 200.]],\n name='Two-by-Three Poissons'),\n tfd.Poisson(rate=[1.], name='One Poisson Vector Batch'),\n tfd.Poisson(rate=[[1.]], name='One Poisson Expanded Batch')\n]\n\ndescribe_distributions(poisson_distributions)",
"ポアソン分布はスカラー分布であるため、そのイベントの形状は常に [] です。より多くのレートを指定すると、これらはバッチ形式で表示されます。例の最後のペアは興味深いものです。レートは 1 つだけですが、そのレートは空でない形状の numpy 配列に埋め込まれているため、その形状がバッチ形状になります。\n標準の正規分布もスカラーです。イベントの形状は、ポアソンの場合と同じように [] ですが、ブロードキャストの最初の例で見ていきます。正規分布は、loc および scale パラメーターを使用して指定されます。",
"normal_distributions = [\n tfd.Normal(loc=0., scale=1., name='Standard'),\n tfd.Normal(loc=[0.], scale=1., name='Standard Vector Batch'),\n tfd.Normal(loc=[0., 1., 2., 3.], scale=1., name='Different Locs'),\n tfd.Normal(loc=[0., 1., 2., 3.], scale=[[1.], [5.]],\n name='Broadcasting Scale')\n]\n\ndescribe_distributions(normal_distributions)",
"上記の Broadcasting Scale 分布は興味深い例です。loc パラメーターは [4] の形状、scale パラメーターは [2, 1] の形状をもちます。Numpy ブロードキャストルールを使用すると、バッチ形状は [2, 4] になります。 \"Broadcasting Scale\" 分布を定義するための同等の(ただし、あまりエレガントではなく、推奨されない)方法は次のとおりです。",
"describe_distributions(\n [tfd.Normal(loc=[[0., 1., 2., 3], [0., 1., 2., 3.]],\n scale=[[1., 1., 1., 1.], [5., 5., 5., 5.]])])",
"以上のようにブロードキャストの表記は頭痛やバグの原因にもなりますが便利です。\nスカラー分布のサンプリング\n分布で実行できる主なことは sample と log_prob の 2 つです。まず、サンプリングについて見ていきましょう。基本的なルールは、分布からサンプリングする場合、結果のテンソルは形状 [sample_shape, batch_shape, event_shape] になります。batch_shape と event_shape は Distribution オブジェクトにより提供され、sample_shape は、sample の呼び出しにより提供されます。スカラー分布の場合、event_shape = [] であるため、サンプルから返されるテンソルの形状は [sample_shape, batch_shape] になります。では、試してみましょう。",
"def describe_sample_tensor_shape(sample_shape, distribution):\n print('Sample shape:', sample_shape)\n print('Returned sample tensor shape:',\n distribution.sample(sample_shape).shape)\n\ndef describe_sample_tensor_shapes(distributions, sample_shapes):\n started = False\n for distribution in distributions:\n print(distribution)\n for sample_shape in sample_shapes:\n describe_sample_tensor_shape(sample_shape, distribution)\n print()\n\nsample_shapes = [1, 2, [1, 5], [3, 4, 5]]\ndescribe_sample_tensor_shapes(poisson_distributions, sample_shapes)\n\ndescribe_sample_tensor_shapes(normal_distributions, sample_shapes)",
"sample についての説明は以上です。返されたサンプルテンソルの形状は [sample_shape, batch_shape, event_shape] です。\nスカラー分布の log_prob の計算\n次に、log_prob を見てみましょう。これは少し注意する必要があります。log_prob は、分布の log_prob を計算する場所を表す(空でない)テンソルを入力として受け取ります。最も単純なケースでは、このテンソルは [sample_shape, batch_shape, event_shape] の形式になります。batch_shape と event_shape は 分布のバッチおよびイベントの形状に一致します。スカラー分布の場合は、event_shape = [] なので、入力テンソルの形状は [sample_shape, batch_shape] です。この場合、[sample_shape, batch_shape] 形状のテンソルが返されます。",
"three_poissons = tfd.Poisson(rate=[1., 10., 100.], name='Three Poissons')\nthree_poissons\n\nthree_poissons.log_prob([[1., 10., 100.], [100., 10., 1]]) # sample_shape is [2].\n\nthree_poissons.log_prob([[[[1., 10., 100.], [100., 10., 1.]]]]) # sample_shape is [1, 1, 2].",
"最初の例では、入力と出力の形状が [2, 3] であり、2 番目の例では形状が [1, 1, 2, 3] であることに注意してください。\nブロードキャストがない場合はそれだけです。ブロードキャストを考慮する場合のルールは次のとおりです。これは一般的な説明であり、スカラー分布は簡略化されていることに注意してください。\n\nn = len(batch_shape) + len(event_shape) を定義します。(スカラー分布の場合は、len(event_shape)=0。)\n入力テンソル t の次元が n 未満の場合、正確に n 次元になるまで、左側にサイズ 1 の次元を追加して形状をパッディングします。\nt' の右端の次元 n を log_prob 計算している分布の [batch_shape, event_shape] に対してブロードキャストします。詳しく説明すると、t' がすでに分布と一致している次元の場合は何もせず、t' の次元がシングルトンの場合は、そのシングルトンを適切な数で複製します。その他の場合はエラーです。(スカラー分布の場合、event_shape = [] であるため、 batch_shape に対してのみブロードキャストします。)\nこれで、log_prob を計算できるようになりました。結果のテンソルの形状は、[sample_shape, batch_shape] です。sample_shape は、右端の次元 n の左側にある t または t' の任意の次元として定義されます(sample_shape = shape(t)[:-n])。\n\nこれが何を意味するのかわからないと混乱するかもしれないので、いくつかの例を見てみましょう。",
"three_poissons.log_prob([10.])",
"テンソル [10.] (形状 [1])は 3 つのbatch_shape でブロードキャストされるため、値 10 での 3 つのポワソンの対数確率をすべて評価します。",
"three_poissons.log_prob([[[1.], [10.]], [[100.], [1000.]]])",
"上記の例では、入力テンソルの形状は [2, 2, 1] ですが、分布オブジェクトの形状は 3 です。したがって、[2, 2] サンプル次元のそれぞれについて、提供された単一の値は、3 つのポワソンのそれぞれにブロードキャストします。\nこれは役に立つ考え方です。three_poissons には batch_shape = [2, 3] があるため、log_prob の呼び出しには最後の次元が 1 または 3 のテンソルが必要です。それ以外はエラーです。(numpy ブロードキャストルールは、スカラーの特殊なケースを、形状 [1] のテンソルと完全に同等であるものとして扱います。)\nでは、batch_shape = [2, 3] を使用して、より複雑なポアソン分布を使用して試してみましょう。",
"poisson_2_by_3 = tfd.Poisson(\n rate=[[1., 10., 100.,], [2., 20., 200.]],\n name='Two-by-Three Poissons')\n\npoisson_2_by_3.log_prob(1.)\n\npoisson_2_by_3.log_prob([1.]) # Exactly equivalent to above, demonstrating the scalar special case.\n\npoisson_2_by_3.log_prob([[1., 1., 1.], [1., 1., 1.]]) # Another way to write the same thing. No broadcasting.\n\npoisson_2_by_3.log_prob([[1., 10., 100.]]) # Input is [1, 3] broadcast to [2, 3].\n\npoisson_2_by_3.log_prob([[1., 10., 100.], [1., 10., 100.]]) # Equivalent to above. No broadcasting.\n\npoisson_2_by_3.log_prob([[1., 1., 1.], [2., 2., 2.]]) # No broadcasting.\n\npoisson_2_by_3.log_prob([[1.], [2.]]) # Equivalent to above. Input shape [2, 1] broadcast to [2, 3].",
"上記の例では、バッチを介したブロードキャストを見ていきましたが、サンプルの形状は空でした。値のコレクションがあり、バッチの各ポイントで各値の対数確率を取得する場合は、以下のように手動で実行できます。",
"poisson_2_by_3.log_prob([[[1., 1., 1.], [1., 1., 1.]], [[2., 2., 2.], [2., 2., 2.]]]) # Input shape [2, 2, 3].",
"または、ブロードキャストに最後のバッチ次元を処理させることもできます。",
"poisson_2_by_3.log_prob([[[1.], [1.]], [[2.], [2.]]]) # Input shape [2, 2, 1].",
"また、やや不自然ですがブロードキャストに最初のバッチ次元のみを処理させることもできます。",
"poisson_2_by_3.log_prob([[[1., 1., 1.]], [[2., 2., 2.]]]) # Input shape [2, 1, 3].",
"または、ブロードキャストに両方のバッチ次元を処理させることもできます。",
"poisson_2_by_3.log_prob([[[1.]], [[2.]]]) # Input shape [2, 1, 1].",
"上記は、必要な値が 2 つしかない場合は問題ありませんでした。しかし、すべてのバッチポイントで評価する値のリストが長い場合は、次の表記を使用します。形状の右側にサイズ 1 の余分な次元を追加すると、非常に便利です。",
"poisson_2_by_3.log_prob(tf.constant([1., 2.])[..., tf.newaxis, tf.newaxis])",
"これはストライドスライス表記のインスタンスであり、知っておく価値があります。\n完全を期すために three_poissons に戻ると、同じ例は次のようになります。",
"three_poissons.log_prob([[1.], [10.], [50.], [100.]])\n\nthree_poissons.log_prob(tf.constant([1., 10., 50., 100.])[..., tf.newaxis]) # Equivalent to above.",
"多変量分布\nここでは、空でないイベント形状を持つ多変量分布を見ていきます。まず、多項分布を見てみましょう。",
"multinomial_distributions = [\n # Multinomial is a vector-valued distribution: if we have k classes,\n # an individual sample from the distribution has k values in it, so the\n # event_shape is `[k]`.\n tfd.Multinomial(total_count=100., probs=[.5, .4, .1],\n name='One Multinomial'),\n tfd.Multinomial(total_count=[100., 1000.], probs=[.5, .4, .1],\n name='Two Multinomials Same Probs'),\n tfd.Multinomial(total_count=100., probs=[[.5, .4, .1], [.1, .2, .7]],\n name='Two Multinomials Same Counts'),\n tfd.Multinomial(total_count=[100., 1000.],\n probs=[[.5, .4, .1], [.1, .2, .7]],\n name='Two Multinomials Different Everything')\n\n]\n\ndescribe_distributions(multinomial_distributions)",
"最後の 3 つの例では、batch_shape は常に [2] でしたが、ブロードキャストを使用して、共有する total_count または共有する probs 使用できます(または、使用しないこともできます)。内部では同じ形状になるようにブロードキャストされるためです。\n既知の事柄を考慮すると、サンプリングは簡単です。",
"describe_sample_tensor_shapes(multinomial_distributions, sample_shapes)",
"対数確率の計算も同様に簡単です。対角多変量正規分布の例を見てみましょう。(カウントと確率の制約により、ブロードキャストは許容できない値を生成することが多いため、多項分布はブロードキャストにあまり適していません。)平均は同じですがスケール(標準偏差)が異なる 2 つの 3 次元分布のバッチを使用します。",
"two_multivariate_normals = tfd.MultivariateNormalDiag(loc=[1., 2., 3.], scale_identity_multiplier=[1., 2.])\ntwo_multivariate_normals",
"(スケールが ID の倍数である分布を使用したが、これは制限ではないことに注意してください。scale_identity_multiplier の代わりに scale を渡すことができます。)\n次に、各バッチポイントの平均とシフトされた平均での対数確率を評価します。",
"two_multivariate_normals.log_prob([[[1., 2., 3.]], [[3., 4., 5.]]]) # Input has shape [2,1,3].",
"まったく同じように、[https://www.tensorflow.org/api_docs/cc/class/tensorflow/ops/strided-slice](ストライドスライス表記)を使用して、定数の中央に追加の形状 = 1 次元を挿入できます。",
"two_multivariate_normals.log_prob(\n tf.constant([[1., 2., 3.], [3., 4., 5.]])[:, tf.newaxis, :]) # Equivalent to above.",
"一方、余分な次元を追加しない場合は、[1., 2., 3.] を最初のバッチポイントに渡し、[3., 4., 5.] を 2 番目のバッチポイントに渡します。",
"two_multivariate_normals.log_prob(tf.constant([[1., 2., 3.], [3., 4., 5.]]))",
"形状変換テクニック\nReshape Bijector\nReshape Bijector を使用すると、分布の event_shape の形状を変換できます。以下に例を示します。",
"six_way_multinomial = tfd.Multinomial(total_count=1000., probs=[.3, .25, .2, .15, .08, .02])\nsix_way_multinomial",
"[6] のイベント形状を持つ多項分布を作成しました。Reshape Bijector を使用すると、これを [2, 3] のイベント形状を持つ分布として扱うことができます。\nBijector は、${\\mathbb R}^n$ の開集合上の微分可能な 1 対 1 の関数を表します。Bijectors は、TransformedDistribution と組み合わせて使用されます。これは、基本分布 $p(x)$ および$Y = g(X)$ を表す Bijector に関して分布 $p(y)$ をモデル化します。では、実際に見てみましょう。",
"transformed_multinomial = tfd.TransformedDistribution(\n distribution=six_way_multinomial,\n bijector=tfb.Reshape(event_shape_out=[2, 3]))\ntransformed_multinomial\n\nsix_way_multinomial.log_prob([500., 100., 100., 150., 100., 50.])\n\ntransformed_multinomial.log_prob([[500., 100., 100.], [150., 100., 50.]])",
"これは、Reshape Bijector が実行できる唯一のことです。イベント次元をバッチ次元に、またはバッチ次元をイベント次元に変換することはできません。\nIndependent 分布\nIndependent 分布は、独立した、必ずしも同一ではない分布(バッチ)のコレクションを単一の分布として扱うために使用されます。より簡潔に言えば、Independent を使用すると、batch_shape の次元を event_shape の次元に変換できます。次に例を示します。",
"two_by_five_bernoulli = tfd.Bernoulli(\n probs=[[.05, .1, .15, .2, .25], [.3, .35, .4, .45, .5]],\n name=\"Two By Five Bernoulli\")\ntwo_by_five_bernoulli",
"これは、表の確率が関連付けられた 2x5 のコインの配列として考えることができます。特定の任意の 1 と 0 のセットの確率を評価します。",
"pattern = [[1., 0., 0., 1., 0.], [0., 0., 1., 1., 1.]]\ntwo_by_five_bernoulli.log_prob(pattern)",
"Independent を使用すると、これを 2 つの異なる「5 つのベルヌーイのセット」に変換できます。これは、特定のパターンで出現するコイントスの「行」を単一の結果と見なす場合に役立ちます。",
"two_sets_of_five = tfd.Independent(\n distribution=two_by_five_bernoulli,\n reinterpreted_batch_ndims=1,\n name=\"Two Sets Of Five\")\ntwo_sets_of_five",
"数学的には、5 つの「セット」ごとの対数確率を計算しています。セット内の 5 つの「独立した」コイントスの対数確率を合計するため、分布は「independent」と呼ばれます。",
"two_sets_of_five.log_prob(pattern)",
"さらに、Independent を使用して、個々のイベントが 2x5 のベルヌーイのセットである分布を作成できます。",
"one_set_of_two_by_five = tfd.Independent(\n distribution=two_by_five_bernoulli, reinterpreted_batch_ndims=2,\n name=\"One Set Of Two By Five\")\none_set_of_two_by_five.log_prob(pattern)",
"sample の観点では、Independent を使用しても何も変更されないことに注意してください。",
"describe_sample_tensor_shapes(\n [two_by_five_bernoulli,\n two_sets_of_five,\n one_set_of_two_by_five],\n [[3, 5]])",
"最後の演習として、サンプリングと対数確率の観点から、Normal 分布のベクトルバッチと MultivariateNormalDiag 分布の相違点と類似点を検討することをお勧めします。Independent を使用して、Normal のバッチから MultivariateNormalDiag を構築するにはどうすればよいでしょうか?(MultivariateNormalDiag は、実際にはこの方法で実装されていません。)"
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
lvrzhn/AstroHackWeek2015 | day3-machine-learning/06 - Model Complexity.ipynb | gpl-2.0 | [
"import matplotlib.pyplot as plt\nimport numpy as np\n%matplotlib nbagg",
"Model Complexity, Overfitting and Underfitting",
"from plots import plot_kneighbors_regularization\nplot_kneighbors_regularization()",
"Validation Curves",
"from sklearn.datasets import load_digits\nfrom sklearn.ensemble import RandomForestClassifier\nfrom sklearn.learning_curve import validation_curve\n\ndigits = load_digits()\nX, y = digits.data, digits.target\n\nmodel = RandomForestClassifier(n_estimators=20)\nparam_range = range(1, 13)\ntraining_scores, validation_scores = validation_curve(model, X, y,\n param_name=\"max_depth\",\n param_range=param_range, cv=5)\n\ntraining_scores.shape\n\ndef plot_validation_curve(parameter_values, train_scores, validation_scores):\n train_scores_mean = np.mean(train_scores, axis=1)\n train_scores_std = np.std(train_scores, axis=1)\n validation_scores_mean = np.mean(validation_scores, axis=1)\n validation_scores_std = np.std(validation_scores, axis=1)\n\n plt.fill_between(parameter_values, train_scores_mean - train_scores_std,\n train_scores_mean + train_scores_std, alpha=0.1,\n color=\"r\")\n plt.fill_between(parameter_values, validation_scores_mean - validation_scores_std,\n validation_scores_mean + validation_scores_std, alpha=0.1, color=\"g\")\n plt.plot(parameter_values, train_scores_mean, 'o-', color=\"r\",\n label=\"Training score\")\n plt.plot(parameter_values, validation_scores_mean, 'o-', color=\"g\",\n label=\"Cross-validation score\")\n plt.ylim(validation_scores_mean.min() - .1, train_scores_mean.max() + .1)\n plt.legend(loc=\"best\")\n\nplt.figure()\nplot_validation_curve(param_range, training_scores, validation_scores)",
"Exercise\nPlot the validation curve on the digit dataset for:\n* a LinearSVC with a logarithmic range of regularization parameters C.\n* KNeighborsClassifier with a linear range of neighbors k.\nWhat do you expect them to look like? How do they actually look like?",
"# %load solutions/validation_curve.py"
] | [
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
CrowdTruth/CrowdTruth-core | tutorial/notebooks/Free Input Task - Person Annotation in Video.ipynb | apache-2.0 | [
"CrowdTruth for Free Input Tasks: Person Annotation in Video\nIn this tutorial, we will apply CrowdTruth metrics to a free input crowdsourcing task for Person Annotation from video fragments. The workers were asked to watch a video of about 3-5 seconds and then add tags that are relevant for the people that appear in the video fragment. The task was executed on FigureEight. For more crowdsourcing annotation task examples, click here.\nTo replicate this experiment, the code used to design and implement this crowdsourcing annotation template is available here: template, css, javascript. \nThis is a screenshot of the task as it appeared to workers:\n\nA sample dataset for this task is available in this file, containing raw output from the crowd on FigureEight. Download the file and place it in a folder named data that has the same root as this notebook. Now you can check your data:",
"import pandas as pd\n\ntest_data = pd.read_csv(\"../data/person-video-free-input.csv\")\ntest_data.head()",
"Declaring a pre-processing configuration\nThe pre-processing configuration defines how to interpret the raw crowdsourcing input. To do this, we need to define a configuration class. First, we import the default CrowdTruth configuration class:",
"import crowdtruth\nfrom crowdtruth.configuration import DefaultConfig",
"Our test class inherits the default configuration DefaultConfig, while also declaring some additional attributes that are specific to the Person Type/Role Annotation in Video task:\n\ninputColumns: list of input columns from the .csv file with the input data\noutputColumns: list of output columns from the .csv file with the answers from the workers\nannotation_separator: string that separates between the crowd annotations in outputColumns\nopen_ended_task: boolean variable defining whether the task is open-ended (i.e. the possible crowd annotations are not known beforehand, like in the case of free text input); in the task that we are processing, workers pick the answers from a pre-defined list, therefore the task is not open ended, and this variable is set to False\nannotation_vector: list of possible crowd answers, mandatory to declare when open_ended_task is False; for our task, this is the list of relations\nprocessJudgments: method that defines processing of the raw crowd data; for this task, we process the crowd answers to correspond to the values in annotation_vector\n\nSame examples of possible processing functions of crowd answers are given below:",
"import nltk\nnltk.download('stopwords')\nnltk.download('punkt')\nnltk.download('averaged_perceptron_tagger')\nnltk.download('wordnet')\nfrom nltk.corpus import stopwords\nfrom nltk.stem import WordNetLemmatizer\nfrom nltk.corpus import wordnet\nfrom autocorrect import spell\n\ndef correct_words(keywords, separator):\n keywords_list = keywords.split(separator)\n corrected_keywords = []\n \n for keyword in keywords_list:\n \n words_in_keyword = keyword.split(\" \")\n corrected_keyword = []\n for word in words_in_keyword:\n correct_word = spell(word)\n corrected_keyword.append(correct_word)\n corrected_keywords.append(\" \".join(corrected_keyword))\n return separator.join(corrected_keywords)\n \ndef cleanup_keywords(keywords, separator):\n keywords_list = keywords.split(separator)\n stopset = set(stopwords.words('english'))\n \n filtered_keywords = []\n for keyword in keywords_list:\n tokens = nltk.word_tokenize(keyword)\n cleanup = \" \".join(filter(lambda word: str(word) not in stopset or str(word) == \"no\" or str(word) == \"not\", keyword.split()))\n filtered_keywords.append(cleanup)\n return separator.join(filtered_keywords)\n\ndef nltk2wn_tag(nltk_tag):\n if nltk_tag.startswith('J'):\n return wordnet.ADJ\n elif nltk_tag.startswith('V'):\n return wordnet.VERB\n elif nltk_tag.startswith('N'):\n return wordnet.NOUN\n elif nltk_tag.startswith('R'):\n return wordnet.ADV\n else: \n return None\n\ndef lemmatize_keywords(keywords, separator):\n keywords_list = keywords.split(separator)\n lematized_keywords = []\n \n for keyword in keywords_list:\n nltk_tagged = nltk.pos_tag(nltk.word_tokenize(str(keyword))) \n wn_tagged = map(lambda x: (str(x[0]), nltk2wn_tag(x[1])), nltk_tagged)\n res_words = []\n for word, tag in wn_tagged:\n if tag is None: \n res_word = wordnet._morphy(str(word), wordnet.NOUN)\n if res_word == []:\n res_words.append(str(word))\n else:\n if len(res_word) == 1:\n res_words.append(str(res_word[0]))\n else:\n res_words.append(str(res_word[1]))\n else:\n res_word = wordnet._morphy(str(word), tag)\n if res_word == []:\n res_words.append(str(word))\n else: \n if len(res_word) == 1:\n res_words.append(str(res_word[0]))\n else:\n res_words.append(str(res_word[1]))\n \n lematized_keyword = \" \".join(res_words)\n lematized_keywords.append(lematized_keyword)\n \n return separator.join(lematized_keywords)",
"The complete configuration class is declared below:",
"class TestConfig(DefaultConfig):\n inputColumns = [\"videolocation\", \"subtitles\", \"imagetags\", \"subtitletags\"]\n outputColumns = [\"keywords\"]\n \n # processing of a closed task\n open_ended_task = True\n annotation_vector = []\n \n def processJudgments(self, judgments):\n # pre-process output to match the values in annotation_vector\n for col in self.outputColumns:\n # transform to lowercase\n judgments[col] = judgments[col].apply(lambda x: str(x).lower())\n # remove square brackets from annotations\n judgments[col] = judgments[col].apply(lambda x: str(x).replace('[]','no tags'))\n judgments[col] = judgments[col].apply(lambda x: str(x).replace('[',''))\n judgments[col] = judgments[col].apply(lambda x: str(x).replace(']',''))\n # remove the quotes around the annotations\n judgments[col] = judgments[col].apply(lambda x: str(x).replace('\"',''))\n # apply custom processing functions\n judgments[col] = judgments[col].apply(lambda x: correct_words(str(x), self.annotation_separator))\n judgments[col] = judgments[col].apply(lambda x: \"no tag\" if cleanup_keywords(str(x), self.annotation_separator) == '' else cleanup_keywords(str(x), self.annotation_separator))\n judgments[col] = judgments[col].apply(lambda x: lemmatize_keywords(str(x), self.annotation_separator))\n return judgments",
"Pre-processing the input data\nAfter declaring the configuration of our input file, we are ready to pre-process the crowd data:",
"data, config = crowdtruth.load(\n file = \"../data/person-video-free-input.csv\",\n config = TestConfig()\n)\n\ndata['judgments'].head()",
"Computing the CrowdTruth metrics\nThe pre-processed data can then be used to calculate the CrowdTruth metrics:",
"results = crowdtruth.run(data, config)",
"results is a dict object that contains the quality metrics for the video fragments, annotations and crowd workers.\nThe video fragment metrics are stored in results[\"units\"]:",
"results[\"units\"].head()",
"The uqs column in results[\"units\"] contains the video fragment quality scores, capturing the overall workers agreement over each video fragment. Here we plot its histogram:",
"import matplotlib.pyplot as plt\n%matplotlib inline\n\nplt.hist(results[\"units\"][\"uqs\"])\nplt.xlabel(\"Video Fragment Quality Score\")\nplt.ylabel(\"Video Fragment\")",
"The unit_annotation_score column in results[\"units\"] contains the video fragment-annotation scores, capturing the likelihood that an annotation is expressed in a video fragment. For each video fragment, we store a dictionary mapping each annotation to its video fragment-relation score.",
"results[\"units\"][\"unit_annotation_score\"].head()",
"The worker metrics are stored in results[\"workers\"]:",
"results[\"workers\"].head()",
"The wqs columns in results[\"workers\"] contains the worker quality scores, capturing the overall agreement between one worker and all the other workers.",
"plt.hist(results[\"workers\"][\"wqs\"])\nplt.xlabel(\"Worker Quality Score\")\nplt.ylabel(\"Workers\")"
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
anhaidgroup/py_entitymatching | notebooks/guides/step_wise_em_guides/.ipynb_checkpoints/Performing Blocking Using Built-In Blockers (Overlap Blocker)-checkpoint.ipynb | bsd-3-clause | [
"Introduction\nThis IPython notebook illustrates how to perform blocking using Overlap blocker.\nFirst, we need to import py_entitymatching package and other libraries as follows:",
"# Import py_entitymatching package\nimport py_entitymatching as em\nimport os\nimport pandas as pd",
"Then, read the (sample) input tables for blocking purposes.",
"# Get the datasets directory\ndatasets_dir = em.get_install_path() + os.sep + 'datasets'\n\n# Get the paths of the input tables\npath_A = datasets_dir + os.sep + 'person_table_A.csv'\npath_B = datasets_dir + os.sep + 'person_table_B.csv'\n\n# Read the CSV files and set 'ID' as the key attribute\nA = em.read_csv_metadata(path_A, key='ID')\nB = em.read_csv_metadata(path_B, key='ID')\n\nA.head()",
"Ways To Do Overlap Blocking\nThere are three different ways to do overlap blocking:\n\nBlock two tables to produce a candidate set of tuple pairs.\nBlock a candidate set of tuple pairs to typically produce a reduced candidate set of tuple pairs.\nBlock two tuples to check if a tuple pair would get blocked.\n\nBlock Tables to Produce a Candidate Set of Tuple Pairs",
"# Instantiate overlap blocker object\nob = em.OverlapBlocker()",
"For the given two tables, we will assume that two persons with no sufficient overlap between their addresses do not refer to the same real world person. So, we apply overlap blocking on address. Specifically, we tokenize the address by word and include the tuple pairs if the addresses have at least 3 overlapping tokens. That is, we block all the tuple pairs that do not share at least 3 tokens in address.",
"# Specify the tokenization to be 'word' level and set overlap_size to be 3.\nC1 = ob.block_tables(A, B, 'address', 'address', word_level=True, overlap_size=3, \n l_output_attrs=['name', 'birth_year', 'address'], \n r_output_attrs=['name', 'birth_year', 'address']\n show_progress=False)\n\n# Display first 5 tuple pairs in the candidate set.\nC1.head()",
"In the above, we used word-level tokenizer. Overlap blocker also supports q-gram based tokenizer and it can be used as follows:",
"# Set the word_level to be False and set the value of q (using q_val)\nC2 = ob.block_tables(A, B, 'address', 'address', word_level=False, q_val=3, overlap_size=3, \n l_output_attrs=['name', 'birth_year', 'address'], \n r_output_attrs=['name', 'birth_year', 'address'],\n show_progress=False)\n\n# Display first 5 tuple pairs\nC2.head()",
"Updating Stopwords\nCommands in the Overlap Blocker removes some stop words by default. You can avoid this by specifying rem_stop_words parameter to False",
"# Set the parameter to remove stop words to False\nC3 = ob.block_tables(A, B, 'address', 'address', word_level=True, overlap_size=3, rem_stop_words=False,\n l_output_attrs=['name', 'birth_year', 'address'], \n r_output_attrs=['name', 'birth_year', 'address'],\n show_progress=False)\n\n# Display first 5 tuple pairs\nC3.head()",
"You can check what stop words are getting removed like this:",
"ob.stop_words",
"You can update this stop word list (with some domain specific stop words) and do the blocking.",
"# Include Franciso as one of the stop words\nob.stop_words.append('francisco')\n\nob.stop_words\n\n# Set the word level tokenizer to be True\nC4 = ob.block_tables(A, B, 'address', 'address', word_level=True, overlap_size=3, \n l_output_attrs=['name', 'birth_year', 'address'], \n r_output_attrs=['name', 'birth_year', 'address'],\n show_progress=False)\n\nC4.head()",
"Handling Missing Values\nIf the input tuples have missing values in the blocking attribute, then they are ignored by default. You can set allow_missing_values to be True to include all possible tuple pairs with missing values.",
"# Introduce some missing value\nA1 = em.read_csv_metadata(path_A, key='ID')\nA1.ix[0, 'address'] = pd.np.NaN\n\n# Set the word level tokenizer to be True\nC5 = ob.block_tables(A1, B, 'address', 'address', word_level=True, overlap_size=3, allow_missing=True,\n l_output_attrs=['name', 'birth_year', 'address'], \n r_output_attrs=['name', 'birth_year', 'address'],\n show_progress=False)\n\nlen(C5)\n\nC5",
"Block a Candidata Set To Produce Reduced Set of Tuple Pairs",
"#Instantiate the overlap blocker\nob = em.OverlapBlocker()",
"In the above, we see that the candidate set produced after blocking over input tables include tuple pairs that have at least three tokens in overlap. Adding to that, we will assume that two persons with no overlap of their names cannot refer to the same person. So, we block the candidate set of tuple pairs on name. That is, we block all the tuple pairs that have no overlap of tokens.",
"# Specify the tokenization to be 'word' level and set overlap_size to be 1.\nC6 = ob.block_candset(C1, 'name', 'name', word_level=True, overlap_size=1, show_progress=False)\n\nC6",
"In the above, we saw that word level tokenization was used to tokenize the names. You can also use q-gram tokenization like this:",
"# Specify the tokenization to be 'word' level and set overlap_size to be 1.\nC7 = ob.block_candset(C1, 'name', 'name', word_level=False, q_val= 3, overlap_size=1, show_progress=False)\n\nC7.head()",
"Handling Missing Values\nAs we saw with block_tables, you can include all the possible tuple pairs with the missing values using allow_missing parameter block the candidate set with the updated set of stop words.",
"# Introduce some missing values\nA1.ix[2, 'name'] = pd.np.NaN\n\nC8 = ob.block_candset(C5, 'name', 'name', word_level=True, overlap_size=1, allow_missing=True, show_progress=False)",
"Block Two tuples To Check If a Tuple Pair Would Get Blocked\nWe can apply overlap blocking to a tuple pair to check if it is going to get blocked. For example, we can check if the first tuple from A and B will get blocked if we block on address.",
"# Display the first tuple from table A\nA.ix[[0]]\n\n# Display the first tuple from table B\nB.ix[[0]]\n\n# Instantiate Attr. Equivalence Blocker\nob = em.OverlapBlocker()\n\n# Apply blocking to a tuple pair from the input tables on zipcode and get blocking status\nstatus = ob.block_tuples(A.ix[0], B.ix[0],'address', 'address', overlap_size=1, show_progress=False)\n\n# Print the blocking status\nprint(status)"
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
cavestruz/MLPipeline | notebooks/time_series/sample_time_series.ipynb | mit | [
"In this first example, we will explore a simulated lightcurve that follows a damped random walk, which is often used to model variability in the optical flux of quasar.",
"import numpy as np\nfrom matplotlib import pyplot as plt\n\nfrom astroML.time_series import lomb_scargle, generate_damped_RW\nfrom astroML.time_series import ACF_scargle",
"Use the numpy.arange method to generate 1000 days of data.",
"tdays = np.arange(0, 1E3)\nz = 2.0 # redshift\ntau = 300 # damping timescale",
"Use the help function to figure out how to generate a dataset of this evenly spaced damped random walk over the 1000 days. \nAdd errors to your 1000 points using numpy.random.normal. Note, you will need 1000 points, each centered on the actual data point, and assume a sigma 0.1. \nRandomly select a subsample of 200 data points from your generated dataset. This is now unevenly spaced, and will serve as your observed lightcurve.\nPlot the observed lightcurve.\nUse the help menu to figure out how to calculate the autocorrelation function of your lightcurve with ACF_scargle. \nIn this next example, we will explore data drawn from a gaussian process.",
"from sklearn.gaussian_process import GaussianProcess",
"Define a covariance function as the one dimensional squared-exponential covariance function described in class. This will be a function of x1, x2, and the bandwidth h. Name this function covariance_squared_exponential. \nGenerate values for the x-axis as 1000 evenly points between 0 and 10 using numpy.linspace. Define a bandwidth of h=1.\nGenerate an output of your covariance_squared_exponential with x as x1, x[:,None] as x2, and h as the bandwidth.\nUse numpy.random.multivariate_normal to generate a numpy array of the same length as your x-axis points. Each point is centered on 0 (your mean is a 1-d array of zeros), and your covariance is the output of your covariance_squared_exponential above.\nChoose two values in your x-range as sample x values, and put in an array, x_sample_test. Choose a function (e.g. numpy.cos) as your example function to constrain.\nDefine an instance of a gaussian proccess",
"gp = GaussianProcess(corr='squared_exponential', theta0=0.5,\n random_state=0)",
"Fit the Gaussian process to data x1[:,None], with the output of the function on your sample x values (e.g. numpy.cos(x_sample_test) ).\nPredict on x1[:,None], and get the MSE values. Plot the output function and function errors."
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
hanhanwu/Hanhan_Data_Science_Practice | sequencial_analysis/after_2020_practice/ts_RNN_basics_tf2.4.ipynb | mit | [
"Time Series Forecast with Basic RNN\n\nDataset is downloaded from https://archive.ics.uci.edu/ml/datasets/Beijing+PM2.5+Data",
"import pandas as pd\nimport numpy as np\nimport datetime\nfrom matplotlib import pyplot as plt\nimport seaborn as sns\nfrom sklearn.preprocessing import MinMaxScaler\n\ndf = pd.read_csv('data/pm25.csv')\n\nprint(df.shape)\ndf.head()\n\ndf.isnull().sum()*100/df.shape[0]\n\ndf.dropna(subset=['pm2.5'], axis=0, inplace=True)\ndf.reset_index(drop=True, inplace=True)\n\ndf['datetime'] = df[['year', 'month', 'day', 'hour']].apply(\n lambda row: datetime.datetime(year=row['year'], \n month=row['month'], day=row['day'],hour=row['hour']), axis=1)\ndf.sort_values('datetime', ascending=True, inplace=True)\n\ndf.head()\n\ndf['year'].value_counts()\n\nplt.figure(figsize=(5.5, 5.5))\ng = sns.lineplot(data=df['pm2.5'], color='g')\ng.set_title('pm2.5 between 2010 and 2014')\ng.set_xlabel('Index')\ng.set_ylabel('pm2.5 readings')",
"Note\n\nScaling the variables will make optimization functions work better, so here going to scale the variable into [0,1] range",
"scaler = MinMaxScaler(feature_range=(0, 1))\ndf['scaled_pm2.5'] = scaler.fit_transform(np.array(df['pm2.5']).reshape(-1, 1))\n\ndf.head()\n\nplt.figure(figsize=(5.5, 5.5))\ng = sns.lineplot(data=df['scaled_pm2.5'], color='purple')\ng.set_title('Scaled pm2.5 between 2010 and 2014')\ng.set_xlabel('Index')\ng.set_ylabel('scaled_pm2.5 readings')\n\n# 2014 data as validation data, before 2014 as training data\nsplit_date = datetime.datetime(year=2014, month=1, day=1, hour=0) \ndf_train = df.loc[df['datetime']<split_date]\ndf_val = df.loc[df['datetime']>=split_date]\nprint('Shape of train:', df_train.shape)\nprint('Shape of test:', df_val.shape)\n\ndf_val.reset_index(drop=True, inplace=True)\ndf_val.head()\n\n# The way this works is to have the first nb_timesteps-1 observations as X and nb_timesteps_th as the target,\n## collecting the data with 1 stride rolling window.\n\ndef makeXy(ts, nb_timesteps):\n \"\"\"\n Input: \n ts: original time series\n nb_timesteps: number of time steps in the regressors\n Output: \n X: 2-D array of regressors\n y: 1-D array of target \n \"\"\"\n X = []\n y = []\n for i in range(nb_timesteps, ts.shape[0]):\n X.append(list(ts.loc[i-nb_timesteps:i-1]))\n y.append(ts.loc[i])\n \n X, y = np.array(X), np.array(y)\n return X, y\n\nX_train, y_train = makeXy(df_train['scaled_pm2.5'], 7)\nprint('Shape of train arrays:', X_train.shape, y_train.shape)\n\nprint(X_train[0], y_train[0])\nprint(X_train[1], y_train[1])\n\nX_val, y_val = makeXy(df_val['scaled_pm2.5'], 7)\nprint('Shape of validation arrays:', X_val.shape, y_val.shape)\n\nprint(X_val[0], y_val[0])\nprint(X_val[1], y_val[1])",
"Note\n\nIn 2D array above for X_train, X_val, it means (number of samples, number of time steps)\nHowever RNN input has to be 3D array, (number of samples, number of time steps, number of features per timestep)\nOnly 1 feature which is scaled_pm2.5\nSo, the code below converts 2D array to 3D array",
"X_train = X_train.reshape((X_train.shape[0], X_train.shape[1], 1))\nX_val = X_val.reshape((X_val.shape[0], X_val.shape[1], 1))\nprint('Shape of arrays after reshaping:', X_train.shape, X_val.shape)\n\nimport tensorflow as tf\nfrom tensorflow.keras.models import Sequential\nfrom tensorflow.keras.layers import SimpleRNN\nfrom tensorflow.keras.layers import Dense, Dropout, Input\nfrom tensorflow.keras.models import load_model\nfrom tensorflow.keras.callbacks import ModelCheckpoint\n\nfrom sklearn.metrics import mean_absolute_error\n\ntf.random.set_seed(10)\n\nmodel = Sequential()\nmodel.add(SimpleRNN(32, input_shape=(X_train.shape[1:])))\nmodel.add(Dropout(0.2))\nmodel.add(Dense(1, activation='linear'))\n\nmodel.compile(optimizer='rmsprop', loss='mean_absolute_error', metrics=['mae'])\nmodel.summary()\n\nsave_weights_at = 'basic_rnn_model'\nsave_best = ModelCheckpoint(save_weights_at, monitor='val_loss', verbose=0,\n save_best_only=True, save_weights_only=False, mode='min',\n save_freq='epoch')\nhistory = model.fit(x=X_train, y=y_train, batch_size=16, epochs=20,\n verbose=1, callbacks=[save_best], validation_data=(X_val, y_val),\n shuffle=True)\n\n# load the best model\nbest_model = load_model('basic_rnn_model')\n\n# Compare the prediction with y_true\npreds = best_model.predict(X_val)\npred_pm25 = scaler.inverse_transform(preds)\npred_pm25 = np.squeeze(pred_pm25)\n\n# Measure MAE of y_pred and y_true\nmae = mean_absolute_error(df_val['pm2.5'].loc[7:], pred_pm25)\nprint('MAE for the validation set:', round(mae, 4))\n\nmae = mean_absolute_error(df_val['scaled_pm2.5'].loc[7:], preds)\nprint('MAE for the scaled validation set:', round(mae, 4))\n\n# Check the metrics and loss of each apoch\nmae = history.history['mae']\nval_mae = history.history['val_mae']\nloss = history.history['loss']\nval_loss = history.history['val_loss']\n\nepochs = range(len(mae))\n\nplt.plot(epochs, mae, 'bo', label='Training MAE')\nplt.plot(epochs, val_mae, 'b', label='Validation MAE')\nplt.title('Training and Validation MAE')\nplt.legend()\n\nplt.figure()\n\n# Here I was using MAE as loss too, that's why they lookedalmost the same...\nplt.plot(epochs, loss, 'bo', label='Training loss')\nplt.plot(epochs, val_loss, 'b', label='Validation loss')\nplt.title('Training and Validation loss')\nplt.legend()\n\nplt.show()",
"Note\n\nBest model saved by ModelCheckpoint saved 12th epoch result, which had 0.12 val_loss\nFrom the history plot of training vs validation loss, 12th epoch result (i=11) has the lowest validation loss. This aligh with the result from ModelCheckpoint\nSet different tensorflow seed will get different results!"
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
ecalio07/enron-paper | deliver/.ipynb_checkpoints/010617-WJGH-art_struc-checkpoint.ipynb | gpl-3.0 | [
"Corpus callosum's shape signature for segmentation error detection in large datasets\nAbstract\nCorpus Callosum (CC) is a subcortical, white matter structure with great importance in clinical and research studies because its shape and volume are correlated with subject's characteristics and neurodegenerative diseases. CC segmentation is a important step for any medical, clinical or research posterior study. Currently, magnetic resonance imaging (MRI) is the main tool for evaluating brain because it offers the better soft tissue contrast. Particullary, segmentation in MRI difussion modality has great importante given information associated to brain microstruture and fiber composition.\nIn this work a method for detection of erroneous segmentations in large datasets is proposed based-on shape signature. Shape signature is obtained from segmentation, calculating curvature along contour using a spline formulation. A mean correct signature is used as reference for compare new segmentations through root mean square error. This method was applied to 145 subject dataset for three different segmentation methods in diffusion: Watershed, ROQS and pixel-based presenting high accuracy in error detection. This method do not require per-segmentation reference and it can be applied to any MRI modality and other image aplications.",
"## Functions\n\nimport sys\nsys.path.append(\"../dev\")\n\nimport bib_mri as FW\nimport numpy as np\nimport scipy as scipy\nimport scipy.misc as misc \nimport matplotlib as mpl\nimport matplotlib.pyplot as plt\nfrom numpy import genfromtxt\nimport platform\n\n%matplotlib inline\n\ndef sign_extract(seg, resols): #Function for shape signature extraction\n splines = FW.get_spline(seg,smoothness)\n\n sign_vect = np.array([]).reshape(0,points) #Initializing temporal signature vector\n for resol in resols:\n sign_vect = np.vstack((sign_vect, FW.get_profile(splines, n_samples=points, radius=resol)))\n \n return sign_vect\n\ndef sign_fit(sig_ref, sig_fit): #Function for signature fitting\n dif_curv = []\n for shift in range(points):\n dif_curv.append(np.abs(np.sum((sig_ref - np.roll(sig_fit[0],shift))**2)))\n return np.apply_along_axis(np.roll, 1, sig_fit, np.argmin(dif_curv))\n\nprint \"Python version: \", platform.python_version()\nprint \"Numpy version: \", np.version.version\nprint \"Scipy version: \", scipy.__version__\nprint \"Matplotlib version: \", mpl.__version__",
"Introduction\nThe Corpus Callosum (CC) is the largest white matter structure in the central nervous system that connects both brain hemispheres and allows the communication between them. The CC has great importance in research studies due to the correlation between shape and volume with some subject's characteristics, such as: gender, age, numeric and mathematical skills and handedness. In addition, some neurodegenerative diseases like Alzheimer, autism, schizophrenia and dyslexia could cause CC shape deformation.\nCC segmentation is a necessary step for morphological and physiological features extraction in order to analyze the structure in image-based clinical and research applications. Magnetic Resonance Imaging (MRI) is the most suitable image technique for CC segmentation due to its ability to provide contrast between brain tissues however CC segmentation is challenging because of the shape and intensity variability between subjects, volume partial effect in diffusion MRI, fornex proximity and narrow areas in CC. Among the known MRI modalities, Diffusion-MRI arouses special interest to study the CC, despite its low resolution and high complexity, since it provides useful information related to the organization of brain tissues and the magnetic field does not interfere with the diffusion process itself.\nSome CC segmentation approaches using Diffusion-MRI were found in the literature. Niogi et al. proposed a method based on thresholding, Freitas et al. e Rittner et al. proposed region methods based on Watershed transform, Nazem-Zadeh et al. implemented based on level surfaces, Kong et al. presented an clustering algorithm for segmentation, Herrera et al. segmented CC directly in diffusion weighted imaging (DWI) using a model based on pixel classification and Garcia et al. proposed a hybrid segmentation method based on active geodesic regions and level surfaces.\nWith the growing of data and the proliferation of automatic algorithms, segmentation over large databases is affordable. Therefore, error automatic detection is important in order to facilitate and speed up filter on CC segmentation databases. presented proposals for content-based image retrieval (CBIR) using shape signature of the planar object representation.\nIn this work, a method for automatic detection of segmentation error in large datasets is proposed based on CC shape signature. Signature offers shape characterization of the CC and therefore it is expected that a \"typical correct signature\" represents well any correct segmentation. Signature is extracted measuring curvature along segmentation contour. The method was implemented in three main stages: mean correct signature generation, signature configuration and method testing. The first one takes 20 corrects segmentations and generates one correct signature of reference (typical correct signature), per-resolution, using mean values in each point. The second stage stage takes 10 correct segmentations and 10 erroneous segmentations and adjusts the optimal resolution and threshold, based on mean correct signature, that lets detection of erroneous segmentations. The third stage labels a new segmentation as correct and erroneous comparing with the mean signature using optimal resolution and threshold.\n<img src=\"../figures/workflow.png\">\nThe comparison between signatures is done using root mean square error (RMSE). True label for each segmentation was done visually. Correct segmentation corresponds to segmentations with at least 50% of agreement with the structure. 
It is expected that RMSE for correct segmentations is lower than RMSE associated to erroneous segmentation when compared with a typical correct segmentation.",
"#Loading labeled segmentations\nseg_label = genfromtxt('../dataset/Seg_Watershed/watershed_label.csv', delimiter=',').astype('uint8')\n\nlist_mask = seg_label[seg_label[:,1] == 0, 0][:20] #Extracting correct segmentations for mean signature\nlist_normal_mask = seg_label[seg_label[:,1] == 0, 0][20:30] #Extracting correct names for configuration\nlist_error_mask = seg_label[seg_label[:,1] == 1, 0][:10] #Extracting correct names for configuration\n\nmask_correct = np.load('../dataset/Seg_Watershed/mask_wate_{}.npy'.format(list_mask[0]))\nmask_error = np.load('../dataset/Seg_Watershed/mask_wate_{}.npy'.format(list_error_mask[0]))\n\nplt.figure()\nplt.axis('off')\nplt.imshow(mask_correct,'gray',interpolation='none')\nplt.title(\"Correct segmentation example\")\nplt.show()\n\nplt.figure()\nplt.axis('off')\nplt.imshow(mask_error,'gray',interpolation='none')\nplt.title(\"Erroneous segmentation example\")\nplt.show()",
"Shape signature for comparison\nSignature is a shape descriptor that measures the rate of variation along the segmentation contour. As shown in figure, the curvature $k$ in the pivot point $p$, with coordinates ($x_p$,$y_p$), is calculated using the next equation. This curvature depict the angle between the segments $\\overline{(x_{p-ls},y_{p-ls})(x_p,y_p)}$ and $\\overline{(x_p,y_p)(x_{p+ls},y_{p+ls})}$. These segments are located to a distance $ls>0$, starting in a pivot point and finishing in anterior and posterior points, respectively.\nThe signature is obtained calculating the curvature along all segmentation contour.\n\\begin{equation} \\label{eq:per1}\nk(x_p,y_p) = \\arctan\\left(\\frac{y_{p+ls}-y_p}{x_{p+ls}-x_p}\\right)-\\arctan\\left(\\frac{y_p-y_{p-ls}}{x_p-x_{p-ls}}\\right)\n\\end{equation}\n<img src=\"../figures/curvature.png\">\nSignature construction is performed from segmentation contour of the CC. From contour, spline is obtained. Spline purpose is twofold: to get a smooth representation of the contour and to facilitate calculation of\nthe curvature using its parametric representation. The signature is obtained measuring curvature along spline. $ls$ is the parametric distance between pivot point and both posterior and anterior points and it determines signature resolution. By simplicity, $ls$ is measured in percentage of reconstructed spline points.\nIn order to achieve quantitative comparison between two signatures root mean square error (RMSE) is introduced. RMSE measures distance, point to point, between signatures $a$ and $b$ along all points $p$ of signatures.\n\\begin{equation} \\label{eq:per4}\nRMSE = \\sqrt{\\frac{1}{P}\\sum_{p=1}^{P}(k_{ap}-k_{bp})^2}\n\\end{equation}\nFrequently, signatures of different segmentations are not fitted along the 'x' axis because of the initial point on the spline calculation starts in different relative positions. This makes impossible to compare directly two signatures and therefore, a prior fitting process must be accomplished. The fitting process is done shifting one of the signature while the other is kept fixed. For each shift, RMSE between the two signatures is measured. The point giving the minor error is the fitting point. Fitting was done at resolution $ls = 0.35$. This resolution represents globally the CC's shape and eases their fitting.\nAfter fitting, RMSE between signatures can be measured in order to achieve final quantitative comparison.\nSignature for segmentation error detection\nFor segmentation error detection, a typical correct signature is obtained calculating mean over a group of signatures from correct segmentations. Because of this signature could be used in any resolution, $ls$ must be chosen for achieve segmentation error detection. The optimal resolution must be able to return the greatest RMSE difference between correct and erroneous segmentation when compared with a typical correct signature.\nIn the optimal resolution, a threshold must be chosen for separate erroneous and correct segmentations. This threshold stays between RMSE associated to correct ($RMSE_E$) and erroneous ($RMSE_C$) signatures and it is given by the next equation where N (in percentage) represents proximity to correct or erroneous RMSE. 
If RMSE calculated over a group of signatures, mean value is applied.\n\\begin{equation} \\label{eq:eq3}\nth = N*(\\overline{RMSE_E}-\\overline{RMSE_C})+\\overline{RMSE_C}\n\\end{equation}\nExperiments and results\nIn this work, comparison of signatures through RMSE is used for segmentation error detection in large datasets. For this, it will be calculated a mean correct signature based on 20 correct segmentation signatures. This mean correct signature represents a tipycal correct segmentation. For a new segmentation, signature is extracted and compared with mean signature.\nFor experiments, DWI from 152 subjects at the University of Campinas, were acquired on a Philips scanner Achieva 3T in the axial plane with a $1$x$1mm$ spatial resolution and $2mm$ slice thickness, along $32$ directions ($b-value=1000s/mm^2$, $TR=8.5s$, and $TE=61ms$). All data used in this experiment was acquired through a project approved by the research ethics committee from the School of Medicine at UNICAMP. From each acquired DWI volume, only the midsaggital slice was used.\nThree segmentation methods were implemented to obtained binary masks over a 152 subject dataset: Watershed, ROQS and pixel-based. 40 Watershed segmentations were chosen as follows: 20 correct segmentations for mean correct signature generation and 10 correct and 10 erroneous segmentations for signature configuration stage. Watershed was chosen to generate and adjust the mean signature because of its higher error rate and its variability in the erroneous segmentation shape. These characteristics allow improve generalization. The method was tested on the remaining Watershed segmentations (108 masks) and two additional segmentations methods: ROQS (152 masks) and pixel-based (152 masks).\nMean correct signature generation\nIn this work, segmentations based on Watershed method were used for implementation of the first and second stages. From the Watershed dataset, 20 correct segmentations were chosen. Spline for each one was obtained from segmentation contour. The contour was obtained using mathematical morphology, applying xor logical operation, pixel-wise, between original segmentation and the eroded version of itself by an structuring element b:\n\\begin{equation} \\label{eq:per2}\nG_E = XOR(S,S \\ominus b)\n\\end{equation}\nFrom contour, it is calculated spline. The implementation, is a B-spline (Boor's basic spline). This formulation has two parameters: degree, representing polynomial degrees of the spline, and smoothness, being the trade off between proximity and smoothness in the fitness of the spline. Degree was fixed in 5 allowing adequate representation of the contour. Smoothness was fixed in 700. This value is based on the mean quantity of pixels of the contour that are passed for spline calculation. The curvature was measured over 500 points over the spline to generate the signature along 20 segmentations. Signatures were fitted to make possible comparison (Fig. signatures). Fitting resolution was fixed in 0.35.",
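"# Tiny numeric illustration (not part of the original analysis) of the RMSE and\n# threshold formulas defined in the text above, using two toy signatures. The actual\n# signature extraction for the 20 segmentations follows in the next cell.\nk_a = np.array([0.10, 0.30, -0.20, 0.05])\nk_b = np.array([0.12, 0.25, -0.10, 0.00])\nrmse_ab = np.sqrt(np.mean((k_a - k_b) ** 2))\nprint('RMSE between the toy signatures: %.4f' % rmse_ab)\n\n# threshold between a mean correct RMSE and a mean erroneous RMSE with N = 0.3\nrmse_c, rmse_e, N = 5.0, 30.0, 0.3\nth_toy = N * (rmse_e - rmse_c) + rmse_c\nprint('Toy threshold: %.1f' % th_toy)",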
"n_list = len(list_mask)\nsmoothness = 700 #Smoothness\ndegree = 1 #Spline degree\nfit_res = 0.35\nresols = np.arange(0.01,0.5,0.01) #Signature resolutions\nresols = np.insert(resols,0,fit_res) #Insert resolution for signature fitting\npoints = 100 #Points of Spline reconstruction\n\nrefer_wat = np.empty((n_list,resols.shape[0],points)) #Initializing signature vector\n\nfor mask in xrange(n_list):\n mask_p = np.load('../dataset/Seg_Watershed/mask_wate_{}.npy'.format(list_mask[mask]))\n \n refer_temp = sign_extract(mask_p, resols) #Function for shape signature extraction\n \n refer_wat[mask] = refer_temp\n if mask > 0: #Fitting curves using the first one as basis\n prof_ref = refer_wat[0] \n refer_wat[mask] = sign_fit(prof_ref[0], refer_temp) #Function for signature fitting\n \nprint \"Signatures' vector size: \", refer_wat.shape\n\nres_ex = 10\nplt.figure()\nplt.plot(refer_wat[:,res_ex,:].T)\nplt.title(\"Signatures for res: %f\"%(resols[res_ex]))\nplt.show()",
"In order to get a representative correct signature, mean signature per-resolution was generated using 20 correct signatures. The mean was calculated in each point.",
"refer_wat_mean = np.mean(refer_wat,axis=0) #Finding mean signature per resolution\n\nprint \"Mean signature size: \", refer_wat_mean.shape\n\nplt.figure() #Plotting mean signature\nplt.plot(refer_wat_mean[res_ex,:])\nplt.title(\"Mean signature for res: %f\"%(resols[res_ex]))\nplt.show()",
"Signature configuration\nBecause of the mean signature was extracted for all the resolutions, it is necessary to find resolution in that diference between RMSE for correct signature and RMSE for erroneous signature is maximum. So, 20 news segmentations were used to find this optimal resolution, being divided as 10 correct segmentations and 10 erroneous segmentations. For each segmentation, it was extracted signature for all resolutions.",
"n_list = np.amax((len(list_normal_mask),len(list_error_mask)))\nrefer_wat_n = np.empty((n_list,resols.shape[0],points)) #Initializing correct signature vector\nrefer_wat_e = np.empty((n_list,resols.shape[0],points)) #Initializing error signature vector\n\nfor mask in xrange(n_list):\n #Loading correct mask\n mask_pn = np.load('../dataset/Seg_Watershed/mask_wate_{}.npy'.format(list_normal_mask[mask]))\n refer_temp_n = sign_extract(mask_pn, resols) #Function for shape signature extraction\n refer_wat_n[mask] = sign_fit(refer_wat_mean[0], refer_temp_n) #Function for signature fitting\n #Loading erroneous mask\n mask_pe = np.load('../dataset/Seg_Watershed/mask_wate_{}.npy'.format(list_error_mask[mask]))\n refer_temp_e = sign_extract(mask_pe, resols) #Function for shape signature extraction\n refer_wat_e[mask] = sign_fit(refer_wat_mean[0], refer_temp_e) #Function for signature fitting\n \nprint \"Correct segmentations' vector: \", refer_wat_n.shape\nprint \"Erroneous segmentations' vector: \", refer_wat_e.shape\n\nplt.figure()\nplt.plot(refer_wat_n[:,res_ex,:].T)\nplt.title(\"Correct signatures for res: %f\"%(resols[res_ex]))\nplt.show()\n\nplt.figure()\nplt.plot(refer_wat_e[:,res_ex,:].T)\nplt.title(\"Erroneous signatures for res: %f\"%(resols[res_ex]))\nplt.show()",
"The RMSE over the 10 correct segmentations was compared with RMSE over the 10 erroneous segmentations. As expected, RMSE for correct segmentations was greater than RMSE for erroneous segmentations along all the resolutions. In general, this is true, but optimal resolution guarantee the maximum difference between both of RMSE results: correct and erroneous.\nSo, to find optimal resolution, difference between correct and erroneous RMSE was calculated over all resolutions.",
"rmse_nacum = np.sqrt(np.sum((refer_wat_mean - refer_wat_n)**2,axis=2)/(refer_wat_mean.shape[1]))\nrmse_eacum = np.sqrt(np.sum((refer_wat_mean - refer_wat_e)**2,axis=2)/(refer_wat_mean.shape[1]))\n\ndif_dis = rmse_eacum - rmse_nacum #Difference between erroneous signatures and correct signatures\n\nin_max_res = np.argmax(np.mean(dif_dis,axis=0)) #Finding optimal resolution at maximum difference\nopt_res = resols[in_max_res]\nprint \"Optimal resolution for error detection: \", opt_res\n\nperc_th = 0.3 #Established percentage to threshold\ncorrect_max = np.mean(rmse_nacum[:,in_max_res]) #Finding threshold for separate segmentations\nerror_min = np.mean(rmse_eacum[:,in_max_res])\nth_res = perc_th*(error_min-correct_max)+correct_max\nprint \"Threshold for separate segmentations: \", th_res\n\n#### Plotting erroneous and correct segmentation signatures\n\nticksx_resols = [\"%.2f\" % el for el in np.arange(0.01,0.5,0.01)] #Labels for plot xticks\nticksx_resols = ticksx_resols[::6]\nticksx_index = np.arange(1,50,6)\n\nfigpr = plt.figure() #Plotting mean RMSE for correct segmentations\nplt.boxplot(rmse_nacum[:,1:], showmeans=True) #Element 0 was introduced only for fitting, \n #in comparation is not used.\nplt.axhline(y=0, color='g', linestyle='--')\nplt.axhline(y=th_res, color='r', linestyle='--')\nplt.axvline(x=in_max_res, color='r', linestyle='--')\nplt.xlabel('Resolutions', fontsize = 12, labelpad=-2)\nplt.ylabel('RMSE correct signatures', fontsize = 12)\nplt.xticks(ticksx_index, ticksx_resols)\nplt.show()\n\nfigpr = plt.figure() #Plotting mean RMSE for erroneous segmentations\nplt.boxplot(rmse_eacum[:,1:], showmeans=True)\nplt.axhline(y=0, color='g', linestyle='--')\nplt.axhline(y=th_res, color='r', linestyle='--')\nplt.axvline(x=in_max_res, color='r', linestyle='--')\nplt.xlabel('Resolutions', fontsize = 12, labelpad=-2)\nplt.ylabel('RMSE error signatures', fontsize = 12)\nplt.xticks(ticksx_index, ticksx_resols)\nplt.show()\n\nfigpr = plt.figure() #Plotting difference for mean RMSE over all resolutions\nplt.boxplot(dif_dis[:,1:], showmeans=True)\nplt.axhline(y=0, color='g', linestyle='--')\nplt.axvline(x=in_max_res, color='r', linestyle='--')\nplt.xlabel('Resolutions', fontsize = 12, labelpad=-2)\nplt.ylabel('Difference RMSE signatures', fontsize = 12)\nplt.xticks(ticksx_index, ticksx_resols)\nplt.show()",
"The greatest difference resulted at resolution 0.09. In this resolution, threshold for separate erroneous and correct segmentations is established as 30% of the distance between the mean RMSE of the correct masks and the mean RMSE of the erroneous masks.\nMethod testing\nFinally, method test was performed in the 145 subject dataset: Watershed dataset with 107 segmentations, ROQS dataset with 152 segmentations and pixel-based dataset with 152 segmentations. You can uncomment indicated blocks below to detailed results (mask and dataset-wise).",
"n_resols = [fit_res, opt_res] #Resolutions for fitting and comparison\n\n#### Teste dataset (Watershed)\n#Loading labels\nseg_label = genfromtxt('../dataset/Seg_Watershed/watershed_label.csv', delimiter=',').astype('uint8')\n\nall_seg = np.hstack((seg_label[seg_label[:,1] == 0, 0][30:],\n seg_label[seg_label[:,1] == 1, 0][10:])) #Extracting erroneous and correct names\n\nlab_seg = np.hstack((seg_label[seg_label[:,1] == 0, 1][30:],\n seg_label[seg_label[:,1] == 1, 1][10:])) #Extracting erroneous and correct labels\n\nrefer_wat_mean_opt = np.vstack((refer_wat_mean[0],refer_wat_mean[in_max_res])) #Mean signature with fitting \n #and optimal resolution\n\nrefer_seg = np.empty((all_seg.shape[0],len(n_resols),points)) #Initializing correct signature vector\nin_mask = 0\nfor mask in all_seg:\n mask_ = np.load('../dataset/Seg_Watershed/mask_wate_{}.npy'.format(mask))\n refer_temp = sign_extract(mask_, n_resols) #Function for shape signature extraction\n refer_seg[in_mask] = sign_fit(refer_wat_mean_opt[0], refer_temp) #Function for signature fitting\n \n ###### Uncomment this block to see each segmentation with true and predicted labels\n #RMSE_ = np.sqrt(np.sum((refer_wat_mean_opt[1] - refer_seg[in_mask,1])**2)/(refer_wat_mean_opt.shape[1]))\n #plt.figure()\n #plt.axis('off')\n #plt.imshow(mask_,'gray',interpolation='none')\n #plt.title(\"True label: {}, Predic. label: {}\".format(lab_seg[in_mask],(RMSE_>th_res).astype('uint8')))\n #plt.show()\n \n in_mask += 1\n\n#### Segmentation evaluation result over all segmentations\nRMSE = np.sqrt(np.sum((refer_wat_mean_opt[1] - refer_seg[:,1])**2,axis=1)/(refer_wat_mean_opt.shape[1]))\npred_seg = RMSE > th_res #Apply threshold\ncomp_seg = np.logical_not(np.logical_xor(pred_seg,lab_seg)) #Comparation method result with true labels\nacc_w = np.sum(comp_seg)/(1.0*len(comp_seg))\nprint \"Final accuracy on Watershed {} segmentations: {}\".format(len(comp_seg),acc_w)\n\n#### Teste dataset (ROQS)\n\nseg_label = genfromtxt('../dataset/Seg_ROQS/roqs_label.csv', delimiter=',').astype('uint8') #Loading labels\n\nall_seg = np.hstack((seg_label[seg_label[:,1] == 0, 0],\n seg_label[seg_label[:,1] == 1, 0])) #Extracting erroneous and correct names\n\nlab_seg = np.hstack((seg_label[seg_label[:,1] == 0, 1],\n seg_label[seg_label[:,1] == 1, 1])) #Extracting erroneous and correct labels\n\nrefer_wat_mean_opt = np.vstack((refer_wat_mean[0],refer_wat_mean[in_max_res])) #Mean signature with fitting \n #and optimal resolution\n\nrefer_seg = np.empty((all_seg.shape[0],len(n_resols),points)) #Initializing correct signature vector\nin_mask = 0\nfor mask in all_seg:\n mask_ = np.load('../dataset/Seg_ROQS/mask_roqs_{}.npy'.format(mask))\n refer_temp = sign_extract(mask_, n_resols) #Function for shape signature extraction\n refer_seg[in_mask] = sign_fit(refer_wat_mean_opt[0], refer_temp) #Function for signature fitting\n \n ###### Uncomment this block to see each segmentation with true and predicted labels\n #RMSE_ = np.sqrt(np.sum((refer_wat_mean_opt[1] - refer_seg[in_mask,1])**2)/(refer_wat_mean_opt.shape[1]))\n #plt.figure()\n #plt.axis('off')\n #plt.imshow(mask_,'gray',interpolation='none')\n #plt.title(\"True label: {}, Predic. 
label: {}\".format(lab_seg[in_mask],(RMSE_>th_res).astype('uint8')))\n #plt.show()\n \n in_mask += 1\n\n#### Segmentation evaluation result over all segmentations\nRMSE = np.sqrt(np.sum((refer_wat_mean_opt[1] - refer_seg[:,1])**2,axis=1)/(refer_wat_mean_opt.shape[1]))\npred_seg = RMSE > th_res #Apply threshold\ncomp_seg = np.logical_not(np.logical_xor(pred_seg,lab_seg)) #Comparation method result with true labels\nacc_r = np.sum(comp_seg)/(1.0*len(comp_seg))\nprint \"Final accuracy on ROQS {} segmentations: {}\".format(len(comp_seg),acc_r)\n\n#### Teste dataset (Pixel-based)\n\nseg_label = genfromtxt('../dataset/Seg_pixel/pixel_label.csv', delimiter=',').astype('uint8') #Loading labels\n\nall_seg = np.hstack((seg_label[seg_label[:,1] == 0, 0],\n seg_label[seg_label[:,1] == 1, 0])) #Extracting erroneous and correct names\n\nlab_seg = np.hstack((seg_label[seg_label[:,1] == 0, 1],\n seg_label[seg_label[:,1] == 1, 1])) #Extracting erroneous and correct labels\n\nrefer_wat_mean_opt = np.vstack((refer_wat_mean[0],refer_wat_mean[in_max_res])) #Mean signature with fitting \n #and optimal resolution\n\nrefer_seg = np.empty((all_seg.shape[0],len(n_resols),points)) #Initializing correct signature vector\nin_mask = 0\nfor mask in all_seg:\n mask_ = np.load('../dataset/Seg_pixel/mask_pixe_{}.npy'.format(mask))\n refer_temp = sign_extract(mask_, n_resols) #Function for shape signature extraction\n refer_seg[in_mask] = sign_fit(refer_wat_mean_opt[0], refer_temp) #Function for signature fitting\n \n ###### Uncomment this block to see each segmentation with true and predicted labels\n #RMSE_ = np.sqrt(np.sum((refer_wat_mean_opt[1] - refer_seg[in_mask,1])**2)/(refer_wat_mean_opt.shape[1]))\n #plt.figure()\n #plt.axis('off')\n #plt.imshow(mask_,'gray',interpolation='none')\n #plt.title(\"True label: {}, Predic. label: {}\".format(lab_seg[in_mask],(RMSE_>th_res).astype('uint8')))\n #plt.show()\n \n in_mask += 1\n\n#### Segmentation evaluation result over all segmentations\nRMSE = np.sqrt(np.sum((refer_wat_mean_opt[1] - refer_seg[:,1])**2,axis=1)/(refer_wat_mean_opt.shape[1]))\npred_seg = RMSE > th_res #Apply threshold\ncomp_seg = np.logical_not(np.logical_xor(pred_seg,lab_seg)) #Comparation method result with true labels\nacc_p = np.sum(comp_seg)/(1.0*len(comp_seg))\nprint \"Final accuracy on pixel-based {} segmentations: {}\".format(len(comp_seg),acc_p)",
"Discussion and conclusion\nIn this work, a method for segmentation error detection in large datasets was proposed based-on shape signature. RMSE was used for comparison between signatures. Signature can be extracted in various resolutions but optimal resolution (ls=0.09) was chosen in order to get maximum separation between correct RMSE and erroneous RMSE. In this optimal resolution, threshold was fixed at 29.4 allowing separation of the two segmentation classes. The method achieved 95% of accuracy on both, the test Watershed segmentations and the test datasets: ROQS and pixel-based.\n40 Watershed segmentations were used to generation and configuration of the mean correct signature because of the greater number of erroneous segmentations and major variability on the error shape in this dataset. Because the signature holds the CC shape, the method can be extended to new datasets, segmented with any method. Accuracy and generalization can be improved varying the segmentations used to generate and adjust the mean correct signature."
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
vasco-da-gama/ros_hadoop | doc/Tutorial.ipynb | apache-2.0 | [
"RosbagInputFormat\nRosbagInputFormat is an open source splitable Hadoop InputFormat for the rosbag file format.\n\nUsage from Spark (pyspark)\nExample data can be found for instance at https://github.com/udacity/self-driving-car/tree/master/datasets published under MIT License.\nCheck that the rosbag file version is V2.0\nThe code you cloned is located in /opt/ros_hadoop/master while the latest release is in /opt/ros_hadoop/latest\n../lib/rosbaginputformat.jar is a symlink to a recent version. You can replace it with the version you would like to test.\nbash\njava -jar ../lib/rosbaginputformat.jar --version -f /opt/ros_hadoop/master/dist/HMB_4.bag\nExtract the index as configuration\nThe index is a very very small configuration file containing a protobuf array that will be given in the job configuration.\nNote that the operation will not process and it will not parse the whole bag file, but will simply seek to the required offset.",
"%%bash\necho -e \"Current working directory: $(pwd)\\n\\n\"\n\ntree -d -L 2 /opt/ros_hadoop/\n\n%%bash\n# assuming you start the notebook in the doc/ folder of master (default Dockerfile build)\njava -jar ../lib/rosbaginputformat.jar -f /opt/ros_hadoop/master/dist/HMB_4.bag",
"This will generate a very small file named HMB_4.bag.idx.bin in the same folder.\nCopy the bag file in HDFS\nUsing your favorite tool put the bag file in your working HDFS folder.\nNote: keep the index json file as configuration to your jobs, do not put small files in HDFS.\nFor convenience we already provide an example file (/opt/ros_hadoop/master/dist/HMB_4.bag) in the HDFS under /user/root/\nbash\nhdfs dfs -put /opt/ros_hadoop/master/dist/HMB_4.bag\nhdfs dfs -ls\nProcess the ros bag file in Spark using the RosbagInputFormat\n\nCreate the Spark Session or get an existing one",
"from pyspark import SparkContext, SparkConf\nfrom pyspark.sql import SparkSession\n\nsparkConf = SparkConf()\nsparkConf.setMaster(\"local[*]\")\nsparkConf.setAppName(\"ros_hadoop\")\nsparkConf.set(\"spark.jars\", \"../lib/protobuf-java-3.3.0.jar,../lib/rosbaginputformat.jar,../lib/scala-library-2.11.8.jar\")\n\nspark = SparkSession.builder.config(conf=sparkConf).getOrCreate()\nsc = spark.sparkContext",
"Create an RDD from the Rosbag file\nNote: your HDFS address might differ.",
"fin = sc.newAPIHadoopFile(\n path = \"hdfs://127.0.0.1:9000/user/root/HMB_4.bag\",\n inputFormatClass = \"de.valtech.foss.RosbagMapInputFormat\",\n keyClass = \"org.apache.hadoop.io.LongWritable\",\n valueClass = \"org.apache.hadoop.io.MapWritable\",\n conf = {\"RosbagInputFormat.chunkIdx\":\"/opt/ros_hadoop/master/dist/HMB_4.bag.idx.bin\"})",
"Interpret the Messages\nTo interpret the messages we need the connections.\nWe could get the connections as configuration as well. At the moment we decided to collect the connections into Spark driver in a dictionary and use it in the subsequent RDD actions. Note in the next version of the RosbagInputFormater alternative implementations will be given.\nCollect the connections from all Spark partitions of the bag file into the Spark driver",
"conn_a = fin.filter(lambda r: r[1]['header']['op'] == 7).map(lambda r: r[1]).collect()\nconn_d = {str(k['header']['topic']):k for k in conn_a}\n# see topic names\nconn_d.keys()",
"Load the python map functions from src/main/python/functions.py",
"%run -i ../src/main/python/functions.py",
"Use of msg_map to apply a function on all messages\nPython rosbag.bag needs to be installed on all Spark workers.\nThe msg_map function (from src/main/python/functions.py) takes three arguments:\n1. r = the message or RDD record Tuple\n2. func = a function (default str) to apply to the ROS message\n3. conn = a connection to specify what topic to process",
"%matplotlib nbagg \n# use %matplotlib notebook in python3\nfrom functools import partial\nimport pandas as pd\nimport numpy as np\n\n# Take messages from '/imu/data' topic using default str func\nrdd = fin.flatMap(\n partial(msg_map, conn=conn_d['/imu/data'])\n)\n\nprint(rdd.take(1)[0])",
"Image data from camera messages\nAn example of taking messages using a func other than default str.\nIn our case we apply a lambda to messages from from '/center_camera/image_color/compressed' topic. As usual with Spark the operation will happen in parallel on all workers.",
"from PIL import Image\nfrom io import BytesIO\n\nres = fin.flatMap(\n partial(msg_map, func=lambda r: r.data, conn=conn_d['/center_camera/image_color/compressed'])\n).take(50)\n \nImage.open(BytesIO(res[48]))",
"Plot fuel level\nThe topic /vehicle/fuel_level_report contains 2215 ROS messages. Let us plot the header.stamp in seconds vs. fuel_level using a pandas dataframe",
"def f(msg):\n return (msg.header.stamp.secs, msg.fuel_level)\n\nd = fin.flatMap(\n partial(msg_map, func=f, conn=conn_d['/vehicle/fuel_level_report'])\n).toDF().toPandas()\n\nd.set_index('_1').plot()",
"Machine Learning models on Spark workers\nA dot product Keras \"model\" for each message from a topic. We will compare it with the one computed with numpy.\nNote that the imports happen in the workers and not in driver. On the other hand the connection dictionary is sent over the closure.",
"def f(msg):\n from keras.layers import dot, Dot, Input\n from keras.models import Model\n \n linear_acceleration = {\n 'x': msg.linear_acceleration.x,\n 'y': msg.linear_acceleration.y,\n 'z': msg.linear_acceleration.z,\n }\n \n linear_acceleration_covariance = np.array(msg.linear_acceleration_covariance)\n \n i1 = Input(shape=(3,))\n i2 = Input(shape=(3,))\n o = dot([i1,i2], axes=1)\n \n model = Model([i1,i2], o)\n \n # return a tuple with (numpy dot product, keras dot \"predict\")\n return (\n np.dot(linear_acceleration_covariance.reshape(3,3), \n [linear_acceleration['x'], linear_acceleration['y'], linear_acceleration['z']]),\n model.predict([\n np.array([[ linear_acceleration['x'], linear_acceleration['y'], linear_acceleration['z'] ]]),\n linear_acceleration_covariance.reshape((3,3))])\n )\n\nfin.flatMap(partial(msg_map, func=f, conn=conn_d['/vehicle/imu/data_raw'])).take(5)\n\n# tuple with (numpy dot product, keras dot \"predict\")",
"One can of course sample and collect the data in the driver to train a model.\nNote that the msg is the most granular unit but you could of course replace the flatMap with a mapPartitions\nAnother option would be to have a map.reduceByKey before the flatMap so that the function argument would be a whole interval instead of a msg."
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
GoogleCloudPlatform/training-data-analyst | courses/machine_learning/deepdive2/recommendation_systems/labs/featurization.ipynb | apache-2.0 | [
"Using side features: feature preprocessing\nLearning Objectives\n\nTurning categorical features into embeddings.\nNormalizing continuous features.\nProcessing text features.\nBuild a User and Movie model.\n\nIntroduction\nOne of the great advantages of using a deep learning framework to build recommender models is the freedom to build rich, flexible feature representations.\nThe first step in doing so is preparing the features, as raw features will usually not be immediately usable in a model.\nFor example:\n\nUser and item ids may be strings (titles, usernames) or large, noncontiguous integers (database IDs).\nItem descriptions could be raw text.\nInteraction timestamps could be raw Unix timestamps.\n\nThese need to be appropriately transformed in order to be useful in building models:\n\nUser and item ids have to be translated into embedding vectors: high-dimensional numerical representations that are adjusted during training to help the model predict its objective better.\nRaw text needs to be tokenized (split into smaller parts such as individual words) and translated into embeddings.\nNumerical features need to be normalized so that their values lie in a small interval around 0.\n\nFortunately, by using TensorFlow we can make such preprocessing part of our model rather than a separate preprocessing step. This is not only convenient, but also ensures that our pre-processing is exactly the same during training and during serving. This makes it safe and easy to deploy models that include even very sophisticated pre-processing.\nIn this notebook, we are going to focus on recommenders and the preprocessing we need to do on the MovieLens dataset. If you're interested in a larger tutorial without a recommender system focus, have a look at the full Keras preprocessing guide.\nEach learning objective will correspond to a #TODO in this student lab notebook -- try to complete this notebook first and then review the solution notebook\nThe MovieLens dataset\nLet's first have a look at what features we can use from the MovieLens dataset:",
"!pip install -q --upgrade tensorflow-datasets",
"Please re-run the above cell if you are getting any incompatible warnings and errors.",
"import pprint\n\nimport tensorflow_datasets as tfds\n\nratings = tfds.load(\"movielens/100k-ratings\", split=\"train\")\n\nfor x in ratings.take(1).as_numpy_iterator():\n pprint.pprint(x)",
"There are a couple of key features here:\n\nMovie title is useful as a movie identifier.\nUser id is useful as a user identifier.\nTimestamps will allow us to model the effect of time.\n\nThe first two are categorical features; timestamps are a continuous feature.\nTurning categorical features into embeddings\nA categorical feature is a feature that does not express a continuous quantity, but rather takes on one of a set of fixed values.\nMost deep learning models express these feature by turning them into high-dimensional vectors. During model training, the value of that vector is adjusted to help the model predict its objective better.\nFor example, suppose that our goal is to predict which user is going to watch which movie. To do that, we represent each user and each movie by an embedding vector. Initially, these embeddings will take on random values - but during training, we will adjust them so that embeddings of users and the movies they watch end up closer together.\nTaking raw categorical features and turning them into embeddings is normally a two-step process:\n\nFirstly, we need to translate the raw values into a range of contiguous integers, normally by building a mapping (called a \"vocabulary\") that maps raw values (\"Star Wars\") to integers (say, 15).\nSecondly, we need to take these integers and turn them into embeddings.\n\nDefining the vocabulary\nThe first step is to define a vocabulary. We can do this easily using Keras preprocessing layers.",
"import numpy as np\nimport tensorflow as tf\n\nmovie_title_lookup = tf.keras.layers.experimental.preprocessing.StringLookup()",
"The layer itself does not have a vocabulary yet, but we can build it using our data.",
"movie_title_lookup.adapt(ratings.map(lambda x: x[\"movie_title\"]))\n\nprint(f\"Vocabulary: {movie_title_lookup.get_vocabulary()[:3]}\")",
"Once we have this we can use the layer to translate raw tokens to embedding ids:",
"movie_title_lookup([\"Star Wars (1977)\", \"One Flew Over the Cuckoo's Nest (1975)\"])",
"Note that the layer's vocabulary includes one (or more!) unknown (or \"out of vocabulary\", OOV) tokens. This is really handy: it means that the layer can handle categorical values that are not in the vocabulary. In practical terms, this means that the model can continue to learn about and make recommendations even using features that have not been seen during vocabulary construction.\nUsing feature hashing\nIn fact, the StringLookup layer allows us to configure multiple OOV indices. If we do that, any raw value that is not in the vocabulary will be deterministically hashed to one of the OOV indices. The more such indices we have, the less likley it is that two different raw feature values will hash to the same OOV index. Consequently, if we have enough such indices the model should be able to train about as well as a model with an explicit vocabulary without the disadvantage of having to maintain the token list.\nWe can take this to its logical extreme and rely entirely on feature hashing, with no vocabulary at all. This is implemented in the tf.keras.layers.experimental.preprocessing.Hashing layer.",
"# We set up a large number of bins to reduce the chance of hash collisions.\nnum_hashing_bins = 200_000\n\nmovie_title_hashing = tf.keras.layers.experimental.preprocessing.Hashing(\n num_bins=num_hashing_bins\n)",
"We can do the lookup as before without the need to build vocabularies:",
"movie_title_hashing([\"Star Wars (1977)\", \"One Flew Over the Cuckoo's Nest (1975)\"])",
"Defining the embeddings\nNow that we have integer ids, we can use the Embedding layer to turn those into embeddings.\nAn embedding layer has two dimensions: the first dimension tells us how many distinct categories we can embed; the second tells us how large the vector representing each of them can be.\nWhen creating the embedding layer for movie titles, we are going to set the first value to the size of our title vocabulary (or the number of hashing bins). The second is up to us: the larger it is, the higher the capacity of the model, but the slower it is to fit and serve.",
"# Turns positive integers (indexes) into dense vectors of fixed size.\nmovie_title_embedding = # TODO: Your code goes here\n # Let's use the explicit vocabulary lookup.\n input_dim=movie_title_lookup.vocab_size(),\n output_dim=32\n)",
"We can put the two together into a single layer which takes raw text in and yields embeddings.",
"movie_title_model = tf.keras.Sequential([movie_title_lookup, movie_title_embedding])",
"Just like that, we can directly get the embeddings for our movie titles:",
"movie_title_model([\"Star Wars (1977)\"])",
"We can do the same with user embeddings:",
"user_id_lookup = tf.keras.layers.experimental.preprocessing.StringLookup()\nuser_id_lookup.adapt(ratings.map(lambda x: x[\"user_id\"]))\n\nuser_id_embedding = tf.keras.layers.Embedding(user_id_lookup.vocab_size(), 32)\n\nuser_id_model = tf.keras.Sequential([user_id_lookup, user_id_embedding])",
"Normalizing continuous features\nContinuous features also need normalization. For example, the timestamp feature is far too large to be used directly in a deep model:",
"for x in ratings.take(3).as_numpy_iterator():\n print(f\"Timestamp: {x['timestamp']}.\")",
"We need to process it before we can use it. While there are many ways in which we can do this, discretization and standardization are two common ones.\nStandardization\nStandardization rescales features to normalize their range by subtracting the feature's mean and dividing by its standard deviation. It is a common preprocessing transformation.\nThis can be easily accomplished using the tf.keras.layers.experimental.preprocessing.Normalization layer:",
"# Feature-wise normalization of the data.\ntimestamp_normalization = # TODO: Your code goes here\ntimestamp_normalization.adapt(ratings.map(lambda x: x[\"timestamp\"]).batch(1024))\n\nfor x in ratings.take(3).as_numpy_iterator():\n print(f\"Normalized timestamp: {timestamp_normalization(x['timestamp'])}.\")",
"Discretization\nAnother common transformation is to turn a continuous feature into a number of categorical features. This makes good sense if we have reasons to suspect that a feature's effect is non-continuous.\nTo do this, we first need to establish the boundaries of the buckets we will use for discretization. The easiest way is to identify the minimum and maximum value of the feature, and divide the resulting interval equally:",
"max_timestamp = ratings.map(lambda x: x[\"timestamp\"]).reduce(\n tf.cast(0, tf.int64), tf.maximum).numpy().max()\nmin_timestamp = ratings.map(lambda x: x[\"timestamp\"]).reduce(\n np.int64(1e9), tf.minimum).numpy().min()\n\ntimestamp_buckets = np.linspace(\n min_timestamp, max_timestamp, num=1000)\n\nprint(f\"Buckets: {timestamp_buckets[:3]}\")",
"Given the bucket boundaries we can transform timestamps into embeddings:",
"timestamp_embedding_model = tf.keras.Sequential([\n tf.keras.layers.experimental.preprocessing.Discretization(timestamp_buckets.tolist()),\n tf.keras.layers.Embedding(len(timestamp_buckets) + 1, 32)\n])\n\nfor timestamp in ratings.take(1).map(lambda x: x[\"timestamp\"]).batch(1).as_numpy_iterator():\n print(f\"Timestamp embedding: {timestamp_embedding_model(timestamp)}.\") ",
"Processing text features\nWe may also want to add text features to our model. Usually, things like product descriptions are free form text, and we can hope that our model can learn to use the information they contain to make better recommendations, especially in a cold-start or long tail scenario.\nWhile the MovieLens dataset does not give us rich textual features, we can still use movie titles. This may help us capture the fact that movies with very similar titles are likely to belong to the same series.\nThe first transformation we need to apply to text is tokenization (splitting into constituent words or word-pieces), followed by vocabulary learning, followed by an embedding.\nThe Keras tf.keras.layers.experimental.preprocessing.TextVectorization layer can do the first two steps for us:",
"# Text vectorization layer.\ntitle_text = # TODO: Your code goes here\ntitle_text.adapt(ratings.map(lambda x: x[\"movie_title\"]))",
"Let's try it out:",
"for row in ratings.batch(1).map(lambda x: x[\"movie_title\"]).take(1):\n print(title_text(row))",
"Each title is translated into a sequence of tokens, one for each piece we've tokenized.\nWe can check the learned vocabulary to verify that the layer is using the correct tokenization:",
"title_text.get_vocabulary()[40:45]",
"This looks correct: the layer is tokenizing titles into individual words.\nTo finish the processing, we now need to embed the text. Because each title contains multiple words, we will get multiple embeddings for each title. For use in a downstream model these are usually compressed into a single embedding. Models like RNNs or Transformers are useful here, but averaging all the words' embeddings together is a good starting point.\nPutting it all together\nWith these components in place, we can build a model that does all the preprocessing together.\nUser model\nThe full user model may look like the following:",
"class UserModel(tf.keras.Model):\n \n def __init__(self):\n super().__init__()\n\n self.user_embedding = tf.keras.Sequential([\n user_id_lookup,\n tf.keras.layers.Embedding(user_id_lookup.vocab_size(), 32),\n ])\n self.timestamp_embedding = tf.keras.Sequential([\n tf.keras.layers.experimental.preprocessing.Discretization(timestamp_buckets.tolist()),\n tf.keras.layers.Embedding(len(timestamp_buckets) + 2, 32)\n ])\n self.normalized_timestamp = tf.keras.layers.experimental.preprocessing.Normalization()\n\n def call(self, inputs):\n\n # Take the input dictionary, pass it through each input layer,\n # and concatenate the result.\n return tf.concat([\n self.user_embedding(inputs[\"user_id\"]),\n self.timestamp_embedding(inputs[\"timestamp\"]),\n self.normalized_timestamp(inputs[\"timestamp\"])\n ], axis=1)",
"Let's try it out:",
"user_model = # TODO: Your code goes here\n\nuser_model.normalized_timestamp.adapt(\n ratings.map(lambda x: x[\"timestamp\"]).batch(128))\n\nfor row in ratings.batch(1).take(1):\n print(f\"Computed representations: {user_model(row)[0, :3]}\")",
"Movie model\nWe can do the same for the movie model:",
"class MovieModel(tf.keras.Model):\n \n def __init__(self):\n super().__init__()\n\n max_tokens = 10_000\n\n self.title_embedding = tf.keras.Sequential([\n movie_title_lookup,\n tf.keras.layers.Embedding(movie_title_lookup.vocab_size(), 32)\n ])\n self.title_text_embedding = tf.keras.Sequential([\n tf.keras.layers.experimental.preprocessing.TextVectorization(max_tokens=max_tokens),\n tf.keras.layers.Embedding(max_tokens, 32, mask_zero=True),\n # We average the embedding of individual words to get one embedding vector\n # per title.\n tf.keras.layers.GlobalAveragePooling1D(),\n ])\n\n def call(self, inputs):\n return tf.concat([\n self.title_embedding(inputs[\"movie_title\"]),\n self.title_text_embedding(inputs[\"movie_title\"]),\n ], axis=1)",
"Let's try it out:",
"movie_model = # TODO: Your code goes here\n\nmovie_model.title_text_embedding.layers[0].adapt(\n ratings.map(lambda x: x[\"movie_title\"]))\n\nfor row in ratings.batch(1).take(1):\n print(f\"Computed representations: {movie_model(row)[0, :3]}\")",
"Next steps\nWith the two models above we've taken the first steps to representing rich features in a recommender model: to take this further and explore how these can be used to build an effective deep recommender model, take a look at our Deep Recommenders tutorial."
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
tgrammat/ML-Data_Challenges | Reinforcement-Learning/TDn-models/01.CliffWalking_with_TD(n)-models.ipynb | apache-2.0 | [
"Cliff Walking Problem solved with TD(n) Algorithms: Implementation & Comparisons\n1. Load Libraries & Define Environment",
"import gym\nimport random\nimport numpy as np\nimport pandas as pd\nimport seaborn as sns\nfrom matplotlib import pyplot as plt\nfrom collections import OrderedDict",
"The OpenAI Gym toolkit includes the below environment for the \"Cliff-Walking\" problem:",
"print('OpenAI Gym environments for Cliff Walking Problem:')\n[k for k in gym.envs.registry.env_specs.keys() if k.find('Cliff' , 0) >=0]",
"Load the Cliff-Walking environment:",
"env = gym.make('CliffWalking-v0')",
"This environment has to do about gridworld shown below, where the traveller initial position (x) and the target to achieve (reach T) has been flagged appropriately. In addition in a one of the edge of this gridwordld example there is a \"Cliff\" denoted with C. Reward is $-1$ on all transitions except those into the cliff region. Steppping into this region incurs a reward of $-100$ and sends the agent instantly back to the start. \nOnce the environment is initialized you get the situation below. This is an episodic (undiscounted) task with start at traveller's starting point, and it is completed either when the goal is achieved, that is the traveller manage to reach the target location, T, or she may happen to step into the cliff. In this case the environment is reseted in each initial state.",
"env.render()",
"Possible traveller's actions are of course her movements in this grid:\n- \"UP\": denoted by 0\n- \"RIGHT\": denoted by 1\n- \"DOWN\": denoted by 2\n- \"LEFT\": denoted by 3\nTo get the new state at every next step of an episode, you may pass the current action into the .step() method of the environment. The environment then will return a tuple (observation, reward, done, info) each of which are explained as below:\n- observation (object): agent's observation of the current environment\n- reward (float): amount of reward returned after previous action\n- done (bool): whether the episode has ended, in which case further step() calls will return undefined results\n- info (dict): contains auxiliary diagnostic information (helpful for debugging, and sometimes learning)\nNote: At termination of each episode, the programmer is responsible to reset the environment.\nFor further details concerning the CliffWalking-v0 environment of OpenAI Gym toolkit consult the docstring below.",
"help(env)",
"2. RL-Algorithms based on Temporal Difference TD(n): Prediction Problem\n2a. Load the \"Temporal Difference\" Python class\nLoad the Python class PlotUtils() which provides various plotting utilities and start a new instance.",
"%run ../PlotUtils.py\n\nplotutls = PlotUtils()",
"Load the Temporal Difference Python class, TemporalDifferenceUtils():",
"%run ../TDn_Utils.py",
"Instantiate the class for the environment of interest:",
"TD = TemporalDifferenceUtils(env)",
"2b. n-step TD Prediction (estimating $V \\approx v_{\\pi}$)\nWe define the functions below to help:\n1. compute the optimal state-action values of this problem, \n2. provide the optimal policy which is expected to be learned by the agent, and\n3. visualize the result to verify that everything have been configured correctly.",
"def cliff_walking_optimal_q_values(grid_height=4, grid_width=12, cliff_index = np.s_[:,:,:]):\n \n # Define the start position\n start = [grid_height - 1, 0] \n\n # Define the position of target dstination\n goal = [grid_height - 1, grid_width - 1]\n\n # Define a dictionary of possible actions\n actions_dict = {}\n actions = ['UP', 'RIGHT', 'DOWN', 'LEFT']\n for k, v in zip(actions, range(0, len(actions))):\n actions_dict[k] = v\n\n # Define a \"q_values\" array for grid-world of interest\n n_states = grid_height * grid_width\n n_actions = len(actions_dict)\n q_values = np.full((grid_height, grid_width, n_actions), fill_value=-100.)\n\n # Determine \"q_values\" of optimal policy\n n_steps = grid_width\n q_values[:cliff_index[0],:,actions_dict['RIGHT']] = np.arange(0, n_steps, 1)\n q_values[start[0], start[1], actions_dict['UP']] = 0.5\n q_values[:goal[0],goal[1],actions_dict['DOWN']] = n_steps\n \n return q_values\n\n\ndef cliff_walking_optimal_policy(env, state):\n active_q = cliff_walking_optimal_q_values(grid_height=4, grid_width=12, cliff_index = np.s_[3,1:12,:])\n active_q = active_q.reshape((env.observation_space.n, env.action_space.n))\n return TD.epsilon_greedy_policy(env, active_q, state, epsilon=0.)\n\n\n# print optimal policy\ndef print_optimal_policy(q_values, grid_height=4, grid_width=12):\n # Define a helper dictionary of actions\n actions_dict = {}\n actions = ['UP', 'RIGHT', 'DOWN', 'LEFT']\n for k, v in zip(actions, range(0, len(actions))):\n actions_dict[k] = v\n \n # Define the position of target dstination\n GOAL = [3, 11]\n \n # Reshape the \"q_values\" table to follow grid-world dimensionality\n q_values = q_values.reshape((grid_height, grid_width, len(actions)))\n \n optimal_policy = []\n for i in range(0, grid_height):\n optimal_policy.append([])\n for j in range(0, grid_width):\n if [i, j] == GOAL:\n optimal_policy[-1].append('G')\n continue\n bestAction = np.argmax(q_values[i, j, :])\n if bestAction == actions_dict['UP']:\n optimal_policy[-1].append('\\U00002191')\n elif bestAction == actions_dict['RIGHT']:\n optimal_policy[-1].append('\\U00002192')\n elif bestAction == actions_dict['DOWN']:\n optimal_policy[-1].append('\\U00002193')\n elif bestAction == actions_dict['LEFT']:\n optimal_policy[-1].append('\\U00002190')\n for row in optimal_policy:\n print(*row)",
"Verify that the optimal policy has been configured correctly.",
"active_q = cliff_walking_optimal_q_values(grid_height=4, grid_width=12, cliff_index = np.s_[3,1:12,:])\nprint_optimal_policy(active_q, grid_height=4, grid_width=12)",
"Use the temporal_difference_prediction() method to predict the state values if the agent let to follow a 5-step TD learning path.",
"runs=10; n_episodes = 100\ns_values = TD.temporal_difference_prediction(env, cliff_walking_optimal_policy,\n runs=runs, n_episodes=n_episodes, decimals=2,\n n_step=6, step_size=0.3, discount=1., epsilon=0.1)\n\ntitle = 'State-value Predictions\\n[Cliff-Walking task]'\nplotutls.plot_state_values(s_values, grid_height=4, grid_width=12, title=title)",
"3. RL-Algorithms based on Temporal Difference: On-Policy TD(n) Control\n3a. SARSA: On-Policy TD(n) Control",
"# Define TD(n) execution parameters\nruns = 10 # Number of Independent Runs\nn_episodes = 100 # Number of Episodes\n\n# Various n-steps SARSA algorithms to try\nprint('Determine the n-steps your are interested to explore...\\n')\nn_step_min = 2; n_step_max = 6\nn_steps = np.arange(n_step_min, n_step_max + 1)\nprint('n_steps: {}'.format(n_steps), '\\n')\n\n# various discount factors to try\ndiscount_fixed = 1.\nprint('Determine a fixed discount factor: {}\\n'.format(discount_fixed))\n\n# various step size parameters to try\nstep_size_fixed = 0.3\nprint('Determine a fixed step-size: {}\\n'.format(step_size_fixed))\n\n# various epsilon parameters to try\nepsilon_fixed = 0.1\nprint('Determine a fixed epsilon: {}\\n'.format(epsilon_fixed))\n\n# Create a mesh-grid of trials\nprint('Create a dictionary of the RL-models of interest...\\n')\nn_steps, discounts = np.meshgrid(n_steps, discount_fixed)\nn_steps = n_steps.flatten()\ndiscounts = discounts.flatten()\n\n# Create a dictionary of the RL-trials of interest\nRL_trials = {\"sarsa(0)\":\n {'epsilon': epsilon_fixed,\n 'step_size': step_size_fixed, 'discount': discount_fixed, 'n_step': 1}}\n\nfor n, trial in enumerate(list(zip(n_steps, discounts))):\n key = 'sarsa({})'.format(trial[0]-1)\n RL_trials[key] = {'epsilon': epsilon_fixed, \n 'step_size': step_size_fixed, 'discount': trial[1], 'n_step': trial[0]}\nprint('Number of RL-models to try: {}\\n'.format(len(RL_trials)))\n\nprint('Let all RL-models to be trained for {0:,} episodes and {1:,} independent runs...\\n'.format(int(n_episodes), int(runs)))\n\nrewards_per_trial_On_Policy_SARSA = OrderedDict((label, np.array([])) for label, _ in RL_trials.items())\nq_values_per_trial_On_Policy_SARSA = OrderedDict((label, np.array([])) for label, _ in RL_trials.items())\n\n\nfor trial, params_dict in RL_trials.items():\n \n # Read out parameters from \"params_dict\"\n epsilon = params_dict['epsilon']\n step_size = params_dict['step_size']\n discount = params_dict['discount']\n n_step = params_dict['n_step']\n \n # Apply SARSA [on-policy TD(n) Control]\n q_values, tot_rewards = TD.sarsa_on_policy_control(env,\n runs=runs, n_episodes=n_episodes, n_step=n_step,\n step_size=step_size, discount=discount, epsilon=epsilon)\n \n # Update \"rewards_per_trial\" and \"q_values_per_trial\" OrderedDicts\n rewards_per_trial_On_Policy_SARSA[trial] = tot_rewards\n q_values_per_trial_On_Policy_SARSA[trial] = q_values",
"Verify the learning curves of the RL-models we trained.",
"title = 'Efficiency of the RL Method\\n[SARSA On-Policy TD(n) Control]'\nplotutls.plot_learning_curve(rewards_per_trial_On_Policy_SARSA, title=title, \n cumulative_reward=True, lower_reward_ratio=None)",
"Visualize agent's move which is suggested by the solutions.",
"for trial in list(RL_trials.keys()):\n print('\\n', trial, ':')\n q_vals = q_values_per_trial_On_Policy_SARSA[trial]\n print_optimal_policy(q_vals, grid_height=4, grid_width=12)",
"3b. Expected SARSA: On-Policy TD(n) Control",
"# Define TD(n) execution parameters\nruns = 10 # Number of Independent Runs\nn_episodes = 100 # Number of Episodes\n\n# Various n-steps SARSA algorithms to try\nprint('Determine the n-steps your are interested to explore...\\n')\nn_step_min = 2; n_step_max = 6\nn_steps = np.arange(n_step_min, n_step_max + 1)\nprint('n_steps: {}'.format(n_steps), '\\n')\n\n# various discount factors to try\ndiscount_fixed = 1.\nprint('Determine a fixed discount factor: {}\\n'.format(discount_fixed))\n\n# various step size parameters to try\nstep_size_fixed = 0.3\nprint('Determine a fixed step-size: {}\\n'.format(step_size_fixed))\n\n# various epsilon parameters to try\nepsilon_fixed = 0.1\nprint('Determine a fixed epsilon: {}\\n'.format(epsilon_fixed))\n\n# Create a mesh-grid of trials\nprint('Create a dictionary of the RL-models of interest...\\n')\nn_steps, discounts = np.meshgrid(n_steps, discount_fixed)\nn_steps = n_steps.flatten()\ndiscounts = discounts.flatten()\n\n# Create a dictionary of the RL-trials of interest\nRL_trials = {\"sarsa(0)\":\n {'epsilon': epsilon_fixed,\n 'step_size': step_size_fixed, 'discount': discount_fixed, 'n_step': 1}}\n\nfor n, trial in enumerate(list(zip(n_steps, discounts))):\n key = 'sarsa({})'.format(trial[0]-1)\n RL_trials[key] = {'epsilon': epsilon_fixed, \n 'step_size': step_size_fixed, 'discount': trial[1], 'n_step': trial[0]}\nprint('Number of RL-models to try: {}\\n'.format(len(RL_trials)))\n\nprint('Let all RL-models to be trained for {0:,} episodes and {1:,} independent runs...\\n'.format(int(n_episodes), int(runs)))\n\nrewards_per_trial_On_Policy_ExpSARSA = OrderedDict((label, np.array([])) for label, _ in RL_trials.items())\nq_values_per_trial_On_Policy_ExpSARSA = OrderedDict((label, np.array([])) for label, _ in RL_trials.items())\n\n\nfor trial, params_dict in RL_trials.items():\n \n # Read out parameters from \"params_dict\"\n epsilon = params_dict['epsilon']\n step_size = params_dict['step_size']\n discount = params_dict['discount']\n n_step = params_dict['n_step']\n \n # Apply SARSA [on-policy TD(n) Control]\n q_values, tot_rewards = TD.sarsa_on_policy_control(env,\n runs=runs, n_episodes=n_episodes, n_step=n_step,\n expected_sarsa = True,\n step_size=step_size, discount=discount, epsilon=epsilon)\n \n # Update \"rewards_per_trial\" and \"q_values_per_trial\" OrderedDicts\n rewards_per_trial_On_Policy_ExpSARSA[trial] = tot_rewards\n q_values_per_trial_On_Policy_ExpSARSA[trial] = q_values",
"Verify the learning curves of the RL-models we trained.",
"title = 'Efficiency of the RL Method\\n[Expected SARSA Οn-policy TD(n) Control]'\nplotutls.plot_learning_curve(rewards_per_trial_On_Policy_ExpSARSA,title=title,\n cumulative_reward=True, lower_reward_ratio=None)",
"Visualize agent's move which is suggested by the solutions.",
"for trial in list(RL_trials.keys()):\n print('\\n', trial, ':')\n q_vals = q_values_per_trial_On_Policy_ExpSARSA[trial]\n print_optimal_policy(q_vals, grid_height=4, grid_width=12)",
"4. RL-Algorithms based on Temporal Difference: Off-Policy TD(n) Control\n4a. SARSA: Off-Policy TD(n) Control",
"# Define TD(n) execution parameters\nruns = 10 # Number of Independent Runs\nn_episodes = 500 # Number of Episodes\n\n# Various n-steps SARSA algorithms to try\nprint('Determine the n-steps your are interested to explore...\\n')\nn_step_min = 2; n_step_max = 6\nn_steps = np.arange(n_step_min, n_step_max + 1)\nprint('n_steps: {}'.format(n_steps), '\\n')\n\n# various discount factors to try\ndiscount_fixed = 1.\nprint('Determine a fixed discount factor: {}\\n'.format(discount_fixed))\n\n# various step size parameters to try\nstep_size_fixed = 0.8\nprint('Determine a fixed step-size: {}\\n'.format(step_size_fixed))\n\n# various epsilon parameters to try\nepsilon_fixed = 0.1\nprint('Determine a fixed epsilon: {}\\n'.format(epsilon_fixed))\n\n# Create a mesh-grid of trials\nprint('Create a dictionary of the RL-models of interest...\\n')\nn_steps, discounts = np.meshgrid(n_steps, discount_fixed)\nn_steps = n_steps.flatten()\ndiscounts = discounts.flatten()\n\n# Create a dictionary of the RL-trials of interest\nRL_trials = {\"sarsa(0)\":\n {'epsilon': epsilon_fixed,\n 'step_size': step_size_fixed, 'discount': discount_fixed, 'n_step': 1}}\n\nfor n, trial in enumerate(list(zip(n_steps, discounts))):\n key = 'sarsa({})'.format(trial[0]-1)\n RL_trials[key] = {'epsilon': epsilon_fixed, \n 'step_size': step_size_fixed, 'discount': trial[1], 'n_step': trial[0]}\nprint('Number of RL-models to try: {}\\n'.format(len(RL_trials)))\n\nprint('Let all RL-models to be trained for {0:,} episodes and {1:,} independent runs...\\n'.format(int(n_episodes), int(runs)))\n\nrewards_per_trial_Off_Policy_SARSA = OrderedDict((label, np.array([])) for label, _ in RL_trials.items())\nq_values_per_trial_Off_Policy_SARSA = OrderedDict((label, np.array([])) for label, _ in RL_trials.items())\n\n\nfor trial, params_dict in RL_trials.items():\n \n # Read out parameters from \"params_dict\"\n epsilon = params_dict['epsilon']\n step_size = params_dict['step_size']\n discount = params_dict['discount']\n n_step = params_dict['n_step']\n\n # Apply SARSA [on-policy TD(n) Control]\n q_values, tot_rewards = TD.sarsa_off_policy_control(env,\n runs=runs, n_episodes=n_episodes, n_step=n_step,\n step_size=step_size, discount=discount, epsilon=epsilon)\n \n # Update \"rewards_per_trial\" and \"q_values_per_trial\" OrderedDicts\n rewards_per_trial_Off_Policy_SARSA[trial] = tot_rewards\n q_values_per_trial_Off_Policy_SARSA[trial] = q_values",
"Verify the learning curves of the RL-models we trained.",
"title = 'Efficiency of the RL Method\\n[SARSA Off-Policy TD(n) Control]'\nplotutls.plot_learning_curve(rewards_per_trial_Off_Policy_SARSA,title=title,\n cumulative_reward=True, lower_reward_ratio=None)",
"Visualize agent's move which is suggested by the solutions.",
"for trial in list(RL_trials.keys()):\n print('\\n', trial, ':')\n q_vals = q_values_per_trial_Off_Policy_SARSA[trial]\n print_optimal_policy(q_vals, grid_height=4, grid_width=12)",
"4b. Expected SARSA: Off-Policy TD(n) Control",
"# Define TD(n) execution parameters\nruns = 10 # Number of Independent Runs\nn_episodes = 500 # Number of Episodes\n\n# Various n-steps SARSA algorithms to try\nprint('Determine the n-steps your are interested to explore...\\n')\nn_step_min = 2; n_step_max = 6\nn_steps = np.arange(n_step_min, n_step_max + 1)\nprint('n_steps: {}'.format(n_steps), '\\n')\n\n# various discount factors to try\ndiscount_fixed = 1.\nprint('Determine a fixed discount factor: {}\\n'.format(discount_fixed))\n\n# various step size parameters to try\nstep_size_fixed = 0.8\nprint('Determine a fixed step-size: {}\\n'.format(step_size_fixed))\n\n# various epsilon parameters to try\nepsilon_fixed = 0.1\nprint('Determine a fixed epsilon: {}\\n'.format(epsilon_fixed))\n\n# Create a mesh-grid of trials\nprint('Create a dictionary of the RL-models of interest...\\n')\nn_steps, discounts = np.meshgrid(n_steps, discount_fixed)\nn_steps = n_steps.flatten()\ndiscounts = discounts.flatten()\n\n# Create a dictionary of the RL-trials of interest\nRL_trials = {\"sarsa(0)\":\n {'epsilon': epsilon_fixed,\n 'step_size': step_size_fixed, 'discount': discount_fixed, 'n_step': 1}}\n\nfor n, trial in enumerate(list(zip(n_steps, discounts))):\n key = 'sarsa({})'.format(trial[0]-1)\n RL_trials[key] = {'epsilon': epsilon_fixed, \n 'step_size': step_size_fixed, 'discount': trial[1], 'n_step': trial[0]}\nprint('Number of RL-models to try: {}\\n'.format(len(RL_trials)))\n\nprint('Let all RL-models to be trained for {0:,} episodes and {1:,} independent runs...\\n'.format(int(n_episodes), int(runs)))\n\nrewards_per_trial_Off_Policy_ExpSARSA = OrderedDict((label, np.array([])) for label, _ in RL_trials.items())\nq_values_per_trial_Off_Policy_ExpSARSA = OrderedDict((label, np.array([])) for label, _ in RL_trials.items())\n\n\nfor trial, params_dict in RL_trials.items():\n \n # Read out parameters from \"params_dict\"\n epsilon = params_dict['epsilon']\n step_size = params_dict['step_size']\n discount = params_dict['discount']\n n_step = params_dict['n_step']\n \n # Apply SARSA [on-policy TD(n) Control]\n q_values, tot_rewards = TD.sarsa_off_policy_control(env,\n runs=runs, n_episodes=n_episodes, n_step=n_step,\n expected_sarsa=True,\n step_size=step_size, discount=discount, epsilon=epsilon)\n \n # Update \"rewards_per_trial\" and \"q_values_per_trial\" OrderedDicts \n rewards_per_trial_Off_Policy_ExpSARSA[trial] = tot_rewards\n q_values_per_trial_Off_Policy_ExpSARSA[trial] = q_values",
"Verify the learning curves of the RL-models we trained.",
"title = 'Efficiency of the RL Method\\n[Expected SARSA Off-Policy TD(n) Control]'\nplotutls.plot_learning_curve(rewards_per_trial_Off_Policy_ExpSARSA,title=title,\n cumulative_reward=True, lower_reward_ratio=None)",
"Visualize agent's move which is suggested by the solutions.",
"for trial in list(RL_trials.keys()):\n print('\\n', trial, ':')\n q_vals = q_values_per_trial_Off_Policy_ExpSARSA[trial]\n print_optimal_policy(q_vals, grid_height=4, grid_width=12)",
"5. A Unifying Algorithm: Off-Policy n-step $Q(\\sigma)$, dynamic $\\sigma$\nWe train below a $Q(\\sigma)$ algorithm following a 0 to 12-step temporal difference updates.\nThe discount factor of the MDP, the step size between the updates and the probability of the epsilon-greedy policy that this agent learns has been set as below:\n\ndiscount factor of MDP: $\\gamma=1$\nstep size between the updates: $\\alpha = 0.3$\nepsilon of epsilon-greedy policy: $\\varepsilon = 0.1$\n\nThe $\\sigma$ parameter which controls the degree of sampling followed in each update, and more specifically:\n\nOnce $\\sigma = 0$, results in a pure expectation without sampling, whereas \nOnce $\\sigma = 1$, results in the other extreme in full sampling,\n\nhas been set initially at $\\sigma = 0.5$, but with the option \"sigma = dynamic\" we asked from the .off_policy_q_sigma() function of the TDn_Utils class to change $\\sigma$ dynamically, towards larger/smaller values depending on the improvement achieved in terms of \"Commulative Mean Reward / Number of episodes\". The corrections in the parameter $\\sigma$ have been set to occur every 10 next episodes.",
"# Define TD(n) execution parameters\nruns = 10 # Number of Independent Runs\nn_episodes = 300 # Number of Episodes\n\n# Various n-steps SARSA algorithms to try\nprint('Determine the n-steps your are interested to explore...\\n')\nn_step_min = 2; n_step_max = 13\nn_steps = np.arange(n_step_min, n_step_max + 1)\nprint('n_steps: {}'.format(n_steps), '\\n')\n\n# various discount factors to try\ndiscount_fixed = 1.\nprint('Determine a fixed discount factor: {}\\n'.format(discount_fixed))\n\n# various step size parameters to try\nstep_size_fixed = 0.3\nprint('Determine a fixed step-size: {}\\n'.format(step_size_fixed))\n\n# various epsilon parameters to try\nepsilon_fixed = 0.1\nprint('Determine a fixed epsilon: {}\\n'.format(epsilon_fixed))\n\n# Determine a sigma prameter, controlling the degree of sampling at each step of the TD-n algorithm\nsigma = None\nif not sigma:\n sigma = 'Random Variable in [0,1] range'\nprint('Determine sigma: {}'.format(sigma))\nprint('[Note: Controls the degree of sampling at each step of the TD(n) algorithm]\\n')\n\n# Create a mesh-grid of trials\nprint('Create a dictionary of the RL-models of interest...\\n')\nn_steps, discounts = np.meshgrid(n_steps, discount_fixed)\nn_steps = n_steps.flatten()\ndiscounts = discounts.flatten()\n\n# Create a dictionary of the RL-trials of interest\nRL_trials = {\"0-step Q(σ)\":\n {'epsilon': epsilon_fixed,\n 'step_size': step_size_fixed, 'discount': discount_fixed, 'n_step': 1}}\n\nfor n, trial in enumerate(list(zip(n_steps, discounts))):\n key = '{}-step Q(σ)'.format(trial[0]-1)\n RL_trials[key] = {'epsilon': epsilon_fixed, \n 'step_size': step_size_fixed, 'discount': trial[1], 'n_step': trial[0]}\nprint('Number of RL-models to try: {}\\n'.format(len(RL_trials)))\n\nprint('Let all RL-models to be trained for {0:,} episodes and {1:,} independent runs...\\n'.format(int(n_episodes), int(runs)))\n\nrewards_per_trial_Off_Policy_QSigma = OrderedDict((label, np.array([])) for label, _ in RL_trials.items())\nq_values_per_trial_Off_Policy_QSigma = OrderedDict((label, np.array([])) for label, _ in RL_trials.items())\n\n\nfor trial, params_dict in RL_trials.items():\n \n # Read out parameters from \"params_dict\"\n epsilon = params_dict['epsilon']\n step_size = params_dict['step_size']\n discount = params_dict['discount']\n n_step = params_dict['n_step']\n \n # Apply SARSA [on-policy TD(n) Control]\n q_values, tot_rewards = TD.off_policy_q_sigma(env,\n runs=runs, n_episodes=n_episodes, n_step=n_step, sigma='dynamic',\n step_size=step_size, discount=discount, epsilon=epsilon)\n \n # Update \"rewards_per_trial\" and \"q_values_per_trial\" OrderedDicts\n rewards_per_trial_Off_Policy_QSigma[trial] = tot_rewards\n q_values_per_trial_Off_Policy_QSigma[trial] = q_values",
"Reward achieved with the number of episodes: first six trials, 0 to 5-step $Q(\\sigma)$",
"first6_RL_trials = list(RL_trials.keys())[:6]\nrewards_per_trial = OrderedDict((label, rewards_per_trial_Off_Policy_QSigma[label]) for label in first6_RL_trials)\ntitle = 'Efficiency of the RL Method\\n[n-step $\\mathbf{Q(\\sigma)}$ (Off-Policy TD(n) Control, first 6 trials)]'\nplotutls.plot_learning_curve(rewards_per_trial, title=title,\n cumulative_reward=True, lower_reward_ratio=None)\n\nfor trial in first6_RL_trials:\n print('\\n', trial, ':')\n q_vals = q_values_per_trial_Off_Policy_QSigma[trial]\n print_optimal_policy(q_vals, grid_height=4, grid_width=12)",
"Reward achieved with the number of episodes: remaining six trials, 6 to 12-step $Q(\\sigma)$",
"rest_RL_trials = list(RL_trials.keys())[5:] #+ [first6_RL_trials[0]]\nrewards_per_trial = OrderedDict((label, rewards_per_trial_Off_Policy_QSigma[label]) for label in rest_RL_trials)\ntitle = 'Efficiency of the RL Method\\n[n-step $\\mathbf{Q(\\sigma)}$ (Off-Policy TD(n) Control, rest 6 trials)]'\nplotutls.plot_learning_curve(rewards_per_trial, title=title,\n cumulative_reward=True, lower_reward_ratio=None)\n\nfor trial in rest_RL_trials:\n print('\\n', trial, ':')\n q_vals = q_values_per_trial_Off_Policy_QSigma[trial]\n print_optimal_policy(q_vals, grid_height=4, grid_width=12)"
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
JENkt4k/pynotes-general | Linux Tools & Tricks.ipynb | gpl-3.0 | [
"Python magic\nhttps://ipython.org/ipython-doc/3/interactive/magics.html",
"%colors Linux\n\n%history\n\n%dirs\n\n%magic\n\n%pwd\n\n%quickref",
"Sticky keys causing issues? Need Password feedback?\nHad an issue with my keyboard where a few keys were sticking, worn, and they weren't detected or showed up twice. Constant password auth failures so a quick Google search returned the following results:\n1) Change Password Entry To Show * (asterix) instead of no feed back - less secure!\nbash \n #run command\n sudo visudo\nbash\n #change\n Defaults env_reset\n #to\n Defaults env_reset,pwfeedback\n2) Change from VI to Nano or Emacs etc..\nbash \n export VISUAL=nano; visudo\nNotes * use spaces not tabs\nChanging Git author info\nsource\nCheck out clean repo:\nbash \ngit clone --bare https://github.com/[user]/[repo].git\ncd [repo].git\ncreate git-author-rewrite.sh file:\n```bash\n!/bin/sh\ngit filter-branch --env-filter '\nOLD_EMAIL=\"[email protected]\"\nCORRECT_NAME=\"Your Correct Name\"\nCORRECT_EMAIL=\"[email protected]\"\nif [ \"$GIT_COMMITTER_EMAIL\" = \"$OLD_EMAIL\" ]\nthen\n export GIT_COMMITTER_NAME=\"$CORRECT_NAME\"\n export GIT_COMMITTER_EMAIL=\"$CORRECT_EMAIL\"\nfi\nif [ \"$GIT_AUTHOR_EMAIL\" = \"$OLD_EMAIL\" ]\nthen\n export GIT_AUTHOR_NAME=\"$CORRECT_NAME\"\n export GIT_AUTHOR_EMAIL=\"$CORRECT_EMAIL\"\nfi\n' --tag-name-filter cat -- --branches --tags\n```\nmake executable:\nbash \nchmod +x create git-author-rewrite.sh\nreview changes:\nbash\ngit log\npush changes:\nbash\ngit push --force --tags origin 'refs/heads/*'\ncleanup:\nbash\ncd ..\nrm -rf [repo].git\nManaging Remotes\n(Managing Remotes Documentation)[https://git-scm.com/book/ch2-5.html]\n(Multiple push remotes)[http://stackoverflow.com/questions/14290113/git-pushing-code-to-two-remotes]\nShow current remotes:\nbash\ngit remote -v\nAdd a \"all\" remote\nbash\ngit remote add all git://original/repo.git\ngit remote -v\nAdd another repo to the remote\nbash\ngit remote set-url --add --push all git://another/repo.git\nThis will replace you orignal push, so simply add it back in\nbash\ngit remote set-url --add --push all git://original/repo.git\nNow you should see both pushes\nbash\ngit remote -v\nGit general\nQuick Refference\nWhats my name?\nLinux Kernel Version\nbash\nuname -r\nUbuntu version\nbash\nlsb_release -sc",
"print \"this is a test of the emergency broadcast system\"\n\n%%html\n\n<style>\n\nhtml {\n font-size: 62.5% !important; }\nbody {\n font-size: 1.5em !important; /* currently ems cause chrome bug misinterpreting rems on body element */\n line-height: 1.6 !important;\n font-weight: 400 !important;\n font-family: \"Raleway\", \"HelveticaNeue\", \"Helvetica Neue\", Helvetica, Arial, sans-serif !important;\n color: #222 !important; }\n\ndiv{ border-radius: 0px !important; }\ndiv.CodeMirror-sizer{ background: rgb(244, 244, 248) !important; }\ndiv.input_area{ background: rgb(244, 244, 248) !important; }\n\ndiv.out_prompt_overlay:hover{ background: rgb(244, 244, 248) !important; }\ndiv.input_prompt:hover{ background: rgb(244, 244, 248) !important; }\n\nh1, h2, h3, h4, h5, h6 {\n color: #333 !important;\n margin-top: 0 !important;\n margin-bottom: 2rem !important;\n font-weight: 300 !important; }\nh1 { font-size: 4.0rem !important; line-height: 1.2 !important; letter-spacing: -.1rem !important;}\nh2 { font-size: 3.6rem !important; line-height: 1.25 !important; letter-spacing: -.1rem !important; }\nh3 { font-size: 3.0rem !important; line-height: 1.3 !important; letter-spacing: -.1rem !important; }\nh4 { font-size: 2.4rem !important; line-height: 1.35 !important; letter-spacing: -.08rem !important; }\nh5 { font-size: 1.8rem !important; line-height: 1.5 !important; letter-spacing: -.05rem !important; }\nh6 { font-size: 1.5rem !important; line-height: 1.6 !important; letter-spacing: 0 !important; }\n\n@media (min-width: 550px) {\n h1 { font-size: 5.0rem !important; }\n h2 { font-size: 4.2rem !important; }\n h3 { font-size: 3.6rem !important; }\n h4 { font-size: 3.0rem !important; }\n h5 { font-size: 2.4rem !important; }\n h6 { font-size: 1.5rem !important; }\n}\n\np {\n margin-top: 0 !important; }\n \na {\n color: #1EAEDB !important; }\na:hover {\n color: #0FA0CE !important; }\n \ncode {\n padding: .2rem .5rem !important;\n margin: 0 .2rem !important;\n font-size: 90% !important;\n white-space: nowrap !important;\n background: #F1F1F1 !important;\n border: 1px solid #E1E1E1 !important;\n border-radius: 4px !important; }\npre > code {\n display: block !important;\n padding: 1rem 1.5rem !important;\n white-space: pre !important; }\n \nbutton{ border-radius: 0px !important; }\n.navbar-inner{ background-image: none !important; }\nselect, textarea{ border-radius: 0px !important; }\n\n</style>",
"Get the Active Window on Linux\nGet active window title in X\n- Original Code had the following error: TypeError: can't use a string pattern on a bytes-like object\n<br/>\nObtain Active window using Python\n\n\nCorected code is now here\n\n\n\"import wnck\" only works with python 2.x, using python3.x pypie and wx were the only options I found so far",
"import sys\nimport os\nfrom subprocess import PIPE, Popen\nimport re\n\ndef get_active_window_title():\n root = Popen(['xprop', '-root', '_NET_ACTIVE_WINDOW'], stdout=PIPE)\n\n for line in root.stdout:\n m = re.search(b'^_NET_ACTIVE_WINDOW.* ([\\w]+)$', line)\n if m != None:\n id_ = m.group(1)\n id_w = Popen(['xprop', '-id', id_, 'WM_NAME'], stdout=PIPE)\n break\n\n if id_w != None:\n for line in id_w.stdout:\n match = re.match(b\"WM_NAME\\(\\w+\\) = (?P<name>.+)$\", line)\n if match != None:\n return match.group(\"name\")\n\n return \"Active window not found\"\n\nget_active_window_title()\n\nimport time\ntime.sleep(2)\nget_active_window_title()"
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
pdamodaran/yellowbrick | examples/jkeung/testing.ipynb | apache-2.0 | [
"ROC Curve Example\nInspired by: http://scikit-learn.org/stable/auto_examples/model_selection/plot_roc.html\nThis is an example of how to create an ROC Curvs in sklearn vs using the Yellowbrick libarary. The data used is the breast cancer dataset that is included in sklearn.\nImport Libraries",
"import numpy as np\nimport matplotlib.pyplot as plt\n\nfrom sklearn import svm, datasets\nfrom sklearn.metrics import roc_curve, auc\nfrom sklearn.model_selection import train_test_split",
"Import some data to play with",
"bc = datasets.load_breast_cancer()\nX = bc.data\ny = bc.target\n\nrandom_state = np.random.RandomState(0)\n# shuffle and split training and test sets\nX_train, X_test, y_train, y_test = train_test_split(X, y, test_size=.5,\n random_state=random_state)",
"Split the data and prepare data for ROC Curve",
"# Learn to predict each class against the other\nclassifier = svm.SVC(kernel='linear', probability=True, random_state=random_state)\ny_score = classifier.fit(X_train, y_train).decision_function(X_test)\n\n\n# Compute ROC curve and ROC area for each class\nfpr, tpr, _ = roc_curve(y_test, y_score)\nroc_auc = auc(fpr, tpr)\n",
"Plot ROC Curve using Matplotlib",
"plt.figure()\nlw = 2\nplt.plot(fpr, tpr, color='darkorange',\n lw=lw, label='ROC curve (area = %0.2f)' % roc_auc)\nplt.plot([0, 1], [0, 1], color='navy', lw=lw, linestyle='--')\nplt.xlim([0.0, 1.0])\nplt.ylim([0.0, 1.05])\nplt.xlabel('False Positive Rate')\nplt.ylabel('True Positive Rate')\nplt.title('Receiver operating characteristic example')\nplt.legend(loc=\"lower right\")\nplt.show()",
"Create ROCAUC using YellowBrick",
"import yellowbrick as yb \nfrom yellowbrick.classifier import ROCAUC\n\nvisualizer = ROCAUC(classifier)\n\nvisualizer.fit(X_train, y_train) # Fit the training data to the visualizer\nvisualizer.score(X_test, y_test) # Evaluate the model on the test data \ng = visualizer.poof() # Draw/show/poof the data"
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
jchowk/pyND | pyND/gbt/docs/GBTspec.usage.ipynb | gpl-3.0 | [
"Using pyND.gbt.GBTspec\nThis code reads individual datasets from a GBTIDL-style FITS file.",
"# Typical imports here\nimport numpy as np\nimport matplotlib.pyplot as plt\nimport astropy.units as u\nimport astropy.constants as c\n\n%load_ext autoreload\n%autoreload 2\n\nfrom pyND.gbt import GBTspec",
"Loading GBT spectra from GBTIDL ASCII output",
"input_filename = 'data/HS0033+4300_GBT.dat'\nx = GBTspec.from_ascii(input_filename)\n\nx.plotspectrum()\n\nx.velocity[0:5]\n\nx.Tb[0:5]",
"Metadata...",
"x.meta.keys()\n\nx.meta['object'],x.meta['RA'],x.meta['DEC']",
"Loading GBT spectra from GBTIDL FITS format\nLoading from the list of objects",
"input_filename = 'data/GBTdata.fits'\ny = GBTspec.from_GBTIDLindex(input_filename)\n\ny.plotspectrum()",
"Loading with an object name:",
"input_filename = 'data/GBTdata.fits'\nobject_name = 'HS0033+4300'\nz = GBTspec.from_GBTIDL(input_filename,object_name)\n\nz.plotspectrum()",
"Resample the results to a coarser velocity grid\nThis is a flux-conserving process.",
"z_new = z.copy()\nnew_velocity = np.arange(-400,100,10.)\nz_new.resample(new_velocity,masked=True)\n\nplt.figure(figsize=(8,5))\nplt.plot(z.velocity,z.Tb,drawstyle='steps-mid',label='Original')\nplt.plot(z_new.velocity,z_new.Tb,drawstyle='steps-mid',label='Resampled',lw=4)\nplt.xlim(-100,50)\nplt.legend(loc='upper left')\nplt.xlim(-200,50)\nplt.xlabel('Velocity')\nplt.ylabel('Tb [K]');",
"Compare the two results\nIn this example, the results are slightly different, as the ASCII data are saved in the OPTICAL-LSR frame, while the GBTIDL data are saved using the RADI-LSR, the radio astronomical definition of the LSR.",
"plt.plot(z.velocity,z.Tb,drawstyle='steps-mid',label='RADIO')\nplt.plot(x.velocity,x.Tb,drawstyle='steps-mid',label='OPTICAL')\nplt.xlim(-100,50)\nplt.legend(loc='upper left')",
"The following is not yet working...\nChange OPTICAL to RADIO\nN.B. This approach doesn't seem to shift the spectra by enough to be the source of the difference...",
"light_speed = np.float64(c.c.to('m/s').value)\nnu0 = np.float64(1420405800.0000000000000000000)\n\n# First calculate frequency from optical:\nnu = (nu0/(1+(x.velocity)*1000./light_speed))\n# Calculate radio definition\nvrad =light_speed*((nu0-nu)/nu0)/1000.\n\nplt.figure(figsize=(8,5))\nplt.plot(y.velocity,y.Tb,drawstyle='steps-mid',label='RADIO')\nplt.plot(vrad,x.Tb,drawstyle='steps-mid',label='OPTICAL-->RADIO',zorder=0,linewidth=3)\nplt.xlim(-100,50)\nplt.legend(loc='upper left')",
"Change RADIO to OPTICAL",
"light_speed = (c.c.to('m/s').value)\nnu0 = (1420405800.0000000000000000000)\n\n# Frequency from radio:\nnu = nu0*(1-(x.velocity)*1000./light_speed)\n# Calculate radio definition\nvopt = (light_speed/1000.)*((nu0-nu)/nu)\n\n\nplt.plot(vopt,y.Tb,drawstyle='steps-mid',label='RADIO-->OPTICAL')\nplt.plot(x.velocity,x.Tb,drawstyle='steps-mid',label='OPTICAL',zorder=0,linewidth=3)\nplt.xlim(-100,50)\nplt.legend(loc='upper left')",
"Change RADIO to OPTICAL with built-in function",
"input_filename = 'data/AMIGA-GBT.fits'\nobject_name = 'RBS2055'\ny = GBTspec.from_GBTIDL(input_filename,object_name)\n\ninput_filename = 'data/RBS2055_GBT.dat'\nx = GBTspec.from_ascii(input_filename)\n\nx.change_veldef()\n\n\n\nplt.plot(y.velocity,y.Tb,drawstyle='steps-mid',label='RADIO-->OPTICAL')\nplt.plot(x.velocity,x.Tb,drawstyle='steps-mid',label='OPTICAL',zorder=0,linewidth=3)\nplt.xlim(-100,50)\nplt.legend(loc='upper left')"
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
KDD-OpenSource/geox-young-academy | day-1/examples/Random Numbers.ipynb | mit | [
"Random Numbers\nThis document shows you how to create a lot of random numbers.\nThe random module\nPython Docs",
"import random # load the module\n\nr = random.random() # equally distributed random number\nprint(\"Random 0 <= r == {} < 1\".format(r))\n\nμ = 200\nσ = 25\n\ng = random.gauss(μ, σ)\n\nprint(\"Gauss distributed random number {} with μ={} and σ={}\".format(g, μ, σ))",
"Plotting the numbers\nImport numpy",
"import numpy as np\nnp.random.seed(0)",
"Generate random numbers.",
"normal_numbers = np.random.normal(μ, σ, size=100)\nprint(\"normal_numbers = {}\".format(normal_numbers))",
"Install plotly from the command line:\nWe generate a plot",
"import numpy as np\nimport matplotlib.pyplot as plt\n\nfig, (ax0, ax1) = plt.subplots(ncols=2, figsize=(8, 4))\n\nax0.hist(normal_numbers, 20, normed=1, histtype='stepfilled', facecolor='g', alpha=0.75)\nax0.set_title('stepfilled')\n\n# Create a histogram by providing the bin edges (unequally spaced).\nbins = [100, 150, 180, 195, 205, 220, 250, 300]\nax1.hist(normal_numbers, bins, normed=1, histtype='bar', rwidth=0.8)\nax1.set_title('unequal bins')\n\nfig.tight_layout()\nplt.show()"
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
folivetti/CRUFABC | Gabarito.ipynb | mit | [
"import networkx as nx\nimport matplotlib as plt\n%matplotlib inline",
"Exercício 01: Calcule a distância média, o diâmetro e o coeficiente de agrupamento das redes abaixo.",
"G1 = nx.erdos_renyi_graph(10,0.4)\nnx.draw_shell(G1)\n\nprint \"Dist. media: \", nx.average_shortest_path_length(G1)\nprint \"Diametro: \", nx.diameter(G1)\nprint \"Coef. Agrupamento médio: \", nx.average_clustering(G1)\n\nG2 = nx.barabasi_albert_graph(10,3)\nnx.draw_shell(G2)\n\nprint \"Dist. media: \", nx.average_shortest_path_length(G2)\nprint \"Diametro: \", nx.diameter(G2)\nprint \"Coef. Agrupamento médio: \", nx.average_clustering(G2)\n\nG3 = nx.barabasi_albert_graph(10,4)\nnx.draw_shell(G3)\n\nprint \"Dist. media: \", nx.average_shortest_path_length(G3)\nprint \"Diametro: \", nx.diameter(G3)\nprint \"Coef. Agrupamento médio: \", nx.average_clustering(G3)",
"Exercício 02: Calcule a centralidade de grau, betweenness e pagerank dos nós das redes abaixo:",
"G4 = nx.barabasi_albert_graph(10,3)\nplt.pyplot.figure(figsize=(10,10))\npos = nx.shell_layout(G4)\nnx.draw_networkx_nodes(G4,pos);\nnx.draw_networkx_edges(G4,pos);\nnx.draw_networkx_labels(G4,pos);\nplt.pyplot.axis('off')\n\nprint \"Centralidades de grau:\"\nfor ni,dc in nx.degree_centrality(G4).items():\n print ni, dc\n\nprint \"Centralidades de pagerank:\"\nfor ni,dc in nx.pagerank(G4).items():\n print ni, dc\n\nprint \"Centralidades de betweenness:\"\nfor ni,dc in nx.betweenness_centrality(G4).items():\n print ni, dc"
] | [
"code",
"markdown",
"code",
"markdown",
"code"
] |
jakevdp/sklearn_tutorial | notebooks/04.3-Density-GMM.ipynb | bsd-3-clause | [
"<small><i>This notebook was put together by Jake Vanderplas. Source and license info is on GitHub.</i></small>\nDensity Estimation: Gaussian Mixture Models\nHere we'll explore Gaussian Mixture Models, which is an unsupervised clustering & density estimation technique.\nWe'll start with our standard set of initial imports",
"%matplotlib inline\nimport numpy as np\nimport matplotlib.pyplot as plt\nfrom scipy import stats\n\nplt.style.use('seaborn')",
"Introducing Gaussian Mixture Models\nWe previously saw an example of K-Means, which is a clustering algorithm which is most often fit using an expectation-maximization approach.\nHere we'll consider an extension to this which is suitable for both clustering and density estimation.\nFor example, imagine we have some one-dimensional data in a particular distribution:",
"np.random.seed(2)\nx = np.concatenate([np.random.normal(0, 2, 2000),\n np.random.normal(5, 5, 2000),\n np.random.normal(3, 0.5, 600)])\nplt.hist(x, 80, normed=True)\nplt.xlim(-10, 20);",
"Gaussian mixture models will allow us to approximate this density:",
"from sklearn.mixture import GaussianMixture as GMM\nX = x[:, np.newaxis]\nclf = GMM(4, max_iter=500, random_state=3).fit(X)\n\nxpdf = np.linspace(-10, 20, 1000)\ndensity = np.array([np.exp(clf.score([[xp]])) for xp in xpdf])\n\nplt.hist(x, 80, density=True, alpha=0.5)\nplt.plot(xpdf, density, '-r')\nplt.xlim(-10, 20);",
"Note that this density is fit using a mixture of Gaussians, which we can examine by looking at the means_, covars_, and weights_ attributes:",
"clf.means_\n\nclf.covariances_\n\nclf.weights_\n\nplt.hist(x, 80, normed=True, alpha=0.3)\nplt.plot(xpdf, density, '-r')\n\nfor i in range(clf.n_components):\n pdf = clf.weights_[i] * stats.norm(clf.means_[i, 0],\n np.sqrt(clf.covariances_[i, 0])).pdf(xpdf)\n plt.fill(xpdf, pdf, facecolor='gray',\n edgecolor='none', alpha=0.3)\nplt.xlim(-10, 20);",
"These individual Gaussian distributions are fit using an expectation-maximization method, much as in K means, except that rather than explicit cluster assignment, the posterior probability is used to compute the weighted mean and covariance.\nSomewhat surprisingly, this algorithm provably converges to the optimum (though the optimum is not necessarily global).\nHow many Gaussians?\nGiven a model, we can use one of several means to evaluate how well it fits the data.\nFor example, there is the Aikaki Information Criterion (AIC) and the Bayesian Information Criterion (BIC)",
"print(clf.bic(X))\nprint(clf.aic(X))",
"Let's take a look at these as a function of the number of gaussians:",
"n_estimators = np.arange(1, 10)\nclfs = [GMM(n, max_iter=1000).fit(X) for n in n_estimators]\nbics = [clf.bic(X) for clf in clfs]\naics = [clf.aic(X) for clf in clfs]\n\nplt.plot(n_estimators, bics, label='BIC')\nplt.plot(n_estimators, aics, label='AIC')\nplt.legend();",
"It appears that for both the AIC and BIC, 4 components is preferred.\nExample: GMM For Outlier Detection\nGMM is what's known as a Generative Model: it's a probabilistic model from which a dataset can be generated.\nOne thing that generative models can be useful for is outlier detection: we can simply evaluate the likelihood of each point under the generative model; the points with a suitably low likelihood (where \"suitable\" is up to your own bias/variance preference) can be labeld outliers.\nLet's take a look at this by defining a new dataset with some outliers:",
"np.random.seed(0)\n\n# Add 20 outliers\ntrue_outliers = np.sort(np.random.randint(0, len(x), 20))\ny = x.copy()\ny[true_outliers] += 50 * np.random.randn(20)\n\nclf = GMM(4, max_iter=500, random_state=0).fit(y[:, np.newaxis])\nxpdf = np.linspace(-10, 20, 1000)\ndensity_noise = np.array([np.exp(clf.score([[xp]])) for xp in xpdf])\n\nplt.hist(y, 80, density=True, alpha=0.5)\nplt.plot(xpdf, density_noise, '-r')\nplt.xlim(-15, 30);",
"Now let's evaluate the log-likelihood of each point under the model, and plot these as a function of y:",
"log_likelihood = np.array([clf.score_samples([[yy]]) for yy in y])\n# log_likelihood = clf.score_samples(y[:, np.newaxis])[0]\nplt.plot(y, log_likelihood, '.k');\n\ndetected_outliers = np.where(log_likelihood < -9)[0]\n\nprint(\"true outliers:\")\nprint(true_outliers)\nprint(\"\\ndetected outliers:\")\nprint(detected_outliers)",
"The algorithm misses a few of these points, which is to be expected (some of the \"outliers\" actually land in the middle of the distribution!)\nHere are the outliers that were missed:",
"set(true_outliers) - set(detected_outliers)",
"And here are the non-outliers which were spuriously labeled outliers:",
"set(detected_outliers) - set(true_outliers)",
"Finally, we should note that although all of the above is done in one dimension, GMM does generalize to multiple dimensions, as we'll see in the breakout session.\nOther Density Estimators\nThe other main density estimator that you might find useful is Kernel Density Estimation, which is available via sklearn.neighbors.KernelDensity. In some ways, this can be thought of as a generalization of GMM where there is a gaussian placed at the location of every training point!",
"from sklearn.neighbors import KernelDensity\nkde = KernelDensity(0.15).fit(x[:, None])\ndensity_kde = np.exp(kde.score_samples(xpdf[:, None]))\n\nplt.hist(x, 80, density=True, alpha=0.5)\nplt.plot(xpdf, density, '-b', label='GMM')\nplt.plot(xpdf, density_kde, '-r', label='KDE')\nplt.xlim(-10, 20)\nplt.legend();",
"All of these density estimators can be viewed as Generative models of the data: that is, that is, the model tells us how more data can be created which fits the model."
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
bhrutledge/jupyter-django | 2 - Working with Django.ipynb | mit | [
"Working with Django\nWe'll be working with the data from my band's website, which uses the Django admin as a basic CMS. The source is on GitHub.\nRequirements:\n$ brew install graphviz\n(venv)$ pip install pydot graphviz\nWhat do our models look like?\nLet's use some of IPython's magic to find out.\nThanks to manage.py shell_plus, all of our models have already been imported.\nView the source code for a model:",
"Gig??",
"View the whole file:",
"from inspect import getfile\n\ngig_file = getfile(Gig)\ngig_file\n\n%pycat $gig_file",
"View the contents of the app directory:",
"from os import path\n\n!ls -l {path.dirname(gig_file)}",
"View the output of the graph_models command from Django Extensions:",
"from graphviz import Source\nfrom IPython.display import Image\n\n!manage.py graph_models music news shows -o models.png 2>/dev/null\nImage('models.png')",
"Alternatively, capture the output, and render it as SVG:",
"dot = !manage.py graph_models shows 2>/dev/null\nSource(dot.n)",
"Learn more about IPython's magic functions:",
"%quickref",
"Answering questions\nHow often do we play gigs?",
"gigs = Gig.objects.published().past()\ngigs.count()",
"Where did we play last year?",
"[gig for gig in gigs.filter(date__year='2016')]",
"How many gigs have we played each year?",
"for date in gigs.dates('date', 'year'):\n gig_count = gigs.filter(date__year=date.year).count()\n print('{}: {}'.format(date.year, gig_count))",
"What venues have we played?",
"gigs.values('venue').distinct().aggregate(count=Count('*'))",
"Render a Django template in the notebook:",
"from django.template import Context, Template\nfrom IPython.display import HTML\n\ntop_venues = (\n gigs.values('venue__name', 'venue__city')\n .annotate(gig__count=Count('*'))\n .order_by('-gig__count')\n [:10]\n)\n\ntemplate = Template(\"\"\"\n<table>\n <tr>\n <th>Venue</th>\n <th>City</th>\n <th>Gigs</th>\n </tr>\n {% for v in venues %}\n <tr>\n <td>{{v.venue__name}}</td>\n <td>{{v.venue__city}}</td>\n <td>{{v.gig__count}}</td>\n </tr>\n {% endfor %}\n</table>\n\"\"\")\n\ncontext = Context(\n {'venues': top_venues}\n)\n\nHTML(template.render(context))"
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
bbglab/adventofcode | 2015/ferran/day7.ipynb | mit | [
"Day 7\nDay 7.1\nApproach 1: Create a single expression by recursive substitution, then evaluate!",
"binary_command = {'NOT': '~', 'AND': '&', 'OR': '|', 'LSHIFT': '<<', 'RSHIFT': '>>'}\noperators = binary_command.values()\n\nimport csv\n\ndef translate(l):\n return [binary_command[a] if a in binary_command else a for a in l]\n\ndef display(input_file):\n\n \"\"\"produce a dict mapping variables to expressions\"\"\"\n \n commands = []\n with open(input_file, 'rt') as f_input:\n csv_reader = csv.reader(f_input, delimiter=' ')\n for line in csv_reader:\n commands.append((line[-1], ' '.join(list(translate(line[:-2])))))\n return dict(commands)\n\nimport re\n\ndef extract_variables(expr):\n varbls = []\n regex_pattern = '\\s|\\\\)|\\\\('\n l = re.split(regex_pattern, expr)\n for a in l:\n if (a not in operators) and (not a.isnumeric()) and (a != ''):\n varbls.append(a)\n return set(varbls)\n\n\ndef create_instance(wire):\n exec_python = commands[wire]\n pending = extract_variables(commands[wire])\n count = 0\n while pending and (count < 200):\n s = pending.pop()\n expr = commands[s]\n exec_python = re.sub('({0})'.format(s), '( {0} )'.format(expr), exec_python)\n pending = pending.union(extract_variables(exec_python))\n count += 1\n return wire + ' = ' + exec_python\n\ndef evaluate(var):\n instance = create_instance(var)\n exec(instance)\n return np.uint16(locals()[var])",
"Test",
"commands = display('inputs/input7.test.txt')\n\ndef test():\n assert(evaluate('d') == 72)\n assert(evaluate('e') == 507)\n assert(evaluate('f') == 492)\n assert(evaluate('g') == 114)\n assert(evaluate('h') == 65412)\n assert(evaluate('i') == 65079)\n assert(evaluate('x') == 123)\n assert(evaluate('y') == 456)\n\ntest()",
"This approach seems correct, but it creates huge expressions along the way that become harder and harder to parse. Thus the time to a final expression that wraps up all the computations is very long. Two ideas to carry on: i) concurrent evaluation of expressions; ii) define lazy variables/functions that collect all the dependencies of the circuit and start firing upon request.\nApproach 2: Concurrent evaluation from known variables.\nThe solution provided hereto owes credit to this source: https://www.reddit.com/r/adventofcode/comments/5id6w0/2015_day_7_part_1_python_wrong_answer/",
"import numpy as np\n\ndef RSHIFT(a, b):\n result = np.uint16(a) >> int(b)\n return int(result)\n\ndef LSHIFT(a, b):\n result = np.uint16(a) << int(b)\n return int(result)\n\ndef OR(a, b):\n result = np.uint16(a) | np.uint16(b)\n return int(result)\n\ndef AND(a, b):\n result = np.uint16(a) & np.uint16(b)\n return int(result)\n\ndef NOT(a):\n result = ~ np.uint16(a)\n return int(result)\n\nimport csv\ndef display(input_file):\n\n \"\"\"produce a dict mapping variables to expressions\"\"\"\n \n commands = []\n with open(input_file, 'rt') as f_input:\n csv_reader = csv.reader(f_input, delimiter=' ')\n for line in csv_reader:\n commands.append((line[-1], line[:-2]))\n return dict(commands)\n\ndef evaluate(wire):\n known = {}\n while wire not in known:\n if wire in known:\n break\n for k, v in commands.items():\n if (len(v) == 1) and (v[0].isnumeric()) and (k not in known):\n known[k] = int(v[0])\n elif (len(v) == 1) and (v[0] in known) and (k not in known):\n known[k] = known[v[0]]\n elif ('AND' in v) and (v[0] in known) and (v[2] in known):\n known[k] = AND(known[v[0]], known[v[2]])\n elif ('AND' in v) and (v[0].isnumeric()) and (v[2] in known):\n known[k] = AND(int(v[0]), known[v[2]])\n elif ('AND' in v) and (v[0] in known) and (v[2].isnumeric()):\n known[k] = AND(known[v[0]], int(v[2]))\n elif ('OR' in v) and (v[0] in known) and (v[2] in known):\n known[k] = OR(known[v[0]], known[v[2]])\n elif ('OR' in v) and (v[0].isnumeric()) and (v[2] in known):\n known[k] = OR(int(v[0]), known[v[2]])\n elif ('OR' in v) and (v[0] in known) and (v[2].isnumeric()):\n known[k] = OR(known[v[0]], int(v[2]))\n elif ('LSHIFT' in v) and (v[0] in known):\n known[k] = LSHIFT(known[v[0]], v[2])\n elif ('RSHIFT' in v) and (v[0] in known):\n known[k] = RSHIFT(known[v[0]], v[2])\n elif ('NOT' in v) and (v[1] in known):\n known[k] = NOT(known[v[1]])\n return known[wire]",
"Test 0",
"commands = display('inputs/input7.test1.txt')\ncommands\n\nevaluate('a')",
"Test 1",
"commands = display('inputs/input7.test2.txt')\ncommands\n\ntest()",
"Solution",
"commands = display('inputs/input7.txt')\nevaluate('a')",
"Approach 3: With Lazy Variable Wrapper (Python)",
"import csv\nimport numpy as np\n\ndef display(input_file):\n\n \"\"\"produce a dict mapping variables to expressions\"\"\"\n \n commands = []\n with open(input_file, 'rt') as f_input:\n csv_reader = csv.reader(f_input, delimiter=' ')\n for line in csv_reader:\n commands.append((line[-1], line[:-2]))\n return dict(commands)\n\nclass LazyVar(object):\n def __init__(self, func):\n self.func = func\n self.value = None\n def __call__(self):\n if self.value is None:\n self.value = self.func()\n return self.value\n\nbinary_command = {'NOT': '~', 'AND': '&', 'OR': '|', 'LSHIFT': '<<', 'RSHIFT': '>>'}\n\ndef translate(l):\n translated = []\n for a in l:\n if a in binary_command:\n b = binary_command[a]\n elif a.isnumeric():\n b = 'np.uint16({})'.format(a)\n else:\n b = '{}.func()'.format('var_' + a)\n translated.append(b)\n return translated",
"Test",
"commands = display('inputs/input7.test2.txt')\n\ncommands = display('inputs/input7.test2.txt')\nfor k, v in commands.items():\n command_str = '{0} = LazyVar(lambda: {1})'.format('var_' + k, ''.join(translate(v)))\n print(command_str)\n exec(command_str)\n\ndef test():\n assert(var_d.func() == 72)\n assert(var_e.func() == 507)\n assert(var_f.func() == 492)\n assert(var_g.func() == 114)\n assert(var_h.func() == 65412)\n assert(var_i.func() == 65079)\n assert(var_x.func() == 123)\n assert(var_y.func() == 456)\n\ntest()",
"Although the approach passes the test, it does not end in reasonable time for the full input.\nApproach 4: With Lazy Evaluation in R\nThe approach now is to exploit the lazy evaluation capabilities in R. So we leverage Python to create an R script that does the job.",
"def rscript_command(var, l):\n vocab = {'AND' : 'bitwAnd', \n 'OR' : 'bitwOr',\n 'LSHIFT' : 'bitwShiftL',\n 'RSHIFT' : 'bitwShiftR'}\n if len(l) == 3:\n func = vocab[l[1]]\n arg1 = l[0] if l[0].isdigit() else 'var_' + l[0] + '()'\n arg2 = l[2] if l[2].isdigit() else 'var_' + l[2] + '()'\n return 'var_{0} <- function(a={1}, b={2})'.format(var, arg1, arg2) + ' {' + '{0}(a,b)'.format(func) + '}'\n elif len(l) == 2:\n func = 'bitwNot'\n arg1 = l[1] if l[1].isdigit() else 'var_' + l[1] + '()'\n return 'var_{0} <- function(a={1})'.format(var, arg1) + ' {' + '{0}(a)'.format(func) + '}'\n else:\n arg1 = l[0] if l[0].isdigit() else 'var_' + l[0] + '()'\n return 'var_{0} <- function(a={1})'.format(var, arg1) + ' {' + 'a' + '}'\n\ndef generate_rscript(commands, target):\n with open('day7_commands.R', 'wt') as f:\n for k, v in commands.items():\n f.write(rscript_command(k, v)+'\\n')\n f.write('var_' + target + '()')",
"Test",
"commands = display('inputs/input7.test2.txt')\ngenerate_rscript(commands, 'd')\n\n! cat day7_commands.R\n\n!Rscript day7_commands.R",
"Solution",
"commands = display('inputs/input7.txt')\ngenerate_rscript(commands, 'a')\n\n! cat day7_commands.R \n\n!Rscript day7_commands.R",
"Although this approach is more natural than defining a LazyWrapper in Python, it takes quite a lot of time to execute, so this is not a very cool solution after all.\nDay 7.2",
"commands = display('inputs/input7.txt')\ncommands['b'] = ['16076']\nevaluate('a')"
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
sdpython/ensae_teaching_cs | _doc/notebooks/sklearn_ensae_course/03_supervised_classification.ipynb | mit | [
"2A.ML101.3: Supervised Learning: Classification of Handwritten Digits\nIn this section we'll apply scikit-learn to the classification of handwritten\ndigits. This will go a bit beyond the iris classification we saw before: we'll\ndiscuss some of the metrics which can be used in evaluating the effectiveness\nof a classification model.\nSource: Course on machine learning with scikit-learn by Gaël Varoquaux",
"from sklearn.datasets import load_digits\ndigits = load_digits()",
"We'll re-use some of our code from before to visualize the data and remind us what\nwe're looking at:",
"%matplotlib inline\nfrom matplotlib import pyplot as plt\n\nfig = plt.figure(figsize=(6, 6)) # figure size in inches\nfig.subplots_adjust(left=0, right=1, bottom=0, top=1, hspace=0.05, wspace=0.05)\n\n# plot the digits: each image is 8x8 pixels\nfor i in range(64):\n ax = fig.add_subplot(8, 8, i + 1, xticks=[], yticks=[])\n ax.imshow(digits.images[i], cmap=plt.cm.binary, interpolation='nearest')\n \n # label the image with the target value\n ax.text(0, 7, str(digits.target[i]))",
"Visualizing the Data\nA good first-step for many problems is to visualize the data using a\nDimensionality Reduction technique. We'll start with the\nmost straightforward one, Principal Component Analysis (PCA).\nPCA seeks orthogonal linear combinations of the features which show the greatest\nvariance, and as such, can help give you a good idea of the structure of the\ndata set. Here we'll use RandomizedPCA, because it's faster for large N.",
"from sklearn.decomposition import PCA\npca = PCA(n_components=2, svd_solver=\"randomized\")\nproj = pca.fit_transform(digits.data)\n\nplt.scatter(proj[:, 0], proj[:, 1], c=digits.target)\nplt.colorbar();",
"Question: Given these projections of the data, which numbers do you think\na classifier might have trouble distinguishing?\nGaussian Naive Bayes Classification\nFor most classification problems, it's nice to have a simple, fast, go-to\nmethod to provide a quick baseline classification. If the simple and fast\nmethod is sufficient, then we don't have to waste CPU cycles on more complex\nmodels. If not, we can use the results of the simple method to give us\nclues about our data.\nOne good method to keep in mind is Gaussian Naive Bayes. It fits a Gaussian distribution to each training label independantly on each feature, and uses this to quickly give a rough classification. It is generally not sufficiently accurate for real-world data, but can perform surprisingly well, for instance on text data.",
"from sklearn.naive_bayes import GaussianNB\nfrom sklearn.model_selection import train_test_split\n\n# split the data into training and validation sets\nX_train, X_test, y_train, y_test = train_test_split(digits.data, digits.target)\n\n# train the model\nclf = GaussianNB()\nclf.fit(X_train, y_train)\n\n# use the model to predict the labels of the test data\npredicted = clf.predict(X_test)\nexpected = y_test",
"Question: why did we split the data into training and validation sets?\nLet's plot the digits again with the predicted labels to get an idea of\nhow well the classification is working:",
"fig = plt.figure(figsize=(6, 6)) # figure size in inches\nfig.subplots_adjust(left=0, right=1, bottom=0, top=1, hspace=0.05, wspace=0.05)\n\n# plot the digits: each image is 8x8 pixels\nfor i in range(64):\n ax = fig.add_subplot(8, 8, i + 1, xticks=[], yticks=[])\n ax.imshow(X_test.reshape(-1, 8, 8)[i], cmap=plt.cm.binary,\n interpolation='nearest')\n \n # label the image with the target value\n if predicted[i] == expected[i]:\n ax.text(0, 7, str(predicted[i]), color='green')\n else:\n ax.text(0, 7, str(predicted[i]), color='red')",
"Quantitative Measurement of Performance\nWe'd like to measure the performance of our estimator without having to resort\nto plotting examples. A simple method might be to simply compare the number of\nmatches:",
"matches = (predicted == expected)\nprint(matches.sum())\nprint(len(matches))\n\nmatches.sum() / float(len(matches))",
"We see that nearly 1500 of the 1800 predictions match the input. But there are other\nmore sophisticated metrics that can be used to judge the performance of a classifier:\nseveral are available in the sklearn.metrics submodule.\nOne of the most useful metrics is the classification_report, which combines several\nmeasures and prints a table with the results:",
"from sklearn import metrics\nfrom pandas import DataFrame\nDataFrame(metrics.classification_report(expected, predicted, output_dict=True)).T",
"Another enlightening metric for this sort of multi-label classification\nis a confusion matrix: it helps us visualize which labels are\nbeing interchanged in the classification errors:",
"DataFrame(metrics.confusion_matrix(expected, predicted))",
"We see here that in particular, the numbers 1, 2, 3, and 9 are often being labeled 8."
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
PythonFreeCourse/Notebooks | week05/3_Generators.ipynb | mit | [
"<img src=\"images/logo.jpg\" style=\"display: block; margin-left: auto; margin-right: auto;\" alt=\"לוגו של מיזם לימוד הפייתון. נחש מצויר בצבעי צהוב וכחול, הנע בין האותיות של שם הקורס: לומדים פייתון. הסלוגן המופיע מעל לשם הקורס הוא מיזם חינמי ללימוד תכנות בעברית.\">\n<span style=\"text-align: right; direction: rtl; float: right;\">Generators</span>\n<span style=\"text-align: right; direction: rtl; float: right; clear: both;\">הקדמה</span>\n<p style=\"text-align: right; direction: rtl; float: right; clear: both;\">\n הפונקציות שיצרנו עד כה נבנו כך שיחזירו ערך אחד בכל קריאה.<br>\n הערך הזה יכול היה להיות מכל טיפוס שהוא: בוליאני, מחרוזת, tuple וכדומה.<br>\n אם נרצה להחזיר כמה ערכים יחד, תמיד נוכל להחזיר אותם כרשימה או כ־tuple.<br>\n אבל מה קורה כשאנחנו רוצים להחזיר סדרת ערכים גדולה מאוד או אפילו אין־סופית?\n</p>\n\n<p style=\"text-align: right; direction: rtl; float: right; clear: both;\">\n למשל:\n</p>\n\n<ul style=\"text-align: right; direction: rtl; float: right; clear: both;\">\n <li>הכתובות של כל הדפים הקיימים באינטרנט.</li>\n <li>מילות כל השירים שראו אור מאז שנת 1400 לספירה.</li>\n <li>כל המספרים השלמים הגדולים מ־0.</li>\n</ul>\n\n<p style=\"text-align: right; direction: rtl; float: right; clear: both;\">\n בכלים שיש בידינו כרגע, נמצא שיש בעיה ליצור רשימות כאלו.<br>\n עבור רשימות גדולות מאוד – לא יהיה למחשב די זיכרון ולבסוף הוא ייכשל בשמירת ערכים חדשים.<br>\n ומה בנוגע לרשימות אין־סופיות? הן... ובכן... אין־סופיות, ולכן מלכתחילה לא נוכל ליצור אותן.\n</p>\n\n<span style=\"text-align: right; direction: rtl; float: right; clear: both;\">הגדרה</span>\n<p style=\"text-align: right; direction: rtl; float: right; clear: both;\">\n פתרון שמעניין לחשוב עליו הוא \"פונקציה עצלנית\".<br>\n אם אנחנו בשום שלב לא יכולים להחזיק בזיכרון המחשב את כל האיברים (כי יש יותר מדי מהם, או כי זו סדרה אין־סופית),<br>\n אולי נוכל לשלוח תחילה את הערך הראשון – ואת הערכים שבאים אחריו נשלח רק כשיבקשו אותם מאיתנו.\n</p>\n\n<p style=\"text-align: right; direction: rtl; float: right; clear: both;\">\n פונקציה עצלנית שכזו נקראת <dfn>generator</dfn>, ובכל פעם שנבקש ממנה, היא תחזיר לנו איבר יחיד מתוך סדרת ערכים.<br>\n תחילה – היא תחזיר רק את הערך הראשון, בלי לחשב את שאר האיברים. אחר כך, באותו אופן, רק את השני, אחר כך רק את השלישי וכן הלאה.<br>\n ההבדל העיקרי בין generator לבין פונקציה רגילה, הוא שב־generator נבחר להחזיר את הערכים אחד־אחד ולא תחת מבנה מאוגד.\n</p>\n\n<p style=\"text-align: right; direction: rtl; float: right; clear: both;\">\n נסכם: generator היא פונקציה שיוצרת עבורנו בכל פעם ערך אחד, מחזירה אותו, ומחכה עד שנבקש את האיבר הבא.\n</p>\n\n<span style=\"text-align: right; direction: rtl; float: right; clear: both;\">שימוש</span>\n<span style=\"text-align: right; direction: rtl; float: right; clear: both;\">יצירת generator בסיסי</span>\n<p style=\"text-align: right; direction: rtl; float: right; clear: both;\">\n נתחיל בהגדרת generator מטופש למדי:\n</p>",
"def silly_generator():\n a = 1\n yield a\n b = a + 1\n yield b\n c = [1, 2, 3]\n yield c",
"<p style=\"text-align: right; direction: rtl; float: right; clear: both;\">\n מעניין! זה נראה ממש כמו פונקציה. נקרא למבנה הזה שיצרנו \"<dfn>פונקציית ה־generator</dfn>\".<br>\n אבל מהו ה־<code>yield</code> המוזר הזה שנמצא שם?\n</p>\n\n<p style=\"text-align: right; direction: rtl; float: right; clear: both;\">\n לפני שנתהה על קנקנו, בואו ננסה לקרוא לפונקציה ונראה מה היא מחזירה:\n</p>",
"print(silly_generator())",
"<p style=\"text-align: right; direction: rtl; float: right; clear: both;\">\n אומנם למדנו שלא אומרים איכס על פונקציות, אבל מה קורה פה?\n</p>\n\n<p style=\"text-align: right; direction: rtl; float: right; clear: both;\">\n בניגוד לפונקציות רגילות, קריאה ל־generator לא מחזירה ערך מייד.<br>\n במקום ערך היא מחזירה מעין סמן, כמו בקובץ, שאפשר לדמיין כחץ שמצביע על השורה הראשונה של הפונקציה.<br>\n נשמור את הסמן על משתנה:\n</p>",
"our_generator = silly_generator()",
"<p style=\"text-align: right; direction: rtl; float: right; clear: both;\">\n בעקבות הקריאה ל־<code>silly_generator</code> נוצר לנו סמן שמצביע כרגע על השורה <code>a = 1</code>.<br>\n המינוח המקצועי לסמן הזה הוא <dfn>generator iterator</dfn>.\n</p>\n\n<img src=\"images/silly_generator1.png?v=1\" width=\"300px\" style=\"display: block; margin-left: auto; margin-right: auto;\" alt=\"תוכן הפונקציה silly_generator, כאשר חץ מצביע לשורה הראשונה שלה – a = 1\">\n<p style=\"text-align: right; direction: rtl; float: right; clear: both;\">\n אחרי שהרצנו את השורה <code dir=\"ltr\">our_generator = silly_generator()</code>, הסמן המדובר נשמר במשתנה בשם <var>our_generator</var>.<br>\n זה זמן מצוין לבקש מה־generator להחזיר ערך.<br>\n נעשה זאת בעזרת הפונקציה הפייתונית <code>next</code>:\n</p>",
"next_value = next(our_generator)\nprint(next_value)",
"<p style=\"text-align: right; direction: rtl; float: right; clear: both;\">\n כדי להבין מה התרחש נצטרך להבין שני דברים חשובים שקשורים ל־generators:<br>\n</p>\n<ol style=\"text-align: right; direction: rtl; float: right; clear: both;\">\n <li>קריאה ל־<code>next</code> היא כמו לחיצה על \"נגן\" (Play) – היא גורמת לסמן לרוץ עד שהוא מגיע לשורה של החזרת ערך.</li>\n <li>מילת המפתח <code>yield</code> דומה למילת המפתח <code>return</code> – היא מפסיקה את ריצת הסמן, ומחזירה את הערך שמופיע אחריה.</li>\n</ol>\n\n<p style=\"text-align: right; direction: rtl; float: right; clear: both;\">\n אז היה לנו סמן שהצביע על השורה הראשונה. לחצנו Play, והוא הריץ את הקוד עד שהוא הגיע לנקודה שבה מחזירים ערך.<br>\n ההבדל בין פונקציה לבין generator, הוא ש<mark>כשאנחנו מחזירים ערך בעזרת <code>yield</code> אנחנו \"מקפיאים\" את המצב שבו\n יצאנו מהפונקציה.</mark><br>\n ממש כמו ללחוץ על \"Pause\".<br>\n כשנקרא ל־<code>next</code> בפעם הבאה – הפונקציה תמשיך לרוץ מאותו המקום שבו השארנו את הסמן, עם אותם ערכי משתנים.<br>\n עכשיו הסמן מצביע על השורה <code>b = a + 1</code>, ומחכה שמישהו יקרא שוב ל־<code>next</code> כדי שהפונקציה תוכל להמשיך לרוץ:\n</p>",
"print(next(our_generator))",
"<p style=\"text-align: right; direction: rtl; float: right; clear: both;\">\n נסכם מה קרה עד עכשיו:\n</p>\n\n<ol style=\"text-align: right; direction: rtl; float: right; clear: both;\">\n <li>הגדרנו פונקציה בשם <var>silly_generator</var>, שאמורה להחזיר את הערכים <samp>1</samp>, <samp>2</samp> ו־<samp dir=\"ltr\">[1, 2, 3]</samp>. קראנו לה \"<em>פונקציית הגנרטור</em>\".</li>\n <li>בעזרת קריאה לפונקציית הגנרטור, יצרנו \"סמן\" (generator iterator) שנקרא <var>our_generator</var> ומצביע לשורה הראשונה בפונקציה.</li>\n <li>בעזרת קריאה ל־<code>next</code> על ה־generator iterator, הרצנו את הסמן עד שה־generator החזיר ערך.</li>\n <li>למדנו ש־generator־ים מחזירים ערכים בעיקר בעזרת yield – שמחזיר ערך ושומר את המצב שבו הפונקציה עצרה.</li>\n <li>קראנו שוב ל־<code>next</code> על ה־generator iterator, וראינו שהוא ממשיך מהמקום שבו ה־generator הפסיק לרוץ פעם קודמת.</li>\n</ol>\n\n<div class=\"align-center\" style=\"display: flex; text-align: right; direction: rtl; clear: both;\">\n <div style=\"display: flex; width: 10%; float: right; clear: both;\">\n <img src=\"images/exercise.svg\" style=\"height: 50px !important;\" alt=\"תרגול\"> \n </div>\n <div style=\"width: 70%\">\n <p style=\"text-align: right; direction: rtl; float: right; clear: both;\">\n תוכלו לחזות מה יקרה אם נקרא שוב ל־<code>next(our_generator)</code>?\n </p>\n </div>\n <div style=\"display: flex; width: 20%; border-right: 0.1rem solid #A5A5A5; padding: 1rem 2rem;\">\n <p style=\"text-align: center; direction: rtl; justify-content: center; align-items: center; clear: both;\">\n <strong>חשוב!</strong><br>\n פתרו לפני שתמשיכו!\n </p>\n </div>\n</div>\n\n<p style=\"text-align: right; direction: rtl; float: right; clear: both;\">\n ננסה:\n</p>",
"print(next(our_generator))",
"<p style=\"text-align: right; direction: rtl; float: right; clear: both;\">\n יופי! הכל הלך כמצופה.<br>\n אבל מה צופן לנו העתיד?<br>\n בפעם הבאה שנבקש ערך מהפונקציה, הסמן שלנו ירוץ הלאה ולא ייתקל ב־<code>yield</code>.<br>\n במקרה כזה, נקבל שגיאת <var>StopIteration</var>, שמבשרת לנו ש־<code>next</code> לא הצליח לחלץ מה־generator את הערך הבא.\n</p>",
"print(next(our_generator))",
"<p style=\"text-align: right; direction: rtl; float: right; clear: both;\">\n מובן שאין סיבה להילחץ.<br>\n במקרה הזה אפילו לא מדובר במשהו רע – פשוט כילינו את כל הערכים מה־generator iterator שלנו.<br>\n פונקציית ה־generator עדיין קיימת!<br>\n אפשר ליצור עוד generator iterator אם נרצה, ולקבל את כל הערכים שנמצאים בו באותה צורה:\n</p>",
"our_generator = silly_generator()\nprint(next(our_generator))\nprint(next(our_generator))\nprint(next(our_generator))",
"<p style=\"text-align: right; direction: rtl; float: right; clear: both;\">\n אבל כשחושבים על זה, זה קצת מגוחך.<br>\n בכל פעם שנרצה להשיג את הערך הבא נצטרך לרשום <code>next</code>?<br>\n חייבת להיות דרך טובה יותר!\n</p>\n\n<span style=\"text-align: right; direction: rtl; float: right; clear: both;\">כל generator הוא גם iterable</span>\n<span style=\"text-align: right; direction: rtl; float: right; clear: both;\">for</span>\n<p style=\"text-align: right; direction: rtl; float: right; clear: both;\">\n אז למעשה, יש יותר מדרך טובה אחת להשיג את כל הערכים שיוצאים מ־generator מסוים.<br>\n כהקדמה, נניח פה עובדה שלא תשאיר אתכם אדישים: ה־generator iterator הוא... iterable! הפתעת השנה, אני יודע!<br>\n אמנם אי אפשר לפנות לאיברים שלו לפי מיקום, אך בהחלט אפשר לעבור עליהם בעזרת לולאת <code>for</code>, לדוגמה:\n</p>",
"our_generator = silly_generator()\nfor item in our_generator:\n print(item)",
"<p style=\"text-align: right; direction: rtl; float: right; clear: both;\">\n מה מתרחש כאן?<br>\n אנחנו מבקשים מלולאת ה־<code>for</code> לעבור על ה־generator iterator שלנו.<br>\n ה־<code>for</code> עושה עבורנו את העבודה אוטומטית:\n</p>\n\n<ol style=\"text-align: right; direction: rtl; float: right; clear: both;\">\n <li>הוא מבקש את האיבר הבא מה־generator iterator באמצעות <code>next</code>.</li>\n <li>הוא מכניס את האיבר שהוא קיבל מה־generator ל־<var>item</var>.</li>\n <li>הוא מבצע את גוף הלולאה פעם אחת עבור האיבר שנמצא ב־<var>item</var>.</li>\n <li>הוא חוזר לראש הלולאה שוב, ומנסה לקבל את האיבר הבא באמצעות <code>next</code>. כך עד שייגמרו האיברים ב־generator iterator.</li>\n</ol>\n\n<p style=\"text-align: right; direction: rtl; float: right; clear: both;\">\n שימו לב שהעובדות שלמדנו בנוגע לאותו \"סמן\" יבואו לידי ביטוי גם כאן.<br>\n הרצה נוספת של הלולאה על אותו סמן לא תדפיס יותר איברים, כיוון שהסמן מצביע כעת על סוף פונקציית ה־generator:\n</p>",
"for item in our_generator:\n print(item)",
"<p style=\"text-align: right; direction: rtl; float: right; clear: both;\">\n למזלנו, לולאות <code>for</code> יודעות לטפל בעצמן בשגיאת <code>StopIteration</code>, ולכן שגיאה שכזו לא תקפוץ לנו במקרה הזה.\n</p>\n\n<span style=\"text-align: right; direction: rtl; float: right; clear: both;\">המרת טיפוסים</span>\n<p style=\"text-align: right; direction: rtl; float: right; clear: both;\">\n דרך אחרת, לדוגמה, היא לבקש להמיר את ה־generator iterator לסוג משתנה אחר שהוא גם iterable:\n</p>",
"our_generator = silly_generator()\nitems = list(our_generator)\nprint(items)",
"<p style=\"text-align: right; direction: rtl; float: right; clear: both;\">\n בקוד שלמעלה, השתמשנו בפונקציה <code>list</code> שיודעת להמיר ערכים iterable־ים לרשימות.<br>\n שימו לב שמה שלמדנו בנוגע ל\"סמן\" יבוא לידי ביטוי גם בהמרות:\n</p>",
"print(list(our_generator))",
"<span style=\"text-align: right; direction: rtl; float: right; clear: both;\">שימושים פרקטיים</span>\n<span style=\"text-align: right; direction: rtl; float: right; clear: both;\">חיסכון בזיכרון</span>\n<p style=\"text-align: right; direction: rtl; float: right; clear: both;\">\nנכתוב פונקציה רגילה שמקבלת מספר שלם, ומחזירה רשימה של כל המספרים השלמים מ־0 ועד אותו מספר (נשמע לכם מוכר?):\n</p>",
"def my_range(upper_limit):\n numbers = []\n current_number = 0\n while current_number < upper_limit:\n numbers.append(current_number)\n current_number = current_number + 1\n return numbers\n\n\nfor number in my_range(1000):\n print(number)",
"<p style=\"text-align: right; direction: rtl; float: right; clear: both;\">\n בפונקציה הזו אנחנו יוצרים רשימת מספרים חדשה, המכילה את כל המספרים בין 0 לבין המספר שהועבר לפרמטר <var>upper_limit</var>.<br>\n אך ישנה בעיה חמורה – הפעלת הפונקציה גורמת לניצול משאבים רבים!<br>\n אם נכניס כארגומנט 1,000 – נצטרך להחזיק רשימה המכילה 1,000 איברים שונים, ואם נכניס מספר גדול מדי – עלול להיגמר לנו הזיכרון.\n</p>\n\n<p style=\"text-align: right; direction: rtl; float: right; clear: both;\">\n אבל איזו סיבה יש לנו להחזיק בזיכרון את רשימת כל המספרים?<br>\n אם לא עולה צורך מובהק שכזה, ייתכן שעדיף להחזיק בזיכרון מספר אחד בלבד בכל פעם, ולהחזירו מייד בעזרת generator:\n</p>",
"def my_range(upper_limit):\n current_number = 0\n while current_number < upper_limit:\n yield current_number\n current_number = current_number + 1\n\n\nour_generator = my_range(1000)\nfor number in our_generator:\n print(number)",
"<p style=\"text-align: right; direction: rtl; float: right; clear: both;\">\n שימו לב כמה הגרסה הזו אלגנטית יותר!<br>\n בכל פעם אנחנו פשוט שולחים את ערכו של מספר אחד (<var>current_number</var>) החוצה.<br>\n כשמבקשים את הערך הבא מה־generator iterator, פונקציית ה־generator חוזרת לעבוד מהנקודה שבה היא עצרה:<br>\n היא מעלה את ערכו של המספר הנוכחי, בודקת אם הוא נמוך מ־<var>upper_limit</var>, ושולחת גם אותו החוצה.<br>\n בשיטה הזו, <code>my_range(numbers)</code> לא מחזירה לנו רשימה של התוצאות – אלא generator iterator שמחזיר ערך אחד בכל פעם.<br>\n כך אנחנו לעולם לא מחזיקים בזיכרון 1,000 מספרים בו־זמנית.\n</p>\n\n<div class=\"align-center\" style=\"display: flex; text-align: right; direction: rtl; clear: both;\">\n <div style=\"display: flex; width: 10%; float: right; clear: both;\">\n <img src=\"images/exercise.svg\" style=\"height: 50px !important;\" alt=\"תרגול\"> \n </div>\n <div style=\"width: 70%\">\n <p style=\"text-align: right; direction: rtl; float: right; clear: both;\">\n לפניכם פונקציה שמקבלת רשימה, ומחזירה עבור כל מספר ברשימה את ערכו בריבוע.<br>\n זוהי גרסה מעט בזבזנית שמשתמשת בהרבה זיכרון. תוכלו להמיר אותה להיות generator?\n </p>\n </div>\n <div style=\"display: flex; width: 20%; border-right: 0.1rem solid #A5A5A5; padding: 1rem 2rem;\">\n <p style=\"text-align: center; direction: rtl; justify-content: center; align-items: center; clear: both;\">\n <strong>חשוב!</strong><br>\n פתרו לפני שתמשיכו!\n </p>\n </div>\n</div>",
"def square_numbers(numbers):\n squared_numbers = []\n for number in numbers:\n squared_numbers.append(number ** 2)\n return squared_numbers\n\n\nfor number in square_numbers(my_range(1000)):\n print(number)",
"<span style=\"text-align: right; direction: rtl; float: right; clear: both;\">תשובות חלקיות</span>\n<p style=\"text-align: right; direction: rtl; float: right; clear: both;\">\n לעיתים ניאלץ לבצע חישוב ארוך, שהשלמתו תימשך זמן רב מאוד.<br>\n במקרה כזה, נוכל להשתמש ב־generator כדי לקבל חלק מהתוצאה בזמן אמת,<br>\n בזמן שבפונקציה \"רגילה\" נצטרך להמתין עד סיום החישוב כולו.\n</p>\n\n<p style=\"text-align: right; direction: rtl; float: right; clear: both;\">\n שלשה פיתגורית, לדוגמה, היא שלישיית מספרים שלמים וחיוביים, $a$, $b$ ו־$c$, שעונים על הדרישה $a^2 + b^2 = c^2$.<br>\n אם כך, כדי ששלושה מספרים שאנחנו בוחרים ייחשבו שלשה פיתגורית,<br>\n הסכום של ריבוע המספר הראשון וריבוע המספר השני, אמור להיות שווה לערכו של המספר השלישי בריבוע.\n</p>\n\n<p style=\"text-align: right; direction: rtl; float: right; clear: both;\">\n אלו דוגמאות לשלשות פיתגוריות:\n</p>\n\n<ul style=\"text-align: right; direction: rtl; float: right; clear: both;\">\n <li>$(3, 4, 5)$, כיוון ש־$9 + 16 = 25$.<br>\n 9 הוא 3 בריבוע, 16 הוא 4 בריבוע ו־25 הוא 5 בריבוע.\n </li>\n <li>$(5, 12, 13)$, כיוון ש־$25 + 144 = 169$.</li>\n <li>$(8, 15, 17)$, כיוון ש־$64 + 225 = 289$.</li>\n</ul>\n\n<p style=\"text-align: right; direction: rtl; float: right; clear: both;\">\n ננסה למצוא את כל השלשות הפיתגוריות מתחת ל־10,000 בעזרת קוד שרץ על כל השלשות האפשריות:\n</p>",
"def find_pythagorean_triples(upper_bound=10_000):\n pythagorean_triples = []\n for c in range(3, upper_bound):\n for b in range(2, c):\n for a in range(1, b):\n if a ** 2 + b **2 == c ** 2:\n pythagorean_triples.append((a, b, c))\n return pythagorean_triples\n\n\nfor triple in find_pythagorean_triples():\n print(triple)",
"<div class=\"align-center\" style=\"display: flex; text-align: right; direction: rtl;\">\n <div style=\"display: flex; width: 10%; float: right; \">\n <img src=\"images/warning.png\" style=\"height: 50px !important;\" alt=\"אזהרה!\"> \n </div>\n <div style=\"width: 90%\">\n <p style=\"text-align: right; direction: rtl;\">\n הרצת התא הקודם תתקע את המחברת (חישוב התוצאה יימשך זמן רב).<br>\n כדי להיות מסוגלים להריץ את התאים הבאים, לחצו <samp>00</samp> לאחר הרצת התא, ובחרו <em>Restart</em>.<br>\n אל דאגה – האתחול יתבצע אך ורק עבור המחברת, ולא עבור מחשב.\n </p>\n </div>\n</div>\n\n<p style=\"text-align: right; direction: rtl; float: right; clear: both;\">\n יו, כמה זמן נמשכת הרצת הקוד הזה... 😴<br>\n הלוואי שעד שהקוד הזה היה מסיים היינו מקבלים לפחות <em>חלק</em> מהתוצאות!<br>\n נפנה ל־generator־ים לעזרה:\n</p>",
"def find_pythagorean_triples(upper_bound=10_000):\n for c in range(3, upper_bound):\n for b in range(2, c):\n for a in range(1, b):\n if a ** 2 + b **2 == c ** 2:\n yield a, b, c\n\n\nfor triple in find_pythagorean_triples():\n print(triple)",
"<p style=\"text-align: right; direction: rtl; float: right; clear: both;\">\n איך זה קרה? קיבלנו את התשובה בתוך שבריר שנייה!<br>\n ובכן, זה לא מדויק – קיבלנו חלק מהתשובות. שימו לב שהקוד ממשיך להדפיס :)<br>\n</p>\n\n<p style=\"text-align: right; direction: rtl; float: right; clear: both;\">\n להזכירכם, ה־generator שולח את התוצאה החוצה מייד כשהוא מוצא שלשה אחת,<br>\n וה־for מקבל מה־generator iterable כל שלשה ברגע שהיא נמצאה.<br>\n ברגע שה־for מקבל שלשה, הוא מבצע את גוף הלולאה עבור אותה שלשה, ורק אז מבקש מ־generator את האיבר הבא.<br>\n בגלל האופי של generators, הקוד בתא האחרון מדפיס לנו כל שלשה ברגע שהוא מצא אותה, ולא מחכה עד שיימצאו כל השלשות.\n</p>\n\n<span style=\"text-align: right; direction: rtl; float: right; clear: both;\">תרגול ביניים: מספרים פראיים</span>\n<p style=\"text-align: right; direction: rtl; float: right; clear: both;\">\n \"פירוק לגורמים של מספר שלם\" היא בעיה שחישוב פתרונה נמשך זמן רב במחשבים מודרניים.\n</p>\n\n<p style=\"text-align: right; direction: rtl; float: right; clear: both;\">\n עליכם לכתוב פונקציה שמקבלת מספר חיובי שלם $n$, ומחזירה קבוצת מספרים שמכפלתם (תוצאת הכפל ביניהם) היא $n$.<br>\n לדוגמה, המספר 1,386 בנוי מהמכפלה של קבוצת המספרים $2 \\cdot 3 \\cdot 3 \\cdot 7 \\cdot 11$.<br>\n כל מספר בקבוצת המספרים הזו חייב להיות ראשוני.<br>\n להזכירכם: מספר ראשוני הוא מספר שאין לו מחלקים חוץ מעצמו ומ־1.\n</p>\n\n<p style=\"text-align: right; direction: rtl; float: right; clear: both;\">\n הניחו שהמספר שהתקבל אינו ראשוני.<br>\n מה היתרון של generator על פני פונקציה רגילה שעושה אותו דבר?\n</p>\n\n<p style=\"text-align: right; direction: rtl; float: right; clear: both;\"> \n רמז: <span style=\"background: black;\">אם תנסו לחלק את המספר ב־2, ואז ב־3 (וכן הלאה), בסופו של דבר תגיעו למחלק ראשוני של המספר.</span><br>\n רמז עבה: <span style=\"background: black;\">בכל פעם שמצאתם מחלק אחד למספר, חלקו את המספר במחלק, והתחילו את החיפוש מחדש. מתי עליכם לעצור?</span>\n</p>\n\n<span style=\"text-align: right; direction: rtl; float: right; clear: both;\">אוספים אין־סופיים</span>\n<p style=\"text-align: right; direction: rtl; float: right; clear: both;\">\n עבור בעיות מסוימות, נרצה להיות מסוגלים להחזיר אין־סוף תוצאות.<br>\n ניקח כדוגמה לסדרה אין־סופית את סדרת פיבונאצ'י, שבה כל איבר הוא סכום זוג האיברים הקודמים לו:<br>\n $1, 1, 2, 3, 5, 8, \\ldots$\n</p>\n\n<p style=\"text-align: right; direction: rtl; float: right; clear: both;\">\n נממש פונקציה שמחזירה לנו את סדרת פיבונאצ'י.<br>\n בפונקציה רגילה אין לנו אפשרות להחזיר מספר אין־סופי של איברים, ולכן נצטרך להחליט על מספר האיברים המרבי שנרצה להחזיר:\n</p>",
"def fibonacci(max_items):\n a = 1\n b = 1\n numbers = [1, 1]\n while len(numbers) < max_items:\n a, b = b, a + b # Unpacking\n numbers.append(b)\n return numbers\n\n\nfor number in fibonacci(10):\n print(number)",
"<p style=\"text-align: right; direction: rtl; float: right; clear: both;\">\n לעומת זאת, ל־generators לא חייב להיות סוף מוגדר.<br>\n נשתמש ב־<code>while True</code> שתמיד מתקיים, כדי שבסופו של דבר – תמיד נגיע ל־<code>yield</code>: \n</p>",
"def fibonacci():\n a = 1\n b = 1\n numbers = [1, 1]\n while True: # תמיד מתקיים\n yield a\n a, b = b, a + b\n\n \ngenerator_iterator = fibonacci()\nfor number in range(10):\n print(next(generator_iterator))\n\n# אני יכול לבקש בקלות רבה את 10 האיברים הבאים בסדרה\nfor number in range(10):\n print(next(generator_iterator))",
"<div class=\"align-center\" style=\"display: flex; text-align: right; direction: rtl;\">\n <div style=\"display: flex; width: 10%; float: right; \">\n <img src=\"images/warning.png\" style=\"height: 50px !important;\" alt=\"אזהרה!\"> \n </div>\n <div style=\"width: 90%\">\n <p style=\"text-align: right; direction: rtl;\">\n generators אין־סופיים יכולים לגרום בקלות ללולאות אין־סופיות, גם בלולאות <code>for</code>.<br>\n שימו לב לצורת ההתעסקות העדינה בדוגמאות למעלה.<br>\n הרצת לולאת <code>for</code> ישירות על ה־generator iterator הייתה מכניסה אותנו ללולאה אין־סופית.\n </p>\n </div>\n</div>\n\n<div class=\"align-center\" style=\"display: flex; text-align: right; direction: rtl; clear: both;\">\n <div style=\"display: flex; width: 10%; float: right; clear: both;\">\n <img src=\"images/exercise.svg\" style=\"height: 50px !important;\" alt=\"תרגול\"> \n </div>\n <div style=\"width: 70%\">\n <p style=\"text-align: right; direction: rtl; float: right; clear: both;\">\n כתבו generator שמחזיר את כל המספרים השלמים הגדולים מ־0.\n </p>\n </div>\n <div style=\"display: flex; width: 20%; border-right: 0.1rem solid #A5A5A5; padding: 1rem 2rem;\">\n <p style=\"text-align: center; direction: rtl; justify-content: center; align-items: center; clear: both;\">\n <strong>חשוב!</strong><br>\n פתרו לפני שתמשיכו!\n </p>\n </div>\n</div>\n\n<span style=\"text-align: right; direction: rtl; float: right; clear: both;\">ריבוי generator iterators</span>\n<p style=\"text-align: right; direction: rtl; float: right; clear: both;\">\n נגדיר generator פשוט שמחזיר את האיברים <samp>1</samp>, <samp>2</samp> ו־<samp>3</samp>:\n</p>",
"def simple_generator():\n yield 1\n yield 2\n yield 3",
"<p style=\"text-align: right; direction: rtl; float: right; clear: both;\">\n ניצור שני generator iterators (\"סמנים\") שונים שמצביעים לשורה הראשונה של ה־generator שמופיע למעלה:\n</p>",
"first_gen = simple_generator()\nsecond_gen = simple_generator()",
"<p style=\"text-align: right; direction: rtl; float: right; clear: both;\">\n בעניין זה, חשוב להבין שכל אחד מה־generator iterators הוא \"חץ\" נפרד שמצביע לשורה הראשונה ב־<var>simple_generator</var>.<br>\n אם נבקש מכל אחד מהם להחזיר ערך, נקבל משניהם את 1, ואותו חץ דמיוני יעבור בשני ה־generator iterators להמתין בשורה השנייה:\n</p>",
"print(next(first_gen))\nprint(next(second_gen))",
"<p style=\"text-align: right; direction: rtl; float: right; clear: both;\">\n נוכל לקדם את <var>first_gen</var>, לדוגמה, לסוף הפונקציה:\n</p>",
"print(next(first_gen))\nprint(next(first_gen))",
"<p style=\"text-align: right; direction: rtl; float: right; clear: both;\">\n אבל <var>second_gen</var> הוא חץ נפרד, שעדיין מצביע לשורה השנייה של פונקציית ה־generator.<br>\n אם נבקש ממנו את הערך הבא, הוא ימשיך את המסע מהערך <samp>2</samp>:<br>\n</p>",
"print(next(second_gen))",
"<p style=\"text-align: right; direction: rtl; float: right; clear: both;\">\n ממצב זה נוכל להסיק שאפשר ליצור יותר מ־generator iterator אחד עבור כל פונקציית generator.<br>\n כל אחד מה־generator iterators יחזיק תמונת מצב עצמאית של המקום שבו עצר הסמן ושל ערכי המשתנים.<br>\n ההתנהלות של כל generator iterator תקרה בנפרד, ולא תושפע בשום צורה מה־generator iterators האחרים.<br>\n</p>\n\n<span style=\"text-align: right; direction: rtl; float: right; clear: both;\">הבדלי מינוח</span>\n<p style=\"text-align: right; direction: rtl; float: right; clear: both;\">\n בשלב זה יש לנו הרבה מבנים שאפשר לרוץ עליהם, וכל המינוח סביב עניין האיטרביליות נעשה מעט מבלבל.<br>\n ננסה לעשות סדר בדברים:\n</p>\n\n<dl style=\"text-align: right; direction: rtl; float: right; clear: both;\">\n<dt>Iterable</dt><dd>\n אם ערך מסוים הוא iterable, אפשר לפרק אותו ליחידות קטנות יותר, ולהתייחס לכל יחידה בנפרד.\n</dd>\n<dt>Iteration, חִזְרוּר</dt><dd>ביצוע יחיד של גוף הלולאה עבור ערך מסוים.</dd>\n<dt>Iterator</dt><dd>\n ערך שמייצג זרם של מידע, ומתוכו מאחזרים ערכים אחרים. אפשר לאחזר ממנו ערך אחד בכל פעם, לפי סדר מסוים, בעזרת <code dir=\"ltr\">next()</code>.<br>\n iterator הוא בהכרח iterable, אך לא כל iterable הוא iterator.\n</dd>\n<dt>Sequence</dt>\n <dd>\n כל iterable שאפשר לחלץ ממנו איברים באמצעות פנייה למיקום שלהם (<code>iterable[0]</code>), כמו מחרוזות, רשימות ו־tuple־ים.<br>\n sequence הוא בהכרח iterable, אך לא כל iterable הוא sequence.\n </dd>\n<dt>פונקציית ה־generator</dt><dd>פונקציה המכילה <code>yield</code> ומגדירה אילו ערכים יוחזרו מה־generator.</dd>\n<dt>Generator iterator</dt>\n <dd>\n iterator שנוצר מתוך פונקציית ה־generator.\n </dd>\n<dt>Generator</dt><dd>לרוב מתייחס לפונקציית ה־generator, אך יש פעמים שמשתמשים במינוח כדי להתייחס ל־generator iterator.</dd>\n</dl>\n\n<span style=\"text-align: right; direction: rtl; float: right; clear: both;\">סיכום</span>\n<p style=\"text-align: right; direction: rtl; float: right; clear: both;\">\n Generators הם פונקציות שמאפשרות לנו להחזיר סדרות ערכים באופן מדורג.<br>\n כשנקרא לפונקציית generator, היא תחזיר לנו generator iterator שישמש מעין \"סמן\".<br>\n הסמן ישמור \"מצב\" שמתאר את המקום שבו אנחנו שוהים בתוך הפונקציה, ואת הערכים שחושבו במהלך ריצתה עד כה.<br>\n בכל שלב, נוכל לבקש את הערך הבא ב־generator בעזרת קריאה לפונקציה <code>next</code> על ה־generator iterator.<br>\n נוכל גם להשתמש במבנים שיזיזו עבורנו את הסמן, כמו <code>for</code> או המרה לסוג נתונים אחר שגם הוא iterable.\n</p>\n\n<p style=\"text-align: right; direction: rtl; float: right; clear: both;\">\n ל־generators יתרונות רבים:\n</p>\n\n<ul style=\"text-align: right; direction: rtl; float: right; clear: both;\">\n <li>אפשר ליצור בעזרתם פונקציות שמחזירות מספר אין־סופי של נתונים.</li>\n <li>במקרים מסוימים, נוכל להיעזר בהם כדי לקבל רק חלק מהתוצאות בכל זמן נתון.</li>\n <li>שימוש נכון בהם יכול להיות מפתח לחיסכון משמעותי במשאבי התוכנית.</li>\n</ul>\n\n<span style=\"text-align: right; direction: rtl; float: right; clear: both;\">תרגילים</span>\n<span style=\"text-align: right; direction: rtl; float: right; clear: both;\">פיצוץ אוכלוסין</span>\n<p style=\"text-align: right; direction: rtl; float: right; clear: both;\">\n קראו בוויקיפדיה על דרך החישוב של <a href=\"https://he.wikipedia.org/wiki/%D7%A1%D7%A4%D7%A8%D7%AA_%D7%91%D7%99%D7%A7%D7%95%D7%A8%D7%AA#%D7%A1%D7%A4%D7%A8%D7%AA_%D7%91%D7%99%D7%A7%D7%95%D7%A8%D7%AA_%D7%91%D7%9E%D7%A1%D7%A4%D7%A8_%D7%94%D7%96%D7%94%D7%95%D7%AA_%D7%91%D7%99%D7%A9%D7%A8%D7%90%D7%9C\">ספרת הביקורת</a> במספרי הזהות בישראל. 
\n</p>\n\n<ol style=\"text-align: right; direction: rtl; float: right; clear: both;\">\n <li>ממשו פונקציה שמקבלת מספר זהות ללא ספרת ביקורת, ומחזירה את ספרת הביקורת.</li>\n <li>ממשו תוכנית המדפיסה את כל מספרי הזהות האפשריים במדינת ישראל. השתמשו בקוד שכתבתם בסעיף הראשון.</li>\n</ol>\n\n<span style=\"text-align: right; direction: rtl; float: right; clear: both;\">מנה מושלמת לחלוקה</span>\n<p style=\"text-align: right; direction: rtl; float: right; clear: both;\">\n אפשר לחלק רול סושי של 6 יחידות לאדם אחד, ל־2 אנשים, ל־3 אנשים ול־6 אנשים.<br>\n נתעלם ממצבים שבהם כל אדם מקבל רק חתיכת סושי אחת. זה נשמע לי עצוב.<br>\n נגדיר \"מנה מושלמת לחלוקה\" כמנה שאם נסכום את כל הצורות לחלק אותה, נקבל את גודל המנה עצמה.\n</p>\n\n<p style=\"text-align: right; direction: rtl; float: right; clear: both;\">\n לדוגמה:\n</p>\n\n<ul style=\"text-align: right; direction: rtl; float: right; clear: both;\">\n <li>רול סושי בעל 6 יחידות הוא מנה מושלמת לחלוקה, כיוון שאפשר לחלק אותו לאדם 1, ל־2 אנשים או ל־3 אנשים. $1+2+3=6$.</li>\n <li>רול סושי בעל 8 יחידות הוא לא מנה מושלמת לחלוקה, כי אפשר לחלק אותו לאדם 1, ל־2 אנשים או ל־4 אנשים. $1+2+4 \\neq 8$.</li>\n <li>רול בעל 12 יחידות גם הוא לא מנה מושלמת לחלוקה – $1 + 2 + 3 + 4 + 6 \\neq 12$.</li>\n <li>רול בעל 28 יחידות הוא בהחלט מנה מושלמת לחלוקה – $1 + 2 + 4 + 7 + 14 = 28$.</li>\n</ul>\n\n<p style=\"text-align: right; direction: rtl; float: right; clear: both;\">\n כתבו תוכנית שמדפיסה באופן אין־סופי את כל גודלי המנות שנחשבים מושלמים לחלוקה.\n</p>\n\n<span style=\"text-align: right; direction: rtl; float: right; clear: both;\">לחששנית</span>\n<p style=\"text-align: right; direction: rtl; float: right; clear: both;\">\n בקובץ resources/logo.jpg מופיע לוגו הקורס, ובתוכו מוכמנים מסרים סודיים אחדים.<br>\n המסרים הם מחרוזות באורך 5 אותיות לפחות, כתובים באותיות אנגליות קטנות בלבד ומסתיימים בסימן קריאה.\n</p>\n\n<p style=\"text-align: right; direction: rtl; float: right; clear: both;\">\n פתחו את הלוגו לקריאה בתצורה בינארית, וחלצו ממנו את המסרים הסודיים.<br>\n זכרו שהקובץ עלול להיות גדול מאוד, ועדיף שלא לקרוא את כולו במכה אחת.<br>\n מצאו באינטרנט עזרה בנוגע לפתיחת קבצים בצורה בינארית ולקריאה מדורגת של הקובץ.<br>\n הקפידו שלא להשתמש בטכניקות שלא למדנו (או להוסיף אותן רק בנוסף לפתרון שכזה).\n</p>"
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
msanterre/deep_learning | sentiment-network/Sentiment_Classification_Projects.ipynb | mit | [
"Sentiment Classification & How To \"Frame Problems\" for a Neural Network\nby Andrew Trask\n\nTwitter: @iamtrask\nBlog: http://iamtrask.github.io\n\nWhat You Should Already Know\n\nneural networks, forward and back-propagation\nstochastic gradient descent\nmean squared error\nand train/test splits\n\nWhere to Get Help if You Need it\n\nRe-watch previous Udacity Lectures\nLeverage the recommended Course Reading Material - Grokking Deep Learning (Check inside your classroom for a discount code)\nShoot me a tweet @iamtrask\n\nTutorial Outline:\n\n\nIntro: The Importance of \"Framing a Problem\" (this lesson)\n\n\nCurate a Dataset\n\nDeveloping a \"Predictive Theory\"\n\nPROJECT 1: Quick Theory Validation\n\n\nTransforming Text to Numbers\n\n\nPROJECT 2: Creating the Input/Output Data\n\n\nPutting it all together in a Neural Network (video only - nothing in notebook)\n\n\nPROJECT 3: Building our Neural Network\n\n\nUnderstanding Neural Noise\n\n\nPROJECT 4: Making Learning Faster by Reducing Noise\n\n\nAnalyzing Inefficiencies in our Network\n\n\nPROJECT 5: Making our Network Train and Run Faster\n\n\nFurther Noise Reduction\n\n\nPROJECT 6: Reducing Noise by Strategically Reducing the Vocabulary\n\n\nAnalysis: What's going on in the weights?\n\n\nLesson: Curate a Dataset<a id='lesson_1'></a>\nThe cells from here until Project 1 include code Andrew shows in the videos leading up to mini project 1. We've included them so you can run the code along with the videos without having to type in everything.",
"def pretty_print_review_and_label(i):\n print(labels[i] + \"\\t:\\t\" + reviews[i][:80] + \"...\")\n\ng = open('reviews.txt','r') # What we know!\nreviews = list(map(lambda x:x[:-1],g.readlines()))\ng.close()\n\ng = open('labels.txt','r') # What we WANT to know!\nlabels = list(map(lambda x:x[:-1].upper(),g.readlines()))\ng.close()",
"Note: The data in reviews.txt we're using has already been preprocessed a bit and contains only lower case characters. If we were working from raw data, where we didn't know it was all lower case, we would want to add a step here to convert it. That's so we treat different variations of the same word, like The, the, and THE, all the same way.",
"len(reviews)\n\nreviews[0]\n\nlabels[0]",
"Lesson: Develop a Predictive Theory<a id='lesson_2'></a>",
"print(\"labels.txt \\t : \\t reviews.txt\\n\")\npretty_print_review_and_label(2137)\npretty_print_review_and_label(12816)\npretty_print_review_and_label(6267)\npretty_print_review_and_label(21934)\npretty_print_review_and_label(5297)\npretty_print_review_and_label(4998)",
"Project 1: Quick Theory Validation<a id='project_1'></a>\nThere are multiple ways to implement these projects, but in order to get your code closer to what Andrew shows in his solutions, we've provided some hints and starter code throughout this notebook.\nYou'll find the Counter class to be useful in this exercise, as well as the numpy library.",
"from collections import Counter\nimport numpy as np",
"We'll create three Counter objects, one for words from postive reviews, one for words from negative reviews, and one for all the words.",
"# Create three Counter objects to store positive, negative and total counts\npositive_counts = Counter()\nnegative_counts = Counter()\ntotal_counts = Counter()",
"TODO: Examine all the reviews. For each word in a positive review, increase the count for that word in both your positive counter and the total words counter; likewise, for each word in a negative review, increase the count for that word in both your negative counter and the total words counter.\nNote: Throughout these projects, you should use split(' ') to divide a piece of text (such as a review) into individual words. If you use split() instead, you'll get slightly different results than what the videos and solutions show.",
"# TODO: Loop over all the words in all the reviews and increment the counts in the appropriate counter objects",
"Run the following two cells to list the words used in positive reviews and negative reviews, respectively, ordered from most to least commonly used.",
"# Examine the counts of the most common words in positive reviews\npositive_counts.most_common()\n\n# Examine the counts of the most common words in negative reviews\nnegative_counts.most_common()",
"As you can see, common words like \"the\" appear very often in both positive and negative reviews. Instead of finding the most common words in positive or negative reviews, what you really want are the words found in positive reviews more often than in negative reviews, and vice versa. To accomplish this, you'll need to calculate the ratios of word usage between positive and negative reviews.\nTODO: Check all the words you've seen and calculate the ratio of postive to negative uses and store that ratio in pos_neg_ratios. \n\nHint: the positive-to-negative ratio for a given word can be calculated with positive_counts[word] / float(negative_counts[word]+1). Notice the +1 in the denominator – that ensures we don't divide by zero for words that are only seen in positive reviews.",
"# Create Counter object to store positive/negative ratios\npos_neg_ratios = Counter()\n\n# TODO: Calculate the ratios of positive and negative uses of the most common words\n# Consider words to be \"common\" if they've been used at least 100 times",
"Examine the ratios you've calculated for a few words:",
"print(\"Pos-to-neg ratio for 'the' = {}\".format(pos_neg_ratios[\"the\"]))\nprint(\"Pos-to-neg ratio for 'amazing' = {}\".format(pos_neg_ratios[\"amazing\"]))\nprint(\"Pos-to-neg ratio for 'terrible' = {}\".format(pos_neg_ratios[\"terrible\"]))",
"Looking closely at the values you just calculated, we see the following:\n\nWords that you would expect to see more often in positive reviews – like \"amazing\" – have a ratio greater than 1. The more skewed a word is toward postive, the farther from 1 its positive-to-negative ratio will be.\nWords that you would expect to see more often in negative reviews – like \"terrible\" – have positive values that are less than 1. The more skewed a word is toward negative, the closer to zero its positive-to-negative ratio will be.\nNeutral words, which don't really convey any sentiment because you would expect to see them in all sorts of reviews – like \"the\" – have values very close to 1. A perfectly neutral word – one that was used in exactly the same number of positive reviews as negative reviews – would be almost exactly 1. The +1 we suggested you add to the denominator slightly biases words toward negative, but it won't matter because it will be a tiny bias and later we'll be ignoring words that are too close to neutral anyway.\n\nOk, the ratios tell us which words are used more often in postive or negative reviews, but the specific values we've calculated are a bit difficult to work with. A very positive word like \"amazing\" has a value above 4, whereas a very negative word like \"terrible\" has a value around 0.18. Those values aren't easy to compare for a couple of reasons:\n\nRight now, 1 is considered neutral, but the absolute value of the postive-to-negative rations of very postive words is larger than the absolute value of the ratios for the very negative words. So there is no way to directly compare two numbers and see if one word conveys the same magnitude of positive sentiment as another word conveys negative sentiment. So we should center all the values around netural so the absolute value fro neutral of the postive-to-negative ratio for a word would indicate how much sentiment (positive or negative) that word conveys.\nWhen comparing absolute values it's easier to do that around zero than one. \n\nTo fix these issues, we'll convert all of our ratios to new values using logarithms.\nTODO: Go through all the ratios you calculated and convert them to logarithms. (i.e. use np.log(ratio))\nIn the end, extremely positive and extremely negative words will have positive-to-negative ratios with similar magnitudes but opposite signs.",
"# TODO: Convert ratios to logs",
"Examine the new ratios you've calculated for the same words from before:",
"print(\"Pos-to-neg ratio for 'the' = {}\".format(pos_neg_ratios[\"the\"]))\nprint(\"Pos-to-neg ratio for 'amazing' = {}\".format(pos_neg_ratios[\"amazing\"]))\nprint(\"Pos-to-neg ratio for 'terrible' = {}\".format(pos_neg_ratios[\"terrible\"]))",
"If everything worked, now you should see neutral words with values close to zero. In this case, \"the\" is near zero but slightly positive, so it was probably used in more positive reviews than negative reviews. But look at \"amazing\"'s ratio - it's above 1, showing it is clearly a word with positive sentiment. And \"terrible\" has a similar score, but in the opposite direction, so it's below -1. It's now clear that both of these words are associated with specific, opposing sentiments.\nNow run the following cells to see more ratios. \nThe first cell displays all the words, ordered by how associated they are with postive reviews. (Your notebook will most likely truncate the output so you won't actually see all the words in the list.)\nThe second cell displays the 30 words most associated with negative reviews by reversing the order of the first list and then looking at the first 30 words. (If you want the second cell to display all the words, ordered by how associated they are with negative reviews, you could just write reversed(pos_neg_ratios.most_common()).)\nYou should continue to see values similar to the earlier ones we checked – neutral words will be close to 0, words will get more positive as their ratios approach and go above 1, and words will get more negative as their ratios approach and go below -1. That's why we decided to use the logs instead of the raw ratios.",
"# words most frequently seen in a review with a \"POSITIVE\" label\npos_neg_ratios.most_common()\n\n# words most frequently seen in a review with a \"NEGATIVE\" label\nlist(reversed(pos_neg_ratios.most_common()))[0:30]\n\n# Note: Above is the code Andrew uses in his solution video, \n# so we've included it here to avoid confusion.\n# If you explore the documentation for the Counter class, \n# you will see you could also find the 30 least common\n# words like this: pos_neg_ratios.most_common()[:-31:-1]",
"End of Project 1.\nWatch the next video to see Andrew's solution, then continue on to the next lesson.\nTransforming Text into Numbers<a id='lesson_3'></a>\nThe cells here include code Andrew shows in the next video. We've included it so you can run the code along with the video without having to type in everything.",
"from IPython.display import Image\n\nreview = \"This was a horrible, terrible movie.\"\n\nImage(filename='sentiment_network.png')\n\nreview = \"The movie was excellent\"\n\nImage(filename='sentiment_network_pos.png')",
"Project 2: Creating the Input/Output Data<a id='project_2'></a>\nTODO: Create a set named vocab that contains every word in the vocabulary.",
"# TODO: Create set named \"vocab\" containing all of the words from all of the reviews\nvocab = None",
"Run the following cell to check your vocabulary size. If everything worked correctly, it should print 74074",
"vocab_size = len(vocab)\nprint(vocab_size)",
"Take a look at the following image. It represents the layers of the neural network you'll be building throughout this notebook. layer_0 is the input layer, layer_1 is a hidden layer, and layer_2 is the output layer.",
"from IPython.display import Image\nImage(filename='sentiment_network_2.png')",
"TODO: Create a numpy array called layer_0 and initialize it to all zeros. You will find the zeros function particularly helpful here. Be sure you create layer_0 as a 2-dimensional matrix with 1 row and vocab_size columns.",
"# TODO: Create layer_0 matrix with dimensions 1 by vocab_size, initially filled with zeros\nlayer_0 = None",
"Run the following cell. It should display (1, 74074)",
"layer_0.shape\n\nfrom IPython.display import Image\nImage(filename='sentiment_network.png')",
"layer_0 contains one entry for every word in the vocabulary, as shown in the above image. We need to make sure we know the index of each word, so run the following cell to create a lookup table that stores the index of every word.",
"# Create a dictionary of words in the vocabulary mapped to index positions\n# (to be used in layer_0)\nword2index = {}\nfor i,word in enumerate(vocab):\n word2index[word] = i\n \n# display the map of words to indices\nword2index",
"TODO: Complete the implementation of update_input_layer. It should count \n how many times each word is used in the given review, and then store\n those counts at the appropriate indices inside layer_0.",
"def update_input_layer(review):\n \"\"\" Modify the global layer_0 to represent the vector form of review.\n The element at a given index of layer_0 should represent\n how many times the given word occurs in the review.\n Args:\n review(string) - the string of the review\n Returns:\n None\n \"\"\"\n global layer_0\n # clear out previous state by resetting the layer to be all 0s\n layer_0 *= 0\n \n # TODO: count how many times each word is used in the given review and store the results in layer_0 ",
"Run the following cell to test updating the input layer with the first review. The indices assigned may not be the same as in the solution, but hopefully you'll see some non-zero values in layer_0.",
"update_input_layer(reviews[0])\nlayer_0",
"TODO: Complete the implementation of get_target_for_labels. It should return 0 or 1, \n depending on whether the given label is NEGATIVE or POSITIVE, respectively.",
"def get_target_for_label(label):\n \"\"\"Convert a label to `0` or `1`.\n Args:\n label(string) - Either \"POSITIVE\" or \"NEGATIVE\".\n Returns:\n `0` or `1`.\n \"\"\"\n # TODO: Your code here",
"Run the following two cells. They should print out'POSITIVE' and 1, respectively.",
"labels[0]\n\nget_target_for_label(labels[0])",
"Run the following two cells. They should print out 'NEGATIVE' and 0, respectively.",
"labels[1]\n\nget_target_for_label(labels[1])",
"End of Project 2.\nWatch the next video to see Andrew's solution, then continue on to the next lesson.\nProject 3: Building a Neural Network<a id='project_3'></a>\nTODO: We've included the framework of a class called SentimentNetork. Implement all of the items marked TODO in the code. These include doing the following:\n- Create a basic neural network much like the networks you've seen in earlier lessons and in Project 1, with an input layer, a hidden layer, and an output layer. \n- Do not add a non-linearity in the hidden layer. That is, do not use an activation function when calculating the hidden layer outputs.\n- Re-use the code from earlier in this notebook to create the training data (see TODOs in the code)\n- Implement the pre_process_data function to create the vocabulary for our training data generating functions\n- Ensure train trains over the entire corpus\nWhere to Get Help if You Need it\n\nRe-watch earlier Udacity lectures\nChapters 3-5 - Grokking Deep Learning - (Check inside your classroom for a discount code)",
"import time\nimport sys\nimport numpy as np\n\n# Encapsulate our neural network in a class\nclass SentimentNetwork:\n def __init__(self, reviews, labels, hidden_nodes = 10, learning_rate = 0.1):\n \"\"\"Create a SentimenNetwork with the given settings\n Args:\n reviews(list) - List of reviews used for training\n labels(list) - List of POSITIVE/NEGATIVE labels associated with the given reviews\n hidden_nodes(int) - Number of nodes to create in the hidden layer\n learning_rate(float) - Learning rate to use while training\n \n \"\"\"\n # Assign a seed to our random number generator to ensure we get\n # reproducable results during development \n np.random.seed(1)\n\n # process the reviews and their associated labels so that everything\n # is ready for training\n self.pre_process_data(reviews, labels)\n \n # Build the network to have the number of hidden nodes and the learning rate that\n # were passed into this initializer. Make the same number of input nodes as\n # there are vocabulary words and create a single output node.\n self.init_network(len(self.review_vocab),hidden_nodes, 1, learning_rate)\n\n def pre_process_data(self, reviews, labels):\n \n review_vocab = set()\n # TODO: populate review_vocab with all of the words in the given reviews\n # Remember to split reviews into individual words \n # using \"split(' ')\" instead of \"split()\".\n \n # Convert the vocabulary set to a list so we can access words via indices\n self.review_vocab = list(review_vocab)\n \n label_vocab = set()\n # TODO: populate label_vocab with all of the words in the given labels.\n # There is no need to split the labels because each one is a single word.\n \n # Convert the label vocabulary set to a list so we can access labels via indices\n self.label_vocab = list(label_vocab)\n \n # Store the sizes of the review and label vocabularies.\n self.review_vocab_size = len(self.review_vocab)\n self.label_vocab_size = len(self.label_vocab)\n \n # Create a dictionary of words in the vocabulary mapped to index positions\n self.word2index = {}\n # TODO: populate self.word2index with indices for all the words in self.review_vocab\n # like you saw earlier in the notebook\n \n # Create a dictionary of labels mapped to index positions\n self.label2index = {}\n # TODO: do the same thing you did for self.word2index and self.review_vocab, \n # but for self.label2index and self.label_vocab instead\n \n \n def init_network(self, input_nodes, hidden_nodes, output_nodes, learning_rate):\n # Store the number of nodes in input, hidden, and output layers.\n self.input_nodes = input_nodes\n self.hidden_nodes = hidden_nodes\n self.output_nodes = output_nodes\n\n # Store the learning rate\n self.learning_rate = learning_rate\n\n # Initialize weights\n \n # TODO: initialize self.weights_0_1 as a matrix of zeros. These are the weights between\n # the input layer and the hidden layer.\n self.weights_0_1 = None\n \n # TODO: initialize self.weights_1_2 as a matrix of random values. \n # These are the weights between the hidden layer and the output layer.\n self.weights_1_2 = None\n \n # TODO: Create the input layer, a two-dimensional matrix with shape \n # 1 x input_nodes, with all values initialized to zero\n self.layer_0 = np.zeros((1,input_nodes))\n \n \n def update_input_layer(self,review):\n # TODO: You can copy most of the code you wrote for update_input_layer \n # earlier in this notebook. 
\n #\n # However, MAKE SURE YOU CHANGE ALL VARIABLES TO REFERENCE\n # THE VERSIONS STORED IN THIS OBJECT, NOT THE GLOBAL OBJECTS.\n # For example, replace \"layer_0 *= 0\" with \"self.layer_0 *= 0\"\n pass\n \n def get_target_for_label(self,label):\n # TODO: Copy the code you wrote for get_target_for_label \n # earlier in this notebook. \n pass\n \n def sigmoid(self,x):\n # TODO: Return the result of calculating the sigmoid activation function\n # shown in the lectures\n pass\n \n def sigmoid_output_2_derivative(self,output):\n # TODO: Return the derivative of the sigmoid activation function, \n # where \"output\" is the original output from the sigmoid fucntion \n pass\n\n def train(self, training_reviews, training_labels):\n \n # make sure out we have a matching number of reviews and labels\n assert(len(training_reviews) == len(training_labels))\n \n # Keep track of correct predictions to display accuracy during training \n correct_so_far = 0\n \n # Remember when we started for printing time statistics\n start = time.time()\n\n # loop through all the given reviews and run a forward and backward pass,\n # updating weights for every item\n for i in range(len(training_reviews)):\n \n # TODO: Get the next review and its correct label\n \n # TODO: Implement the forward pass through the network. \n # That means use the given review to update the input layer, \n # then calculate values for the hidden layer,\n # and finally calculate the output layer.\n # \n # Do not use an activation function for the hidden layer,\n # but use the sigmoid activation function for the output layer.\n \n # TODO: Implement the back propagation pass here. \n # That means calculate the error for the forward pass's prediction\n # and update the weights in the network according to their\n # contributions toward the error, as calculated via the\n # gradient descent and back propagation algorithms you \n # learned in class.\n \n # TODO: Keep track of correct predictions. To determine if the prediction was\n # correct, check that the absolute value of the output error \n # is less than 0.5. If so, add one to the correct_so_far count.\n \n # For debug purposes, print out our prediction accuracy and speed \n # throughout the training process. \n\n elapsed_time = float(time.time() - start)\n reviews_per_second = i / elapsed_time if elapsed_time > 0 else 0\n \n sys.stdout.write(\"\\rProgress:\" + str(100 * i/float(len(training_reviews)))[:4] \\\n + \"% Speed(reviews/sec):\" + str(reviews_per_second)[0:5] \\\n + \" #Correct:\" + str(correct_so_far) + \" #Trained:\" + str(i+1) \\\n + \" Training Accuracy:\" + str(correct_so_far * 100 / float(i+1))[:4] + \"%\")\n if(i % 2500 == 0):\n print(\"\")\n \n def test(self, testing_reviews, testing_labels):\n \"\"\"\n Attempts to predict the labels for the given testing_reviews,\n and uses the test_labels to calculate the accuracy of those predictions.\n \"\"\"\n \n # keep track of how many correct predictions we make\n correct = 0\n\n # we'll time how many predictions per second we make\n start = time.time()\n\n # Loop through each of the given reviews and call run to predict\n # its label. \n for i in range(len(testing_reviews)):\n pred = self.run(testing_reviews[i])\n if(pred == testing_labels[i]):\n correct += 1\n \n # For debug purposes, print out our prediction accuracy and speed \n # throughout the prediction process. 
\n\n elapsed_time = float(time.time() - start)\n reviews_per_second = i / elapsed_time if elapsed_time > 0 else 0\n \n sys.stdout.write(\"\\rProgress:\" + str(100 * i/float(len(testing_reviews)))[:4] \\\n + \"% Speed(reviews/sec):\" + str(reviews_per_second)[0:5] \\\n + \" #Correct:\" + str(correct) + \" #Tested:\" + str(i+1) \\\n + \" Testing Accuracy:\" + str(correct * 100 / float(i+1))[:4] + \"%\")\n \n def run(self, review):\n \"\"\"\n Returns a POSITIVE or NEGATIVE prediction for the given review.\n \"\"\"\n # TODO: Run a forward pass through the network, like you did in the\n # \"train\" function. That means use the given review to \n # update the input layer, then calculate values for the hidden layer,\n # and finally calculate the output layer.\n #\n # Note: The review passed into this function for prediction \n # might come from anywhere, so you should convert it \n # to lower case prior to using it.\n \n # TODO: The output layer should now contain a prediction. \n # Return `POSITIVE` for predictions greater-than-or-equal-to `0.5`, \n # and `NEGATIVE` otherwise.\n pass\n",
"Run the following cell to create a SentimentNetwork that will train on all but the last 1000 reviews (we're saving those for testing). Here we use a learning rate of 0.1.",
"mlp = SentimentNetwork(reviews[:-1000],labels[:-1000], learning_rate=0.1)",
"Run the following cell to test the network's performance against the last 1000 reviews (the ones we held out from our training set). \nWe have not trained the model yet, so the results should be about 50% as it will just be guessing and there are only two possible values to choose from.",
"mlp.test(reviews[-1000:],labels[-1000:])",
"Run the following cell to actually train the network. During training, it will display the model's accuracy repeatedly as it trains so you can see how well it's doing.",
"mlp.train(reviews[:-1000],labels[:-1000])",
"That most likely didn't train very well. Part of the reason may be because the learning rate is too high. Run the following cell to recreate the network with a smaller learning rate, 0.01, and then train the new network.",
"mlp = SentimentNetwork(reviews[:-1000],labels[:-1000], learning_rate=0.01)\nmlp.train(reviews[:-1000],labels[:-1000])",
"That probably wasn't much different. Run the following cell to recreate the network one more time with an even smaller learning rate, 0.001, and then train the new network.",
"mlp = SentimentNetwork(reviews[:-1000],labels[:-1000], learning_rate=0.001)\nmlp.train(reviews[:-1000],labels[:-1000])",
"With a learning rate of 0.001, the network should finall have started to improve during training. It's still not very good, but it shows that this solution has potential. We will improve it in the next lesson.\nEnd of Project 3.\nWatch the next video to see Andrew's solution, then continue on to the next lesson.\nUnderstanding Neural Noise<a id='lesson_4'></a>\nThe following cells include includes the code Andrew shows in the next video. We've included it here so you can run the cells along with the video without having to type in everything.",
"from IPython.display import Image\nImage(filename='sentiment_network.png')\n\ndef update_input_layer(review):\n \n global layer_0\n \n # clear out previous state, reset the layer to be all 0s\n layer_0 *= 0\n for word in review.split(\" \"):\n layer_0[0][word2index[word]] += 1\n\nupdate_input_layer(reviews[0])\n\nlayer_0\n\nreview_counter = Counter()\n\nfor word in reviews[0].split(\" \"):\n review_counter[word] += 1\n\nreview_counter.most_common()",
"Project 4: Reducing Noise in Our Input Data<a id='project_4'></a>\nTODO: Attempt to reduce the noise in the input data like Andrew did in the previous video. Specifically, do the following:\n* Copy the SentimentNetwork class you created earlier into the following cell.\n* Modify update_input_layer so it does not count how many times each word is used, but rather just stores whether or not a word was used.",
"# TODO: -Copy the SentimentNetwork class from Projet 3 lesson\n# -Modify it to reduce noise, like in the video ",
"Run the following cell to recreate the network and train it. Notice we've gone back to the higher learning rate of 0.1.",
"mlp = SentimentNetwork(reviews[:-1000],labels[:-1000], learning_rate=0.1)\nmlp.train(reviews[:-1000],labels[:-1000])",
"That should have trained much better than the earlier attempts. It's still not wonderful, but it should have improved dramatically. Run the following cell to test your model with 1000 predictions.",
"mlp.test(reviews[-1000:],labels[-1000:])",
"End of Project 4.\nAndrew's solution was actually in the previous video, so rewatch that video if you had any problems with that project. Then continue on to the next lesson.\nAnalyzing Inefficiencies in our Network<a id='lesson_5'></a>\nThe following cells include the code Andrew shows in the next video. We've included it here so you can run the cells along with the video without having to type in everything.",
"Image(filename='sentiment_network_sparse.png')\n\nlayer_0 = np.zeros(10)\n\nlayer_0\n\nlayer_0[4] = 1\nlayer_0[9] = 1\n\nlayer_0\n\nweights_0_1 = np.random.randn(10,5)\n\nlayer_0.dot(weights_0_1)\n\nindices = [4,9]\n\nlayer_1 = np.zeros(5)\n\nfor index in indices:\n layer_1 += (1 * weights_0_1[index])\n\nlayer_1\n\nImage(filename='sentiment_network_sparse_2.png')\n\nlayer_1 = np.zeros(5)\n\nfor index in indices:\n layer_1 += (weights_0_1[index])\n\nlayer_1",
"Project 5: Making our Network More Efficient<a id='project_5'></a>\nTODO: Make the SentimentNetwork class more efficient by eliminating unnecessary multiplications and additions that occur during forward and backward propagation. To do that, you can do the following:\n* Copy the SentimentNetwork class from the previous project into the following cell.\n* Remove the update_input_layer function - you will not need it in this version.\n* Modify init_network:\n\n\nYou no longer need a separate input layer, so remove any mention of self.layer_0\nYou will be dealing with the old hidden layer more directly, so create self.layer_1, a two-dimensional matrix with shape 1 x hidden_nodes, with all values initialized to zero\nModify train:\nChange the name of the input parameter training_reviews to training_reviews_raw. This will help with the next step.\nAt the beginning of the function, you'll want to preprocess your reviews to convert them to a list of indices (from word2index) that are actually used in the review. This is equivalent to what you saw in the video when Andrew set specific indices to 1. Your code should create a local list variable named training_reviews that should contain a list for each review in training_reviews_raw. Those lists should contain the indices for words found in the review.\nRemove call to update_input_layer\nUse self's layer_1 instead of a local layer_1 object.\nIn the forward pass, replace the code that updates layer_1 with new logic that only adds the weights for the indices used in the review.\nWhen updating weights_0_1, only update the individual weights that were used in the forward pass.\nModify run:\nRemove call to update_input_layer \nUse self's layer_1 instead of a local layer_1 object.\nMuch like you did in train, you will need to pre-process the review so you can work with word indices, then update layer_1 by adding weights for the indices used in the review.",
"# TODO: -Copy the SentimentNetwork class from Project 4 lesson\n# -Modify it according to the above instructions ",
"Run the following cell to recreate the network and train it once again.",
"mlp = SentimentNetwork(reviews[:-1000],labels[:-1000], learning_rate=0.1)\nmlp.train(reviews[:-1000],labels[:-1000])",
"That should have trained much better than the earlier attempts. Run the following cell to test your model with 1000 predictions.",
"mlp.test(reviews[-1000:],labels[-1000:])",
"End of Project 5.\nWatch the next video to see Andrew's solution, then continue on to the next lesson.\nFurther Noise Reduction<a id='lesson_6'></a>",
"Image(filename='sentiment_network_sparse_2.png')\n\n# words most frequently seen in a review with a \"POSITIVE\" label\npos_neg_ratios.most_common()\n\n# words most frequently seen in a review with a \"NEGATIVE\" label\nlist(reversed(pos_neg_ratios.most_common()))[0:30]\n\nfrom bokeh.models import ColumnDataSource, LabelSet\nfrom bokeh.plotting import figure, show, output_file\nfrom bokeh.io import output_notebook\noutput_notebook()\n\nhist, edges = np.histogram(list(map(lambda x:x[1],pos_neg_ratios.most_common())), density=True, bins=100, normed=True)\n\np = figure(tools=\"pan,wheel_zoom,reset,save\",\n toolbar_location=\"above\",\n title=\"Word Positive/Negative Affinity Distribution\")\np.quad(top=hist, bottom=0, left=edges[:-1], right=edges[1:], line_color=\"#555555\")\nshow(p)\n\nfrequency_frequency = Counter()\n\nfor word, cnt in total_counts.most_common():\n frequency_frequency[cnt] += 1\n\nhist, edges = np.histogram(list(map(lambda x:x[1],frequency_frequency.most_common())), density=True, bins=100, normed=True)\n\np = figure(tools=\"pan,wheel_zoom,reset,save\",\n toolbar_location=\"above\",\n title=\"The frequency distribution of the words in our corpus\")\np.quad(top=hist, bottom=0, left=edges[:-1], right=edges[1:], line_color=\"#555555\")\nshow(p)",
"Project 6: Reducing Noise by Strategically Reducing the Vocabulary<a id='project_6'></a>\nTODO: Improve SentimentNetwork's performance by reducing more noise in the vocabulary. Specifically, do the following:\n* Copy the SentimentNetwork class from the previous project into the following cell.\n* Modify pre_process_data:\n\n\nAdd two additional parameters: min_count and polarity_cutoff\nCalculate the positive-to-negative ratios of words used in the reviews. (You can use code you've written elsewhere in the notebook, but we are moving it into the class like we did with other helper code earlier.)\nAndrew's solution only calculates a postive-to-negative ratio for words that occur at least 50 times. This keeps the network from attributing too much sentiment to rarer words. You can choose to add this to your solution if you would like. \nChange so words are only added to the vocabulary if they occur in the vocabulary more than min_count times.\nChange so words are only added to the vocabulary if the absolute value of their postive-to-negative ratio is at least polarity_cutoff\nModify __init__:\nAdd the same two parameters (min_count and polarity_cutoff) and use them when you call pre_process_data",
"# TODO: -Copy the SentimentNetwork class from Project 5 lesson\n# -Modify it according to the above instructions ",
"Run the following cell to train your network with a small polarity cutoff.",
"mlp = SentimentNetwork(reviews[:-1000],labels[:-1000],min_count=20,polarity_cutoff=0.05,learning_rate=0.01)\nmlp.train(reviews[:-1000],labels[:-1000])",
"And run the following cell to test it's performance. It should be",
"mlp.test(reviews[-1000:],labels[-1000:])",
"Run the following cell to train your network with a much larger polarity cutoff.",
"mlp = SentimentNetwork(reviews[:-1000],labels[:-1000],min_count=20,polarity_cutoff=0.8,learning_rate=0.01)\nmlp.train(reviews[:-1000],labels[:-1000])",
"And run the following cell to test it's performance.",
"mlp.test(reviews[-1000:],labels[-1000:])",
"End of Project 6.\nWatch the next video to see Andrew's solution, then continue on to the next lesson.\nAnalysis: What's Going on in the Weights?<a id='lesson_7'></a>",
"mlp_full = SentimentNetwork(reviews[:-1000],labels[:-1000],min_count=0,polarity_cutoff=0,learning_rate=0.01)\n\nmlp_full.train(reviews[:-1000],labels[:-1000])\n\nImage(filename='sentiment_network_sparse.png')\n\ndef get_most_similar_words(focus = \"horrible\"):\n most_similar = Counter()\n\n for word in mlp_full.word2index.keys():\n most_similar[word] = np.dot(mlp_full.weights_0_1[mlp_full.word2index[word]],mlp_full.weights_0_1[mlp_full.word2index[focus]])\n \n return most_similar.most_common()\n\nget_most_similar_words(\"excellent\")\n\nget_most_similar_words(\"terrible\")\n\nimport matplotlib.colors as colors\n\nwords_to_visualize = list()\nfor word, ratio in pos_neg_ratios.most_common(500):\n if(word in mlp_full.word2index.keys()):\n words_to_visualize.append(word)\n \nfor word, ratio in list(reversed(pos_neg_ratios.most_common()))[0:500]:\n if(word in mlp_full.word2index.keys()):\n words_to_visualize.append(word)\n\npos = 0\nneg = 0\n\ncolors_list = list()\nvectors_list = list()\nfor word in words_to_visualize:\n if word in pos_neg_ratios.keys():\n vectors_list.append(mlp_full.weights_0_1[mlp_full.word2index[word]])\n if(pos_neg_ratios[word] > 0):\n pos+=1\n colors_list.append(\"#00ff00\")\n else:\n neg+=1\n colors_list.append(\"#000000\")\n\nfrom sklearn.manifold import TSNE\ntsne = TSNE(n_components=2, random_state=0)\nwords_top_ted_tsne = tsne.fit_transform(vectors_list)\n\np = figure(tools=\"pan,wheel_zoom,reset,save\",\n toolbar_location=\"above\",\n title=\"vector T-SNE for most polarized words\")\n\nsource = ColumnDataSource(data=dict(x1=words_top_ted_tsne[:,0],\n x2=words_top_ted_tsne[:,1],\n names=words_to_visualize,\n color=colors_list))\n\np.scatter(x=\"x1\", y=\"x2\", size=8, source=source, fill_color=\"color\")\n\nword_labels = LabelSet(x=\"x1\", y=\"x2\", text=\"names\", y_offset=6,\n text_font_size=\"8pt\", text_color=\"#555555\",\n source=source, text_align='center')\np.add_layout(word_labels)\n\nshow(p)\n\n# green indicates positive words, black indicates negative words"
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
kitu2007/dl_class | dcgan-svhn/DCGAN_Exercises.ipynb | mit | [
"Deep Convolutional GANs\nIn this notebook, you'll build a GAN using convolutional layers in the generator and discriminator. This is called a Deep Convolutional GAN, or DCGAN for short. The DCGAN architecture was first explored last year and has seen impressive results in generating new images, you can read the original paper here.\nYou'll be training DCGAN on the Street View House Numbers (SVHN) dataset. These are color images of house numbers collected from Google street view. SVHN images are in color and much more variable than MNIST. \n\nSo, we'll need a deeper and more powerful network. This is accomplished through using convolutional layers in the discriminator and generator. It's also necessary to use batch normalization to get the convolutional networks to train. The only real changes compared to what you saw previously are in the generator and discriminator, otherwise the rest of the implementation is the same.",
"%matplotlib inline\n\nimport pickle as pkl\n\nimport matplotlib.pyplot as plt\nimport numpy as np\nfrom scipy.io import loadmat\nimport tensorflow as tf\n\n!mkdir data",
"Getting the data\nHere you can download the SVHN dataset. Run the cell above and it'll download to your machine.",
"from urllib.request import urlretrieve\nfrom os.path import isfile, isdir\nfrom tqdm import tqdm\n\ndata_dir = 'data/'\n\nif not isdir(data_dir):\n raise Exception(\"Data directory doesn't exist!\")\n\nclass DLProgress(tqdm):\n last_block = 0\n\n def hook(self, block_num=1, block_size=1, total_size=None):\n self.total = total_size\n self.update((block_num - self.last_block) * block_size)\n self.last_block = block_num\n\nif not isfile(data_dir + \"train_32x32.mat\"):\n with DLProgress(unit='B', unit_scale=True, miniters=1, desc='SVHN Training Set') as pbar:\n urlretrieve(\n 'http://ufldl.stanford.edu/housenumbers/train_32x32.mat',\n data_dir + 'train_32x32.mat',\n pbar.hook)\n\nif not isfile(data_dir + \"test_32x32.mat\"):\n with DLProgress(unit='B', unit_scale=True, miniters=1, desc='SVHN Training Set') as pbar:\n urlretrieve(\n 'http://ufldl.stanford.edu/housenumbers/test_32x32.mat',\n data_dir + 'test_32x32.mat',\n pbar.hook)",
"These SVHN files are .mat files typically used with Matlab. However, we can load them in with scipy.io.loadmat which we imported above.",
"trainset = loadmat(data_dir + 'train_32x32.mat')\ntestset = loadmat(data_dir + 'test_32x32.mat')",
"Here I'm showing a small sample of the images. Each of these is 32x32 with 3 color channels (RGB). These are the real images we'll pass to the discriminator and what the generator will eventually fake.",
"idx = np.random.randint(0, trainset['X'].shape[3], size=36)\nfig, axes = plt.subplots(6, 6, sharex=True, sharey=True, figsize=(5,5),)\nfor ii, ax in zip(idx, axes.flatten()):\n ax.imshow(trainset['X'][:,:,:,ii], aspect='equal')\n ax.xaxis.set_visible(False)\n ax.yaxis.set_visible(False)\nplt.subplots_adjust(wspace=0, hspace=0)",
"Here we need to do a bit of preprocessing and getting the images into a form where we can pass batches to the network. First off, we need to rescale the images to a range of -1 to 1, since the output of our generator is also in that range. We also have a set of test and validation images which could be used if we're trying to identify the numbers in the images.",
"def scale(x, feature_range=(-1, 1)):\n # scale to (0, 1)\n x = ((x - x.min())/(255 - x.min()))\n \n # scale to feature_range\n min, max = feature_range\n x = x * (max - min) + min\n return x\n\nclass Dataset:\n def __init__(self, train, test, val_frac=0.5, shuffle=False, scale_func=None):\n split_idx = int(len(test['y'])*(1 - val_frac))\n self.test_x, self.valid_x = test['X'][:,:,:,:split_idx], test['X'][:,:,:,split_idx:]\n self.test_y, self.valid_y = test['y'][:split_idx], test['y'][split_idx:]\n self.train_x, self.train_y = train['X'], train['y']\n \n self.train_x = np.rollaxis(self.train_x, 3)\n self.valid_x = np.rollaxis(self.valid_x, 3)\n self.test_x = np.rollaxis(self.test_x, 3)\n \n if scale_func is None:\n self.scaler = scale\n else:\n self.scaler = scale_func\n self.shuffle = shuffle\n \n def batches(self, batch_size):\n if self.shuffle:\n idx = np.arange(len(dataset.train_x))\n np.random.shuffle(idx)\n self.train_x = self.train_x[idx]\n self.train_y = self.train_y[idx]\n \n n_batches = len(self.train_y)//batch_size\n for ii in range(0, len(self.train_y), batch_size):\n x = self.train_x[ii:ii+batch_size]\n y = self.train_y[ii:ii+batch_size]\n \n yield self.scaler(x), y",
"Network Inputs\nHere, just creating some placeholders like normal.",
"def model_inputs(real_dim, z_dim):\n inputs_real = tf.placeholder(tf.float32, (None, *real_dim), name='input_real')\n inputs_z = tf.placeholder(tf.float32, (None, z_dim), name='input_z')\n \n return inputs_real, inputs_z",
"Generator\nHere you'll build the generator network. The input will be our noise vector z as before. Also as before, the output will be a $tanh$ output, but this time with size 32x32 which is the size of our SVHN images.\nWhat's new here is we'll use convolutional layers to create our new images. The first layer is a fully connected layer which is reshaped into a deep and narrow layer, something like 4x4x1024 as in the original DCGAN paper. Then we use batch normalization and a leaky ReLU activation. Next is a transposed convolution where typically you'd halve the depth and double the width and height of the previous layer. Again, we use batch normalization and leaky ReLU. For each of these layers, the general scheme is convolution > batch norm > leaky ReLU.\nYou keep stacking layers up like this until you get the final transposed convolution layer with shape 32x32x3. Below is the archicture used in the original DCGAN paper:\n\nNote that the final layer here is 64x64x3, while for our SVHN dataset, we only want it to be 32x32x3. \n\nExercise: Build the transposed convolutional network for the generator in the function below. Be sure to use leaky ReLUs on all the layers except for the last tanh layer, as well as batch normalization on all the transposed convolutional layers except the last one.",
"def generator(z, output_dim, reuse=False, alpha=0.2, training=True):\n with tf.variable_scope('generator', reuse=reuse):\n # First fully connected layer\n x\n \n # Output layer, 32x32x3\n logits = \n \n out = tf.tanh(logits)\n \n return out",
"Discriminator\nHere you'll build the discriminator. This is basically just a convolutional classifier like you've build before. The input to the discriminator are 32x32x3 tensors/images. You'll want a few convolutional layers, then a fully connected layer for the output. As before, we want a sigmoid output, and you'll need to return the logits as well. For the depths of the convolutional layers I suggest starting with 16, 32, 64 filters in the first layer, then double the depth as you add layers. Note that in the DCGAN paper, they did all the downsampling using only strided convolutional layers with no maxpool layers.\nYou'll also want to use batch normalization with tf.layers.batch_normalization on each layer except the first convolutional and output layers. Again, each layer should look something like convolution > batch norm > leaky ReLU.\nNote: in this project, your batch normalization layers will always use batch statistics. (That is, always set training to True.) That's because we are only interested in using the discriminator to help train the generator. However, if you wanted to use the discriminator for inference later, then you would need to set the training parameter appropriately.\n\nExercise: Build the convolutional network for the discriminator. The input is a 32x32x3 images, the output is a sigmoid plus the logits. Again, use Leaky ReLU activations and batch normalization on all the layers except the first.",
"def discriminator(x, reuse=False, alpha=0.2):\n with tf.variable_scope('discriminator', reuse=reuse):\n # Input layer is 32x32x3\n x =\n \n logits = \n out = \n \n return out, logits",
"Model Loss\nCalculating the loss like before, nothing new here.",
"def model_loss(input_real, input_z, output_dim, alpha=0.2):\n \"\"\"\n Get the loss for the discriminator and generator\n :param input_real: Images from the real dataset\n :param input_z: Z input\n :param out_channel_dim: The number of channels in the output image\n :return: A tuple of (discriminator loss, generator loss)\n \"\"\"\n g_model = generator(input_z, output_dim, alpha=alpha)\n d_model_real, d_logits_real = discriminator(input_real, alpha=alpha)\n d_model_fake, d_logits_fake = discriminator(g_model, reuse=True, alpha=alpha)\n\n d_loss_real = tf.reduce_mean(\n tf.nn.sigmoid_cross_entropy_with_logits(logits=d_logits_real, labels=tf.ones_like(d_model_real)))\n d_loss_fake = tf.reduce_mean(\n tf.nn.sigmoid_cross_entropy_with_logits(logits=d_logits_fake, labels=tf.zeros_like(d_model_fake)))\n g_loss = tf.reduce_mean(\n tf.nn.sigmoid_cross_entropy_with_logits(logits=d_logits_fake, labels=tf.ones_like(d_model_fake)))\n\n d_loss = d_loss_real + d_loss_fake\n\n return d_loss, g_loss",
"Optimizers\nNot much new here, but notice how the train operations are wrapped in a with tf.control_dependencies block so the batch normalization layers can update their population statistics.",
"def model_opt(d_loss, g_loss, learning_rate, beta1):\n \"\"\"\n Get optimization operations\n :param d_loss: Discriminator loss Tensor\n :param g_loss: Generator loss Tensor\n :param learning_rate: Learning Rate Placeholder\n :param beta1: The exponential decay rate for the 1st moment in the optimizer\n :return: A tuple of (discriminator training operation, generator training operation)\n \"\"\"\n # Get weights and bias to update\n t_vars = tf.trainable_variables()\n d_vars = [var for var in t_vars if var.name.startswith('discriminator')]\n g_vars = [var for var in t_vars if var.name.startswith('generator')]\n\n # Optimize\n with tf.control_dependencies(tf.get_collection(tf.GraphKeys.UPDATE_OPS)):\n d_train_opt = tf.train.AdamOptimizer(learning_rate, beta1=beta1).minimize(d_loss, var_list=d_vars)\n g_train_opt = tf.train.AdamOptimizer(learning_rate, beta1=beta1).minimize(g_loss, var_list=g_vars)\n\n return d_train_opt, g_train_opt",
"Building the model\nHere we can use the functions we defined about to build the model as a class. This will make it easier to move the network around in our code since the nodes and operations in the graph are packaged in one object.",
"class GAN:\n def __init__(self, real_size, z_size, learning_rate, alpha=0.2, beta1=0.5):\n tf.reset_default_graph()\n \n self.input_real, self.input_z = model_inputs(real_size, z_size)\n \n self.d_loss, self.g_loss = model_loss(self.input_real, self.input_z,\n real_size[2], alpha=0.2)\n \n self.d_opt, self.g_opt = model_opt(self.d_loss, self.g_loss, learning_rate, beta1)",
"Here is a function for displaying generated images.",
"def view_samples(epoch, samples, nrows, ncols, figsize=(5,5)):\n fig, axes = plt.subplots(figsize=figsize, nrows=nrows, ncols=ncols, \n sharey=True, sharex=True)\n for ax, img in zip(axes.flatten(), samples[epoch]):\n ax.axis('off')\n img = ((img - img.min())*255 / (img.max() - img.min())).astype(np.uint8)\n ax.set_adjustable('box-forced')\n im = ax.imshow(img, aspect='equal')\n \n plt.subplots_adjust(wspace=0, hspace=0)\n return fig, axes",
"And another function we can use to train our network. Notice when we call generator to create the samples to display, we set training to False. That's so the batch normalization layers will use the population statistics rather than the batch statistics. Also notice that we set the net.input_real placeholder when we run the generator's optimizer. The generator doesn't actually use it, but we'd get an errror without it because of the tf.control_dependencies block we created in model_opt.",
"def train(net, dataset, epochs, batch_size, print_every=10, show_every=100, figsize=(5,5)):\n saver = tf.train.Saver()\n sample_z = np.random.uniform(-1, 1, size=(72, z_size))\n\n samples, losses = [], []\n steps = 0\n\n with tf.Session() as sess:\n sess.run(tf.global_variables_initializer())\n for e in range(epochs):\n for x, y in dataset.batches(batch_size):\n steps += 1\n\n # Sample random noise for G\n batch_z = np.random.uniform(-1, 1, size=(batch_size, z_size))\n\n # Run optimizers\n _ = sess.run(net.d_opt, feed_dict={net.input_real: x, net.input_z: batch_z})\n _ = sess.run(net.g_opt, feed_dict={net.input_z: batch_z, net.input_real: x})\n\n if steps % print_every == 0:\n # At the end of each epoch, get the losses and print them out\n train_loss_d = net.d_loss.eval({net.input_z: batch_z, net.input_real: x})\n train_loss_g = net.g_loss.eval({net.input_z: batch_z})\n\n print(\"Epoch {}/{}...\".format(e+1, epochs),\n \"Discriminator Loss: {:.4f}...\".format(train_loss_d),\n \"Generator Loss: {:.4f}\".format(train_loss_g))\n # Save losses to view after training\n losses.append((train_loss_d, train_loss_g))\n\n if steps % show_every == 0:\n gen_samples = sess.run(\n generator(net.input_z, 3, reuse=True, training=False),\n feed_dict={net.input_z: sample_z})\n samples.append(gen_samples)\n _ = view_samples(-1, samples, 6, 12, figsize=figsize)\n plt.show()\n\n saver.save(sess, './checkpoints/generator.ckpt')\n\n with open('samples.pkl', 'wb') as f:\n pkl.dump(samples, f)\n \n return losses, samples",
"Hyperparameters\nGANs are very senstive to hyperparameters. A lot of experimentation goes into finding the best hyperparameters such that the generator and discriminator don't overpower each other. Try out your own hyperparameters or read the DCGAN paper to see what worked for them.\n\nExercise: Find hyperparameters to train this GAN. The values found in the DCGAN paper work well, or you can experiment on your own. In general, you want the discriminator loss to be around 0.3, this means it is correctly classifying images as fake or real about 50% of the time.",
"real_size = (32,32,3)\nz_size = 100\nlearning_rate = 0.001\nbatch_size = 64\nepochs = 1\nalpha = 0.01\nbeta1 = 0.9\n\n# Create the network\nnet = GAN(real_size, z_size, learning_rate, alpha=alpha, beta1=beta1)\n\n# Load the data and train the network here\ndataset = Dataset(trainset, testset)\nlosses, samples = train(net, dataset, epochs, batch_size, figsize=(10,5))\n\nfig, ax = plt.subplots()\nlosses = np.array(losses)\nplt.plot(losses.T[0], label='Discriminator', alpha=0.5)\nplt.plot(losses.T[1], label='Generator', alpha=0.5)\nplt.title(\"Training Losses\")\nplt.legend()\n\n_ = view_samples(-1, samples, 6, 12, figsize=(10,5))"
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
ES-DOC/esdoc-jupyterhub | notebooks/pcmdi/cmip6/models/sandbox-2/seaice.ipynb | gpl-3.0 | [
"ES-DOC CMIP6 Model Properties - Seaice\nMIP Era: CMIP6\nInstitute: PCMDI\nSource ID: SANDBOX-2\nTopic: Seaice\nSub-Topics: Dynamics, Thermodynamics, Radiative Processes. \nProperties: 80 (63 required)\nModel descriptions: Model description details\nInitialized From: -- \nNotebook Help: Goto notebook help page\nNotebook Initialised: 2018-02-15 16:54:36\nDocument Setup\nIMPORTANT: to be executed each time you run the notebook",
"# DO NOT EDIT ! \nfrom pyesdoc.ipython.model_topic import NotebookOutput \n\n# DO NOT EDIT ! \nDOC = NotebookOutput('cmip6', 'pcmdi', 'sandbox-2', 'seaice')",
"Document Authors\nSet document authors",
"# Set as follows: DOC.set_author(\"name\", \"email\") \n# TODO - please enter value(s)",
"Document Contributors\nSpecify document contributors",
"# Set as follows: DOC.set_contributor(\"name\", \"email\") \n# TODO - please enter value(s)",
"Document Publication\nSpecify document publication status",
"# Set publication status: \n# 0=do not publish, 1=publish. \nDOC.set_publication_status(0)",
"Document Table of Contents\n1. Key Properties --> Model\n2. Key Properties --> Variables\n3. Key Properties --> Seawater Properties\n4. Key Properties --> Resolution\n5. Key Properties --> Tuning Applied\n6. Key Properties --> Key Parameter Values\n7. Key Properties --> Assumptions\n8. Key Properties --> Conservation\n9. Grid --> Discretisation --> Horizontal\n10. Grid --> Discretisation --> Vertical\n11. Grid --> Seaice Categories\n12. Grid --> Snow On Seaice\n13. Dynamics\n14. Thermodynamics --> Energy\n15. Thermodynamics --> Mass\n16. Thermodynamics --> Salt\n17. Thermodynamics --> Salt --> Mass Transport\n18. Thermodynamics --> Salt --> Thermodynamics\n19. Thermodynamics --> Ice Thickness Distribution\n20. Thermodynamics --> Ice Floe Size Distribution\n21. Thermodynamics --> Melt Ponds\n22. Thermodynamics --> Snow Processes\n23. Radiative Processes \n1. Key Properties --> Model\nName of seaice model used.\n1.1. Model Overview\nIs Required: TRUE Type: STRING Cardinality: 1.1\nOverview of sea ice model.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.key_properties.model.model_overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"1.2. Model Name\nIs Required: TRUE Type: STRING Cardinality: 1.1\nName of sea ice model code (e.g. CICE 4.2, LIM 2.1, etc.)",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.key_properties.model.model_name') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"2. Key Properties --> Variables\nList of prognostic variable in the sea ice model.\n2.1. Prognostic\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nList of prognostic variables in the sea ice component.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.key_properties.variables.prognostic') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Sea ice temperature\" \n# \"Sea ice concentration\" \n# \"Sea ice thickness\" \n# \"Sea ice volume per grid cell area\" \n# \"Sea ice u-velocity\" \n# \"Sea ice v-velocity\" \n# \"Sea ice enthalpy\" \n# \"Internal ice stress\" \n# \"Salinity\" \n# \"Snow temperature\" \n# \"Snow depth\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"3. Key Properties --> Seawater Properties\nProperties of seawater relevant to sea ice\n3.1. Ocean Freezing Point\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nEquation used to compute the freezing point (in deg C) of seawater, as a function of salinity and pressure",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.key_properties.seawater_properties.ocean_freezing_point') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"TEOS-10\" \n# \"Constant\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"3.2. Ocean Freezing Point Value\nIs Required: FALSE Type: FLOAT Cardinality: 0.1\nIf using a constant seawater freezing point, specify this value.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.key_properties.seawater_properties.ocean_freezing_point_value') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"4. Key Properties --> Resolution\nResolution of the sea ice grid\n4.1. Name\nIs Required: TRUE Type: STRING Cardinality: 1.1\nThis is a string usually used by the modelling group to describe the resolution of this grid e.g. N512L180, T512L70, ORCA025 etc.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.key_properties.resolution.name') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"4.2. Canonical Horizontal Resolution\nIs Required: TRUE Type: STRING Cardinality: 1.1\nExpression quoted for gross comparisons of resolution, eg. 50km or 0.1 degrees etc.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.key_properties.resolution.canonical_horizontal_resolution') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"4.3. Number Of Horizontal Gridpoints\nIs Required: TRUE Type: INTEGER Cardinality: 1.1\nTotal number of horizontal (XY) points (or degrees of freedom) on computational grid.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.key_properties.resolution.number_of_horizontal_gridpoints') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"5. Key Properties --> Tuning Applied\nTuning applied to sea ice model component\n5.1. Description\nIs Required: TRUE Type: STRING Cardinality: 1.1\nGeneral overview description of tuning: explain and motivate the main targets and metrics retained. Document the relative weight given to climate performance metrics versus process oriented metrics, and on the possible conflicts with parameterization level tuning. In particular describe any struggle with a parameter value that required pushing it to its limits to solve a particular model deficiency.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.key_properties.tuning_applied.description') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"5.2. Target\nIs Required: TRUE Type: STRING Cardinality: 1.1\nWhat was the aim of tuning, e.g. correct sea ice minima, correct seasonal cycle.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.key_properties.tuning_applied.target') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"5.3. Simulations\nIs Required: TRUE Type: STRING Cardinality: 1.1\n*Which simulations had tuning applied, e.g. all, not historical, only pi-control? *",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.key_properties.tuning_applied.simulations') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"5.4. Metrics Used\nIs Required: TRUE Type: STRING Cardinality: 1.1\nList any observed metrics used in tuning model/parameters",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.key_properties.tuning_applied.metrics_used') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"5.5. Variables\nIs Required: FALSE Type: STRING Cardinality: 0.1\nWhich variables were changed during the tuning process?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.key_properties.tuning_applied.variables') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"6. Key Properties --> Key Parameter Values\nValues of key parameters\n6.1. Typical Parameters\nIs Required: FALSE Type: ENUM Cardinality: 0.N\nWhat values were specificed for the following parameters if used?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.key_properties.key_parameter_values.typical_parameters') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Ice strength (P*) in units of N m{-2}\" \n# \"Snow conductivity (ks) in units of W m{-1} K{-1} \" \n# \"Minimum thickness of ice created in leads (h0) in units of m\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"6.2. Additional Parameters\nIs Required: FALSE Type: STRING Cardinality: 0.N\nIf you have any additional paramterised values that you have used (e.g. minimum open water fraction or bare ice albedo), please provide them here as a comma separated list",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.key_properties.key_parameter_values.additional_parameters') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"7. Key Properties --> Assumptions\nAssumptions made in the sea ice model\n7.1. Description\nIs Required: TRUE Type: STRING Cardinality: 1.N\nGeneral overview description of any key assumptions made in this model.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.key_properties.assumptions.description') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"7.2. On Diagnostic Variables\nIs Required: TRUE Type: STRING Cardinality: 1.N\nNote any assumptions that specifically affect the CMIP6 diagnostic sea ice variables.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.key_properties.assumptions.on_diagnostic_variables') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"7.3. Missing Processes\nIs Required: TRUE Type: STRING Cardinality: 1.N\nList any key processes missing in this model configuration? Provide full details where this affects the CMIP6 diagnostic sea ice variables?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.key_properties.assumptions.missing_processes') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"8. Key Properties --> Conservation\nConservation in the sea ice component\n8.1. Description\nIs Required: TRUE Type: STRING Cardinality: 1.1\nProvide a general description of conservation methodology.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.key_properties.conservation.description') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"8.2. Properties\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nProperties conserved in sea ice by the numerical schemes.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.key_properties.conservation.properties') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Energy\" \n# \"Mass\" \n# \"Salt\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"8.3. Budget\nIs Required: TRUE Type: STRING Cardinality: 1.1\nFor each conserved property, specify the output variables which close the related budgets. as a comma separated list. For example: Conserved property, variable1, variable2, variable3",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.key_properties.conservation.budget') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"8.4. Was Flux Correction Used\nIs Required: TRUE Type: BOOLEAN Cardinality: 1.1\nDoes conservation involved flux correction?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.key_properties.conservation.was_flux_correction_used') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n",
"8.5. Corrected Conserved Prognostic Variables\nIs Required: TRUE Type: STRING Cardinality: 1.1\nList any variables which are conserved by more than the numerical scheme alone.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.key_properties.conservation.corrected_conserved_prognostic_variables') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"9. Grid --> Discretisation --> Horizontal\nSea ice discretisation in the horizontal\n9.1. Grid\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nGrid on which sea ice is horizontal discretised?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.grid.discretisation.horizontal.grid') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Ocean grid\" \n# \"Atmosphere Grid\" \n# \"Own Grid\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"9.2. Grid Type\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nWhat is the type of sea ice grid?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.grid.discretisation.horizontal.grid_type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Structured grid\" \n# \"Unstructured grid\" \n# \"Adaptive grid\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"9.3. Scheme\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nWhat is the advection scheme?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.grid.discretisation.horizontal.scheme') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Finite differences\" \n# \"Finite elements\" \n# \"Finite volumes\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"9.4. Thermodynamics Time Step\nIs Required: TRUE Type: INTEGER Cardinality: 1.1\nWhat is the time step in the sea ice model thermodynamic component in seconds.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.grid.discretisation.horizontal.thermodynamics_time_step') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"9.5. Dynamics Time Step\nIs Required: TRUE Type: INTEGER Cardinality: 1.1\nWhat is the time step in the sea ice model dynamic component in seconds.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.grid.discretisation.horizontal.dynamics_time_step') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"9.6. Additional Details\nIs Required: FALSE Type: STRING Cardinality: 0.1\nSpecify any additional horizontal discretisation details.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.grid.discretisation.horizontal.additional_details') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"10. Grid --> Discretisation --> Vertical\nSea ice vertical properties\n10.1. Layering\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nWhat type of sea ice vertical layers are implemented for purposes of thermodynamic calculations?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.grid.discretisation.vertical.layering') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Zero-layer\" \n# \"Two-layers\" \n# \"Multi-layers\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"10.2. Number Of Layers\nIs Required: TRUE Type: INTEGER Cardinality: 1.1\nIf using multi-layers specify how many.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.grid.discretisation.vertical.number_of_layers') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"10.3. Additional Details\nIs Required: FALSE Type: STRING Cardinality: 0.1\nSpecify any additional vertical grid details.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.grid.discretisation.vertical.additional_details') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"11. Grid --> Seaice Categories\nWhat method is used to represent sea ice categories ?\n11.1. Has Mulitple Categories\nIs Required: TRUE Type: BOOLEAN Cardinality: 1.1\nSet to true if the sea ice model has multiple sea ice categories.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.grid.seaice_categories.has_mulitple_categories') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n",
"11.2. Number Of Categories\nIs Required: TRUE Type: INTEGER Cardinality: 1.1\nIf using sea ice categories specify how many.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.grid.seaice_categories.number_of_categories') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"11.3. Category Limits\nIs Required: TRUE Type: STRING Cardinality: 1.1\nIf using sea ice categories specify each of the category limits.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.grid.seaice_categories.category_limits') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"11.4. Ice Thickness Distribution Scheme\nIs Required: TRUE Type: STRING Cardinality: 1.1\nDescribe the sea ice thickness distribution scheme",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.grid.seaice_categories.ice_thickness_distribution_scheme') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"11.5. Other\nIs Required: FALSE Type: STRING Cardinality: 0.1\nIf the sea ice model does not use sea ice categories specify any additional details. For example models that paramterise the ice thickness distribution ITD (i.e there is no explicit ITD) but there is assumed distribution and fluxes are computed accordingly.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.grid.seaice_categories.other') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"12. Grid --> Snow On Seaice\nSnow on sea ice details\n12.1. Has Snow On Ice\nIs Required: TRUE Type: BOOLEAN Cardinality: 1.1\nIs snow on ice represented in this model?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.grid.snow_on_seaice.has_snow_on_ice') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n",
"12.2. Number Of Snow Levels\nIs Required: TRUE Type: INTEGER Cardinality: 1.1\nNumber of vertical levels of snow on ice?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.grid.snow_on_seaice.number_of_snow_levels') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"12.3. Snow Fraction\nIs Required: TRUE Type: STRING Cardinality: 1.1\nDescribe how the snow fraction on sea ice is determined",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.grid.snow_on_seaice.snow_fraction') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"12.4. Additional Details\nIs Required: FALSE Type: STRING Cardinality: 0.1\nSpecify any additional details related to snow on ice.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.grid.snow_on_seaice.additional_details') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"13. Dynamics\nSea Ice Dynamics\n13.1. Horizontal Transport\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nWhat is the method of horizontal advection of sea ice?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.dynamics.horizontal_transport') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Incremental Re-mapping\" \n# \"Prather\" \n# \"Eulerian\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"13.2. Transport In Thickness Space\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nWhat is the method of sea ice transport in thickness space (i.e. in thickness categories)?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.dynamics.transport_in_thickness_space') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Incremental Re-mapping\" \n# \"Prather\" \n# \"Eulerian\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"13.3. Ice Strength Formulation\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nWhich method of sea ice strength formulation is used?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.dynamics.ice_strength_formulation') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Hibler 1979\" \n# \"Rothrock 1975\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"13.4. Redistribution\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nWhich processes can redistribute sea ice (including thickness)?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.dynamics.redistribution') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Rafting\" \n# \"Ridging\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"13.5. Rheology\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nRheology, what is the ice deformation formulation?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.dynamics.rheology') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Free-drift\" \n# \"Mohr-Coloumb\" \n# \"Visco-plastic\" \n# \"Elastic-visco-plastic\" \n# \"Elastic-anisotropic-plastic\" \n# \"Granular\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"14. Thermodynamics --> Energy\nProcesses related to energy in sea ice thermodynamics\n14.1. Enthalpy Formulation\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nWhat is the energy formulation?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.thermodynamics.energy.enthalpy_formulation') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Pure ice latent heat (Semtner 0-layer)\" \n# \"Pure ice latent and sensible heat\" \n# \"Pure ice latent and sensible heat + brine heat reservoir (Semtner 3-layer)\" \n# \"Pure ice latent and sensible heat + explicit brine inclusions (Bitz and Lipscomb)\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"14.2. Thermal Conductivity\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nWhat type of thermal conductivity is used?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.thermodynamics.energy.thermal_conductivity') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Pure ice\" \n# \"Saline ice\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"14.3. Heat Diffusion\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nWhat is the method of heat diffusion?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.thermodynamics.energy.heat_diffusion') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Conduction fluxes\" \n# \"Conduction and radiation heat fluxes\" \n# \"Conduction, radiation and latent heat transport\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"14.4. Basal Heat Flux\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nMethod by which basal ocean heat flux is handled?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.thermodynamics.energy.basal_heat_flux') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Heat Reservoir\" \n# \"Thermal Fixed Salinity\" \n# \"Thermal Varying Salinity\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"14.5. Fixed Salinity Value\nIs Required: FALSE Type: FLOAT Cardinality: 0.1\nIf you have selected {Thermal properties depend on S-T (with fixed salinity)}, supply fixed salinity value for each sea ice layer.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.thermodynamics.energy.fixed_salinity_value') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"14.6. Heat Content Of Precipitation\nIs Required: TRUE Type: STRING Cardinality: 1.1\nDescribe the method by which the heat content of precipitation is handled.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.thermodynamics.energy.heat_content_of_precipitation') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"14.7. Precipitation Effects On Salinity\nIs Required: FALSE Type: STRING Cardinality: 0.1\nIf precipitation (freshwater) that falls on sea ice affects the ocean surface salinity please provide further details.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.thermodynamics.energy.precipitation_effects_on_salinity') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"15. Thermodynamics --> Mass\nProcesses related to mass in sea ice thermodynamics\n15.1. New Ice Formation\nIs Required: TRUE Type: STRING Cardinality: 1.1\nDescribe the method by which new sea ice is formed in open water.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.thermodynamics.mass.new_ice_formation') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"15.2. Ice Vertical Growth And Melt\nIs Required: TRUE Type: STRING Cardinality: 1.1\nDescribe the method that governs the vertical growth and melt of sea ice.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.thermodynamics.mass.ice_vertical_growth_and_melt') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"15.3. Ice Lateral Melting\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nWhat is the method of sea ice lateral melting?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.thermodynamics.mass.ice_lateral_melting') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Floe-size dependent (Bitz et al 2001)\" \n# \"Virtual thin ice melting (for single-category)\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"15.4. Ice Surface Sublimation\nIs Required: TRUE Type: STRING Cardinality: 1.1\nDescribe the method that governs sea ice surface sublimation.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.thermodynamics.mass.ice_surface_sublimation') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"15.5. Frazil Ice\nIs Required: TRUE Type: STRING Cardinality: 1.1\nDescribe the method of frazil ice formation.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.thermodynamics.mass.frazil_ice') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"16. Thermodynamics --> Salt\nProcesses related to salt in sea ice thermodynamics.\n16.1. Has Multiple Sea Ice Salinities\nIs Required: TRUE Type: BOOLEAN Cardinality: 1.1\nDoes the sea ice model use two different salinities: one for thermodynamic calculations; and one for the salt budget?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.thermodynamics.salt.has_multiple_sea_ice_salinities') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n",
"16.2. Sea Ice Salinity Thermal Impacts\nIs Required: TRUE Type: BOOLEAN Cardinality: 1.1\nDoes sea ice salinity impact the thermal properties of sea ice?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.thermodynamics.salt.sea_ice_salinity_thermal_impacts') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n",
"17. Thermodynamics --> Salt --> Mass Transport\nMass transport of salt\n17.1. Salinity Type\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nHow is salinity determined in the mass transport of salt calculation?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.thermodynamics.salt.mass_transport.salinity_type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Constant\" \n# \"Prescribed salinity profile\" \n# \"Prognostic salinity profile\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"17.2. Constant Salinity Value\nIs Required: FALSE Type: FLOAT Cardinality: 0.1\nIf using a constant salinity value specify this value in PSU?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.thermodynamics.salt.mass_transport.constant_salinity_value') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"17.3. Additional Details\nIs Required: FALSE Type: STRING Cardinality: 0.1\nDescribe the salinity profile used.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.thermodynamics.salt.mass_transport.additional_details') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"18. Thermodynamics --> Salt --> Thermodynamics\nSalt thermodynamics\n18.1. Salinity Type\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nHow is salinity determined in the thermodynamic calculation?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.thermodynamics.salt.thermodynamics.salinity_type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Constant\" \n# \"Prescribed salinity profile\" \n# \"Prognostic salinity profile\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"18.2. Constant Salinity Value\nIs Required: FALSE Type: FLOAT Cardinality: 0.1\nIf using a constant salinity value specify this value in PSU?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.thermodynamics.salt.thermodynamics.constant_salinity_value') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"18.3. Additional Details\nIs Required: FALSE Type: STRING Cardinality: 0.1\nDescribe the salinity profile used.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.thermodynamics.salt.thermodynamics.additional_details') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"19. Thermodynamics --> Ice Thickness Distribution\nIce thickness distribution details.\n19.1. Representation\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nHow is the sea ice thickness distribution represented?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.thermodynamics.ice_thickness_distribution.representation') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Explicit\" \n# \"Virtual (enhancement of thermal conductivity, thin ice melting)\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"20. Thermodynamics --> Ice Floe Size Distribution\nIce floe-size distribution details.\n20.1. Representation\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nHow is the sea ice floe-size represented?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.thermodynamics.ice_floe_size_distribution.representation') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Explicit\" \n# \"Parameterised\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"20.2. Additional Details\nIs Required: FALSE Type: STRING Cardinality: 0.1\nPlease provide further details on any parameterisation of floe-size.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.thermodynamics.ice_floe_size_distribution.additional_details') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"21. Thermodynamics --> Melt Ponds\nCharacteristics of melt ponds.\n21.1. Are Included\nIs Required: TRUE Type: BOOLEAN Cardinality: 1.1\nAre melt ponds included in the sea ice model?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.thermodynamics.melt_ponds.are_included') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n",
"21.2. Formulation\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nWhat method of melt pond formulation is used?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.thermodynamics.melt_ponds.formulation') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Flocco and Feltham (2010)\" \n# \"Level-ice melt ponds\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"21.3. Impacts\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nWhat do melt ponds have an impact on?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.thermodynamics.melt_ponds.impacts') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Albedo\" \n# \"Freshwater\" \n# \"Heat\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"22. Thermodynamics --> Snow Processes\nThermodynamic processes in snow on sea ice\n22.1. Has Snow Aging\nIs Required: TRUE Type: BOOLEAN Cardinality: 1.N\nSet to True if the sea ice model has a snow aging scheme.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.thermodynamics.snow_processes.has_snow_aging') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n",
"22.2. Snow Aging Scheme\nIs Required: FALSE Type: STRING Cardinality: 0.1\nDescribe the snow aging scheme.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.thermodynamics.snow_processes.snow_aging_scheme') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"22.3. Has Snow Ice Formation\nIs Required: TRUE Type: BOOLEAN Cardinality: 1.N\nSet to True if the sea ice model has snow ice formation.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.thermodynamics.snow_processes.has_snow_ice_formation') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n",
"22.4. Snow Ice Formation Scheme\nIs Required: FALSE Type: STRING Cardinality: 0.1\nDescribe the snow ice formation scheme.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.thermodynamics.snow_processes.snow_ice_formation_scheme') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"22.5. Redistribution\nIs Required: TRUE Type: STRING Cardinality: 1.1\nWhat is the impact of ridging on snow cover?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.thermodynamics.snow_processes.redistribution') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"22.6. Heat Diffusion\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nWhat is the heat diffusion through snow methodology in sea ice thermodynamics?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.thermodynamics.snow_processes.heat_diffusion') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Single-layered heat diffusion\" \n# \"Multi-layered heat diffusion\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"23. Radiative Processes\nSea Ice Radiative Processes\n23.1. Surface Albedo\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nMethod used to handle surface albedo.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.radiative_processes.surface_albedo') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Delta-Eddington\" \n# \"Parameterized\" \n# \"Multi-band albedo\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"23.2. Ice Radiation Transmission\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nMethod by which solar radiation through sea ice is handled.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.radiative_processes.ice_radiation_transmission') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Delta-Eddington\" \n# \"Exponential attenuation\" \n# \"Ice radiation transmission per category\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"©2017 ES-DOC"
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
iaja/scalaLDAvis | examples/python/Dirichlet distribution.ipynb | apache-2.0 | [
"Dirichlet distribution\n\nhttps://en.wikipedia.org/wiki/Dirichlet_distribution\n\n$$\n\\text{Dir}\\left(\\boldsymbol{\\alpha}\\right)\\rightarrow \\mathrm{p}\\left(\\boldsymbol{\\theta}\\mid\\boldsymbol{\\alpha}\\right)=\\frac{\\Gamma\\left(\\sum_{i=1}^{k}\\boldsymbol{\\alpha}{i}\\right)}{\\prod{i=1}^{k}\\Gamma\\left(\\boldsymbol{\\alpha}{i}\\right)}\\prod{i=1}^{k}\\boldsymbol{\\theta}{i}^{\\boldsymbol{\\alpha}{i}-1} \\\nK\\geq2\\ \\text{number of categories} \\\n{\\alpha {1},\\ldots ,\\alpha {K}}\\ concentration\\ parameters,\\ where\\ {\\alpha_{i}>0}\n$$\nVisulaizing Dirchlet Distributions\n\nhttp://blog.bogatron.net/blog/2014/02/02/visualizing-dirichlet-distributions/",
"\n%matplotlib inline\nimport numpy as np\nimport matplotlib.pyplot as plt\nimport matplotlib.tri as tri\nfrom functools import reduce\n# import seaborn\nfrom math import gamma\nfrom operator import mul\n\ncorners = np.array([[0, 0], [1, 0], [0.5,0.75**0.5]])\nprint(corners)\ntriangle = tri.Triangulation(corners[:, 0], corners[:, 1])\n\nrefiner = tri.UniformTriRefiner(triangle)\ntrimesh = refiner.refine_triangulation(subdiv=4)\n\nplt.figure(figsize=(10, 5))\nfor (i, shape) in enumerate((triangle, trimesh)):\n plt.subplot(1, 2, i+ 1)\n plt.triplot(shape)\n plt.axis('off')\n plt.axis('equal')\n\n# Mid-points of triangle sides opposite of each corner\nmidpoints = []\nfor i in range(3):\n point1 = corners[(i + 1) % 3]\n point2 = corners[(i + 2) % 3]\n mid = (point1 + point2) / 2.0\n print(point1, '+', point2, '=', mid)\n midpoints.append(mid)\n \nprint('\\n')\nprint(midpoints) ",
"Setting up the Code\nBefore we can plot our Dirichlet distributions, we need to do three things:\n\nGenerate a set of x-y coordinates over our equilateral triangle\nMap the x-y coordinates to the 2-simplex coordinate space\nCompute Dir(α)Dir(α) for each point",
"def xy2bc(xy, tol=1.e-3):\n '''Converts 2D Cartesian coordinates to barycentric.'''\n s = [(corners[i] - midpoints[i]).dot(xy - midpoints[i]) / 0.75 for i in range(3)]\n return np.clip(s, tol, 1.0 - tol)",
"Gamma: $\\Gamma \\left( z \\right) = \\int\\limits_0^\\infty {x^{z - 1} e^{ - x} dx}$\n$\n\\text{Dir}\\left(\\boldsymbol{\\alpha}\\right)\\rightarrow \\mathrm{p}\\left(\\boldsymbol{\\theta}\\mid\\boldsymbol{\\alpha}\\right)=\\frac{\\Gamma\\left(\\sum_{i=1}^{k}\\boldsymbol{\\alpha}{i}\\right)}{\\prod{i=1}^{k}\\Gamma\\left(\\boldsymbol{\\alpha}{i}\\right)}\\prod{i=1}^{k}\\boldsymbol{\\theta}{i}^{\\boldsymbol{\\alpha}{i}-1} \\\nK\\geq2\\ \\text{number of categories} \\\n{\\alpha {1},\\ldots ,\\alpha {K}}\\ concentration\\ parameters,\\ where\\ {\\alpha_{i}>0}\n$",
"class Dirichlet(object):\n def __init__(self, alpha):\n self._alpha = np.array(alpha)\n self._coef = gamma(np.sum(self._alpha)) / reduce(mul, [gamma(a) for a in self._alpha])\n def pdf(self, x):\n '''Returns pdf value for `x`.'''\n return self._coef * reduce(mul, [xx ** (aa - 1) for (xx, aa)in zip(x, self._alpha)])\n\n\ndef draw_pdf_contours(dist, nlevels=200, subdiv=8, **kwargs):\n import math\n\n refiner = tri.UniformTriRefiner(triangle)\n trimesh = refiner.refine_triangulation(subdiv=subdiv)\n pvals = [dist.pdf(xy2bc(xy)) for xy in zip(trimesh.x, trimesh.y)]\n\n plt.tricontourf(trimesh, pvals, nlevels, **kwargs)\n plt.axis('equal')\n plt.xlim(0, 1)\n plt.ylim(0, 0.75**0.5)\n plt.axis('off')\n\ndraw_pdf_contours(Dirichlet([1, 1, 1]))\n\ndraw_pdf_contours(Dirichlet([0.999, 0.999, 0.999]))\n\ndraw_pdf_contours(Dirichlet([5, 5, 5]))\n\ndraw_pdf_contours(Dirichlet([1, 2, 3]))\n\ndraw_pdf_contours(Dirichlet([3, 2, 1]))\n\ndraw_pdf_contours(Dirichlet([2, 3, 1]))"
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
santipuch590/deeplearning-tf | dl_tf_BDU/1.Intro_TF/ML0120EN-1.2-Exercise-LinearRegression.ipynb | mit | [
"<a href=\"https://www.bigdatauniversity.com\"><img src = \"https://ibm.box.com/shared/static/jvcqp2iy2jlx2b32rmzdt0tx8lvxgzkp.png\" width = 300, align = \"center\"></a>\n<h1 align=center> <font size = 5> Exercise-Linear Regression with TensorFlow </font></h1>\n\nThis exercise is about modelling a linear relationship between \"chirps of a cricket\" and ground temperature. \nIn 1948, G. W. Pierce in his book \"Songs of Insects\" mentioned that we can predict temperature by listening to the frequency of songs(chirps) made by stripped Crickets. He recorded change in behaviour of crickets by recording number of chirps made by them at several \"different temperatures\" and found that there is a pattern in the way crickets respond to the rate of change in ground temperature 60 to 100 degrees of farenhite. He also found out that Crickets did not sing \nabove or below this temperature.\nThis data is derieved from the above mentioned book and aim is to fit a linear model and predict the \"Best Fit Line\" for the given \"Chirps(per 15 Second)\" in Column 'A' and the corresponding \"Temperatures(Farenhite)\" in Column 'B' using TensorFlow. So that one could easily tell what temperature it is just by listening to the songs of cricket. \nLet's import tensorFlow and python dependencies",
"import tensorflow as tf\nimport numpy as np\nimport pandas as pd\nimport matplotlib.pyplot as plt\n\npd.__version__",
"Download and Explore the Data",
"\n#downloading dataset\n!wget -nv -O ../data/PierceCricketData.csv https://ibm.box.com/shared/static/fjbsu8qbwm1n5zsw90q6xzfo4ptlsw96.csv\n\n\ndf = pd.read_csv(\"../data/PierceCricketData.csv\")\ndf.head()",
"<h6> Plot the Data Points </h6>",
"\n%matplotlib inline\n\nx_data, y_data = (df[\"Chirps\"].values,df[\"Temp\"].values)\n\n# plots the data points\nplt.plot(x_data, y_data, 'ro')\n# label the axis\nplt.xlabel(\"# Chirps per 15 sec\")\nplt.ylabel(\"Temp in Farenhiet\")\n",
"Looking at the scatter plot we can analyse that there is a linear relationship between the data points that connect chirps to the temperature and optimal way to infer this knowledge is by fitting a line that best describes the data. Which follows the linear equation: \n#### Ypred = m X + c \nWe have to estimate the values of the slope 'm' and the inrtercept 'c' to fit a line where, X is the \"Chirps\" and Ypred is \"Predicted Temperature\" in this case. \nCreate a Data Flow Graph using TensorFlow\nModel the above equation by assigning arbitrary values of your choice for slope \"m\" and intercept \"c\" which can predict the temp \"Ypred\" given Chirps \"X\" as input. \nexample m=3 and c=2\nAlso, create a place holder for actual temperature \"Y\" which we will be needing for Optimization to estimate the actual values of slope and intercept.",
"# Create place holders and Variables along with the Linear model.\nm = tf.Variable(3, dtype=tf.float32)\nc = tf.Variable(2, dtype=tf.float32)\nx = tf.placeholder(dtype=tf.float32, shape=x_data.size)\ny = tf.placeholder(dtype=tf.float32, shape=y_data.size)\n# Linear model\ny_pred = m * x + c",
"<div align=\"right\">\n<a href=\"#createvar\" class=\"btn btn-default\" data-toggle=\"collapse\">Click here for the solution</a>\n</div>\n<div id=\"createvar\" class=\"collapse\">\n```\n\nX = tf.placeholder(tf.float32, shape=(x_data.size))\nY = tf.placeholder(tf.float32,shape=(y_data.size))\n\n# tf.Variable call creates a single updatable copy in the memory and efficiently updates \n# the copy to relfect any changes in the variable values through out the scope of the tensorflow session\nm = tf.Variable(3.0)\nc = tf.Variable(2.0)\n\n# Construct a Model\nYpred = tf.add(tf.multiply(X, m), c)\n```\n</div>\n\nCreate and Run a Session to Visualize the Predicted Line from above Graph\n<h6> Feel free to change the values of \"m\" and \"c\" in future to check how the initial position of line changes </h6>",
"#create session and initialize variables\nsession = tf.Session()\nsession.run(tf.global_variables_initializer())\n\n\n#get prediction with initial parameter values\ny_vals = session.run(y_pred, feed_dict={x: x_data})\n#Your code goes here\nplt.plot(x_data, y_vals, label='Predicted')\nplt.scatter(x_data, y_data, color='red', label='GT')",
"<div align=\"right\">\n<a href=\"#matmul1\" class=\"btn btn-default\" data-toggle=\"collapse\">Click here for the solution</a>\n</div>\n<div id=\"matmul1\" class=\"collapse\">\n```\n\npred = session.run(Ypred, feed_dict={X:x_data})\n\n#plot initial prediction against datapoints\nplt.plot(x_data, pred)\nplt.plot(x_data, y_data, 'ro')\n# label the axis\nplt.xlabel(\"# Chirps per 15 sec\")\nplt.ylabel(\"Temp in Farenhiet\")\n\n\n```\n</div>\n\nDefine a Graph for Loss Function\nThe essence of estimating the values for \"m\" and \"c\" lies in minimizing the difference between predicted \"Ypred\" and actual \"Y\" temperature values which is defined in the form of Mean Squared error loss function. \n$$ loss = \\frac{1}{n}\\sum_{i=1}^n{[Ypred_i - {Y}_i]^2} $$\nNote: There are also other ways to model the loss function based on distance metric between predicted and actual temperature values. For this exercise Mean Suared error criteria is considered.",
"loss = tf.reduce_mean(tf.squared_difference(y_pred*0.1, y*0.1))",
"<div align=\"right\">\n<a href=\"#matmul12\" class=\"btn btn-default\" data-toggle=\"collapse\">Click here for the solution</a>\n</div>\n<div id=\"matmul12\" class=\"collapse\">\n```\n# normalization factor\nnf = 1e-1\n# seting up the loss function\nloss = tf.reduce_mean(tf.squared_difference(Ypred*nf,Y*nf))\n```\n</div>\n\nDefine an Optimization Graph to Minimize the Loss and Training the Model",
"# Your code goes here\noptimizer = tf.train.GradientDescentOptimizer(0.01)\ntrain_op = optimizer.minimize(loss)",
"<div align=\"right\">\n<a href=\"#matmul13\" class=\"btn btn-default\" data-toggle=\"collapse\">Click here for the solution</a>\n</div>\n<div id=\"matmul13\" class=\"collapse\">\n```\noptimizer = tf.train.GradientDescentOptimizer(learning_rate=0.01)\n#optimizer = tf.train.AdagradOptimizer(0.01 )\n\n# pass the loss function that optimizer should optimize on.\ntrain = optimizer.minimize(loss)\n\n```\n</div>\n\nInitialize all the vairiables again",
"session.run(tf.global_variables_initializer())",
"Run session to train and predict the values of 'm' and 'c' for different training steps along with storing the losses in each step\nGet the predicted m and c values by running a session on Training a linear model. Also collect the loss for different steps to print and plot.",
"convergenceTolerance = 0.0001\nprevious_m = np.inf\nprevious_c = np.inf\n\nsteps = {}\nsteps['m'] = []\nsteps['c'] = []\n\nlosses=[]\n\nfor k in range(10000):\n ########## Your Code goes Here ###########\n _, _l, _m, _c = session.run([train_op, loss, m, c], feed_dict={x: x_data, y: y_data})\n\n steps['m'].append(_m)\n steps['c'].append(_c)\n losses.append(_l)\n if (np.abs(previous_m - _m) or np.abs(previous_c - _c) ) <= convergenceTolerance :\n \n print(\"Finished by Convergence Criterion\")\n print(k)\n print(_l)\n break\n previous_m = _m \n previous_c = _c",
"<div align=\"right\">\n<a href=\"#matmul18\" class=\"btn btn-default\" data-toggle=\"collapse\">Click here for the solution</a>\n</div>\n<div id=\"matmul18\" class=\"collapse\">\n```\n# run a session to train , get m and c values with loss function \n_, _m , _c,_l = session.run([train, m, c,loss],feed_dict={X:x_data,Y:y_data}) \n\n```\n</div>\n\nPrint the loss function",
"# Your Code Goes Here\nplt.plot(losses)",
"<div align=\"right\">\n<a href=\"#matmul199\" class=\"btn btn-default\" data-toggle=\"collapse\">Click here for the solution</a>\n</div>\n<div id=\"matmul199\" class=\"collapse\">\n```\nplt.plot(losses[:])\n\n```\n</div>",
"y_vals_pred = y_pred.eval(session=session, feed_dict={x: x_data})\nplt.scatter(x_data, y_vals_pred, marker='x', color='blue', label='Predicted')\nplt.scatter(x_data, y_data, label='GT', color='red')\nplt.legend()\nplt.ylabel('Temperature (Fahrenheit)')\nplt.xlabel('# Chirps per 15 s')\n\nsession.close() ",
"This Exercise is about giving Overview about how to use TensorFlow for Predicting Ground Temperature given the number of Cricket Chirps per 15 secs. Idea is to use TnesorFlow's dataflow graph to define Optimization and Training graphs to find out the actual values of 'm' and 'c' that best describes the given Data. \nPlease Feel free to change the initial values of 'm' and 'c' to check how the training steps Vary.\nThank You for Completing this exercise\nCreated by <a href = \"https://ca.linkedin.com/in/shashibushan-yenkanchi\"> Shashibushan Yenkanchi </a> </h4>\nREFERENCES\nhttp://mathbits.com/MathBits/TISection/Statistics2/linearREAL.htm"
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
tensorflow/recommenders | docs/examples/efficient_serving.ipynb | apache-2.0 | [
"Copyright 2020 The TensorFlow Authors.",
"#@title Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# https://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.",
"Efficient serving\n<table class=\"tfo-notebook-buttons\" align=\"left\">\n <td>\n <a target=\"_blank\" href=\"https://www.tensorflow.org/recommenders/examples/efficient_serving\"><img src=\"https://www.tensorflow.org/images/tf_logo_32px.png\" />View on TensorFlow.org</a>\n </td>\n <td>\n <a target=\"_blank\" href=\"https://colab.research.google.com/github/tensorflow/recommenders/blob/main/docs/examples/efficient_serving.ipynb\"><img src=\"https://www.tensorflow.org/images/colab_logo_32px.png\" />Run in Google Colab</a>\n </td>\n <td>\n <a target=\"_blank\" href=\"https://github.com/tensorflow/recommenders/blob/main/docs/examples/efficient_serving.ipynb\"><img src=\"https://www.tensorflow.org/images/GitHub-Mark-32px.png\" />View source on GitHub</a>\n </td>\n <td>\n <a href=\"https://storage.googleapis.com/tensorflow_docs/recommenders/docs/examples/efficient_serving.ipynb\"><img src=\"https://www.tensorflow.org/images/download_logo_32px.png\" />Download notebook</a>\n </td>\n</table>\n\nRetrieval models are often built to surface a handful of top candidates out of millions or even hundreds of millions of candidates. To be able to react to the user's context and behaviour, they need to be able to do this on the fly, in a matter of milliseconds.\nApproximate nearest neighbour search (ANN) is the technology that makes this possible. In this tutorial, we'll show how to use ScaNN - a state of the art nearest neighbour retrieval package - to seamlessly scale TFRS retrieval to millions of items.\nWhat is ScaNN?\nScaNN is a library from Google Research that performs dense vector similarity search at large scale. Given a database of candidate embeddings, ScaNN indexes these embeddings in a manner that allows them to be rapidly searched at inference time. ScaNN uses state of the art vector compression techniques and carefully implemented algorithms to achieve the best speed-accuracy tradeoff. It can greatly outperform brute force search while sacrificing little in terms of accuracy.\nBuilding a ScaNN-powered model\nTo try out ScaNN in TFRS, we'll build a simple MovieLens retrieval model, just as we did in the basic retrieval tutorial. If you have followed that tutorial, this section will be familiar and can safely be skipped.\nTo start, install TFRS and TensorFlow Datasets:",
"!pip install -q tensorflow-recommenders\n!pip install -q --upgrade tensorflow-datasets",
"We also need to install scann: it's an optional dependency of TFRS, and so needs to be installed separately.",
"!pip install -q scann",
"Set up all the necessary imports.",
"from typing import Dict, Text\n\nimport os\nimport pprint\nimport tempfile\n\nimport numpy as np\nimport tensorflow as tf\nimport tensorflow_datasets as tfds\n\nimport tensorflow_recommenders as tfrs",
"And load the data:",
"# Load the MovieLens 100K data.\nratings = tfds.load(\n \"movielens/100k-ratings\",\n split=\"train\"\n)\n\n# Get the ratings data.\nratings = (ratings\n # Retain only the fields we need.\n .map(lambda x: {\"user_id\": x[\"user_id\"], \"movie_title\": x[\"movie_title\"]})\n # Cache for efficiency.\n .cache(tempfile.NamedTemporaryFile().name)\n)\n\n# Get the movies data.\nmovies = tfds.load(\"movielens/100k-movies\", split=\"train\")\nmovies = (movies\n # Retain only the fields we need.\n .map(lambda x: x[\"movie_title\"])\n # Cache for efficiency.\n .cache(tempfile.NamedTemporaryFile().name))",
"Before we can build a model, we need to set up the user and movie vocabularies:",
"user_ids = ratings.map(lambda x: x[\"user_id\"])\n\nunique_movie_titles = np.unique(np.concatenate(list(movies.batch(1000))))\nunique_user_ids = np.unique(np.concatenate(list(user_ids.batch(1000))))",
"We'll also set up the training and test sets:",
"tf.random.set_seed(42)\nshuffled = ratings.shuffle(100_000, seed=42, reshuffle_each_iteration=False)\n\ntrain = shuffled.take(80_000)\ntest = shuffled.skip(80_000).take(20_000)",
"Model definition\nJust as in the basic retrieval tutorial, we build a simple two-tower model.",
"class MovielensModel(tfrs.Model):\n\n def __init__(self):\n super().__init__()\n\n embedding_dimension = 32\n\n # Set up a model for representing movies.\n self.movie_model = tf.keras.Sequential([\n tf.keras.layers.StringLookup(\n vocabulary=unique_movie_titles, mask_token=None),\n # We add an additional embedding to account for unknown tokens.\n tf.keras.layers.Embedding(len(unique_movie_titles) + 1, embedding_dimension)\n ])\n\n # Set up a model for representing users.\n self.user_model = tf.keras.Sequential([\n tf.keras.layers.StringLookup(\n vocabulary=unique_user_ids, mask_token=None),\n # We add an additional embedding to account for unknown tokens.\n tf.keras.layers.Embedding(len(unique_user_ids) + 1, embedding_dimension)\n ])\n\n # Set up a task to optimize the model and compute metrics.\n self.task = tfrs.tasks.Retrieval(\n metrics=tfrs.metrics.FactorizedTopK(\n candidates=movies.batch(128).cache().map(self.movie_model)\n )\n )\n\n def compute_loss(self, features: Dict[Text, tf.Tensor], training=False) -> tf.Tensor:\n # We pick out the user features and pass them into the user model.\n user_embeddings = self.user_model(features[\"user_id\"])\n # And pick out the movie features and pass them into the movie model,\n # getting embeddings back.\n positive_movie_embeddings = self.movie_model(features[\"movie_title\"])\n\n # The task computes the loss and the metrics.\n\n return self.task(user_embeddings, positive_movie_embeddings, compute_metrics=not training)",
"Fitting and evaluation\nA TFRS model is just a Keras model. We can compile it:",
"model = MovielensModel()\nmodel.compile(optimizer=tf.keras.optimizers.Adagrad(learning_rate=0.1))",
"Estimate it:",
"model.fit(train.batch(8192), epochs=3)",
"And evaluate it.",
"model.evaluate(test.batch(8192), return_dict=True)",
"Approximate prediction\nThe most straightforward way of retrieving top candidates in response to a query is to do it via brute force: compute user-movie scores for all possible movies, sort them, and pick a couple of top recommendations.\nIn TFRS, this is accomplished via the BruteForce layer:",
"brute_force = tfrs.layers.factorized_top_k.BruteForce(model.user_model)\nbrute_force.index_from_dataset(\n movies.batch(128).map(lambda title: (title, model.movie_model(title)))\n)",
"Once created and populated with candidates (via the index method), we can call it to get predictions out:",
"# Get predictions for user 42.\n_, titles = brute_force(np.array([\"42\"]), k=3)\n\nprint(f\"Top recommendations: {titles[0]}\")",
"On a small dataset of under 1000 movies, this is very fast:",
"%timeit _, titles = brute_force(np.array([\"42\"]), k=3)",
"But what happens if we have more candidates - millions instead of thousands?\nWe can simulate this by indexing all of our movies multiple times:",
"# Construct a dataset of movies that's 1,000 times larger. We \n# do this by adding several million dummy movie titles to the dataset.\nlots_of_movies = tf.data.Dataset.concatenate(\n movies.batch(4096),\n movies.batch(4096).repeat(1_000).map(lambda x: tf.zeros_like(x))\n)\n\n# We also add lots of dummy embeddings by randomly perturbing\n# the estimated embeddings for real movies.\nlots_of_movies_embeddings = tf.data.Dataset.concatenate(\n movies.batch(4096).map(model.movie_model),\n movies.batch(4096).repeat(1_000)\n .map(lambda x: model.movie_model(x))\n .map(lambda x: x * tf.random.uniform(tf.shape(x)))\n)",
"We can build a BruteForce index on this larger dataset:",
"brute_force_lots = tfrs.layers.factorized_top_k.BruteForce()\nbrute_force_lots.index_from_dataset(\n tf.data.Dataset.zip((lots_of_movies, lots_of_movies_embeddings))\n)",
"The recommendations are still the same",
"_, titles = brute_force_lots(model.user_model(np.array([\"42\"])), k=3)\n\nprint(f\"Top recommendations: {titles[0]}\")",
"But they take much longer. With a candidate set of 1 million movies, brute force prediction becomes quite slow:",
"%timeit _, titles = brute_force_lots(model.user_model(np.array([\"42\"])), k=3)",
"As the number of candidate grows, the amount of time needed grows linearly: with 10 million candidates, serving top candidates would take 250 milliseconds. This is clearly too slow for a live service.\nThis is where approximate mechanisms come in.\nUsing ScaNN in TFRS is accomplished via the tfrs.layers.factorized_top_k.ScaNN layer. It follow the same interface as the other top k layers:",
"scann = tfrs.layers.factorized_top_k.ScaNN(num_reordering_candidates=100)\nscann.index_from_dataset(\n tf.data.Dataset.zip((lots_of_movies, lots_of_movies_embeddings))\n)",
"The recommendations are (approximately!) the same",
"_, titles = scann(model.user_model(np.array([\"42\"])), k=3)\n\nprint(f\"Top recommendations: {titles[0]}\")",
"But they are much, much faster to compute:",
"%timeit _, titles = scann(model.user_model(np.array([\"42\"])), k=3)",
"In this case, we can retrieve the top 3 movies out of a set of ~1 million in around 2 milliseconds: 15 times faster than by computing the best candidates via brute force. The advantage of approximate methods grows even larger for larger datasets.\nEvaluating the approximation\nWhen using approximate top K retrieval mechanisms (such as ScaNN), speed of retrieval often comes at the expense of accuracy. To understand this trade-off, it's important to measure the model's evaluation metrics when using ScaNN, and to compare them with the baseline.\nFortunately, TFRS makes this easy. We simply override the metrics on the retrieval task with metrics using ScaNN, re-compile the model, and run evaluation.\nTo make the comparison, let's first run baseline results. We still need to override our metrics to make sure they are using the enlarged candidate set rather than the original set of movies:",
"# Override the existing streaming candidate source.\nmodel.task.factorized_metrics = tfrs.metrics.FactorizedTopK(\n candidates=lots_of_movies_embeddings\n)\n# Need to recompile the model for the changes to take effect.\nmodel.compile()\n\n%time baseline_result = model.evaluate(test.batch(8192), return_dict=True, verbose=False)",
"We can do the same using ScaNN:",
"model.task.factorized_metrics = tfrs.metrics.FactorizedTopK(\n candidates=scann\n)\nmodel.compile()\n\n# We can use a much bigger batch size here because ScaNN evaluation\n# is more memory efficient.\n%time scann_result = model.evaluate(test.batch(8192), return_dict=True, verbose=False)",
"ScaNN based evaluation is much, much quicker: it's over ten times faster! This advantage is going to grow even larger for bigger datasets, and so for large datasets it may be prudent to always run ScaNN-based evaluation to improve model development velocity.\nBut how about the results? Fortunately, in this case the results are almost the same:",
"print(f\"Brute force top-100 accuracy: {baseline_result['factorized_top_k/top_100_categorical_accuracy']:.2f}\")\nprint(f\"ScaNN top-100 accuracy: {scann_result['factorized_top_k/top_100_categorical_accuracy']:.2f}\")",
"This suggests that on this artificial datase, there is little loss from the approximation. In general, all approximate methods exhibit speed-accuracy tradeoffs. To understand this in more depth you can check out Erik Bernhardsson's ANN benchmarks.\nDeploying the approximate model\nThe ScaNN-based model is fully integrated into TensorFlow models, and serving it is as easy as serving any other TensorFlow model.\nWe can save it as a SavedModel object",
"lots_of_movies_embeddings\n\n# We re-index the ScaNN layer to include the user embeddings in the same model.\n# This way we can give the saved model raw features and get valid predictions\n# back.\nscann = tfrs.layers.factorized_top_k.ScaNN(model.user_model, num_reordering_candidates=1000)\nscann.index_from_dataset(\n tf.data.Dataset.zip((lots_of_movies, lots_of_movies_embeddings))\n)\n\n# Need to call it to set the shapes.\n_ = scann(np.array([\"42\"]))\n\nwith tempfile.TemporaryDirectory() as tmp:\n path = os.path.join(tmp, \"model\")\n tf.saved_model.save(\n scann,\n path,\n options=tf.saved_model.SaveOptions(namespace_whitelist=[\"Scann\"])\n )\n\n loaded = tf.saved_model.load(path)",
"and then load it and serve, getting exactly the same results back:",
"_, titles = loaded(tf.constant([\"42\"]))\n\nprint(f\"Top recommendations: {titles[0][:3]}\")",
"The resulting model can be served in any Python service that has TensorFlow and ScaNN installed.\nIt can also be served using a customized version of TensorFlow Serving, available as a Docker container on Docker Hub. You can also build the image yourself from the Dockerfile.\nTuning ScaNN\nNow let's look into tuning our ScaNN layer to get a better performance/accuracy tradeoff. In order to do this effectively, we first need to measure our baseline performance and accuracy.\nFrom above, we already have a measurement of our model's latency for processing a single (non-batched) query (although note that a fair amount of this latency is from non-ScaNN components of the model).\nNow we need to investigate ScaNN's accuracy, which we measure through recall. A recall@k of x% means that if we use brute force to retrieve the true top k neighbors, and compare those results to using ScaNN to also retrieve the top k neighbors, x% of ScaNN's results are in the true brute force results. Let's compute the recall for the current ScaNN searcher.\nFirst, we need to generate the brute force, ground truth top-k:",
"# Process queries in groups of 1000; processing them all at once with brute force\n# may lead to out-of-memory errors, because processing a batch of q queries against\n# a size-n dataset takes O(nq) space with brute force.\ntitles_ground_truth = tf.concat([\n brute_force_lots(queries, k=10)[1] for queries in\n test.batch(1000).map(lambda x: model.user_model(x[\"user_id\"]))\n], axis=0)",
"Our variable titles_ground_truth now contains the top-10 movie recommendations returned by brute-force retrieval. Now we can compute the same recommendations when using ScaNN:",
"# Get all user_id's as a 1d tensor of strings\ntest_flat = np.concatenate(list(test.map(lambda x: x[\"user_id\"]).batch(1000).as_numpy_iterator()), axis=0)\n\n# ScaNN is much more memory efficient and has no problem processing the whole\n# batch of 20000 queries at once.\n_, titles = scann(test_flat, k=10)",
"Next, we define our function that computes recall. For each query, it counts how many results are in the intersection of the brute force and the ScaNN results and divides this by the number of brute force results. The average of this quantity over all queries is our recall.",
"def compute_recall(ground_truth, approx_results):\n return np.mean([\n len(np.intersect1d(truth, approx)) / len(truth)\n for truth, approx in zip(ground_truth, approx_results)\n ])",
"This gives us baseline recall@10 with the current ScaNN config:",
"print(f\"Recall: {compute_recall(titles_ground_truth, titles):.3f}\")",
"We can also measure the baseline latency:",
"%timeit -n 1000 scann(np.array([\"42\"]), k=10)",
"Let's see if we can do better!\nTo do this, we need a model of how ScaNN's tuning knobs affect performance. Our current model uses ScaNN's tree-AH algorithm. This algorithm partitions the database of embeddings (the \"tree\") and then scores the most promising of these partitions using AH, which is a highly optimized approximate distance computation routine.\nThe default parameters for TensorFlow Recommenders' ScaNN Keras layer sets num_leaves=100 and num_leaves_to_search=10. This means our database is partitioned into 100 disjoint subsets, and the 10 most promising of these partitions is scored with AH. This means 10/100=10% of the dataset is being searched with AH.\nIf we have, say, num_leaves=1000 and num_leaves_to_search=100, we would also be searching 10% of the database with AH. However, in comparison to the previous setting, the 10% we would search will contain higher-quality candidates, because a higher num_leaves allows us to make finer-grained decisions about what parts of the dataset are worth searching.\nIt's no surprise then that with num_leaves=1000 and num_leaves_to_search=100 we get significantly higher recall:",
"scann2 = tfrs.layers.factorized_top_k.ScaNN(\n model.user_model, \n num_leaves=1000,\n num_leaves_to_search=100,\n num_reordering_candidates=1000)\nscann2.index_from_dataset(\n tf.data.Dataset.zip((lots_of_movies, lots_of_movies_embeddings))\n)\n\n_, titles2 = scann2(test_flat, k=10)\n\nprint(f\"Recall: {compute_recall(titles_ground_truth, titles2):.3f}\")",
"However, as a tradeoff, our latency has also increased. This is because the partitioning step has gotten more expensive; scann picks the top 10 of 100 partitions while scann2 picks the top 100 of 1000 partitions. The latter can be more expensive because it involves looking at 10 times as many partitions.",
"%timeit -n 1000 scann2(np.array([\"42\"]), k=10)",
"In general, tuning ScaNN search is about picking the right tradeoffs. Each individual parameter change generally won't make search both faster and more accurate; our goal is to tune the parameters to optimally trade off between these two conflicting goals.\nIn our case, scann2 significantly improved recall over scann at some cost in latency. Can we dial back some other knobs to cut down on latency, while preserving most of our recall advantage?\nLet's try searching 70/1000=7% of the dataset with AH, and only rescoring the final 400 candidates:",
"scann3 = tfrs.layers.factorized_top_k.ScaNN(\n model.user_model,\n num_leaves=1000,\n num_leaves_to_search=70,\n num_reordering_candidates=400)\nscann3.index_from_dataset(\n tf.data.Dataset.zip((lots_of_movies, lots_of_movies_embeddings))\n)\n\n_, titles3 = scann3(test_flat, k=10)\nprint(f\"Recall: {compute_recall(titles_ground_truth, titles3):.3f}\")",
"scann3 delivers about a 3% absolute recall gain over scann while also delivering lower latency:",
"%timeit -n 1000 scann3(np.array([\"42\"]), k=10)",
"These knobs can be further adjusted to optimize for different points along the accuracy-performance pareto frontier. ScaNN's algorithms can achieve state-of-the-art performance over a wide range of recall targets.\nFurther reading\nScaNN uses advanced vector quantization techniques and highly optimized implementation to achieve its results. The field of vector quantization has a rich history with a variety of approaches. ScaNN's current quantization technique is detailed in this paper, published at ICML 2020. The paper was also released along with this blog article which gives a high level overview of our technique.\nMany related quantization techniques are mentioned in the references of our ICML 2020 paper, and other ScaNN-related research is listed at http://sanjivk.com/."
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
NlGG/Projects | 不動産/model2_1.ipynb | mit | [
"%matplotlib inline\n\nimport numpy as np\nimport pandas as pd\n\n# 統計用ツール\nimport statsmodels.api as sm\nimport statsmodels.tsa.api as tsa\nfrom patsy import dmatrices\n\n# 自作の空間統計用ツール\nfrom spatialstat import *\n\n#描画\nimport matplotlib.pyplot as plt\nfrom pandas.tools.plotting import autocorrelation_plot\nimport seaborn as sns\nsns.set(font=['IPAmincho'])\n\n#深層学習\nimport chainer\nfrom chainer import cuda, Function, gradient_check, Variable, optimizers, serializers, utils\nfrom chainer import Link, Chain, ChainList\nimport chainer.functions as F\nimport chainer.links as L\n\nimport pyper",
"変数名とデータの内容メモ\nCENSUS: 市区町村コード(9桁)\nP: 成約価格\nS: 専有面積\nL: 土地面積\nR: 部屋数\nRW: 前面道路幅員\nCY: 建築年\nA: 建築後年数(成約時)\nTS: 最寄駅までの距離\nTT: 東京駅までの時間\nACC: ターミナル駅までの時間\nWOOD: 木造ダミー\nSOUTH: 南向きダミー\nRSD: 住居系地域ダミー\nCMD: 商業系地域ダミー\nIDD: 工業系地域ダミー\nFAR: 建ぺい率\nFLR: 容積率\nTDQ: 成約時点(四半期)\nX: 緯度\nY: 経度\nCITY_CODE: 市区町村コード(5桁)\nCITY_NAME: 市区町村名\nBLOCK: 地域ブロック名",
"data = pd.read_csv(\"TokyoSingle.csv\")\ndata = data.dropna()\nCITY_NAME = data['CITY_CODE'].copy()\n\nCITY_NAME[CITY_NAME == 13101] = '01千代田区'\nCITY_NAME[CITY_NAME == 13102] = \"02中央区\"\nCITY_NAME[CITY_NAME == 13103] = \"03港区\"\nCITY_NAME[CITY_NAME == 13104] = \"04新宿区\"\nCITY_NAME[CITY_NAME == 13105] = \"05文京区\"\nCITY_NAME[CITY_NAME == 13106] = \"06台東区\"\nCITY_NAME[CITY_NAME == 13107] = \"07墨田区\"\nCITY_NAME[CITY_NAME == 13108] = \"08江東区\"\nCITY_NAME[CITY_NAME == 13109] = \"09品川区\"\nCITY_NAME[CITY_NAME == 13110] = \"10目黒区\"\nCITY_NAME[CITY_NAME == 13111] = \"11大田区\"\nCITY_NAME[CITY_NAME == 13112] = \"12世田谷区\"\nCITY_NAME[CITY_NAME == 13113] = \"13渋谷区\"\nCITY_NAME[CITY_NAME == 13114] = \"14中野区\"\nCITY_NAME[CITY_NAME == 13115] = \"15杉並区\"\nCITY_NAME[CITY_NAME == 13116] = \"16豊島区\"\nCITY_NAME[CITY_NAME == 13117] = \"17北区\"\nCITY_NAME[CITY_NAME == 13118] = \"18荒川区\"\nCITY_NAME[CITY_NAME == 13119] = \"19板橋区\"\nCITY_NAME[CITY_NAME == 13120] = \"20練馬区\"\nCITY_NAME[CITY_NAME == 13121] = \"21足立区\"\nCITY_NAME[CITY_NAME == 13122] = \"22葛飾区\"\nCITY_NAME[CITY_NAME == 13123] = \"23江戸川区\"\n\n#Make Japanese Block name\nBLOCK = data[\"CITY_CODE\"].copy()\nBLOCK[BLOCK == 13101] = \"01都心・城南\"\nBLOCK[BLOCK == 13102] = \"01都心・城南\"\nBLOCK[BLOCK == 13103] = \"01都心・城南\"\nBLOCK[BLOCK == 13104] = \"01都心・城南\"\nBLOCK[BLOCK == 13109] = \"01都心・城南\"\nBLOCK[BLOCK == 13110] = \"01都心・城南\"\nBLOCK[BLOCK == 13111] = \"01都心・城南\"\nBLOCK[BLOCK == 13112] = \"01都心・城南\"\nBLOCK[BLOCK == 13113] = \"01都心・城南\"\nBLOCK[BLOCK == 13114] = \"02城西・城北\"\nBLOCK[BLOCK == 13115] = \"02城西・城北\"\nBLOCK[BLOCK == 13105] = \"02城西・城北\"\nBLOCK[BLOCK == 13106] = \"02城西・城北\"\nBLOCK[BLOCK == 13116] = \"02城西・城北\"\nBLOCK[BLOCK == 13117] = \"02城西・城北\"\nBLOCK[BLOCK == 13119] = \"02城西・城北\"\nBLOCK[BLOCK == 13120] = \"02城西・城北\"\nBLOCK[BLOCK == 13107] = \"03城東\"\nBLOCK[BLOCK == 13108] = \"03城東\"\nBLOCK[BLOCK == 13118] = \"03城東\"\nBLOCK[BLOCK == 13121] = \"03城東\"\nBLOCK[BLOCK == 13122] = \"03城東\"\nBLOCK[BLOCK == 13123] = \"03城東\"\n\nnames = list(data.columns) + ['CITY_NAME', 'BLOCK']\ndata = pd.concat((data, CITY_NAME, BLOCK), axis = 1)\ndata.columns = names",
"市区町村別の件数を集計",
"print(data['CITY_NAME'].value_counts()) \n\nvars = ['P', 'S', 'L', 'R', 'RW', 'A', 'TS', 'TT', 'WOOD', 'SOUTH', 'CMD', 'IDD', 'FAR', 'X', 'Y']\neq = fml_build(vars)\n\ny, X = dmatrices(eq, data=data, return_type='dataframe')\n\nCITY_NAME = pd.get_dummies(data['CITY_NAME'])\nTDQ = pd.get_dummies(data['TDQ'])\n\nX = pd.concat((X, CITY_NAME, TDQ), axis=1)\n\ndatas = pd.concat((y, X), axis=1)\ndatas = datas[datas['12世田谷区'] == 1][0:5000]\n\nclass CAR(Chain):\n def __init__(self, unit1, unit2, unit3, col_num):\n self.unit1 = unit1\n self.unit2 = unit2\n self.unit3 = unit3\n super(CAR, self).__init__(\n l1 = L.Linear(col_num, unit1),\n l2 = L.Linear(self.unit1, self.unit1),\n l3 = L.Linear(self.unit1, self.unit2),\n l4 = L.Linear(self.unit2, self.unit3),\n l5 = L.Linear(self.unit3, self.unit3),\n l6 = L.Linear(self.unit3, 1),\n )\n \n def __call__(self, x, y):\n fv = self.fwd(x, y)\n loss = F.mean_squared_error(fv, y)\n return loss\n \n def fwd(self, x, y):\n h1 = F.sigmoid(self.l1(x))\n h2 = F.sigmoid(self.l2(h1))\n h3 = F.sigmoid(self.l3(h2))\n h4 = F.sigmoid(self.l4(h3))\n h5 = F.sigmoid(self.l5(h4))\n h6 = self.l6(h5)\n return h6\n\nclass DLmodel(object):\n def __init__(self, data, vars, bs=200, n=1000):\n self.vars = vars\n eq = fml_build(vars)\n y, X = dmatrices(eq, data=datas, return_type='dataframe')\n self.y_in = y[:-n]\n self.X_in = X[:-n]\n self.y_ex = y[-n:]\n self.X_ex = X[-n:]\n \n self.logy_in = np.log(self.y_in)\n self.logy_ex = np.log(self.y_ex)\n \n self.bs = bs\n \n def DL(self, ite=100, bs=200, add=False):\n y_in = np.array(self.y_in, dtype='float32') \n X_in = np.array(self.X_in, dtype='float32')\n\n y = Variable(y_in)\n x = Variable(X_in)\n\n num, col_num = X_in.shape\n \n if add is False:\n self.model1 = CAR(13, 13, 3, col_num)\n \n optimizer = optimizers.Adam()\n optimizer.setup(self.model1)\n \n loss_val = 100000000\n\n for j in range(ite + 10000):\n sffindx = np.random.permutation(num)\n for i in range(0, num, bs):\n x = Variable(X_in[sffindx[i:(i+bs) if (i+bs) < num else num]])\n y = Variable(y_in[sffindx[i:(i+bs) if (i+bs) < num else num]])\n self.model1.zerograds()\n loss = self.model1(x, y)\n loss.backward()\n optimizer.update()\n if loss_val >= loss.data:\n loss_val = loss.data\n if j > ite:\n if loss_val >= loss.data:\n loss_val = loss.data\n print('epoch:', j)\n print('train mean loss={}'.format(loss_val))\n print(' - - - - - - - - - ')\n break\n if j % 1000 == 0:\n print('epoch:', j)\n print('train mean loss={}'.format(loss_val))\n print(' - - - - - - - - - ')\n \n def predict(self):\n y_ex = np.array(self.y_ex, dtype='float32').reshape(len(self.y_ex))\n \n X_ex = np.array(self.X_ex, dtype='float32')\n X_ex = Variable(X_ex)\n resid_pred = self.model1.fwd(X_ex, X_ex).data \n print(resid_pred[:10])\n \n self.pred = resid_pred\n self.error = np.array(y_ex - self.pred.reshape(len(self.pred),))[0]\n \n def compare(self):\n plt.hist(self.error)\n\nvars = ['P', 'S', 'L', 'R', 'RW', 'A', 'TS', 'TT', 'WOOD', 'SOUTH', 'CMD', 'IDD', 'FAR']\n#vars += vars + list(TDQ.columns)\n\nmodel = DLmodel(datas, vars)\n\nmodel.DL(ite=20000, bs=200)\n\nmodel.DL(ite=20000, bs=200, add=True)\n\nmodel.predict()",
"青がOLSの誤差、緑がOLSと深層学習を組み合わせた誤差。",
"model.compare()\n\nprint(np.mean(model.error1))\nprint(np.mean(model.error2))\n\nprint(np.mean(np.abs(model.error1)))\nprint(np.mean(np.abs(model.error2)))\n\nprint(max(np.abs(model.error1)))\nprint(max(np.abs(model.error2)))\n\nprint(np.var(model.error1))\nprint(np.var(model.error2))\n\nfig = plt.figure()\nax = fig.add_subplot(111)\n\nerrors = [model.error1, model.error2]\n\nbp = ax.boxplot(errors)\n\nplt.grid()\nplt.ylim([-5000,5000])\n\nplt.title('分布の箱ひげ図')\n\nplt.show()\n\nX = model.X_ex['X'].values\nY = model.X_ex['Y'].values\n\ne = model.error2\n\nimport numpy\nimport matplotlib.pyplot as plt\nfrom mpl_toolkits.mplot3d.axes3d import Axes3D\n\nfig=plt.figure()\nax=Axes3D(fig)\n \nax.scatter3D(X, Y, e)\nplt.show()\n\nimport numpy as np\nimport matplotlib.pyplot as plt\nfrom mpl_toolkits.mplot3d import Axes3D\nimport matplotlib.tri as mtri\n\n\n\n#============\n# First plot\n#============\n# Plot the surface. The triangles in parameter space determine which x, y, z\n# points are connected by an edge.\nax = fig.add_subplot(1, 2, 1, projection='3d')\nax.plot_trisurf(X, Y, e)\nax.set_zlim(-1, 1)\nplt.show()"
] | [
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
schmooser/physics | Dynamical chaos.ipynb | mit | [
"Dynamical chaos\nThis notebook aims to provide some examples of dynamical systems demonstrating chaotical behaviour. We'll start from simple reccurent equations and go forward to space-time distributed systems.\nMost examples are taken from the book \"Dynamical chaos\" by S.Kuznetsov available in Russian.\nDefinitions\n\nDynamical system – object of various nature if it can be described by some dynamical variables determining system state and evolution of the system can be described by some arbitrary rule\nDissipative system – kind of a system where dynamics after transient process becomes independent on initial conditions\nAttractor – set of dynamical states in a dissipative system after the transient process is completed",
"%pylab\n%matplotlib inline\nimport pandas as pd\nimport matplotlib.pyplot as plt\n\n#import matplotlib\n#matplotlib.style.use('ggplot')\n\ndef iterator(f, x0=0, inf=1000):\n \"\"\"\n iterator function returns iterator for given function f.\n \n Parameters:\n f iterating function of one argument\n x0 initial condition\n inf maximum next absolute value of a variable when iteration stops\n \"\"\"\n x = x0 # x - iteration value, i - counter\n while abs(x) < inf: \n yield x\n x = f(x)\n\nassert [x for x in iterator(lambda x: x+2, x0=2, inf=15)] == [2, 4, 6, 8, 10, 12, 14]\n\ndef take(it, n=100, skip=10):\n \"\"\"\n take takes n points from iterator it skipping first skip points.\n \n Parameters:\n it iterator\n n number of results to return\n skip number of steps to skip\n \"\"\"\n i = 0\n while i < skip:\n try:\n it.next()\n i += 1\n except StopIteration:\n return []\n \n i = 0\n result = []\n while i < n:\n try:\n result.append(it.next())\n i += 1\n except StopIteration:\n return result\n return result\n\nassert take(iterator(lambda x: x+2, x0=1, inf=10000), n=5, skip=5) == [11, 13, 15, 17, 19]\n\ndef diagram_points(xs):\n \"\"\"\n diagram_points takes list of numbers and returns a list\n of tuples where each tuple corresponds to a point\n on iterative diagram.\n \"\"\"\n #result = [(xs[0], 0)]\n result = []\n for x, y in zip(xs, xs[1:]):\n result.append((x,x))\n result.append((x,y))\n return result\n \n#assert diagram_points([1,2,3]) == [(1,0), (1,1), (1,2), (2,2), (2,3)]\nassert diagram_points([1,2,3]) == [(1,1), (1,2), (2,2), (2,3)]\n\ndef linspace(start=1.0, stop=10.0, step=1.0):\n \"\"\"\n linspace returns list of linear space steps from start to stop\n devided by step\n \"\"\"\n return [start+i*step for i in range(int((stop-start)/step)+1)]\n\nassert linspace(1,3,0.5) == [1, 1.5, 2, 2.5, 3]\nassert linspace() == [1,2,3,4,5,6,7,8,9,10]\n\n# boundary is a function which returns boundary curve for given function\nboundary = lambda f, limits: pd.DataFrame([(x, f(x)) for x in linspace(limits[0], limits[1], 0.001)])\n\n# let's draw cobweb plot diagram x_{n+1} over x_n of sawtooth map\n# along with evolution of x_n over n\n\ndef cobweb_plot(xs, limits=[0,1], title='Plot', *dfs):\n fig, axes = plt.subplots(1, 2, figsize=(15, 6));\n plt.subplots_adjust(wspace=0.5, hspace=0.5);\n\n pd.DataFrame(zip(limits,limits)).plot(x=0, y=1, ax=axes[0], xlim=limits, ylim=limits, legend=False, color='k')\n for df in dfs:\n df.plot(x=0, y=1, ax=axes[0], legend=False, color='k')\n\n pd.DataFrame(diagram_points(xs), columns=('n', 'n1')).plot(x='n', y='n1', style='o-', ax=axes[0], legend=False)\n pd.DataFrame(xs).plot(style='o-', ax=axes[1], legend=False)\n\n axes[0].set_xlabel('x_n')\n axes[0].set_ylabel('x_n+1')\n axes[1].set_xlabel('n')\n axes[1].set_ylabel('x_n')",
"Sawtooth map\nLet's examine simple system where each next element is derived by previous element by the following rule:\n$$x_{n+1}={2 x_n}$$\nwhere operator ${}$ means taking decimal part of a number.",
"#from math import trunc\nassert trunc(1.5) == 1.0\nassert trunc(12.59) == 12.0\nassert trunc(1) == 1.0\n\nsawtooth = lambda x: round(2*x-trunc(2*x),8)\n\nsawtooth_borders = [pd.DataFrame([(0,0),(0.5,1)]), pd.DataFrame([(0.5,0),(1,1)])]\nfor x0 in [0.4, 0.41, 0.42, 0.43]:\n cobweb_plot(take(iterator(sawtooth, x0=x0), n=30, skip=500), [0,1], *sawtooth_borders)",
"Logistic map\nHere we have very simple equation:\n$$x_{n+1} = k x_n (1 - x_n)$$\nwhere $k$ is some fixed constant.",
"logistic = lambda k, x: k*x*(1-x)\n\nlimits = [0,1]\nfor k in [2.8, 3.2, 3.5, 3.9]:\n l = lambda x: logistic(k, x)\n cobweb_plot(take(iterator(l, x0=0.1), n=30, skip=500), limits, 'plot', boundary(l, limits))\n\n# let's plot the bifurcation diagram\n\ndots = []\nfor k in linspace(2.5, 4, 0.001):\n for dot in set(take(iterator(lambda x: logistic(k, x), x0=0.5), n=50, skip=500)):\n dots.append((k, dot))\n\ndf = pd.DataFrame(dots, columns=('k', 'xs'))\ndf.plot(x='k', y='xs', kind='scatter', style='.', figsize=(16,12), s=1, xlim=[2.5,4], ylim=[0,1])"
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
tpin3694/tpin3694.github.io | mathematics/argmin_and_argmax.ipynb | mit | [
"Title: argmin and argmax\nSlug: argmin_and_argmax\nSummary: An explanation of argmin and argmax in Python. \nDate: 2016-01-23 12:00\nCategory: Mathematics\nTags: Basics \nAuthors: Chris Albon \nargmin and argmax are the inputs, x's, to a function, f, that creates the smallest and largest outputs, f(x).\nPreliminaries",
"import numpy as np\nimport pandas as pd\nnp.random.seed(1)",
"Define A Function, f(x)",
"# Define a function that,\ndef f(x):\n # Outputs x multiplied by a random number drawn from a normal distribution\n return x * np.random.normal(size=1)[0]",
"Create Some Values Of x",
"# Create some values of x\nxs = [1,2,3,4,5,6]",
"Find The Argmin Of f(x)",
"#Define argmin that\ndef argmin(f, xs):\n # Applies f on all the x's\n data = [f(x) for x in xs]\n\n # Finds index of the smallest output of f(x)\n index_of_min = data.index(min(data))\n \n # Returns the x that produced that output\n return xs[index_of_min]\n\n# Run the argmin function\nargmin(f, xs)",
"Check Our Results",
"print('x','|', 'f(x)')\nprint('--------------')\nfor x in xs:\n print(x,'|', f(x))"
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
kanhua/pypvcell | legacy/Enhancement in Jsc of back reflector.ipynb | apache-2.0 | [
"Introduction\nThis script calculates the potential increase of photocurrent. The question that I want to resolve in this calculation is: does any kind of back reflector help the absorption of crystalline silicon?\nMethod\nI assume we have a 100%-reflectivity mirror attached at the back side of silicon substrate. The structure is like this:",
"from IPython.display import Image\nImage('/Users/kanhua/Dropbox/Programming/pypvcell examples/si_back_reflector.png')",
"We use experimental absorption coefficients of crystalline silicon and assume that the absorptivity follows Beer-Lambert's law. Also, we assume that every abosorped photons can be converted into electrons. In this case, having a 100% reflector on the back side of silicon can be thought of as doubling the thickness of silicon substrate.\nCalculation",
"%matplotlib inline\nfrom pypvcell.photocurrent import conv_abs_to_qe,calc_jsc\nfrom pypvcell.illumination import Illumination\nfrom pypvcell.spectrum import Spectrum\nimport numpy as np\nimport matplotlib.pyplot as plt\n\n\nabs_file = \"/Users/kanhua/Dropbox/Programming/pypvcell/legacy/si_alpha.csv\"\n\nsi_alpha = np.loadtxt(abs_file, delimiter=',')\nsi_alpha_sp = Spectrum(si_alpha[:,0],si_alpha[:,1],'m')\n\nlayer_t=np.logspace(-8,-3,num=100)\njsc_baseline=np.zeros(layer_t.shape)\n\njsc_full_r=np.zeros(layer_t.shape)\n\nit=np.nditer(layer_t,flags=['f_index'])\nill=Illumination(\"AM1.5g\")\n\ndef filter_spec(ill):\n ill_a=ill.get_spectrum(to_x_unit='eV',to_photon_flux=True)\n ill_a=ill_a[:,ill_a[0,:]>1.1]\n ill_a=ill_a[:,ill_a[0,:]<1.42]\n ill_a[1,1]=0\n return Spectrum(ill_a[0,:],ill_a[1,:],'eV',y_unit='m**-2',is_spec_density=True,is_photon_flux=False)\n\n#ill=filter_spec(ill)\n\nwhile not it.finished:\n\n t=it[0] #thickness of Si layer\n\n qe=conv_abs_to_qe(si_alpha_sp,t)\n jsc_baseline[it.index]=calc_jsc(ill, qe)\n\n # Assme 100% reflection on the back side, essentially doubling the thickness of silicon\n qe_full_r=conv_abs_to_qe(si_alpha_sp,t*2)\n jsc_full_r[it.index]=calc_jsc(ill,qe_full_r)\n\n it.iternext()\nit.reset()",
"Photocurrent with and without the back reflector",
"plt.semilogx(layer_t*1e6, jsc_baseline,hold=True,label=\"Si\")\nplt.semilogx(layer_t*1e6,jsc_full_r,label=\"Si+100% mirror\")\nplt.xlabel(\"thickness of Si substrate (um)\")\nplt.ylabel(\"Jsc (A/m^2)\")\nplt.legend(loc=\"best\")",
"Normlize the Jsc(Si+mirror) by Jsc(Si only)",
"plt.semilogx(layer_t*1e6,jsc_full_r/jsc_baseline)\nplt.xlabel(\"thickness of Si substrate (um)\")\nplt.ylabel(\"Normalized Jsc enhancement\")\nplt.savefig(\"jsc_enhancement.pdf\")\nplt.show()",
"We can see that the back reflector can be very effective when the thickness of silicon substrate is thin (< 1um). Silicon substrates with more than 10-um thicknesses cannot be benefited from this structure very well. This is the reason that photonic or plasmonic structure are useful for thin-film or ultra-thin-film silicon cell, but not conventional bulk crystalline silicon cell.",
"# more detailed investigation\nplt.semilogx(layer_t*1e6,jsc_full_r/jsc_baseline)\nplt.xlabel(\"thickness of Si substrate (um)\")\nplt.ylabel(\"Jsc enhancement (2x)\")\nplt.xlim([100,1000])\nplt.ylim([1.0,1.5])\nplt.show()",
"More audacious assumption\nAssume that somehow we have a novel reflector that can increase the optical absorption length by 10 times.",
"while not it.finished:\n\n t=it[0] #thickness of Si layer\n\n qe=conv_abs_to_qe(si_alpha_sp,t)\n jsc_baseline[it.index]=calc_jsc(Illumination(\"AM1.5g\"), qe)\n\n # Assme 100% reflection on the back side, essentially doubling the thickness of silicon\n qe_full_r=conv_abs_to_qe(si_alpha_sp,t*10)\n jsc_full_r[it.index]=calc_jsc(Illumination(\"AM1.5g\"),qe_full_r)\n\n it.iternext()\nit.reset()\n\nplt.semilogx(layer_t*1e6,jsc_full_r/jsc_baseline)\nplt.xlabel(\"thickness of Si substrate (um)\")\nplt.ylabel(\"Jsc enhancement (10x)\")\nplt.show()",
"We can see that increasing the optical absorption length by 10 times does not increase the photocurrent much for thick silicon substrates.\nConclusion\nThe result is not to say that using photonic/plasmonic structure to enhance the photocurrent of thick silicon substrates is completely hopeless. In my view, to make this possible, this photonic/plasmonic structure should\n\nHave the absortpion mechanism other than Beer-Lambert's law.\nIncrease the optical abosrption length of silicon by a very large amount (at least 10 times or more)."
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
dm-wyncode/zipped-code | content/posts/makefile-tutorial/makefile_tutorial_0.ipynb | mit | [
"Learning how to make a Makefile\nAdapted from swcarpentry/make-novice repository.\nMake’s fundamental concepts are common across build tools.\n\nGNU Make is a free, fast, well-documented, and very popular Make implementation. From now on, we will focus on it, and when we say Make, we mean GNU Make.\n\nA tutorial named Introduction.\nCells that follow are the result of following this introduction.\nI have adapted the tutorial so that the steps take place in this Jupyter notebook so that the notebook can be transpiled into a Pelican blog post using a danielfrg/pelican-ipynb Pelican plugin.\nSome Jupyter notebook housekeeping to set up some variables with path references.",
"import os\n\n(\n TAB_CHAR,\n) = (\n '\\t',\n)\n\nhome = os.path.expanduser('~')",
"repo_path is the path to a clone of swcarpentry/make-novice",
"repo_path = os.path.join(\n home, \n 'Dropbox/spikes/make-novice',\n)\n\nassert os.path.exists(repo_path)",
"paths are the paths to child directories in a clone of swcarpentry/make-novice",
"paths = (\n 'code',\n 'data',\n)\npaths = (\n code,\n data,\n) = [os.path.join(repo_path, path) for path in paths]\nassert all(os.path.exists(path) for path in paths)",
"Begin tutorial.\nUse the magic run to execute the Python script wordcount.py.\nThe variables with '$' in front of them are the values of the Python variables in this\nnotebook.",
"run $code/wordcount.py $data/books/isles.txt $repo_path/isles.dat",
"Use shell to examine the first 5 lines of the output file from running wordcount.py",
"!head -5 $repo_path/isles.dat",
"We can see that the file consists of one row per word. Each row shows the word itself, the number of occurrences of that word, and the number of occurrences as a percentage of the total number of words in the text file.",
"run $code/wordcount.py $data/books/abyss.txt $repo_path/abyss.dat\n\n!head -5 $repo_path/abyss.dat",
"Let’s visualize the results. The script plotcount.py reads in a data file and plots the 10 most frequently occurring words as a text-based bar plot:",
"run $code/plotcount.py $repo_path/isles.dat ascii",
"plotcount.py can also show the plot graphically",
"run $code/plotcount.py $repo_path/isles.dat show",
"plotcount.py can also create the plot as an image file (e.g. a PNG file)",
"run $code/plotcount.py $repo_path/isles.dat $repo_path/isles.png",
"Import the objects necessary to display the generated png file in this notebook.",
"from IPython.display import Image\nImage(filename=os.path.join(repo_path, 'isles.png'))",
"Finally, let’s test Zipf’s law for these books\nThe most frequently-occurring word occurs approximately twice as often as the second most frequent word. This is Zipf’s Law.",
"run $code/zipf_test.py $repo_path/abyss.dat $repo_path/isles.dat",
"What we really want is an executable description of our pipeline that allows software to do the tricky part for us: figuring out what steps need to be rerun.\n\nCreate a file, called Makefile, with the following contents.\nPython's built-in format is used to create the contents of the Makefile.",
"makefile_contents = \"\"\"\n# Count words.\n{repo_path}/isles.dat : {data}/books/isles.txt\n{tab_char}python {code}/wordcount.py {data}/books/isles.txt {repo_path}/isles.dat\n\"\"\".format(code=code, data=data, repo_path=repo_path, tab_char=TAB_CHAR)",
"Write the contents to a file named Makefile.",
"with open('Makefile', 'w') as fh:\n fh.write(makefile_contents)",
"Let’s first sure we start from scratch and delete the .dat and .png files we created earlier:\n\nRun rm in shell.",
"!rm $repo_path/*.dat $repo_path/*.png",
"Run make in shell.\n\nBy default, Make prints out the actions it executes:",
"!make",
"Let’s see if we got what we expected.\n\nRun head in shell.",
"!head -5 $repo_path/isles.dat",
"A simple Makefile was created. If the dependencies exist, the commands are not run.\n\nUnlike shell scripts it explicitly records the dependencies between files - what files are needed to create what other files - and so can determine when to recreate our data files or image files, if our text files change. Make can be used for any commands that follow the general pattern of processing files to create new files…\n\ntutorial continues: Makefiles"
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
kubeflow/pipelines | samples/core/lightweight_component/lightweight_component.ipynb | apache-2.0 | [
"Lightweight python components\nLightweight python components do not require you to build a new container image for every code change.\nThey're intended to use for fast iteration in notebook environment.\nBuilding a lightweight python component\nTo build a component just define a stand-alone python function and then call kfp.components.func_to_container_op(func) to convert it to a component that can be used in a pipeline.\nThere are several requirements for the function:\n* The function should be stand-alone. It should not use any code declared outside of the function definition. Any imports should be added inside the main function. Any helper functions should also be defined inside the main function.\n* The function can only import packages that are available in the base image. If you need to import a package that's not available you can try to find a container image that already includes the required packages. (As a workaround you can use the module subprocess to run pip install for the required package. There is an example below in my_divmod function.)\n* If the function operates on numbers, the parameters need to have type hints. Supported types are [int, float, bool]. Everything else is passed as string.\n* To build a component with multiple output values, use the typing.NamedTuple type hint syntax: NamedTuple('MyFunctionOutputs', [('output_name_1', type), ('output_name_2', float)])",
"# Install the SDK\n!pip3 install 'kfp>=0.1.31.2' --quiet\n\nimport kfp.deprecated as kfp\nimport kfp.deprecated.components as components",
"Simple function that just add two numbers:",
"#Define a Python function\ndef add(a: float, b: float) -> float:\n '''Calculates sum of two arguments'''\n return a + b",
"Convert the function to a pipeline operation",
"add_op = components.create_component_from_func(add)",
"A bit more advanced function which demonstrates how to use imports, helper functions and produce multiple outputs.",
"#Advanced function\n#Demonstrates imports, helper functions and multiple outputs\nfrom typing import NamedTuple\ndef my_divmod(dividend: float, divisor:float) -> NamedTuple('MyDivmodOutput', [('quotient', float), ('remainder', float), ('mlpipeline_ui_metadata', 'UI_metadata'), ('mlpipeline_metrics', 'Metrics')]):\n '''Divides two numbers and calculate the quotient and remainder'''\n \n #Imports inside a component function:\n import numpy as np\n\n #This function demonstrates how to use nested functions inside a component function:\n def divmod_helper(dividend, divisor):\n return np.divmod(dividend, divisor)\n\n (quotient, remainder) = divmod_helper(dividend, divisor)\n\n from tensorflow.python.lib.io import file_io\n import json\n \n # Exports a sample tensorboard:\n metadata = {\n 'outputs' : [{\n 'type': 'tensorboard',\n 'source': 'gs://ml-pipeline-dataset/tensorboard-train',\n }]\n }\n\n # Exports two sample metrics:\n metrics = {\n 'metrics': [{\n 'name': 'quotient',\n 'numberValue': float(quotient),\n },{\n 'name': 'remainder',\n 'numberValue': float(remainder),\n }]}\n\n from collections import namedtuple\n divmod_output = namedtuple('MyDivmodOutput', ['quotient', 'remainder', 'mlpipeline_ui_metadata', 'mlpipeline_metrics'])\n return divmod_output(quotient, remainder, json.dumps(metadata), json.dumps(metrics))",
"Test running the python function directly",
"my_divmod(100, 7)",
"Convert the function to a pipeline operation\nYou can specify an alternative base container image (the image needs to have Python 3.5+ installed).",
"divmod_op = components.create_component_from_func(my_divmod, base_image='tensorflow/tensorflow:1.11.0-py3')",
"Define the pipeline\nPipeline function has to be decorated with the @dsl.pipeline decorator",
"import kfp.deprecated.dsl as dsl\[email protected](\n name='calculation-pipeline',\n description='A toy pipeline that performs arithmetic calculations.'\n)\ndef calc_pipeline(\n a=7,\n b=8,\n c=17,\n):\n #Passing pipeline parameter and a constant value as operation arguments\n add_task = add_op(a, 4) #Returns a dsl.ContainerOp class instance. \n \n #Passing a task output reference as operation arguments\n #For an operation with a single return value, the output reference can be accessed using `task.output` or `task.outputs['output_name']` syntax\n divmod_task = divmod_op(add_task.output, b)\n\n #For an operation with a multiple return values, the output references can be accessed using `task.outputs['output_name']` syntax\n result_task = add_op(divmod_task.outputs['quotient'], c)",
"Submit the pipeline for execution",
"#Specify pipeline argument values\narguments = {'a': 7, 'b': 8}\n\n#Submit a pipeline run\nkfp.Client().create_run_from_pipeline_func(calc_pipeline, arguments=arguments)\n\n# Run the pipeline on a separate Kubeflow Cluster instead\n# (use if your notebook is not running in Kubeflow - e.x. if using AI Platform Notebooks)\n# kfp.Client(host='<ADD KFP ENDPOINT HERE>').create_run_from_pipeline_func(calc_pipeline, arguments=arguments)\n\n#vvvvvvvvv This link leads to the run information page. (Note: There is a bug in JupyterLab that modifies the URL and makes the link stop working)"
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
InsightLab/data-science-cookbook | 2019/12-spark/14-spark-mllib-classification/mllibClass_OtacilioBezerra.ipynb | mit | [
"Hands-on!\nNessa prática, sugerimos alguns pequenos exemplos para você implementar sobre o Spark.\nLogistic Regression com Cross-Validation\nNo exercício LogisticRegression foi utilizado TrainValidationSplit como abordagem de avaliação do modelo gerado. Atualize o exercício consideram CrossValidator e compare os resultados. Não esqueça de utilizar Pipeline.\nBibliotecas",
"from pyspark.ml.classification import LogisticRegression\nfrom pyspark.ml.evaluation import RegressionEvaluator, MulticlassClassificationEvaluator\nfrom pyspark.ml import Pipeline\nfrom pyspark.mllib.regression import LabeledPoint\nfrom pyspark.ml.linalg import Vectors\nfrom pyspark.ml.feature import StringIndexer\nfrom pyspark.mllib.evaluation import MulticlassMetrics\nfrom pyspark.ml.tuning import ParamGridBuilder, TrainValidationSplit, CrossValidator",
"Funções",
"def mapLibSVM(row): \n return (row[5],Vectors.dense(row[:3]))\n\ndf = spark.read \\\n .format(\"csv\") \\\n .option(\"header\", \"true\") \\\n .option(\"inferSchema\", \"true\") \\\n .load(\"datasets/iris.data\")",
"Convertendo a saída de categórica para numérica",
"indexer = StringIndexer(inputCol=\"label\", outputCol=\"labelIndex\")\nindexer = indexer.fit(df).transform(df)\nindexer.show()\n\ndfLabeled = indexer.rdd.map(mapLibSVM).toDF([\"label\", \"features\"])\ndfLabeled.show()\n\ntrain, test = dfLabeled.randomSplit([0.9, 0.1], seed=12345)",
"Definição do Modelo Logístico",
"lr = LogisticRegression(labelCol=\"label\", maxIter=15)",
"Cross-Validation - TrainValidationSplit e CrossValidator",
"paramGrid = ParamGridBuilder()\\\n .addGrid(lr.regParam, [0.1, 0.001]) \\\n .build()\n\ntvs = TrainValidationSplit(estimator=lr,\n estimatorParamMaps=paramGrid,\n evaluator=MulticlassClassificationEvaluator(),\n trainRatio=0.8)\n\ncval = CrossValidator(estimator=lr,\n estimatorParamMaps=paramGrid,\n evaluator=MulticlassClassificationEvaluator(),\n numFolds=10)",
"Treino do Modelo e Predição do Teste",
"result_tvs = tvs.fit(train).transform(test)\nresult_cval = cval.fit(train).transform(test)\n\npreds_tvs = result_tvs.select([\"prediction\", \"label\"])\npreds_cval = result_cval.select([\"prediction\", \"label\"])",
"Avaliação dos Modelos",
"# Instânciação dos Objetos de Métrics\nmetrics_tvs = MulticlassMetrics(preds_tvs.rdd)\nmetrics_cval = MulticlassMetrics(preds_cval.rdd)\n\n# Estatísticas Gerais para o Método TrainValidationSplit\nprint(\"Summary Stats\")\nprint(\"F1 Score = %s\" % metrics_tvs.fMeasure())\nprint(\"Accuracy = %s\" % metrics_tvs.accuracy)\nprint(\"Weighted recall = %s\" % metrics_tvs.weightedRecall)\nprint(\"Weighted precision = %s\" % metrics_tvs.weightedPrecision)\nprint(\"Weighted F(1) Score = %s\" % metrics_tvs.weightedFMeasure())\nprint(\"Weighted F(0.5) Score = %s\" % metrics_tvs.weightedFMeasure(beta=0.5))\nprint(\"Weighted false positive rate = %s\" % metrics_tvs.weightedFalsePositiveRate)\n\n# Estatísticas Gerais para o Método TrainValidationSplit\nprint(\"Summary Stats\")\nprint(\"F1 Score = %s\" % metrics_cval.fMeasure())\nprint(\"Accuracy = %s\" % metrics_cval.accuracy)\nprint(\"Weighted recall = %s\" % metrics_cval.weightedRecall)\nprint(\"Weighted precision = %s\" % metrics_cval.weightedPrecision)\nprint(\"Weighted F(1) Score = %s\" % metrics_cval.weightedFMeasure())\nprint(\"Weighted F(0.5) Score = %s\" % metrics_cval.weightedFMeasure(beta=0.5))\nprint(\"Weighted false positive rate = %s\" % metrics_cval.weightedFalsePositiveRate)",
"Conclusão:\nUma vez que ambos os modelos de CrossValidation usam o mesmo modelo de predição (a Regressão Logística), e contando com o fato de que o dataset é relativamente pequeno, é natural que ambos os métodos de CrossValidation encontrem o mesmo (ou aproximadamente igual) valor ótimo para os hyperparâmetros testados.\nPor esse motivo, após descobrirem esse valor de hiperparâmetros, os dois modelos irão demonstrar resultados bastante similiares quando avaliados sobre o Conjunto de Treino (que também é o mesmo para os dois modelos).\n\nRandom Forest\nUse o exercício anterior como base, mas agora utilizando pyspark.ml.classification.RandomForestClassifier. Use Pipeline e CrossValidator para avaliar o modelo gerado.\nBibliotecas",
"from pyspark.ml.classification import RandomForestClassifier",
"Definição do Modelo de Árvores Randômicas",
"rf = RandomForestClassifier(labelCol=\"label\", featuresCol=\"features\")",
"Cross-Validation - CrossValidator",
"paramGrid = ParamGridBuilder()\\\n .addGrid(rf.numTrees, [1, 100]) \\\n .build()\n\ncval = CrossValidator(estimator=rf,\n estimatorParamMaps=paramGrid,\n evaluator=MulticlassClassificationEvaluator(),\n numFolds=10)",
"Treino do Modelo e Predição do Teste",
"results = cval.fit(train).transform(test)\n\npredictions = results.select([\"prediction\", \"label\"])",
"Avaliação do Modelo",
"# Instânciação dos Objetos de Métrics\nmetrics = MulticlassMetrics(predictions.rdd)\n\n# Estatísticas Gerais para o Método TrainValidationSplit\nprint(\"Summary Stats\")\nprint(\"F1 Score = %s\" % metrics.fMeasure())\nprint(\"Accuracy = %s\" % metrics.accuracy)\nprint(\"Weighted recall = %s\" % metrics.weightedRecall)\nprint(\"Weighted precision = %s\" % metrics.weightedPrecision)\nprint(\"Weighted F(1) Score = %s\" % metrics.weightedFMeasure())\nprint(\"Weighted F(0.5) Score = %s\" % metrics.weightedFMeasure(beta=0.5))\nprint(\"Weighted false positive rate = %s\" % metrics.weightedFalsePositiveRate)",
"Conclusão:\nUma vez que o RandomForest é um classificador relatiamente robusto, e o Iris é um problema relativamente simples, é notável que esse modelo é suficientemente capaz de perfeitamente predizer observações desse dataset."
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
pgmpy/pgmpy | examples/Structure Learning in Bayesian Networks.ipynb | mit | [
"Structure Learning in Bayesian Networks\nIn this notebook, we show examples for using the Structure Learning Algorithms in pgmpy. Currently, pgmpy has implementation of 3 main algorithms:\n1. PC with stable and parallel variants.\n2. Hill-Climb Search\n3. Exhaustive Search\nFor PC the following conditional independence test can be used:\n1. Chi-Square test (https://en.wikipedia.org/wiki/Chi-squared_test)\n2. Pearsonr (https://en.wikipedia.org/wiki/Partial_correlation#Using_linear_regression)\n3. G-squared (https://en.wikipedia.org/wiki/G-test)\n4. Log-likelihood (https://en.wikipedia.org/wiki/G-test)\n5. Freeman-Tuckey (Read, Campbell B. \"Freeman—Tukey chi-squared goodness-of-fit statistics.\" Statistics & probability letters 18.4 (1993): 271-278.)\n6. Modified Log-likelihood\n7. Neymann (https://en.wikipedia.org/wiki/Neyman%E2%80%93Pearson_lemma)\n8. Cressie Read (Cressie, Noel, and Timothy RC Read. \"Multinomial goodness‐of‐fit tests.\" Journal of the Royal Statistical Society: Series B (Methodological) 46.3 (1984): 440-464)\n9. Power Divergence (Cressie, Noel, and Timothy RC Read. \"Multinomial goodness‐of‐fit tests.\" Journal of the Royal Statistical Society: Series B (Methodological) 46.3 (1984): 440-464.)\nFor Hill-Climb and Exhausitive Search the following scoring methods can be used:\n1. K2 Score\n2. BDeu Score\n3. Bic Score\nGenerate some data",
"from itertools import combinations\n\nimport networkx as nx\nfrom sklearn.metrics import f1_score\n\nfrom pgmpy.estimators import PC, HillClimbSearch, ExhaustiveSearch\nfrom pgmpy.estimators import K2Score\nfrom pgmpy.utils import get_example_model\nfrom pgmpy.sampling import BayesianModelSampling\n\nmodel = get_example_model(\"alarm\")\nsamples = BayesianModelSampling(model).forward_sample(size=int(1e3))\nsamples.head()\n\n# Funtion to evaluate the learned model structures.\ndef get_f1_score(estimated_model, true_model):\n nodes = estimated_model.nodes()\n est_adj = nx.to_numpy_matrix(\n estimated_model.to_undirected(), nodelist=nodes, weight=None\n )\n true_adj = nx.to_numpy_matrix(\n true_model.to_undirected(), nodelist=nodes, weight=None\n )\n\n f1 = f1_score(np.ravel(true_adj), np.ravel(est_adj))\n print(\"F1-score for the model skeleton: \", f1)",
"Learn the model structure using PC",
"est = PC(data=samples)\nestimated_model = est.estimate(variant=\"stable\", max_cond_vars=4)\nget_f1_score(estimated_model, model)\n\nest = PC(data=samples)\nestimated_model = est.estimate(variant=\"orig\", max_cond_vars=4)\nget_f1_score(estimated_model, model)",
"Learn the model structure using Hill-Climb Search",
"scoring_method = K2Score(data=samples)\nest = HillClimbSearch(data=samples)\nestimated_model = est.estimate(\n scoring_method=scoring_method, max_indegree=4, max_iter=int(1e4)\n)\nget_f1_score(estimated_model, model)"
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
sloanesturz/cs224u-final-project | sippycup-unit-3.ipynb | gpl-2.0 | [
"<img src=\"img/sippycup-small.jpg\" align=\"left\" style=\"padding-right: 30px\"/>\n<h1 style=\"line-height: 125%\">\n SippyCup<br />\n Unit 3: Geography queries\n</h1>\n\n<p>\n <a href=\"http://nlp.stanford.edu/~wcmac/\">Bill MacCartney</a><br/>\n Spring 2015\n <!-- <a href=\"mailto:[email protected]\">[email protected]</a> -->\n</p>\n\n<div style=\"margin: 0px 0px; padding: 10px; background-color: #ddddff; border-style: solid; border-color: #aaaacc; border-width: 1px\">\nThis is Unit 3 of the <a href=\"./sippycup-unit-0.ipynb\">SippyCup codelab</a>.\n</div>\n\nOur third case study will examine the domain of geography queries. In particular, we'll focus on the Geo880 corpus, which contains 880 queries about U.S. geography. Examples include:\n\"which states border texas?\"\n\"how many states border the largest state?\"\n\"what is the size of the capital of texas?\"\n\nThe Geo880 queries have a quite different character from the arithmetic queries and travel queries we have examined previously. They differ from the arithmetic queries in using a large vocabulary, and in exhibiting greater degrees of both lexical and syntactic ambiguity. They differ from the travel queries in adhering to conventional rules for spelling and syntax, and in having semantics with arbitrarily complex compositional structure. For example:\n\"what rivers flow through states that border the state with the largest population?\"\n\"what is the population of the capital of the largest state through which the mississippi runs?\"\n\"what is the longest river that passes the states that border the state that borders the most states?\"\n\nGeo880 was developed in Ray Mooney's group at UT Austin. It is of particular interest because it has for many years served as a standard evaluation for semantic parsing systems. (See, for example, Zelle & Mooney 1996, Tang & Mooney 2001, Zettlemoyer & Collins 2005, and Liang et al. 2011.) It has thereby become, for many, a paradigmatic application of semantic parsing. It has also served as a bridge between an older current of research on natural language interfaces to databases (NLIDBs) (see Androutsopoulos et al. 1995) and the modern era of semantic parsing.\nThe domain of geography queries is also of interest because there are many plausible real-world applications for semantic parsing which similarly involve complex compositional queries against a richly structured knowledge base. For example, some people are passionate about baseball statistics, and might want to ask queries like:\n\"pitchers who have struck out four batters in one inning\"\n\"players who have stolen at least 100 bases in a season\"\n\"complete games with fewer than 90 pitches\"\n\"most home runs hit in one game\"\n\nEnvironmental advocates and policymakers might have queries like:\n\"which country has the highest co2 emissions\"\n\"what five countries have the highest per capita co2 emissions\"\n\"what country's co2 emissions increased the most over the last five years\"\n\"what fraction of co2 emissions was from european countries in 2010\"\n\nTechniques that work in the geography domain are like to work in these other domains too.\nThe Geo880 dataset\nI've been told that the Geo880 queries were collected from students in classes taught by Ray Mooney at UT Austin. I'm not sure whether I've got the story right. 
But this account is consistent with one of the notable limitations of the dataset: it is not a natural distribution, and not a realistic representation of the geography queries that people actually ask on, say, Google. Nobody ever asks, \"what is the longest river that passes the states that border the state that borders the most states?\" Nobody. Ever.\nThe dataset was published online by Rohit Jaivant Kate in a Prolog file containing semantic representations in Prolog style. It was later republished by Yuk Wah Wong as an XML file containing additional metadata for each example, including translations into Spanish, Japanese, and Turkish; syntactic parse trees; and semantics in two different representations: Prolog and FunQL.\nIn SippyCup, we're not going to use either Prolog or FunQL semantics. Instead, we'll use examples which have been annotated only with denotations (which were provided by Percy Liang — thanks!). Of course, our grammar will require a semantic representation, even if our examples are not annotated with semantics. We will introduce one below.\nThe Geo880 dataset is conventionally divided into 600 training examples and 280 test examples. In SippyCup, the dataset can be found in geo880.py. Let's take a peek.",
"from geo880 import geo880_train_examples, geo880_test_examples\n\nprint('train examples:', len(geo880_train_examples))\nprint('test examples: ', len(geo880_test_examples))\nprint(geo880_train_examples[0])\nprint(geo880_test_examples[0])",
"The Geobase knowledge base\nGeobase is a small knowledge base about the geography of the United States. It contains (almost) all the information needed to answer queries in the Geo880 dataset, including facts about:\n\nstates: capital, area, population, major cities, neighboring states, highest and lowest points and elevations\ncities: containing state and population\nrivers: length and states traversed\nmountains: containing state and height\nroads: states traversed\nlakes: area, states traversed\n\nSippyCup contains a class called GeobaseReader (in geobase.py) which facilitates working with Geobase in Python. It reads and parses the Geobase Prolog file, and creates a set of tuples representing its content. Let's take a look.",
"from geobase import GeobaseReader\n\nreader = GeobaseReader()\nunaries = [str(t) for t in reader.tuples if len(t) == 2]\nprint('\\nSome unaries:\\n ' + '\\n '.join(unaries[:10]))\nbinaries = [str(t) for t in reader.tuples if len(t) == 3]\nprint('\\nSome binaries:\\n ' + '\\n '.join(binaries[:10]))",
"Some observations here:\n\nUnaries are pairs consisting of a unary predicate (a type) and an entity.\nBinaries are triples consisting of binary predicate (a relation) and two entities (or an entity and a numeric or string value).\nEntities are named by unique identifiers of the form /type/name. This is a GeobaseReader convention; these identifiers are not used in the original Prolog file.\nSome entities have the generic type place because they occur in the Prolog file only as the highest or lowest point in a state, and it's hard to reliably assign such points to one of the more specific types.\nThe original Prolog file is inconsistent about units. For example, the area of states is expressed in square miles, but the area of lakes is expressed in square kilometers. GeobaseReader converts everything to SI units: meters and square meters.\n\nSemantic representation <a id=\"geoquery-semantic-representation\"></a>\nGeobaseReader merely reads the data in Geobase into a set of tuples. It doesn't provide any facility for querying that data. That's where GraphKB and GraphKBExecutor come in. GraphKB is a graph-structured knowledge base, with indexing for fast lookups. GraphKBExecutor defines a representation for formal queries against that knowledge base, and supports query execution. The formal query language defined by GraphKBExecutor will serve as our semantic representation for the geography domain.\nThe GraphKB class\nA GraphKB is a generic graph-structured knowledge base, or equivalently, a set of relational pairs and triples, with indexing for fast lookups. It represents a knowledge base as set of tuples, each either:\n\na pair, consisting of a unary relation and an element which belongs to it,\n or\na triple consisting of a binary relation and a pair of elements which\n belong to it.\n\nFor example, we can construct a GraphKB representing facts about The Simpsons:",
"from graph_kb import GraphKB\n\nsimpsons_tuples = [\n\n # unaries\n ('male', 'homer'),\n ('female', 'marge'),\n ('male', 'bart'),\n ('female', 'lisa'),\n ('female', 'maggie'),\n ('adult', 'homer'),\n ('adult', 'marge'),\n ('child', 'bart'),\n ('child', 'lisa'),\n ('child', 'maggie'),\n\n # binaries\n ('has_age', 'homer', 36),\n ('has_age', 'marge', 34),\n ('has_age', 'bart', 10),\n ('has_age', 'lisa', 8),\n ('has_age', 'maggie', 1),\n ('has_brother', 'lisa', 'bart'),\n ('has_brother', 'maggie', 'bart'),\n ('has_sister', 'bart', 'maggie'),\n ('has_sister', 'bart', 'lisa'),\n ('has_sister', 'lisa', 'maggie'),\n ('has_sister', 'maggie', 'lisa'),\n ('has_father', 'bart', 'homer'),\n ('has_father', 'lisa', 'homer'),\n ('has_father', 'maggie', 'homer'),\n ('has_mother', 'bart', 'marge'),\n ('has_mother', 'lisa', 'marge'),\n ('has_mother', 'maggie', 'marge'),\n]\n\nsimpsons_kb = GraphKB(simpsons_tuples)",
"The GraphKB object now contains three indexes:\n\nunaries[U]: all entities belonging to unary relation U\nbinaries_fwd[B][E]: all entities X such that (E, X) belongs to binary relation B\nbinaries_rev[B][E]: all entities X such that (X, E) belongs to binary relation B\n\nFor example:",
"simpsons_kb.unaries['child']\n\nsimpsons_kb.binaries_fwd['has_sister']['lisa']\n\nsimpsons_kb.binaries_rev['has_sister']['lisa']",
"The GraphKBExecutor class\nA GraphKBExecutor executes formal queries against a GraphKB and returns their denotations.\nQueries are represented by Python tuples, and can be nested.\nDenotations are also represented by Python tuples, but are conceptually sets (possibly empty). The elements of these tuples are always sorted in canonical order, so that they can be reliably compared for set equality.\nThe query language defined by GraphKBExecutor is perhaps most easily explained by example:",
"queries = [\n 'bart',\n 'male',\n ('has_sister', 'lisa'), # who has sister lisa?\n ('lisa', 'has_sister'), # lisa has sister who, i.e., who is a sister of lisa?\n ('lisa', 'has_brother'), # lisa has brother who, i.e., who is a brother of lisa?\n ('.and', 'male', 'child'),\n ('.or', 'male', 'adult'),\n ('.not', 'child'),\n ('.any',), # anything\n ('.any', 'has_sister'), # anything has sister who, i.e., who is a sister of anything?\n ('.and', 'child', ('.not', ('.any', 'has_sister'))),\n ('.count', ('bart', 'has_sister')),\n ('has_age', ('.gt', 21)),\n ('has_age', ('.lt', 2)),\n ('has_age', ('.eq', 10)),\n ('.max', 'has_age', 'female'),\n ('.min', 'has_age', ('bart', 'has_sister')),\n ('.max', 'has_age', '.any'),\n ('.argmax', 'has_age', 'female'),\n ('.argmin', 'has_age', ('bart', 'has_sister')),\n ('.argmax', 'has_age', '.any'),\n]\n\nexecutor = simpsons_kb.executor()\nfor query in queries:\n print()\n print('Q ', query)\n print('D ', executor.execute(query))",
"Note that the query (R E) denotes entities having relation R to entity E,\nwhereas the query (E R) denotes entities to which entity E has relation R.\nFor a more detailed understanding of the style of semantic representation defined by GraphKBExecutor, take a look at the source code.\nUsing GraphKBExecutor with Geobase",
"geobase = GraphKB(reader.tuples)\nexecutor = geobase.executor()\nqueries = [\n ('/state/texas', 'capital'), # capital of texas\n ('.and', 'river', ('traverses', '/state/utah')), # rivers that traverse utah\n ('.argmax', 'height', 'mountain'), # tallest mountain\n]\nfor query in queries:\n print()\n print(query)\n print(executor.execute(query))",
"Grammar engineering\nIt's time to start developing a grammar for the geography domain. As in Unit 2, \nthe performance metric we'll focus on during grammar engineering is oracle accuracy (the proportion of examples for which any parse is correct), not accuracy (the proportion of examples for which the first parse is correct). Remember that oracle accuracy is an upper bound on accuracy, and is a measure of the expressive power of the grammar: does it have the rules it needs to generate the correct parse? The gap between oracle accuracy and accuracy, on the other hand, reflects the ability of the scoring model to bring the correct parse to the top of the candidate list. <!-- (TODO: rewrite.) -->\nAs always, we're going to take a data-driven approach to grammar engineering. We want to introduce rules which will enable us to handle the lexical items and syntactic structures that we actually observe in the Geo880 training data. To that end, let's count the words that appear among the 600 training examples. (We do not examine the test data!)",
"from collections import defaultdict\nfrom operator import itemgetter\nfrom geo880 import geo880_train_examples\n\nwords = [word for example in geo880_train_examples for word in example.input.split()]\ncounts = defaultdict(int)\nfor word in words:\n counts[word] += 1\ncounts = sorted([(count, word) for word, count in counts.items()], reverse=True)\nprint('There were %d tokens of %d types:\\n' % (len(words), len(counts)))\nprint(', '.join(['%s (%d)' % (word, count) for count, word in counts[:50]] + ['...']))",
"There are at least four major categories of words here:\n- Words that refer to entities, such as \"texas\", \"mississippi\", \"usa\", and \"austin\".\n- Words that refer to types, such as \"state\", \"river\", and \"cities\".\n- Words that refer to relations, such as \"in\", \"borders\", \"capital\", and \"long\".\n- Other function words, such as \"the\", \"what\", \"how\", and \"are\".\nOne might make finer distinctions, but this seems like a reasonable starting point. Note that these categories do not always correspond to traditional syntactic categories. While the entities are typically proper nouns, and the types are typically common nouns, the relations include prepositions, verbs, nouns, and adjectives.\nThe design of our grammar will roughly follow this schema. The major categories will include $Entity, $Type, $Collection, $Relation, and $Optional.\nOptionals\nIn Unit 2, our grammar engineering process didn't really start cooking until we introduced optionals. This time around, let's begin with the optionals. We'll define as $Optional every word in the Geo880 training data which does not plainly refer to an entity, type, or relation. And we'll let any query be preceded or followed by a sequence of one or more $Optionals.",
"from parsing import Grammar, Rule\n\noptional_words = [\n 'the', '?', 'what', 'is', 'in', 'of', 'how', 'many', 'are', 'which', 'that',\n 'with', 'has', 'major', 'does', 'have', 'where', 'me', 'there', 'give',\n 'name', 'all', 'a', 'by', 'you', 'to', 'tell', 'other', 'it', 'do', 'whose',\n 'show', 'one', 'on', 'for', 'can', 'whats', 'urban', 'them', 'list',\n 'exist', 'each', 'could', 'about'\n]\n\nrules_optionals = [\n Rule('$ROOT', '?$Optionals $Query ?$Optionals', lambda sems: sems[1]),\n Rule('$Optionals', '$Optional ?$Optionals'),\n] + [Rule('$Optional', word) for word in optional_words]",
"Because $Query has not yet been defined, we won't be able to parse anything yet.\nEntities and collections\nOur grammar will need to be able to recognize names of entities, such as \"utah\". There are hundreds of entities in Geobase, and we don't want to have to introduce a grammar rule for each entity. Instead, we'll define a new annotator, GeobaseAnnotator, which simply annotates phrases which exactly match names in Geobase.",
"from annotator import Annotator, NumberAnnotator\n\nclass GeobaseAnnotator(Annotator):\n def __init__(self, geobase):\n self.geobase = geobase\n\n def annotate(self, tokens):\n phrase = ' '.join(tokens)\n places = self.geobase.binaries_rev['name'][phrase]\n return [('$Entity', place) for place in places]",
"Now a couple of rules that will enable us to parse inputs that simply name locations, such as \"utah\".\n(TODO: explain rationale for $Collection and $Query.)",
"rules_collection_entity = [\n Rule('$Query', '$Collection', lambda sems: sems[0]),\n Rule('$Collection', '$Entity', lambda sems: sems[0]),\n]\n\nrules = rules_optionals + rules_collection_entity",
"Now let's make a grammar.",
"annotators = [NumberAnnotator(), GeobaseAnnotator(geobase)]\ngrammar = Grammar(rules=rules, annotators=annotators)",
"Let's try to parse some inputs which just name locations.",
"parses = grammar.parse_input('what is utah')\nfor parse in parses[:1]:\n print('\\n'.join([str(parse.semantics), str(executor.execute(parse.semantics))]))",
"Great, it worked. Now let's run an evaluation on the Geo880 training examples.",
"from experiment import sample_wins_and_losses\nfrom geoquery import GeoQueryDomain\nfrom metrics import DenotationOracleAccuracyMetric\nfrom scoring import Model\n\ndomain = GeoQueryDomain()\nmodel = Model(grammar=grammar, executor=executor.execute)\nmetric = DenotationOracleAccuracyMetric()\n\nsample_wins_and_losses(domain=domain, model=model, metric=metric, seed=1)",
"We don't yet have a single win: denotation oracle accuracy remains stuck at zero. However, the average number of parses is slightly greater than zero, meaning that there are a few examples which our grammar can parse (though not correctly). It would be interesting to know which examples. There's a utility function in experiment.py which will give you the visibility you need. See if you can figure out what to do.\n<!-- 'where is san diego ?' is parsed as '/city/san_diego_ca' -->\n\nTypes\n(TODO: the words in the training data include lots of words for types. Let's write down some lexical rules defining the category $Type, guided as usual by the words we actually see in the training data. We'll also make $Type a kind of $Collection.)",
"rules_types = [\n Rule('$Collection', '$Type', lambda sems: sems[0]),\n\n Rule('$Type', 'state', 'state'),\n Rule('$Type', 'states', 'state'),\n Rule('$Type', 'city', 'city'),\n Rule('$Type', 'cities', 'city'),\n Rule('$Type', 'big cities', 'city'),\n Rule('$Type', 'towns', 'city'),\n Rule('$Type', 'river', 'river'),\n Rule('$Type', 'rivers', 'river'),\n Rule('$Type', 'mountain', 'mountain'),\n Rule('$Type', 'mountains', 'mountain'),\n Rule('$Type', 'mount', 'mountain'),\n Rule('$Type', 'peak', 'mountain'),\n Rule('$Type', 'road', 'road'),\n Rule('$Type', 'roads', 'road'),\n Rule('$Type', 'lake', 'lake'),\n Rule('$Type', 'lakes', 'lake'),\n Rule('$Type', 'country', 'country'),\n Rule('$Type', 'countries', 'country'),\n]",
"We should now be able to parse inputs denoting types, such as \"name the lakes\":",
"rules = rules_optionals + rules_collection_entity + rules_types\ngrammar = Grammar(rules=rules, annotators=annotators)\nparses = grammar.parse_input('name the lakes')\nfor parse in parses[:1]:\n print('\\n'.join([str(parse.semantics), str(executor.execute(parse.semantics))]))",
"It worked. Let's evaluate on the Geo880 training data again.",
"model = Model(grammar=grammar, executor=executor.execute)\nsample_wins_and_losses(domain=domain, model=model, metric=metric, seed=1)",
"Liftoff! We have two wins, and denotation oracle accuracy is greater than zero! Just barely.\nRelations and joins\nIn order to really make this bird fly, we're going to have to handle relations. In particular, we'd like to be able to parse queries which combine a relation with an entity or collection, such as \"what is the capital of vermont\".\nAs usual, we'll adopt a data-driven approach. The training examples include lots of words and phrases which refer to relations, both \"forward\" relations (like \"traverses\") and \"reverse\" relations (like \"traversed by\"). Guided by the training data, we'll write lexical rules which define the categories $FwdRelation and $RevRelation. Then we'll add rules that allow either a $FwdRelation or a $RevRelation to be promoted to a generic $Relation, with semantic functions which ensure that the semantics are constructed with the proper orientation. Finally, we'll define a rule for joining a $Relation (such as \"capital of\") with a $Collection (such as \"vermont\") to yield another $Collection (such as \"capital of vermont\").\n<!-- (TODO: Give a fuller explanation of what's going on with the semantics.) -->",
"rules_relations = [\n Rule('$Collection', '$Relation ?$Optionals $Collection', lambda sems: sems[0](sems[2])),\n\n Rule('$Relation', '$FwdRelation', lambda sems: (lambda arg: (sems[0], arg))),\n Rule('$Relation', '$RevRelation', lambda sems: (lambda arg: (arg, sems[0]))),\n\n Rule('$FwdRelation', '$FwdBordersRelation', 'borders'),\n Rule('$FwdBordersRelation', 'border'),\n Rule('$FwdBordersRelation', 'bordering'),\n Rule('$FwdBordersRelation', 'borders'),\n Rule('$FwdBordersRelation', 'neighbor'),\n Rule('$FwdBordersRelation', 'neighboring'),\n Rule('$FwdBordersRelation', 'surrounding'),\n Rule('$FwdBordersRelation', 'next to'),\n\n Rule('$FwdRelation', '$FwdTraversesRelation', 'traverses'),\n Rule('$FwdTraversesRelation', 'cross ?over'),\n Rule('$FwdTraversesRelation', 'flow through'),\n Rule('$FwdTraversesRelation', 'flowing through'),\n Rule('$FwdTraversesRelation', 'flows through'),\n Rule('$FwdTraversesRelation', 'go through'),\n Rule('$FwdTraversesRelation', 'goes through'),\n Rule('$FwdTraversesRelation', 'in'),\n Rule('$FwdTraversesRelation', 'pass through'),\n Rule('$FwdTraversesRelation', 'passes through'),\n Rule('$FwdTraversesRelation', 'run through'),\n Rule('$FwdTraversesRelation', 'running through'),\n Rule('$FwdTraversesRelation', 'runs through'),\n Rule('$FwdTraversesRelation', 'traverse'),\n Rule('$FwdTraversesRelation', 'traverses'),\n\n Rule('$RevRelation', '$RevTraversesRelation', 'traverses'),\n Rule('$RevTraversesRelation', 'has'),\n Rule('$RevTraversesRelation', 'have'), # 'how many states have major rivers'\n Rule('$RevTraversesRelation', 'lie on'),\n Rule('$RevTraversesRelation', 'next to'),\n Rule('$RevTraversesRelation', 'traversed by'),\n Rule('$RevTraversesRelation', 'washed by'),\n\n Rule('$FwdRelation', '$FwdContainsRelation', 'contains'),\n # 'how many states have a city named springfield'\n Rule('$FwdContainsRelation', 'has'),\n Rule('$FwdContainsRelation', 'have'),\n\n Rule('$RevRelation', '$RevContainsRelation', 'contains'),\n Rule('$RevContainsRelation', 'contained by'),\n Rule('$RevContainsRelation', 'in'),\n Rule('$RevContainsRelation', 'found in'),\n Rule('$RevContainsRelation', 'located in'),\n Rule('$RevContainsRelation', 'of'),\n\n Rule('$RevRelation', '$RevCapitalRelation', 'capital'),\n Rule('$RevCapitalRelation', 'capital'),\n Rule('$RevCapitalRelation', 'capitals'),\n\n Rule('$RevRelation', '$RevHighestPointRelation', 'highest_point'),\n Rule('$RevHighestPointRelation', 'high point'),\n Rule('$RevHighestPointRelation', 'high points'),\n Rule('$RevHighestPointRelation', 'highest point'),\n Rule('$RevHighestPointRelation', 'highest points'),\n\n Rule('$RevRelation', '$RevLowestPointRelation', 'lowest_point'),\n Rule('$RevLowestPointRelation', 'low point'),\n Rule('$RevLowestPointRelation', 'low points'),\n Rule('$RevLowestPointRelation', 'lowest point'),\n Rule('$RevLowestPointRelation', 'lowest points'),\n Rule('$RevLowestPointRelation', 'lowest spot'),\n\n Rule('$RevRelation', '$RevHighestElevationRelation', 'highest_elevation'),\n Rule('$RevHighestElevationRelation', '?highest elevation'),\n\n Rule('$RevRelation', '$RevHeightRelation', 'height'),\n Rule('$RevHeightRelation', 'elevation'),\n Rule('$RevHeightRelation', 'height'),\n Rule('$RevHeightRelation', 'high'),\n Rule('$RevHeightRelation', 'tall'),\n\n Rule('$RevRelation', '$RevAreaRelation', 'area'),\n Rule('$RevAreaRelation', 'area'),\n Rule('$RevAreaRelation', 'big'),\n Rule('$RevAreaRelation', 'large'),\n Rule('$RevAreaRelation', 'size'),\n\n Rule('$RevRelation', 
'$RevPopulationRelation', 'population'),\n Rule('$RevPopulationRelation', 'big'),\n Rule('$RevPopulationRelation', 'large'),\n Rule('$RevPopulationRelation', 'populated'),\n Rule('$RevPopulationRelation', 'population'),\n Rule('$RevPopulationRelation', 'populations'),\n Rule('$RevPopulationRelation', 'populous'),\n Rule('$RevPopulationRelation', 'size'),\n\n Rule('$RevRelation', '$RevLengthRelation', 'length'),\n Rule('$RevLengthRelation', 'length'),\n Rule('$RevLengthRelation', 'long'),\n]",
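"As a small added illustration (not part of the original codelab) of what the semantic functions above do: a $RevRelation such as \"capital\" becomes a function that places its argument on the left of the pair, so joining it with an entity produces exactly the kind of query the executor understands.",
"# Added sketch: composing the $RevRelation semantics by hand.\n# The rule for $RevRelation wraps 'capital' as: lambda arg: (arg, 'capital')\nrev_capital = lambda arg: (arg, 'capital')\nsem = rev_capital('/state/vermont')\nprint(sem)\nprint(executor.execute(sem))",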
"We should now be able to parse \"what is the capital of vermont\". Let's see:",
"rules = rules_optionals + rules_collection_entity + rules_types + rules_relations\ngrammar = Grammar(rules=rules, annotators=annotators)\nparses = grammar.parse_input('what is the capital of vermont ?')\nfor parse in parses[:1]:\n print('\\n'.join([str(parse.semantics), str(executor.execute(parse.semantics))]))",
"Montpelier! I always forget that one.\nOK, let's evaluate our progress on the Geo880 training data.",
"model = Model(grammar=grammar, executor=executor.execute)\nsample_wins_and_losses(domain=domain, model=model, metric=metric, seed=1)",
"Hot diggity, it's working. Denotation oracle accuracy is over 12%, double digits. We have 75 wins, and they're what we expect: queries that simply combine a relation and an entity (or collection).\nIntersections",
"rules_intersection = [\n Rule('$Collection', '$Collection $Collection',\n lambda sems: ('.and', sems[0], sems[1])),\n Rule('$Collection', '$Collection $Optional $Collection',\n lambda sems: ('.and', sems[0], sems[2])),\n Rule('$Collection', '$Collection $Optional $Optional $Collection',\n lambda sems: ('.and', sems[0], sems[3])),\n]\n\nrules = rules_optionals + rules_collection_entity + rules_types + rules_relations + rules_intersection\ngrammar = Grammar(rules=rules, annotators=annotators)\nparses = grammar.parse_input('states bordering california')\nfor parse in parses[:1]:\n print('\\n'.join([str(parse.semantics), str(executor.execute(parse.semantics))]))",
"Let's evaluate the impact on the Geo880 training examples.",
"model = Model(grammar=grammar, executor=executor.execute)\nsample_wins_and_losses(domain=domain, model=model, metric=metric, seed=1)",
"Great, denotation oracle accuracy has more than doubled, from 12% to 28%. And the wins now include intersections like \"which states border new york\". The losses, however, are clearly dominated by one category of error.\nSuperlatives\nMany of the losses involve superlatives, such as \"biggest\" or \"shortest\". Let's remedy that. As usual, we let the training examples guide us in adding lexical rules.",
"rules_superlatives = [\n Rule('$Collection', '$Superlative ?$Optionals $Collection', lambda sems: sems[0] + (sems[2],)),\n Rule('$Collection', '$Collection ?$Optionals $Superlative', lambda sems: sems[2] + (sems[0],)),\n\n Rule('$Superlative', 'largest', ('.argmax', 'area')),\n Rule('$Superlative', 'largest', ('.argmax', 'population')),\n Rule('$Superlative', 'biggest', ('.argmax', 'area')),\n Rule('$Superlative', 'biggest', ('.argmax', 'population')),\n Rule('$Superlative', 'smallest', ('.argmin', 'area')),\n Rule('$Superlative', 'smallest', ('.argmin', 'population')),\n Rule('$Superlative', 'longest', ('.argmax', 'length')),\n Rule('$Superlative', 'shortest', ('.argmin', 'length')),\n Rule('$Superlative', 'tallest', ('.argmax', 'height')),\n Rule('$Superlative', 'highest', ('.argmax', 'height')),\n\n Rule('$Superlative', '$MostLeast $RevRelation', lambda sems: (sems[0], sems[1])),\n Rule('$MostLeast', 'most', '.argmax'),\n Rule('$MostLeast', 'least', '.argmin'),\n Rule('$MostLeast', 'lowest', '.argmin'),\n Rule('$MostLeast', 'greatest', '.argmax'),\n Rule('$MostLeast', 'highest', '.argmax'),\n]",
"Now we should be able to parse \"tallest mountain\":",
"rules = rules_optionals + rules_collection_entity + rules_types + rules_relations + rules_intersection + rules_superlatives\ngrammar = Grammar(rules=rules, annotators=annotators)\nparses = grammar.parse_input('tallest mountain')\nfor parse in parses[:1]:\n print('\\n'.join([str(parse.semantics), str(executor.execute(parse.semantics))]))",
"Let's evaluate the impact on the Geo880 training examples.",
"model = Model(grammar=grammar, executor=executor.execute)\nsample_wins_and_losses(domain=domain, model=model, metric=metric, seed=1)",
"Wow, superlatives make a big difference. Denotation oracle accuracy has surged from 28% to 42%.\nReverse joins",
"def reverse(relation_sem):\n \"\"\"TODO\"\"\"\n # relation_sem is a lambda function which takes an arg and forms a pair,\n # either (rel, arg) or (arg, rel). We want to swap the order of the pair.\n def apply_and_swap(arg):\n pair = relation_sem(arg)\n return (pair[1], pair[0])\n return apply_and_swap\n\nrules_reverse_joins = [\n Rule('$Collection', '$Collection ?$Optionals $Relation',\n lambda sems: reverse(sems[2])(sems[0])),\n]\n\nrules = rules_optionals + rules_collection_entity + rules_types + rules_relations + rules_intersection + rules_superlatives + rules_reverse_joins\ngrammar = Grammar(rules=rules, annotators=annotators)\nparses = grammar.parse_input('which states does the rio grande cross')\nfor parse in parses[:1]:\n print('\\n'.join([str(parse.semantics), str(executor.execute(parse.semantics))]))",
"Let's evaluate the impact on the Geo880 training examples.",
"model = Model(grammar=grammar, executor=executor.execute)\nsample_wins_and_losses(domain=domain, model=model, metric=metric, seed=1)",
"This time the gain in denotation oracle accuracy was more modest, from 42% to 47%. Still, we are making good progress. However, note that a substantial gap has opened between accuracy and oracle accuracy. This indicates that we could benefit from adding a scoring model.\nFeature engineering\nThrough an iterative process of grammar engineering, we've managed to increase denotation oracle accuracy of 47%. But we've been ignoring denotation accuracy, which now lags far behind, at 25%. This represents an opportunity.\nIn order to figure out how best to fix the problem, we need to do some error analysis. Let's look for some specific examples where denotation accuracy is 0, even though denotation oracle accuracy is 1. In other words, let's look for some examples where we have a correct parse, but it's not ranked at the top. We should be able to find some cases like that among the first ten examples of the Geo880 training data.",
"from experiment import evaluate_model\nfrom metrics import denotation_match_metrics\n\nevaluate_model(model=model,\n examples=geo880_train_examples[:10],\n metrics=denotation_match_metrics(),\n print_examples=True)",
"Take a look through that output. Over the ten examples, we achieved denotation oracle accuracy of 60%, but denotation accuracy of just 40%. In other words, there were two examples where we generated a correct parse, but failed to rank it at the top. Take a closer look at those two cases.\nThe first case is \"what state has the shortest river ?\". The top parse has semantics ('.and', 'state', ('.argmin', 'length', 'river')), which means something like \"states that are the shortest river\". That's not right. In fact, there's no such thing: the denotation is empty.\nThe second case is \"what is the highest mountain in alaska ?\". The top parse has semantics ('.argmax', 'height', ('.and', 'mountain', '/state/alaska')), which means \"the highest mountain which is alaska\". Again, there's no such thing: the denotation is empty.\nSo in both of the cases where we put the wrong parse at the top, the top parse had nonsensical semantics with an empty denotation. In fact, if you scroll through the output above, you will see that there are a lot of candidate parses with empty denotations. Seems like we could make a big improvement just by downweighting parses with empty denotations. This is easy to do.",
"def empty_denotation_feature(parse):\n features = defaultdict(float)\n if parse.denotation == ():\n features['empty_denotation'] += 1.0\n return features\n\nweights = {'empty_denotation': -1.0}\n\nmodel = Model(grammar=grammar,\n feature_fn=empty_denotation_feature,\n weights=weights,\n executor=executor.execute)",
"Let's evaluate the impact of using our new empty_denotation feature on the Geo880 training examples.",
"from experiment import evaluate_model\nfrom metrics import denotation_match_metrics\n\nevaluate_model(model=model,\n examples=geo880_train_examples,\n metrics=denotation_match_metrics(),\n print_examples=False)",
"Great! Using the empty_denotation feature has enabled us to increase denotation accuracy from 25% to 39%. That's a big gain! In fact, we've closed most of the gap between accuracy and oracle accuracy. As a result, the headroom for further gains from feature engineering is limited — but it's not zero. The exercises will ask you to push further.\nExercises <a id=\"geoquery-exercises\"></a>\nSeveral of these exercises ask you to measure the impact of your change on key evaluation metrics. Part of your job is to decide which evaluation metrics are most relevant for the change you're making. It's probably best to evaluate only on training data, in order to keep the test data unseen during development. (But the test data is hardly a state secret, so whatever.)\nStraightforward\n\n\nExtend the grammar to handle queries which use the phrase \"how many people\" to ask about population. There are many examples in the Geo880 training data. Measure the impact of this change on key evaluation metrics.\n\n\nExtend the grammar to handle counting questions, indicated by the phrases \"how many\" or \"number of\", as in \"how many states does missouri border\". Measure the impact of this change on key evaluation metrics.\n\n\nSeveral examples in the Geo880 training dataset fail to parse because they refer to locations using names that, while valid, are not recognized by the GeobaseAnnotator. For example, some queries use \"america\" to refer to /country/usa, but GeobaseAnnotator recognizes only \"usa\". Find unannotated location references in the Geo880 training dataset, and extend the grammar to handle them. Measure the impact of this change on key evaluation metrics.\n\n\nExtend the grammar to handle examples of the form \"where is X\", such as \"where is mount whitney\" or \"where is san diego\". Measure the impact of this change on key evaluation metrics.\n\n\nQuite a few examples in the Geo880 training dataset specify a set of entities by name, as in \"how many states have a city named springfield\" or \"how many rivers are called colorado\". Extend the grammar to handle these, leveraging the TokenAnnotator introduced in Unit 2. Measure the impact of this change on key evaluation metrics.\n\n\nExtend the grammar to handle phrases like \"austin texas\" or \"atlanta ga\", where two entity names appear in sequence. Make sure that \"atlanta ga\" has semantics and a denotation, whereas \"atlanta tx\" has semantics but no denotation. Measure the impact of this change on key evaluation metrics.\n\n\nIn Unit 2, while examining the travel domain, we saw big gains from including rule features in our feature representation. Experiment with adding rule features to the GeoQuery model. Are the learned weights intuitive? Do the rule features help? If so, identify a few specific examples which are fixed by the inclusion of rule features. Measure the impact on key evaluation metrics.\n\n\nFind an example where using the empty_denotation feature causes a loss (a bad prediction).\n\n\nThe empty_denotation feature doesn't help on count questions (e.g., \"how many states ...\"), where all parses, good or bad, typically yield a denotation which is a single number. Add a new feature which helps in such cases. Identify a few specific examples which are fixed by the new feature. Measure the impact on key evaluation metrics. 
[This exercise assumes you have already done the exercise on handling count questions.]\n\n\nChallenging\n\n\nExtend the grammar to handle comparisons, such as \"mountains with height greater than 5000\" or \"mountains with height greater than the height of bona\".\n\n\nBuilding on the previous exercise, extend the grammar to handle even those comparisons which involve ellipsis, such as \"mountains higher than [the height of] mt. katahdin\" or \"rivers longer than [the length of] the colorado river\" (where the bracketed phrase does not appear in the surface form!).\n\n\nExtend the grammar to handle queries involving units, such as \"what is the area of maryland in square kilometers\" or \"how long is the mississippi river in miles\".\n\n\nThe Geo880 training dataset contains many examples involving population density. Extend the grammar to handle these examples. Measure the impact on key evaluation metrics.\n\n\nSeveral examples in the Geo880 training dataset involve some form of negation, expressed by the words \"not\", \"no\", or \"excluding\". Extend the grammar to handle these examples. Measure the impact on key evaluation metrics.\n\n\nThe current grammar handles \"capital of texas\", but not \"texas capital\". It handles \"has austin capital\", but not \"has capital austin\". In general, it defines every phrase which expresses a relation as either a $FwdRelation or a $RevRelation, and constrains word order accordingly. Extend the grammar to allow any phrase which is ordinarily a $FwdRelation to function as a $RevRelation, and vice versa. Can you now handle \"texas capital\" and \"has capital austin\"? Measure the impact on key evaluation metrics.\n\n\nWhat if we permit any word to be optionalized by adding the rule Rule('$Optional', '$Token') to our grammar? (Recall that $Token is produced by the TokenAnnotator.) Measure the impact of this change on key evaluation metrics. You will likely find that the change has some negative effects and some positive effects. Is there a way to mitigate the negative effects, while preserving the positive effects?\n\n\nThe success of the empty_denotation feature demonstrates the potential of denotation features. Can we go further? Experiment with features that characterize the size of the denotation (that is, the number of answers). Are two answers better than one? Are ten answers better than two? If you find some features that seem to work, identify a few specific examples which are fixed by your new features. Measure the impact on key evaluation metrics.\n\n\n<!-- (TODO: There should be an exercise related to spurious ambiguity.) -->\n\n<!--\n1. Because `GeobaseAnnotator` relies on exact string matching, it isn't very robust. As a result, it fails to annotate \"ft. lauderdale\" (which Geobase knows as \"fort lauderdale\"), \"baltamore\" (a misspelling of \"baltimore\"), or \"portlad\" (a typo for \"portland\"). Find a way to make it more robust by doing approximate string matching, perhaps using [Levenshtein distance][] or [minhashing][] of character-level n-grams (shingles).\n-->\n\nCopyright (C) 2015 Bill MacCartney"
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
GoogleCloudPlatform/vertex-ai-samples | notebooks/community/gapic/automl/showcase_automl_text_entity_extraction_batch.ipynb | apache-2.0 | [
"# Copyright 2020 Google LLC\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# https://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.",
"Vertex client library: AutoML text entity extraction model for batch prediction\n<table align=\"left\">\n <td>\n <a href=\"https://colab.research.google.com/github/GoogleCloudPlatform/vertex-ai-samples/blob/master/notebooks/community/gapic/automl/showcase_automl_text_entity_extraction_batch.ipynb\">\n <img src=\"https://cloud.google.com/ml-engine/images/colab-logo-32px.png\" alt=\"Colab logo\"> Run in Colab\n </a>\n </td>\n <td>\n <a href=\"https://github.com/GoogleCloudPlatform/vertex-ai-samples/blob/master/notebooks/community/gapic/automl/showcase_automl_text_entity_extraction_batch.ipynb\">\n <img src=\"https://cloud.google.com/ml-engine/images/github-logo-32px.png\" alt=\"GitHub logo\">\n View on GitHub\n </a>\n </td>\n</table>\n<br/><br/><br/>\nOverview\nThis tutorial demonstrates how to use the Vertex client library for Python to create text entity extraction models and do batch prediction using Google Cloud's AutoML.\nDataset\nThe dataset used for this tutorial is the NCBI Disease Research Abstracts dataset from National Center for Biotechnology Information. The version of the dataset you will use in this tutorial is stored in a public Cloud Storage bucket.\nObjective\nIn this tutorial, you create an AutoML text entity extraction model from a Python script, and then do a batch prediction using the Vertex client library. You can alternatively create and deploy models using the gcloud command-line tool or online using the Google Cloud Console.\nThe steps performed include:\n\nCreate a Vertex Dataset resource.\nTrain the model.\nView the model evaluation.\nMake a batch prediction.\n\nThere is one key difference between using batch prediction and using online prediction:\n\n\nPrediction Service: Does an on-demand prediction for the entire set of instances (i.e., one or more data items) and returns the results in real-time.\n\n\nBatch Prediction Service: Does a queued (batch) prediction for the entire set of instances in the background and stores the results in a Cloud Storage bucket when ready.\n\n\nCosts\nThis tutorial uses billable components of Google Cloud (GCP):\n\nVertex AI\nCloud Storage\n\nLearn about Vertex AI\npricing and Cloud Storage\npricing, and use the Pricing\nCalculator\nto generate a cost estimate based on your projected usage.\nInstallation\nInstall the latest version of Vertex client library.",
"import os\nimport sys\n\n# Google Cloud Notebook\nif os.path.exists(\"/opt/deeplearning/metadata/env_version\"):\n USER_FLAG = \"--user\"\nelse:\n USER_FLAG = \"\"\n\n! pip3 install -U google-cloud-aiplatform $USER_FLAG",
"Install the latest GA version of google-cloud-storage library as well.",
"! pip3 install -U google-cloud-storage $USER_FLAG",
"Restart the kernel\nOnce you've installed the Vertex client library and Google cloud-storage, you need to restart the notebook kernel so it can find the packages.",
"if not os.getenv(\"IS_TESTING\"):\n # Automatically restart kernel after installs\n import IPython\n\n app = IPython.Application.instance()\n app.kernel.do_shutdown(True)",
"Before you begin\nGPU runtime\nMake sure you're running this notebook in a GPU runtime if you have that option. In Colab, select Runtime > Change Runtime Type > GPU\nSet up your Google Cloud project\nThe following steps are required, regardless of your notebook environment.\n\n\nSelect or create a Google Cloud project. When you first create an account, you get a $300 free credit towards your compute/storage costs.\n\n\nMake sure that billing is enabled for your project.\n\n\nEnable the Vertex APIs and Compute Engine APIs.\n\n\nThe Google Cloud SDK is already installed in Google Cloud Notebook.\n\n\nEnter your project ID in the cell below. Then run the cell to make sure the\nCloud SDK uses the right project for all the commands in this notebook.\n\n\nNote: Jupyter runs lines prefixed with ! as shell commands, and it interpolates Python variables prefixed with $ into these commands.",
"PROJECT_ID = \"[your-project-id]\" # @param {type:\"string\"}\n\nif PROJECT_ID == \"\" or PROJECT_ID is None or PROJECT_ID == \"[your-project-id]\":\n # Get your GCP project id from gcloud\n shell_output = !gcloud config list --format 'value(core.project)' 2>/dev/null\n PROJECT_ID = shell_output[0]\n print(\"Project ID:\", PROJECT_ID)\n\n! gcloud config set project $PROJECT_ID",
"Region\nYou can also change the REGION variable, which is used for operations\nthroughout the rest of this notebook. Below are regions supported for Vertex. We recommend that you choose the region closest to you.\n\nAmericas: us-central1\nEurope: europe-west4\nAsia Pacific: asia-east1\n\nYou may not use a multi-regional bucket for training with Vertex. Not all regions provide support for all Vertex services. For the latest support per region, see the Vertex locations documentation",
"REGION = \"us-central1\" # @param {type: \"string\"}",
"Timestamp\nIf you are in a live tutorial session, you might be using a shared test account or project. To avoid name collisions between users on resources created, you create a timestamp for each instance session, and append onto the name of resources which will be created in this tutorial.",
"from datetime import datetime\n\nTIMESTAMP = datetime.now().strftime(\"%Y%m%d%H%M%S\")",
"Authenticate your Google Cloud account\nIf you are using Google Cloud Notebook, your environment is already authenticated. Skip this step.\nIf you are using Colab, run the cell below and follow the instructions when prompted to authenticate your account via oAuth.\nOtherwise, follow these steps:\nIn the Cloud Console, go to the Create service account key page.\nClick Create service account.\nIn the Service account name field, enter a name, and click Create.\nIn the Grant this service account access to project section, click the Role drop-down list. Type \"Vertex\" into the filter box, and select Vertex Administrator. Type \"Storage Object Admin\" into the filter box, and select Storage Object Admin.\nClick Create. A JSON file that contains your key downloads to your local environment.\nEnter the path to your service account key as the GOOGLE_APPLICATION_CREDENTIALS variable in the cell below and run the cell.",
"# If you are running this notebook in Colab, run this cell and follow the\n# instructions to authenticate your GCP account. This provides access to your\n# Cloud Storage bucket and lets you submit training jobs and prediction\n# requests.\n\n# If on Google Cloud Notebook, then don't execute this code\nif not os.path.exists(\"/opt/deeplearning/metadata/env_version\"):\n if \"google.colab\" in sys.modules:\n from google.colab import auth as google_auth\n\n google_auth.authenticate_user()\n\n # If you are running this notebook locally, replace the string below with the\n # path to your service account key and run this cell to authenticate your GCP\n # account.\n elif not os.getenv(\"IS_TESTING\"):\n %env GOOGLE_APPLICATION_CREDENTIALS ''",
"Create a Cloud Storage bucket\nThe following steps are required, regardless of your notebook environment.\nThis tutorial is designed to use training data that is in a public Cloud Storage bucket and a local Cloud Storage bucket for your batch predictions. You may alternatively use your own training data that you have stored in a local Cloud Storage bucket.\nSet the name of your Cloud Storage bucket below. Bucket names must be globally unique across all Google Cloud projects, including those outside of your organization.",
"BUCKET_NAME = \"gs://[your-bucket-name]\" # @param {type:\"string\"}\n\nif BUCKET_NAME == \"\" or BUCKET_NAME is None or BUCKET_NAME == \"gs://[your-bucket-name]\":\n BUCKET_NAME = \"gs://\" + PROJECT_ID + \"aip-\" + TIMESTAMP",
"Only if your bucket doesn't already exist: Run the following cell to create your Cloud Storage bucket.",
"! gsutil mb -l $REGION $BUCKET_NAME",
"Finally, validate access to your Cloud Storage bucket by examining its contents:",
"! gsutil ls -al $BUCKET_NAME",
"Set up variables\nNext, set up some variables used throughout the tutorial.\nImport libraries and define constants\nImport Vertex client library\nImport the Vertex client library into our Python environment.",
"import time\n\nfrom google.cloud.aiplatform import gapic as aip\nfrom google.protobuf import json_format\nfrom google.protobuf.json_format import MessageToJson, ParseDict\nfrom google.protobuf.struct_pb2 import Struct, Value",
"Vertex constants\nSetup up the following constants for Vertex:\n\nAPI_ENDPOINT: The Vertex API service endpoint for dataset, model, job, pipeline and endpoint services.\nPARENT: The Vertex location root path for dataset, model, job, pipeline and endpoint resources.",
"# API service endpoint\nAPI_ENDPOINT = \"{}-aiplatform.googleapis.com\".format(REGION)\n\n# Vertex location root path for your dataset, model and endpoint resources\nPARENT = \"projects/\" + PROJECT_ID + \"/locations/\" + REGION",
"AutoML constants\nSet constants unique to AutoML datasets and training:\n\nDataset Schemas: Tells the Dataset resource service which type of dataset it is.\nData Labeling (Annotations) Schemas: Tells the Dataset resource service how the data is labeled (annotated).\nDataset Training Schemas: Tells the Pipeline resource service the task (e.g., classification) to train the model for.",
"# Text Dataset type\nDATA_SCHEMA = \"gs://google-cloud-aiplatform/schema/dataset/metadata/text_1.0.0.yaml\"\n# Text Labeling type\nLABEL_SCHEMA = \"gs://google-cloud-aiplatform/schema/dataset/ioformat/text_extraction_io_format_1.0.0.yaml\"\n# Text Training task\nTRAINING_SCHEMA = \"gs://google-cloud-aiplatform/schema/trainingjob/definition/automl_text_extraction_1.0.0.yaml\"",
"Hardware Accelerators\nSet the hardware accelerators (e.g., GPU), if any, for prediction.\nSet the variable DEPLOY_GPU/DEPLOY_NGPU to use a container image supporting a GPU and the number of GPUs allocated to the virtual machine (VM) instance. For example, to use a GPU container image with 4 Nvidia Telsa K80 GPUs allocated to each VM, you would specify:\n(aip.AcceleratorType.NVIDIA_TESLA_K80, 4)\n\nFor GPU, available accelerators include:\n - aip.AcceleratorType.NVIDIA_TESLA_K80\n - aip.AcceleratorType.NVIDIA_TESLA_P100\n - aip.AcceleratorType.NVIDIA_TESLA_P4\n - aip.AcceleratorType.NVIDIA_TESLA_T4\n - aip.AcceleratorType.NVIDIA_TESLA_V100\nOtherwise specify (None, None) to use a container image to run on a CPU.",
"if os.getenv(\"IS_TESTING_DEPOLY_GPU\"):\n DEPLOY_GPU, DEPLOY_NGPU = (\n aip.AcceleratorType.NVIDIA_TESLA_K80,\n int(os.getenv(\"IS_TESTING_DEPOLY_GPU\")),\n )\nelse:\n DEPLOY_GPU, DEPLOY_NGPU = (aip.AcceleratorType.NVIDIA_TESLA_K80, 1)",
"Container (Docker) image\nFor AutoML batch prediction, the container image for the serving binary is pre-determined by the Vertex prediction service. More specifically, the service will pick the appropriate container for the model depending on the hardware accelerator you selected.\nMachine Type\nNext, set the machine type to use for prediction.\n\nSet the variable DEPLOY_COMPUTE to configure the compute resources for the VM you will use for prediction.\nmachine type\nn1-standard: 3.75GB of memory per vCPU.\nn1-highmem: 6.5GB of memory per vCPU\nn1-highcpu: 0.9 GB of memory per vCPU\n\n\nvCPUs: number of [2, 4, 8, 16, 32, 64, 96 ]\n\nNote: You may also use n2 and e2 machine types for training and deployment, but they do not support GPUs",
"if os.getenv(\"IS_TESTING_DEPLOY_MACHINE\"):\n MACHINE_TYPE = os.getenv(\"IS_TESTING_DEPLOY_MACHINE\")\nelse:\n MACHINE_TYPE = \"n1-standard\"\n\nVCPU = \"4\"\nDEPLOY_COMPUTE = MACHINE_TYPE + \"-\" + VCPU\nprint(\"Deploy machine type\", DEPLOY_COMPUTE)",
"Tutorial\nNow you are ready to start creating your own AutoML text entity extraction model.\nSet up clients\nThe Vertex client library works as a client/server model. On your side (the Python script) you will create a client that sends requests and receives responses from the Vertex server.\nYou will use different clients in this tutorial for different steps in the workflow. So set them all up upfront.\n\nDataset Service for Dataset resources.\nModel Service for Model resources.\nPipeline Service for training.\nJob Service for batch prediction and custom training.",
"# client options same for all services\nclient_options = {\"api_endpoint\": API_ENDPOINT}\n\n\ndef create_dataset_client():\n client = aip.DatasetServiceClient(client_options=client_options)\n return client\n\n\ndef create_model_client():\n client = aip.ModelServiceClient(client_options=client_options)\n return client\n\n\ndef create_pipeline_client():\n client = aip.PipelineServiceClient(client_options=client_options)\n return client\n\n\ndef create_job_client():\n client = aip.JobServiceClient(client_options=client_options)\n return client\n\n\nclients = {}\nclients[\"dataset\"] = create_dataset_client()\nclients[\"model\"] = create_model_client()\nclients[\"pipeline\"] = create_pipeline_client()\nclients[\"job\"] = create_job_client()\n\nfor client in clients.items():\n print(client)",
"Dataset\nNow that your clients are ready, your first step in training a model is to create a managed dataset instance, and then upload your labeled data to it.\nCreate Dataset resource instance\nUse the helper function create_dataset to create the instance of a Dataset resource. This function does the following:\n\nUses the dataset client service.\nCreates an Vertex Dataset resource (aip.Dataset), with the following parameters:\ndisplay_name: The human-readable name you choose to give it.\nmetadata_schema_uri: The schema for the dataset type.\nCalls the client dataset service method create_dataset, with the following parameters:\nparent: The Vertex location root path for your Database, Model and Endpoint resources.\ndataset: The Vertex dataset object instance you created.\nThe method returns an operation object.\n\nAn operation object is how Vertex handles asynchronous calls for long running operations. While this step usually goes fast, when you first use it in your project, there is a longer delay due to provisioning.\nYou can use the operation object to get status on the operation (e.g., create Dataset resource) or to cancel the operation, by invoking an operation method:\n| Method | Description |\n| ----------- | ----------- |\n| result() | Waits for the operation to complete and returns a result object in JSON format. |\n| running() | Returns True/False on whether the operation is still running. |\n| done() | Returns True/False on whether the operation is completed. |\n| canceled() | Returns True/False on whether the operation was canceled. |\n| cancel() | Cancels the operation (this may take up to 30 seconds). |",
"TIMEOUT = 90\n\n\ndef create_dataset(name, schema, labels=None, timeout=TIMEOUT):\n start_time = time.time()\n try:\n dataset = aip.Dataset(\n display_name=name, metadata_schema_uri=schema, labels=labels\n )\n\n operation = clients[\"dataset\"].create_dataset(parent=PARENT, dataset=dataset)\n print(\"Long running operation:\", operation.operation.name)\n result = operation.result(timeout=TIMEOUT)\n print(\"time:\", time.time() - start_time)\n print(\"response\")\n print(\" name:\", result.name)\n print(\" display_name:\", result.display_name)\n print(\" metadata_schema_uri:\", result.metadata_schema_uri)\n print(\" metadata:\", dict(result.metadata))\n print(\" create_time:\", result.create_time)\n print(\" update_time:\", result.update_time)\n print(\" etag:\", result.etag)\n print(\" labels:\", dict(result.labels))\n return result\n except Exception as e:\n print(\"exception:\", e)\n return None\n\n\nresult = create_dataset(\"biomedical-\" + TIMESTAMP, DATA_SCHEMA)",
"Now save the unique dataset identifier for the Dataset resource instance you created.",
"# The full unique ID for the dataset\ndataset_id = result.name\n# The short numeric ID for the dataset\ndataset_short_id = dataset_id.split(\"/\")[-1]\n\nprint(dataset_id)",
"Data preparation\nThe Vertex Dataset resource for text has a couple of requirements for your text entity extraction data.\n\nText examples must be stored in a JSONL file. Unlike text classification and sentiment analysis, a CSV index file is not supported.\nThe examples must be either inline text or reference text files that are in Cloud Storage buckets.\n\nJSONL\nFor text entity extraction, the JSONL file has a few requirements:\n\nEach data item is a separate JSON object, on a separate line.\nThe key/value pair text_segment_annotations is a list of character start/end positions in the text per entity with the corresponding label.\ndisplay_name: The label.\nstart_offset/end_offset: The character offsets of the start/end of the entity.\n\nThe key/value pair text_content is the text.\n{'text_segment_annotations': [{'end_offset': value, 'start_offset': value, 'display_name': label}, ...], 'text_content': text}\n\n\nNote: The dictionary key fields may alternatively be in camelCase. For example, 'display_name' can also be 'displayName'.\nLocation of Cloud Storage training data.\nNow set the variable IMPORT_FILE to the location of the JSONL index file in Cloud Storage.",
"IMPORT_FILE = \"gs://ucaip-test-us-central1/dataset/ucaip_ten_dataset.jsonl\"",
"Quick peek at your data\nYou will use a version of the NCBI Biomedical dataset that is stored in a public Cloud Storage bucket, using a JSONL index file.\nStart by doing a quick peek at the data. You count the number of examples by counting the number of objects in a JSONL index file (wc -l) and then peek at the first few rows.",
"if \"IMPORT_FILES\" in globals():\n FILE = IMPORT_FILES[0]\nelse:\n FILE = IMPORT_FILE\n\ncount = ! gsutil cat $FILE | wc -l\nprint(\"Number of Examples\", int(count[0]))\n\nprint(\"First 10 rows\")\n! gsutil cat $FILE | head",
"Import data\nNow, import the data into your Vertex Dataset resource. Use this helper function import_data to import the data. The function does the following:\n\nUses the Dataset client.\nCalls the client method import_data, with the following parameters:\nname: The human readable name you give to the Dataset resource (e.g., biomedical).\n\nimport_configs: The import configuration.\n\n\nimport_configs: A Python list containing a dictionary, with the key/value entries:\n\ngcs_sources: A list of URIs to the paths of the one or more index files.\nimport_schema_uri: The schema identifying the labeling type.\n\nThe import_data() method returns a long running operation object. This will take a few minutes to complete. If you are in a live tutorial, this would be a good time to ask questions, or take a personal break.",
"def import_data(dataset, gcs_sources, schema):\n config = [{\"gcs_source\": {\"uris\": gcs_sources}, \"import_schema_uri\": schema}]\n print(\"dataset:\", dataset_id)\n start_time = time.time()\n try:\n operation = clients[\"dataset\"].import_data(\n name=dataset_id, import_configs=config\n )\n print(\"Long running operation:\", operation.operation.name)\n\n result = operation.result()\n print(\"result:\", result)\n print(\"time:\", int(time.time() - start_time), \"secs\")\n print(\"error:\", operation.exception())\n print(\"meta :\", operation.metadata)\n print(\n \"after: running:\",\n operation.running(),\n \"done:\",\n operation.done(),\n \"cancelled:\",\n operation.cancelled(),\n )\n\n return operation\n except Exception as e:\n print(\"exception:\", e)\n return None\n\n\nimport_data(dataset_id, [IMPORT_FILE], LABEL_SCHEMA)",
"Train the model\nNow train an AutoML text entity extraction model using your Vertex Dataset resource. To train the model, do the following steps:\n\nCreate an Vertex training pipeline for the Dataset resource.\nExecute the pipeline to start the training.\n\nCreate a training pipeline\nYou may ask, what do we use a pipeline for? You typically use pipelines when the job (such as training) has multiple steps, generally in sequential order: do step A, do step B, etc. By putting the steps into a pipeline, we gain the benefits of:\n\nBeing reusable for subsequent training jobs.\nCan be containerized and ran as a batch job.\nCan be distributed.\nAll the steps are associated with the same pipeline job for tracking progress.\n\nUse this helper function create_pipeline, which takes the following parameters:\n\npipeline_name: A human readable name for the pipeline job.\nmodel_name: A human readable name for the model.\ndataset: The Vertex fully qualified dataset identifier.\nschema: The dataset labeling (annotation) training schema.\ntask: A dictionary describing the requirements for the training job.\n\nThe helper function calls the Pipeline client service'smethod create_pipeline, which takes the following parameters:\n\nparent: The Vertex location root path for your Dataset, Model and Endpoint resources.\ntraining_pipeline: the full specification for the pipeline training job.\n\nLet's look now deeper into the minimal requirements for constructing a training_pipeline specification:\n\ndisplay_name: A human readable name for the pipeline job.\ntraining_task_definition: The dataset labeling (annotation) training schema.\ntraining_task_inputs: A dictionary describing the requirements for the training job.\nmodel_to_upload: A human readable name for the model.\ninput_data_config: The dataset specification.\ndataset_id: The Vertex dataset identifier only (non-fully qualified) -- this is the last part of the fully-qualified identifier.\nfraction_split: If specified, the percentages of the dataset to use for training, test and validation. Otherwise, the percentages are automatically selected by AutoML.",
"def create_pipeline(pipeline_name, model_name, dataset, schema, task):\n\n dataset_id = dataset.split(\"/\")[-1]\n\n input_config = {\n \"dataset_id\": dataset_id,\n \"fraction_split\": {\n \"training_fraction\": 0.8,\n \"validation_fraction\": 0.1,\n \"test_fraction\": 0.1,\n },\n }\n\n training_pipeline = {\n \"display_name\": pipeline_name,\n \"training_task_definition\": schema,\n \"training_task_inputs\": task,\n \"input_data_config\": input_config,\n \"model_to_upload\": {\"display_name\": model_name},\n }\n\n try:\n pipeline = clients[\"pipeline\"].create_training_pipeline(\n parent=PARENT, training_pipeline=training_pipeline\n )\n print(pipeline)\n except Exception as e:\n print(\"exception:\", e)\n return None\n return pipeline",
"Construct the task requirements\nNext, construct the task requirements. Unlike other parameters which take a Python (JSON-like) dictionary, the task field takes a Google protobuf Struct, which is very similar to a Python dictionary. Use the json_format.ParseDict method for the conversion.\nThe minimal fields you need to specify are:\n\nmulti_label: Whether True/False this is a multi-label (vs single) classification.\nbudget_milli_node_hours: The maximum time to budget (billed) for training the model, where 1000 = 1 hour.\nmodel_type: The type of deployed model:\nCLOUD: For deploying to Google Cloud.\ndisable_early_stopping: Whether True/False to let AutoML use its judgement to stop training early or train for the entire budget.\n\nFinally, you create the pipeline by calling the helper function create_pipeline, which returns an instance of a training pipeline object.",
"PIPE_NAME = \"biomedical_pipe-\" + TIMESTAMP\nMODEL_NAME = \"biomedical_model-\" + TIMESTAMP\n\ntask = json_format.ParseDict(\n {\n \"multi_label\": False,\n \"budget_milli_node_hours\": 8000,\n \"model_type\": \"CLOUD\",\n \"disable_early_stopping\": False,\n },\n Value(),\n)\n\nresponse = create_pipeline(PIPE_NAME, MODEL_NAME, dataset_id, TRAINING_SCHEMA, task)",
"Now save the unique identifier of the training pipeline you created.",
"# The full unique ID for the pipeline\npipeline_id = response.name\n# The short numeric ID for the pipeline\npipeline_short_id = pipeline_id.split(\"/\")[-1]\n\nprint(pipeline_id)",
"Get information on a training pipeline\nNow get pipeline information for just this training pipeline instance. The helper function gets the job information for just this job by calling the the job client service's get_training_pipeline method, with the following parameter:\n\nname: The Vertex fully qualified pipeline identifier.\n\nWhen the model is done training, the pipeline state will be PIPELINE_STATE_SUCCEEDED.",
"def get_training_pipeline(name, silent=False):\n response = clients[\"pipeline\"].get_training_pipeline(name=name)\n if silent:\n return response\n\n print(\"pipeline\")\n print(\" name:\", response.name)\n print(\" display_name:\", response.display_name)\n print(\" state:\", response.state)\n print(\" training_task_definition:\", response.training_task_definition)\n print(\" training_task_inputs:\", dict(response.training_task_inputs))\n print(\" create_time:\", response.create_time)\n print(\" start_time:\", response.start_time)\n print(\" end_time:\", response.end_time)\n print(\" update_time:\", response.update_time)\n print(\" labels:\", dict(response.labels))\n return response\n\n\nresponse = get_training_pipeline(pipeline_id)",
"Deployment\nTraining the above model may take upwards of 120 minutes time.\nOnce your model is done training, you can calculate the actual time it took to train the model by subtracting end_time from start_time. For your model, you will need to know the fully qualified Vertex Model resource identifier, which the pipeline service assigned to it. You can get this from the returned pipeline instance as the field model_to_deploy.name.",
"while True:\n response = get_training_pipeline(pipeline_id, True)\n if response.state != aip.PipelineState.PIPELINE_STATE_SUCCEEDED:\n print(\"Training job has not completed:\", response.state)\n model_to_deploy_id = None\n if response.state == aip.PipelineState.PIPELINE_STATE_FAILED:\n raise Exception(\"Training Job Failed\")\n else:\n model_to_deploy = response.model_to_upload\n model_to_deploy_id = model_to_deploy.name\n print(\"Training Time:\", response.end_time - response.start_time)\n break\n time.sleep(60)\n\nprint(\"model to deploy:\", model_to_deploy_id)",
"Model information\nNow that your model is trained, you can get some information on your model.\nEvaluate the Model resource\nNow find out how good the model service believes your model is. As part of training, some portion of the dataset was set aside as the test (holdout) data, which is used by the pipeline service to evaluate the model.\nList evaluations for all slices\nUse this helper function list_model_evaluations, which takes the following parameter:\n\nname: The Vertex fully qualified model identifier for the Model resource.\n\nThis helper function uses the model client service's list_model_evaluations method, which takes the same parameter. The response object from the call is a list, where each element is an evaluation metric.\nFor each evaluation -- you probably only have one, we then print all the key names for each metric in the evaluation, and for a small set (confusionMatrix and confidenceMetrics) you will print the result.",
"def list_model_evaluations(name):\n response = clients[\"model\"].list_model_evaluations(parent=name)\n for evaluation in response:\n print(\"model_evaluation\")\n print(\" name:\", evaluation.name)\n print(\" metrics_schema_uri:\", evaluation.metrics_schema_uri)\n metrics = json_format.MessageToDict(evaluation._pb.metrics)\n for metric in metrics.keys():\n print(metric)\n print(\"confusionMatrix\", metrics[\"confusionMatrix\"])\n print(\"confidenceMetrics\", metrics[\"confidenceMetrics\"])\n\n return evaluation.name\n\n\nlast_evaluation = list_model_evaluations(model_to_deploy_id)",
"Model deployment for batch prediction\nNow deploy the trained Vertex Model resource you created for batch prediction. This differs from deploying a Model resource for on-demand prediction.\nFor online prediction, you:\n\n\nCreate an Endpoint resource for deploying the Model resource to.\n\n\nDeploy the Model resource to the Endpoint resource.\n\n\nMake online prediction requests to the Endpoint resource.\n\n\nFor batch-prediction, you:\n\n\nCreate a batch prediction job.\n\n\nThe job service will provision resources for the batch prediction request.\n\n\nThe results of the batch prediction request are returned to the caller.\n\n\nThe job service will unprovision the resoures for the batch prediction request.\n\n\nMake a batch prediction request\nNow do a batch prediction to your deployed model.\nMake test items\nYou will use synthetic data as a test data items. Don't be concerned that we are using synthetic data -- we just want to demonstrate how to make a prediction.",
"test_item_1 = 'Molecular basis of hexosaminidase A deficiency and pseudodeficiency in the Berks County Pennsylvania Dutch.\\tFollowing the birth of two infants with Tay-Sachs disease ( TSD ) , a non-Jewish , Pennsylvania Dutch kindred was screened for TSD carriers using the biochemical assay . A high frequency of individuals who appeared to be TSD heterozygotes was detected ( Kelly et al . , 1975 ) . Clinical and biochemical evidence suggested that the increased carrier frequency was due to at least two altered alleles for the hexosaminidase A alpha-subunit . We now report two mutant alleles in this Pennsylvania Dutch kindred , and one polymorphism . One allele , reported originally in a French TSD patient ( Akli et al . , 1991 ) , is a GT-- > AT transition at the donor splice-site of intron 9 . The second , a C-- > T transition at nucleotide 739 ( Arg247Trp ) , has been shown by Triggs-Raine et al . ( 1992 ) to be a clinically benign \" pseudodeficient \" allele associated with reduced enzyme activity against artificial substrate . Finally , a polymorphism [ G-- > A ( 759 ) ] , which leaves valine at codon 253 unchanged , is described'\ntest_item_2 = \"Analysis of alkaptonuria (AKU) mutations and polymorphisms reveals that the CCC sequence motif is a mutational hot spot in the homogentisate 1,2 dioxygenase gene (HGO).\tWe recently showed that alkaptonuria ( AKU ) is caused by loss-of-function mutations in the homogentisate 1 , 2 dioxygenase gene ( HGO ) . Herein we describe haplotype and mutational analyses of HGO in seven new AKU pedigrees . These analyses identified two novel single-nucleotide polymorphisms ( INV4 + 31A-- > G and INV11 + 18A-- > G ) and six novel AKU mutations ( INV1-1G-- > A , W60G , Y62C , A122D , P230T , and D291E ) , which further illustrates the remarkable allelic heterogeneity found in AKU . Reexamination of all 29 mutations and polymorphisms thus far described in HGO shows that these nucleotide changes are not randomly distributed ; the CCC sequence motif and its inverted complement , GGG , are preferentially mutated . These analyses also demonstrated that the nucleotide substitutions in HGO do not involve CpG dinucleotides , which illustrates important differences between HGO and other genes for the occurrence of mutation at specific short-sequence motifs . Because the CCC sequence motifs comprise a significant proportion ( 34 . 5 % ) of all mutated bases that have been observed in HGO , we conclude that the CCC triplet is a mutational hot spot in HGO .\"",
"Make the batch input file\nNow make a batch input file, which you will store in your local Cloud Storage bucket. The batch input file can only be in JSONL format. For JSONL file, you make one dictionary entry per line for each data item (instance). The dictionary contains the key/value pairs:\n\ncontent: The Cloud Storage path to the file with the text item.\nmime_type: The content type. In our example, it is an text file.\n\nFor example:\n {'content': '[your-bucket]/file1.txt', 'mime_type': 'text'}",
"import json\n\nimport tensorflow as tf\n\ngcs_test_item_1 = BUCKET_NAME + \"/test1.txt\"\nwith tf.io.gfile.GFile(gcs_test_item_1, \"w\") as f:\n f.write(test_item_1 + \"\\n\")\ngcs_test_item_2 = BUCKET_NAME + \"/test2.txt\"\nwith tf.io.gfile.GFile(gcs_test_item_2, \"w\") as f:\n f.write(test_item_2 + \"\\n\")\n\ngcs_input_uri = BUCKET_NAME + \"/test.jsonl\"\nwith tf.io.gfile.GFile(gcs_input_uri, \"w\") as f:\n data = {\"content\": gcs_test_item_1, \"mime_type\": \"text/plain\"}\n f.write(json.dumps(data) + \"\\n\")\n data = {\"content\": gcs_test_item_2, \"mime_type\": \"text/plain\"}\n f.write(json.dumps(data) + \"\\n\")\n\nprint(gcs_input_uri)\n! gsutil cat $gcs_input_uri",
"Compute instance scaling\nYou have several choices on scaling the compute instances for handling your batch prediction requests:\n\nSingle Instance: The batch prediction requests are processed on a single compute instance.\n\nSet the minimum (MIN_NODES) and maximum (MAX_NODES) number of compute instances to one.\n\n\nManual Scaling: The batch prediction requests are split across a fixed number of compute instances that you manually specified.\n\n\nSet the minimum (MIN_NODES) and maximum (MAX_NODES) number of compute instances to the same number of nodes. When a model is first deployed to the instance, the fixed number of compute instances are provisioned and batch prediction requests are evenly distributed across them.\n\n\nAuto Scaling: The batch prediction requests are split across a scaleable number of compute instances.\n\nSet the minimum (MIN_NODES) number of compute instances to provision when a model is first deployed and to de-provision, and set the maximum (`MAX_NODES) number of compute instances to provision, depending on load conditions.\n\nThe minimum number of compute instances corresponds to the field min_replica_count and the maximum number of compute instances corresponds to the field max_replica_count, in your subsequent deployment request.",
"MIN_NODES = 1\nMAX_NODES = 1",
"Make batch prediction request\nNow that your batch of two test items is ready, let's do the batch request. Use this helper function create_batch_prediction_job, with the following parameters:\n\ndisplay_name: The human readable name for the prediction job.\nmodel_name: The Vertex fully qualified identifier for the Model resource.\ngcs_source_uri: The Cloud Storage path to the input file -- which you created above.\ngcs_destination_output_uri_prefix: The Cloud Storage path that the service will write the predictions to.\nparameters: Additional filtering parameters for serving prediction results.\n\nThe helper function calls the job client service's create_batch_prediction_job metho, with the following parameters:\n\nparent: The Vertex location root path for Dataset, Model and Pipeline resources.\nbatch_prediction_job: The specification for the batch prediction job.\n\nLet's now dive into the specification for the batch_prediction_job:\n\ndisplay_name: The human readable name for the prediction batch job.\nmodel: The Vertex fully qualified identifier for the Model resource.\ndedicated_resources: The compute resources to provision for the batch prediction job.\nmachine_spec: The compute instance to provision. Use the variable you set earlier DEPLOY_GPU != None to use a GPU; otherwise only a CPU is allocated.\nstarting_replica_count: The number of compute instances to initially provision, which you set earlier as the variable MIN_NODES.\nmax_replica_count: The maximum number of compute instances to scale to, which you set earlier as the variable MAX_NODES.\nmodel_parameters: Additional filtering parameters for serving prediction results. Note, text models do not support additional parameters.\ninput_config: The input source and format type for the instances to predict.\ninstances_format: The format of the batch prediction request file: jsonl only supported.\ngcs_source: A list of one or more Cloud Storage paths to your batch prediction requests.\noutput_config: The output destination and format for the predictions.\nprediction_format: The format of the batch prediction response file: jsonl only supported.\ngcs_destination: The output destination for the predictions.\ndedicated_resources: The compute resources to provision for the batch prediction job.\nmachine_spec: The compute instance to provision. Use the variable you set earlier DEPLOY_GPU != None to use a GPU; otherwise only a CPU is allocated.\nstarting_replica_count: The number of compute instances to initially provision.\nmax_replica_count: The maximum number of compute instances to scale to. In this tutorial, only one instance is provisioned.\n\nThis call is an asychronous operation. You will print from the response object a few select fields, including:\n\nname: The Vertex fully qualified identifier assigned to the batch prediction job.\ndisplay_name: The human readable name for the prediction batch job.\nmodel: The Vertex fully qualified identifier for the Model resource.\ngenerate_explanations: Whether True/False explanations were provided with the predictions (explainability).\nstate: The state of the prediction job (pending, running, etc).\n\nSince this call will take a few moments to execute, you will likely get JobState.JOB_STATE_PENDING for state.",
"BATCH_MODEL = \"biomedical_batch-\" + TIMESTAMP\n\n\ndef create_batch_prediction_job(\n display_name,\n model_name,\n gcs_source_uri,\n gcs_destination_output_uri_prefix,\n parameters=None,\n):\n\n if DEPLOY_GPU:\n machine_spec = {\n \"machine_type\": DEPLOY_COMPUTE,\n \"accelerator_type\": DEPLOY_GPU,\n \"accelerator_count\": DEPLOY_NGPU,\n }\n else:\n machine_spec = {\n \"machine_type\": DEPLOY_COMPUTE,\n \"accelerator_count\": 0,\n }\n\n batch_prediction_job = {\n \"display_name\": display_name,\n # Format: 'projects/{project}/locations/{location}/models/{model_id}'\n \"model\": model_name,\n \"model_parameters\": json_format.ParseDict(parameters, Value()),\n \"input_config\": {\n \"instances_format\": IN_FORMAT,\n \"gcs_source\": {\"uris\": [gcs_source_uri]},\n },\n \"output_config\": {\n \"predictions_format\": OUT_FORMAT,\n \"gcs_destination\": {\"output_uri_prefix\": gcs_destination_output_uri_prefix},\n },\n \"dedicated_resources\": {\n \"machine_spec\": machine_spec,\n \"starting_replica_count\": MIN_NODES,\n \"max_replica_count\": MAX_NODES,\n },\n }\n response = clients[\"job\"].create_batch_prediction_job(\n parent=PARENT, batch_prediction_job=batch_prediction_job\n )\n print(\"response\")\n print(\" name:\", response.name)\n print(\" display_name:\", response.display_name)\n print(\" model:\", response.model)\n try:\n print(\" generate_explanation:\", response.generate_explanation)\n except:\n pass\n print(\" state:\", response.state)\n print(\" create_time:\", response.create_time)\n print(\" start_time:\", response.start_time)\n print(\" end_time:\", response.end_time)\n print(\" update_time:\", response.update_time)\n print(\" labels:\", response.labels)\n return response\n\n\nIN_FORMAT = \"jsonl\"\nOUT_FORMAT = \"jsonl\" # [jsonl]\n\nresponse = create_batch_prediction_job(\n BATCH_MODEL, model_to_deploy_id, gcs_input_uri, BUCKET_NAME, None\n)",
"Now get the unique identifier for the batch prediction job you created.",
"# The full unique ID for the batch job\nbatch_job_id = response.name\n# The short numeric ID for the batch job\nbatch_job_short_id = batch_job_id.split(\"/\")[-1]\n\nprint(batch_job_id)",
"Get information on a batch prediction job\nUse this helper function get_batch_prediction_job, with the following paramter:\n\njob_name: The Vertex fully qualified identifier for the batch prediction job.\n\nThe helper function calls the job client service's get_batch_prediction_job method, with the following paramter:\n\nname: The Vertex fully qualified identifier for the batch prediction job. In this tutorial, you will pass it the Vertex fully qualified identifier for your batch prediction job -- batch_job_id\n\nThe helper function will return the Cloud Storage path to where the predictions are stored -- gcs_destination.",
"def get_batch_prediction_job(job_name, silent=False):\n response = clients[\"job\"].get_batch_prediction_job(name=job_name)\n if silent:\n return response.output_config.gcs_destination.output_uri_prefix, response.state\n\n print(\"response\")\n print(\" name:\", response.name)\n print(\" display_name:\", response.display_name)\n print(\" model:\", response.model)\n try: # not all data types support explanations\n print(\" generate_explanation:\", response.generate_explanation)\n except:\n pass\n print(\" state:\", response.state)\n print(\" error:\", response.error)\n gcs_destination = response.output_config.gcs_destination\n print(\" gcs_destination\")\n print(\" output_uri_prefix:\", gcs_destination.output_uri_prefix)\n return gcs_destination.output_uri_prefix, response.state\n\n\npredictions, state = get_batch_prediction_job(batch_job_id)",
"Get the predictions\nWhen the batch prediction is done processing, the job state will be JOB_STATE_SUCCEEDED.\nFinally you view the predictions stored at the Cloud Storage path you set as output. The predictions will be in a JSONL format, which you indicated at the time you made the batch prediction job, under a subfolder starting with the name prediction, and under that folder will be a file called predictions*.jsonl.\nNow display (cat) the contents. You will see multiple JSON objects, one for each prediction.\nThe first field text_snippet is the text file you did the prediction on, and the second field annotations is the prediction, which is further broken down into:\n\ntext_extraction: The extracted entity from the text.\ndisplay_name: The predicted label for the extraction entity.\nscore: The confidence level between 0 and 1 in the prediction.\nstartOffset: The character offset in the text of the start of the extracted entity.\nendOffset: The character offset in the text of the end of the extracted entity.",
"def get_latest_predictions(gcs_out_dir):\n \"\"\" Get the latest prediction subfolder using the timestamp in the subfolder name\"\"\"\n folders = !gsutil ls $gcs_out_dir\n latest = \"\"\n for folder in folders:\n subfolder = folder.split(\"/\")[-2]\n if subfolder.startswith(\"prediction-\"):\n if subfolder > latest:\n latest = folder[:-1]\n return latest\n\n\nwhile True:\n predictions, state = get_batch_prediction_job(batch_job_id, True)\n if state != aip.JobState.JOB_STATE_SUCCEEDED:\n print(\"The job has not completed:\", state)\n if state == aip.JobState.JOB_STATE_FAILED:\n raise Exception(\"Batch Job Failed\")\n else:\n folder = get_latest_predictions(predictions)\n ! gsutil ls $folder/prediction*.jsonl\n\n ! gsutil cat $folder/prediction*.jsonl\n break\n time.sleep(60)",
"Cleaning up\nTo clean up all GCP resources used in this project, you can delete the GCP\nproject you used for the tutorial.\nOtherwise, you can delete the individual resources you created in this tutorial:\n\nDataset\nPipeline\nModel\nEndpoint\nBatch Job\nCustom Job\nHyperparameter Tuning Job\nCloud Storage Bucket",
"delete_dataset = True\ndelete_pipeline = True\ndelete_model = True\ndelete_endpoint = True\ndelete_batchjob = True\ndelete_customjob = True\ndelete_hptjob = True\ndelete_bucket = True\n\n# Delete the dataset using the Vertex fully qualified identifier for the dataset\ntry:\n if delete_dataset and \"dataset_id\" in globals():\n clients[\"dataset\"].delete_dataset(name=dataset_id)\nexcept Exception as e:\n print(e)\n\n# Delete the training pipeline using the Vertex fully qualified identifier for the pipeline\ntry:\n if delete_pipeline and \"pipeline_id\" in globals():\n clients[\"pipeline\"].delete_training_pipeline(name=pipeline_id)\nexcept Exception as e:\n print(e)\n\n# Delete the model using the Vertex fully qualified identifier for the model\ntry:\n if delete_model and \"model_to_deploy_id\" in globals():\n clients[\"model\"].delete_model(name=model_to_deploy_id)\nexcept Exception as e:\n print(e)\n\n# Delete the endpoint using the Vertex fully qualified identifier for the endpoint\ntry:\n if delete_endpoint and \"endpoint_id\" in globals():\n clients[\"endpoint\"].delete_endpoint(name=endpoint_id)\nexcept Exception as e:\n print(e)\n\n# Delete the batch job using the Vertex fully qualified identifier for the batch job\ntry:\n if delete_batchjob and \"batch_job_id\" in globals():\n clients[\"job\"].delete_batch_prediction_job(name=batch_job_id)\nexcept Exception as e:\n print(e)\n\n# Delete the custom job using the Vertex fully qualified identifier for the custom job\ntry:\n if delete_customjob and \"job_id\" in globals():\n clients[\"job\"].delete_custom_job(name=job_id)\nexcept Exception as e:\n print(e)\n\n# Delete the hyperparameter tuning job using the Vertex fully qualified identifier for the hyperparameter tuning job\ntry:\n if delete_hptjob and \"hpt_job_id\" in globals():\n clients[\"job\"].delete_hyperparameter_tuning_job(name=hpt_job_id)\nexcept Exception as e:\n print(e)\n\nif delete_bucket and \"BUCKET_NAME\" in globals():\n ! gsutil rm -r $BUCKET_NAME"
] | [
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
pyGrowler/Growler | examples/ExampleNotebook_1.ipynb | apache-2.0 | [
"Growler Example in Jupyter",
"import growler\n\ngrowler.__meta__.version_info",
"Create growler application with name NotebookServer",
"app = growler.App(\"NotebookServer\")",
"Add a general purpose method which prints ip address and the USER-AGENT header",
"@app.use\ndef print_client_info(req, res):\n ip = req.ip\n reqpath = req.path\n print(\"[{ip}] {path}\".format(ip=ip, path=reqpath))\n print(\" >\", req.headers['USER-AGENT'])\n print(flush=True)\n",
"Next, add a route matching any GET requests for the root (/) of the site. This uses a simple global variable to count the number times this page has been accessed, and return text to the client",
"i = 0\[email protected](\"/\")\ndef index(req, res):\n global i\n res.send_text(\"It Works! (%d)\" % i)\n i += 1",
"We can see the tree of middleware all requests will pass through - Notice the router object that was implicitly created which will match all requests.",
"app.print_middleware_tree()",
"Use the helper method to create the asyncio server listening on port 9000.",
"app.create_server_and_run_forever(host='127.0.0.1', port=9000)"
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
yuhao0531/dmc | notebooks/week-2/04 - Lab 2 Assignment.ipynb | apache-2.0 | [
"Lab 2 assignment\nThis assignment will get you familiar with the basic elements of Python by programming a simple card game. We will create a custom class to represent each player in the game, which will store information about their current pot, as well as a series of methods defining how they play the game. We will also build several functions to control the flow of the game and get data back at the end.\nWe will start by importing the 'random' library, which will allow us to use its functions for picking a random entry from a list.",
"import random\nimport numpy",
"First we will establish some general variables for our game, including the 'stake' of the game (how much money each play is worth), as well as a list representing the cards used in the game. To make things easier, we will just use a list of numbers 0-9 for the cards.",
"gameStake = 50 \ncards = range(10)",
"Next, let's define a new class to represent each player in the game. I have provided a rough framework of the class definition along with comments along the way to help you complete it. Places where you should write code are denoted by comments inside [] brackets and CAPITAL TEXT.",
"class Player:\n \n # create here two local variables to store a unique ID for each player and the player's current 'pot' of money\n PN=0\n Pot=0# [FILL IN YOUR VARIABLES HERE]\n \n # in the __init__() function, use the two input variables to initialize the ID and starting pot of each player\n \n def __init__(self, inputID, startingPot):\n self.PN=inputID\n self.Pot=startingPot# [CREATE YOUR INITIALIZATIONS HERE]\n \n # create a function for playing the game. This function starts by taking an input for the dealer's card\n # and picking a random number from the 'cards' list for the player's card\n\n def play(self, dealerCard):\n # we use the random.choice() function to select a random item from a list\n playerCard = random.choice(cards)\n \n # here we should have a conditional that tests the player's card value against the dealer card\n # and returns a statement saying whether the player won or lost the hand\n # before returning the statement, make sure to either add or subtract the stake from the player's pot so that\n # the 'pot' variable tracks the player's money\n \n if playerCard < dealerCard:\n self.Pot=self.Pot-gameStake\n print 'player'+str(self.PN)+' Lose,'+str(playerCard)+' vs '+str(dealerCard)# [INCREMENT THE PLAYER'S POT, AND RETURN A MESSAGE] \n else:\n self.Pot=self.Pot+gameStake\n print 'player'+str(self.PN)+' Win,'+str(playerCard)+' vs '+str(dealerCard)# [INCREMENT THE PLAYER'S POT, AND RETURN A MESSAGE]\n \n # create an accessor function to return the current value of the player's pot\n def returnPot(self):\n return self.Pot# [FILL IN THE RETURN STATEMENT]\n \n # create an accessor function to return the player's ID\n def returnID(self):\n return self.PN# [FILL IN THE RETURN STATEMENT]",
"Next we will create some functions outside the class definition which will control the flow of the game. The first function will play one round. It will take as an input the collection of players, and iterate through each one, calling each player's '.play() function.",
"def playHand(players):\n \n for player in players:\n dealerCard = random.choice(cards)\n player.play(dealerCard)#[EXECUTE THE PLAY() FUNCTION FOR EACH PLAYER USING THE DEALER CARD, AND PRINT OUT THE RESULTS]",
"Next we will define a function that will check the balances of each player, and print out a message with the player's ID and their balance.",
"def checkBalances(players):\n \n for player in players:\n print 'player '+str(player.returnID())+ ' has $ '+str(player.returnPot())+ ' left'#[PRINT OUT EACH PLAYER'S BALANCE BY USING EACH PLAYER'S ACCESSOR FUNCTIONS]",
"Now we are ready to start the game. First we create an empy list to store the collection of players in the game.",
"players = [] ",
"Then we create a loop that will run a certain number of times, each time creating a player with a unique ID and a starting balance. Each player should be appended to the empty list, which will store all the players. In this case we pass the 'i' iterator of the loop as the player ID, and set a constant value of 500 for the starting balance.",
"for i in range(5):\n players.append(Player(i, 500))",
"Once the players are created, we will create a loop to run the game a certain amount of times. Each step of the loop should start with a print statement announcing the start of the game, and then call the playHand() function, passing as an input the list of players.",
"for i in range(10):\n print ''\n print 'start game ' + str(i)\n playHand(players)",
"Finally, we will analyze the results of the game by running the 'checkBalances()' function and passing it our list of players.",
"print ''\nprint 'game results:'\ncheckBalances(players)",
"Below is a version of the expected printout if you've done everything correctly (note that since the cards are chosen randomly the actual results will differ, but the structure should be the same). Once you finish the assignment please submit a pull request to the main dmc-2016 repo before the deadline.\n```\nstart game 0\nplayer 0 Lose, 4 vs 7\nplayer 1 Win, 2 vs 0\nplayer 2 Lose, 0 vs 4\nplayer 3 Win, 7 vs 2\nplayer 4 Win, 5 vs 0\nstart game 1\nplayer 0 Win, 1 vs 0\nplayer 1 Lose, 1 vs 5\nplayer 2 Lose, 6 vs 9\nplayer 3 Lose, 1 vs 8\nplayer 4 Lose, 0 vs 9\nstart game 2\nplayer 0 Win, 3 vs 3\nplayer 1 Lose, 0 vs 2\nplayer 2 Win, 9 vs 6\nplayer 3 Win, 8 vs 7\nplayer 4 Win, 8 vs 6\nstart game 3\nplayer 0 Win, 9 vs 7\nplayer 1 Lose, 7 vs 8\nplayer 2 Lose, 2 vs 3\nplayer 3 Lose, 0 vs 8\nplayer 4 Lose, 0 vs 6\nstart game 4\nplayer 0 Win, 7 vs 4\nplayer 1 Win, 3 vs 0\nplayer 2 Win, 8 vs 5\nplayer 3 Win, 2 vs 1\nplayer 4 Lose, 4 vs 7\nstart game 5\nplayer 0 Lose, 2 vs 8\nplayer 1 Lose, 4 vs 6\nplayer 2 Win, 2 vs 0\nplayer 3 Lose, 4 vs 5\nplayer 4 Lose, 3 vs 8\nstart game 6\nplayer 0 Lose, 3 vs 6\nplayer 1 Win, 8 vs 0\nplayer 2 Win, 5 vs 5\nplayer 3 Lose, 2 vs 6\nplayer 4 Win, 8 vs 7\nstart game 7\nplayer 0 Lose, 0 vs 9\nplayer 1 Lose, 6 vs 8\nplayer 2 Lose, 1 vs 9\nplayer 3 Lose, 4 vs 8\nplayer 4 Win, 9 vs 8\nstart game 8\nplayer 0 Lose, 1 vs 8\nplayer 1 Lose, 3 vs 9\nplayer 2 Win, 5 vs 4\nplayer 3 Win, 6 vs 2\nplayer 4 Win, 3 vs 0\nstart game 9\nplayer 0 Lose, 5 vs 6\nplayer 1 Win, 6 vs 1\nplayer 2 Lose, 8 vs 9\nplayer 3 Lose, 3 vs 9\nplayer 4 Win, 7 vs 5\ngame results:\nplayer 0 has $400 left.\nplayer 1 has $400 left.\nplayer 2 has $500 left.\nplayer 3 has $400 left.\nplayer 4 has $600 left.\n```"
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
ES-DOC/esdoc-jupyterhub | notebooks/cccr-iitm/cmip6/models/sandbox-3/atmoschem.ipynb | gpl-3.0 | [
"ES-DOC CMIP6 Model Properties - Atmoschem\nMIP Era: CMIP6\nInstitute: CCCR-IITM\nSource ID: SANDBOX-3\nTopic: Atmoschem\nSub-Topics: Transport, Emissions Concentrations, Gas Phase Chemistry, Stratospheric Heterogeneous Chemistry, Tropospheric Heterogeneous Chemistry, Photo Chemistry. \nProperties: 84 (39 required)\nModel descriptions: Model description details\nInitialized From: -- \nNotebook Help: Goto notebook help page\nNotebook Initialised: 2018-02-15 16:53:48\nDocument Setup\nIMPORTANT: to be executed each time you run the notebook",
"# DO NOT EDIT ! \nfrom pyesdoc.ipython.model_topic import NotebookOutput \n\n# DO NOT EDIT ! \nDOC = NotebookOutput('cmip6', 'cccr-iitm', 'sandbox-3', 'atmoschem')",
"Document Authors\nSet document authors",
"# Set as follows: DOC.set_author(\"name\", \"email\") \n# TODO - please enter value(s)",
"Document Contributors\nSpecify document contributors",
"# Set as follows: DOC.set_contributor(\"name\", \"email\") \n# TODO - please enter value(s)",
"Document Publication\nSpecify document publication status",
"# Set publication status: \n# 0=do not publish, 1=publish. \nDOC.set_publication_status(0)",
"Document Table of Contents\n1. Key Properties\n2. Key Properties --> Software Properties\n3. Key Properties --> Timestep Framework\n4. Key Properties --> Timestep Framework --> Split Operator Order\n5. Key Properties --> Tuning Applied\n6. Grid\n7. Grid --> Resolution\n8. Transport\n9. Emissions Concentrations\n10. Emissions Concentrations --> Surface Emissions\n11. Emissions Concentrations --> Atmospheric Emissions\n12. Emissions Concentrations --> Concentrations\n13. Gas Phase Chemistry\n14. Stratospheric Heterogeneous Chemistry\n15. Tropospheric Heterogeneous Chemistry\n16. Photo Chemistry\n17. Photo Chemistry --> Photolysis \n1. Key Properties\nKey properties of the atmospheric chemistry\n1.1. Model Overview\nIs Required: TRUE Type: STRING Cardinality: 1.1\nOverview of atmospheric chemistry model.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.key_properties.model_overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"1.2. Model Name\nIs Required: TRUE Type: STRING Cardinality: 1.1\nName of atmospheric chemistry model code.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.key_properties.model_name') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"1.3. Chemistry Scheme Scope\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nAtmospheric domains covered by the atmospheric chemistry model",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.key_properties.chemistry_scheme_scope') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"troposhere\" \n# \"stratosphere\" \n# \"mesosphere\" \n# \"mesosphere\" \n# \"whole atmosphere\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"1.4. Basic Approximations\nIs Required: TRUE Type: STRING Cardinality: 1.1\nBasic approximations made in the atmospheric chemistry model",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.key_properties.basic_approximations') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"1.5. Prognostic Variables Form\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nForm of prognostic variables in the atmospheric chemistry component.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.key_properties.prognostic_variables_form') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"3D mass/mixing ratio for gas\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"1.6. Number Of Tracers\nIs Required: TRUE Type: INTEGER Cardinality: 1.1\nNumber of advected tracers in the atmospheric chemistry model",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.key_properties.number_of_tracers') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"1.7. Family Approach\nIs Required: TRUE Type: BOOLEAN Cardinality: 1.1\nAtmospheric chemistry calculations (not advection) generalized into families of species?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.key_properties.family_approach') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n",
"1.8. Coupling With Chemical Reactivity\nIs Required: TRUE Type: BOOLEAN Cardinality: 1.1\nAtmospheric chemistry transport scheme turbulence is couple with chemical reactivity?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.key_properties.coupling_with_chemical_reactivity') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n",
"2. Key Properties --> Software Properties\nSoftware properties of aerosol code\n2.1. Repository\nIs Required: FALSE Type: STRING Cardinality: 0.1\nLocation of code for this component.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.key_properties.software_properties.repository') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"2.2. Code Version\nIs Required: FALSE Type: STRING Cardinality: 0.1\nCode version identifier.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.key_properties.software_properties.code_version') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"2.3. Code Languages\nIs Required: FALSE Type: STRING Cardinality: 0.N\nCode language(s).",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.key_properties.software_properties.code_languages') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"3. Key Properties --> Timestep Framework\nTimestepping in the atmospheric chemistry model\n3.1. Method\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nMathematical method deployed to solve the evolution of a given variable",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.method') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Operator splitting\" \n# \"Integrated\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"3.2. Split Operator Advection Timestep\nIs Required: FALSE Type: INTEGER Cardinality: 0.1\nTimestep for chemical species advection (in seconds)",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_advection_timestep') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"3.3. Split Operator Physical Timestep\nIs Required: FALSE Type: INTEGER Cardinality: 0.1\nTimestep for physics (in seconds).",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_physical_timestep') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"3.4. Split Operator Chemistry Timestep\nIs Required: FALSE Type: INTEGER Cardinality: 0.1\nTimestep for chemistry (in seconds).",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_chemistry_timestep') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"3.5. Split Operator Alternate Order\nIs Required: FALSE Type: BOOLEAN Cardinality: 0.1\n?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_alternate_order') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n",
"3.6. Integrated Timestep\nIs Required: TRUE Type: INTEGER Cardinality: 1.1\nTimestep for the atmospheric chemistry model (in seconds)",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.integrated_timestep') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"3.7. Integrated Scheme Type\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nSpecify the type of timestep scheme",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.integrated_scheme_type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Explicit\" \n# \"Implicit\" \n# \"Semi-implicit\" \n# \"Semi-analytic\" \n# \"Impact solver\" \n# \"Back Euler\" \n# \"Newton Raphson\" \n# \"Rosenbrock\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"4. Key Properties --> Timestep Framework --> Split Operator Order\n**\n4.1. Turbulence\nIs Required: FALSE Type: INTEGER Cardinality: 0.1\nCall order for turbulence scheme. This should be an integer greater than zero, and may be the same value as for another process if they are calculated at the same time.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.turbulence') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"4.2. Convection\nIs Required: FALSE Type: INTEGER Cardinality: 0.1\nCall order for convection scheme This should be an integer greater than zero, and may be the same value as for another process if they are calculated at the same time.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.convection') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"4.3. Precipitation\nIs Required: FALSE Type: INTEGER Cardinality: 0.1\nCall order for precipitation scheme. This should be an integer greater than zero, and may be the same value as for another process if they are calculated at the same time.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.precipitation') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"4.4. Emissions\nIs Required: FALSE Type: INTEGER Cardinality: 0.1\nCall order for emissions scheme. This should be an integer greater than zero, and may be the same value as for another process if they are calculated at the same time.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.emissions') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"4.5. Deposition\nIs Required: FALSE Type: INTEGER Cardinality: 0.1\nCall order for deposition scheme. This should be an integer greater than zero, and may be the same value as for another process if they are calculated at the same time.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.deposition') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"4.6. Gas Phase Chemistry\nIs Required: FALSE Type: INTEGER Cardinality: 0.1\nCall order for gas phase chemistry scheme. This should be an integer greater than zero, and may be the same value as for another process if they are calculated at the same time.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.gas_phase_chemistry') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"4.7. Tropospheric Heterogeneous Phase Chemistry\nIs Required: FALSE Type: INTEGER Cardinality: 0.1\nCall order for tropospheric heterogeneous phase chemistry scheme. This should be an integer greater than zero, and may be the same value as for another process if they are calculated at the same time.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.tropospheric_heterogeneous_phase_chemistry') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"4.8. Stratospheric Heterogeneous Phase Chemistry\nIs Required: FALSE Type: INTEGER Cardinality: 0.1\nCall order for stratospheric heterogeneous phase chemistry scheme. This should be an integer greater than zero, and may be the same value as for another process if they are calculated at the same time.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.stratospheric_heterogeneous_phase_chemistry') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"4.9. Photo Chemistry\nIs Required: FALSE Type: INTEGER Cardinality: 0.1\nCall order for photo chemistry scheme. This should be an integer greater than zero, and may be the same value as for another process if they are calculated at the same time.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.photo_chemistry') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"4.10. Aerosols\nIs Required: FALSE Type: INTEGER Cardinality: 0.1\nCall order for aerosols scheme. This should be an integer greater than zero, and may be the same value as for another process if they are calculated at the same time.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.aerosols') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"5. Key Properties --> Tuning Applied\nTuning methodology for atmospheric chemistry component\n5.1. Description\nIs Required: TRUE Type: STRING Cardinality: 1.1\nGeneral overview description of tuning: explain and motivate the main targets and metrics retained. &Document the relative weight given to climate performance metrics versus process oriented metrics, &and on the possible conflicts with parameterization level tuning. In particular describe any struggle &with a parameter value that required pushing it to its limits to solve a particular model deficiency.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.key_properties.tuning_applied.description') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"5.2. Global Mean Metrics Used\nIs Required: FALSE Type: STRING Cardinality: 0.N\nList set of metrics of the global mean state used in tuning model/component",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.key_properties.tuning_applied.global_mean_metrics_used') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"5.3. Regional Metrics Used\nIs Required: FALSE Type: STRING Cardinality: 0.N\nList of regional metrics of mean state used in tuning model/component",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.key_properties.tuning_applied.regional_metrics_used') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"5.4. Trend Metrics Used\nIs Required: FALSE Type: STRING Cardinality: 0.N\nList observed trend metrics used in tuning model/component",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.key_properties.tuning_applied.trend_metrics_used') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"6. Grid\nAtmospheric chemistry grid\n6.1. Overview\nIs Required: TRUE Type: STRING Cardinality: 1.1\nDescribe the general structure of the atmopsheric chemistry grid",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.grid.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"6.2. Matches Atmosphere Grid\nIs Required: TRUE Type: BOOLEAN Cardinality: 1.1\n* Does the atmospheric chemistry grid match the atmosphere grid?*",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.grid.matches_atmosphere_grid') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n",
"7. Grid --> Resolution\nResolution in the atmospheric chemistry grid\n7.1. Name\nIs Required: TRUE Type: STRING Cardinality: 1.1\nThis is a string usually used by the modelling group to describe the resolution of this grid, e.g. ORCA025, N512L180, T512L70 etc.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.grid.resolution.name') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"7.2. Canonical Horizontal Resolution\nIs Required: FALSE Type: STRING Cardinality: 0.1\nExpression quoted for gross comparisons of resolution, eg. 50km or 0.1 degrees etc.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.grid.resolution.canonical_horizontal_resolution') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"7.3. Number Of Horizontal Gridpoints\nIs Required: FALSE Type: INTEGER Cardinality: 0.1\nTotal number of horizontal (XY) points (or degrees of freedom) on computational grid.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.grid.resolution.number_of_horizontal_gridpoints') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"7.4. Number Of Vertical Levels\nIs Required: FALSE Type: INTEGER Cardinality: 0.1\nNumber of vertical levels resolved on computational grid.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.grid.resolution.number_of_vertical_levels') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"7.5. Is Adaptive Grid\nIs Required: FALSE Type: BOOLEAN Cardinality: 0.1\nDefault is False. Set true if grid resolution changes during execution.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.grid.resolution.is_adaptive_grid') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n",
"8. Transport\nAtmospheric chemistry transport\n8.1. Overview\nIs Required: TRUE Type: STRING Cardinality: 1.1\nGeneral overview of transport implementation",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.transport.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"8.2. Use Atmospheric Transport\nIs Required: TRUE Type: BOOLEAN Cardinality: 1.1\nIs transport handled by the atmosphere, rather than within atmospheric cehmistry?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.transport.use_atmospheric_transport') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n",
"8.3. Transport Details\nIs Required: FALSE Type: STRING Cardinality: 0.1\nIf transport is handled within the atmospheric chemistry scheme, describe it.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.transport.transport_details') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"9. Emissions Concentrations\nAtmospheric chemistry emissions\n9.1. Overview\nIs Required: TRUE Type: STRING Cardinality: 1.1\nOverview atmospheric chemistry emissions",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.emissions_concentrations.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"10. Emissions Concentrations --> Surface Emissions\n**\n10.1. Sources\nIs Required: FALSE Type: ENUM Cardinality: 0.N\nSources of the chemical species emitted at the surface that are taken into account in the emissions scheme",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.emissions_concentrations.surface_emissions.sources') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Vegetation\" \n# \"Soil\" \n# \"Sea surface\" \n# \"Anthropogenic\" \n# \"Biomass burning\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"10.2. Method\nIs Required: FALSE Type: ENUM Cardinality: 0.N\nMethods used to define chemical species emitted directly into model layers above the surface (several methods allowed because the different species may not use the same method).",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.emissions_concentrations.surface_emissions.method') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Climatology\" \n# \"Spatially uniform mixing ratio\" \n# \"Spatially uniform concentration\" \n# \"Interactive\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"10.3. Prescribed Climatology Emitted Species\nIs Required: FALSE Type: STRING Cardinality: 0.1\nList of chemical species emitted at the surface and prescribed via a climatology, and the nature of the climatology (E.g. CO (monthly), C2H6 (constant))",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.emissions_concentrations.surface_emissions.prescribed_climatology_emitted_species') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"10.4. Prescribed Spatially Uniform Emitted Species\nIs Required: FALSE Type: STRING Cardinality: 0.1\nList of chemical species emitted at the surface and prescribed as spatially uniform",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.emissions_concentrations.surface_emissions.prescribed_spatially_uniform_emitted_species') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"10.5. Interactive Emitted Species\nIs Required: FALSE Type: STRING Cardinality: 0.1\nList of chemical species emitted at the surface and specified via an interactive method",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.emissions_concentrations.surface_emissions.interactive_emitted_species') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"10.6. Other Emitted Species\nIs Required: FALSE Type: STRING Cardinality: 0.1\nList of chemical species emitted at the surface and specified via any other method",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.emissions_concentrations.surface_emissions.other_emitted_species') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"11. Emissions Concentrations --> Atmospheric Emissions\nTO DO\n11.1. Sources\nIs Required: FALSE Type: ENUM Cardinality: 0.N\nSources of chemical species emitted in the atmosphere that are taken into account in the emissions scheme.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.emissions_concentrations.atmospheric_emissions.sources') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Aircraft\" \n# \"Biomass burning\" \n# \"Lightning\" \n# \"Volcanos\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"11.2. Method\nIs Required: FALSE Type: ENUM Cardinality: 0.N\nMethods used to define the chemical species emitted in the atmosphere (several methods allowed because the different species may not use the same method).",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.emissions_concentrations.atmospheric_emissions.method') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Climatology\" \n# \"Spatially uniform mixing ratio\" \n# \"Spatially uniform concentration\" \n# \"Interactive\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"11.3. Prescribed Climatology Emitted Species\nIs Required: FALSE Type: STRING Cardinality: 0.1\nList of chemical species emitted in the atmosphere and prescribed via a climatology (E.g. CO (monthly), C2H6 (constant))",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.emissions_concentrations.atmospheric_emissions.prescribed_climatology_emitted_species') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"11.4. Prescribed Spatially Uniform Emitted Species\nIs Required: FALSE Type: STRING Cardinality: 0.1\nList of chemical species emitted in the atmosphere and prescribed as spatially uniform",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.emissions_concentrations.atmospheric_emissions.prescribed_spatially_uniform_emitted_species') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"11.5. Interactive Emitted Species\nIs Required: FALSE Type: STRING Cardinality: 0.1\nList of chemical species emitted in the atmosphere and specified via an interactive method",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.emissions_concentrations.atmospheric_emissions.interactive_emitted_species') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"11.6. Other Emitted Species\nIs Required: FALSE Type: STRING Cardinality: 0.1\nList of chemical species emitted in the atmosphere and specified via an "other method"",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.emissions_concentrations.atmospheric_emissions.other_emitted_species') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"12. Emissions Concentrations --> Concentrations\nTO DO\n12.1. Prescribed Lower Boundary\nIs Required: FALSE Type: STRING Cardinality: 0.1\nList of species prescribed at the lower boundary.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.emissions_concentrations.concentrations.prescribed_lower_boundary') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"12.2. Prescribed Upper Boundary\nIs Required: FALSE Type: STRING Cardinality: 0.1\nList of species prescribed at the upper boundary.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.emissions_concentrations.concentrations.prescribed_upper_boundary') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"13. Gas Phase Chemistry\nAtmospheric chemistry transport\n13.1. Overview\nIs Required: TRUE Type: STRING Cardinality: 1.1\nOverview gas phase atmospheric chemistry",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.gas_phase_chemistry.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"13.2. Species\nIs Required: FALSE Type: ENUM Cardinality: 0.N\nSpecies included in the gas phase chemistry scheme.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.gas_phase_chemistry.species') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"HOx\" \n# \"NOy\" \n# \"Ox\" \n# \"Cly\" \n# \"HSOx\" \n# \"Bry\" \n# \"VOCs\" \n# \"isoprene\" \n# \"H2O\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"13.3. Number Of Bimolecular Reactions\nIs Required: TRUE Type: INTEGER Cardinality: 1.1\nThe number of bi-molecular reactions in the gas phase chemistry scheme.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.gas_phase_chemistry.number_of_bimolecular_reactions') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"13.4. Number Of Termolecular Reactions\nIs Required: TRUE Type: INTEGER Cardinality: 1.1\nThe number of ter-molecular reactions in the gas phase chemistry scheme.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.gas_phase_chemistry.number_of_termolecular_reactions') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"13.5. Number Of Tropospheric Heterogenous Reactions\nIs Required: TRUE Type: INTEGER Cardinality: 1.1\nThe number of reactions in the tropospheric heterogeneous chemistry scheme.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.gas_phase_chemistry.number_of_tropospheric_heterogenous_reactions') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"13.6. Number Of Stratospheric Heterogenous Reactions\nIs Required: TRUE Type: INTEGER Cardinality: 1.1\nThe number of reactions in the stratospheric heterogeneous chemistry scheme.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.gas_phase_chemistry.number_of_stratospheric_heterogenous_reactions') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"13.7. Number Of Advected Species\nIs Required: TRUE Type: INTEGER Cardinality: 1.1\nThe number of advected species in the gas phase chemistry scheme.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.gas_phase_chemistry.number_of_advected_species') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"13.8. Number Of Steady State Species\nIs Required: TRUE Type: INTEGER Cardinality: 1.1\nThe number of gas phase species for which the concentration is updated in the chemical solver assuming photochemical steady state",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.gas_phase_chemistry.number_of_steady_state_species') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"13.9. Interactive Dry Deposition\nIs Required: TRUE Type: BOOLEAN Cardinality: 1.1\nIs dry deposition interactive (as opposed to prescribed)? Dry deposition describes the dry processes by which gaseous species deposit themselves on solid surfaces thus decreasing their concentration in the air.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.gas_phase_chemistry.interactive_dry_deposition') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n",
"13.10. Wet Deposition\nIs Required: TRUE Type: BOOLEAN Cardinality: 1.1\nIs wet deposition included? Wet deposition describes the moist processes by which gaseous species deposit themselves on solid surfaces thus decreasing their concentration in the air.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.gas_phase_chemistry.wet_deposition') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n",
"13.11. Wet Oxidation\nIs Required: TRUE Type: BOOLEAN Cardinality: 1.1\nIs wet oxidation included? Oxidation describes the loss of electrons or an increase in oxidation state by a molecule",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.gas_phase_chemistry.wet_oxidation') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n",
"14. Stratospheric Heterogeneous Chemistry\nAtmospheric chemistry startospheric heterogeneous chemistry\n14.1. Overview\nIs Required: TRUE Type: STRING Cardinality: 1.1\nOverview stratospheric heterogenous atmospheric chemistry",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.stratospheric_heterogeneous_chemistry.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"14.2. Gas Phase Species\nIs Required: FALSE Type: ENUM Cardinality: 0.N\nGas phase species included in the stratospheric heterogeneous chemistry scheme.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.stratospheric_heterogeneous_chemistry.gas_phase_species') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Cly\" \n# \"Bry\" \n# \"NOy\" \n# TODO - please enter value(s)\n",
"14.3. Aerosol Species\nIs Required: FALSE Type: ENUM Cardinality: 0.N\nAerosol species included in the stratospheric heterogeneous chemistry scheme.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.stratospheric_heterogeneous_chemistry.aerosol_species') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Sulphate\" \n# \"Polar stratospheric ice\" \n# \"NAT (Nitric acid trihydrate)\" \n# \"NAD (Nitric acid dihydrate)\" \n# \"STS (supercooled ternary solution aerosol particule))\" \n# TODO - please enter value(s)\n",
"14.4. Number Of Steady State Species\nIs Required: TRUE Type: INTEGER Cardinality: 1.1\nThe number of steady state species in the stratospheric heterogeneous chemistry scheme.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.stratospheric_heterogeneous_chemistry.number_of_steady_state_species') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"14.5. Sedimentation\nIs Required: TRUE Type: BOOLEAN Cardinality: 1.1\nIs sedimentation is included in the stratospheric heterogeneous chemistry scheme or not?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.stratospheric_heterogeneous_chemistry.sedimentation') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n",
"14.6. Coagulation\nIs Required: TRUE Type: BOOLEAN Cardinality: 1.1\nIs coagulation is included in the stratospheric heterogeneous chemistry scheme or not?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.stratospheric_heterogeneous_chemistry.coagulation') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n",
"15. Tropospheric Heterogeneous Chemistry\nAtmospheric chemistry tropospheric heterogeneous chemistry\n15.1. Overview\nIs Required: TRUE Type: STRING Cardinality: 1.1\nOverview tropospheric heterogenous atmospheric chemistry",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.tropospheric_heterogeneous_chemistry.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"15.2. Gas Phase Species\nIs Required: FALSE Type: STRING Cardinality: 0.1\nList of gas phase species included in the tropospheric heterogeneous chemistry scheme.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.tropospheric_heterogeneous_chemistry.gas_phase_species') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"15.3. Aerosol Species\nIs Required: FALSE Type: ENUM Cardinality: 0.N\nAerosol species included in the tropospheric heterogeneous chemistry scheme.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.tropospheric_heterogeneous_chemistry.aerosol_species') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Sulphate\" \n# \"Nitrate\" \n# \"Sea salt\" \n# \"Dust\" \n# \"Ice\" \n# \"Organic\" \n# \"Black carbon/soot\" \n# \"Polar stratospheric ice\" \n# \"Secondary organic aerosols\" \n# \"Particulate organic matter\" \n# TODO - please enter value(s)\n",
"15.4. Number Of Steady State Species\nIs Required: TRUE Type: INTEGER Cardinality: 1.1\nThe number of steady state species in the tropospheric heterogeneous chemistry scheme.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.tropospheric_heterogeneous_chemistry.number_of_steady_state_species') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"15.5. Interactive Dry Deposition\nIs Required: TRUE Type: BOOLEAN Cardinality: 1.1\nIs dry deposition interactive (as opposed to prescribed)? Dry deposition describes the dry processes by which gaseous species deposit themselves on solid surfaces thus decreasing their concentration in the air.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.tropospheric_heterogeneous_chemistry.interactive_dry_deposition') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n",
"15.6. Coagulation\nIs Required: TRUE Type: BOOLEAN Cardinality: 1.1\nIs coagulation is included in the tropospheric heterogeneous chemistry scheme or not?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.tropospheric_heterogeneous_chemistry.coagulation') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n",
"16. Photo Chemistry\nAtmospheric chemistry photo chemistry\n16.1. Overview\nIs Required: TRUE Type: STRING Cardinality: 1.1\nOverview atmospheric photo chemistry",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.photo_chemistry.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"16.2. Number Of Reactions\nIs Required: TRUE Type: INTEGER Cardinality: 1.1\nThe number of reactions in the photo-chemistry scheme.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.photo_chemistry.number_of_reactions') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"17. Photo Chemistry --> Photolysis\nPhotolysis scheme\n17.1. Method\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nPhotolysis scheme",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.photo_chemistry.photolysis.method') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Offline (clear sky)\" \n# \"Offline (with clouds)\" \n# \"Online\" \n# TODO - please enter value(s)\n",
"17.2. Environmental Conditions\nIs Required: FALSE Type: STRING Cardinality: 0.1\nDescribe any environmental conditions taken into account by the photolysis scheme (e.g. whether pressure- and temperature-sensitive cross-sections and quantum yields in the photolysis calculations are modified to reflect the modelled conditions.)",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.photo_chemistry.photolysis.environmental_conditions') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"©2017 ES-DOC"
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
jsub10/Machine-Learning-By-Example | Chapter-4-Non-Linear-Regression.ipynb | gpl-3.0 | [
"Think Like a Machine - Chapter 4\nNon-Linear Regression with Multiple Variables\nACKNOWLEDGEMENT\nA lot of the code in this notebook is from John D. Wittenauer's notebooks that cover the exercises in Andrew Ng's course on Machine Learning on Coursera. This is mostly Wittenauer's and Ng's work and acknowledged as such. I've also used some code from Sebastian Raschka's book Python Machine Learning.",
"# Use the functions from another notebook in this notebook\n%run SharedFunctions.ipynb\n\n# Import our usual libraries\nimport numpy as np\nimport pandas as pd\nimport matplotlib.pyplot as plt\n%matplotlib inline",
"Our model in step 3 of the previous chapter has been simple. It multipled our inputs by constants (the values of $\\theta$) and added them up. That is classic linear stuff.\nWith only a slight modification of that model we can easily extend regression to any number of variables -- even millions of them!\nLoad the Data",
"# Load up the housing price data we used before\nimport os\npath = os.getcwd() + '/Data/ex1data2.txt'\ndata2 = pd.read_csv(path, header=None, names=['Size', 'Bedrooms', 'Price'])\ndata2.head()",
"Visualize the Data\nWe can visualize the entire dataset as follows.",
"import seaborn as sns\nsns.set(style='whitegrid', context='notebook')\ncols = ['Size', 'Bedrooms', 'Price']\nsns.pairplot(data2[cols], size=2.5)\nplt.show()",
"Exercise 4-1\nBased on the visuals above, how would you describe the data? Write a short paragraph describing the data.\nUse Size as the Key Variable\nParadoxically, to demostrate how multivariate non-linear regression works, we'll strip down our original dataset into one that just has Size and Price; the Bedrooms part of the data is removed. This simplifies things so that we can easily visualize what's going on.\nSo, to visualize things more easily, we're going to focus just on the sinlge variable -- the size of the house. We'll turn this into a multi-variable situation in just a bit.",
"# Just checking on the type of object data2 is ... good to remind ourselves\ntype(data2)\n\n# First drop the Bedrooms column from the data set -- we're not going to be using it for the rest of this notebook\ndata3 = data2.drop('Bedrooms', axis = 1)\ndata3.head()\n\n# Visualize this simplified data set\ndata3.plot.scatter(x='Size', y='Price', figsize=(8,5))",
"How Polynomials Fit the Data\nLet's visualize the fit for various degrees of polynomial functions.",
"# Because Price is about 100 times Size, first normalize the data\ndata3Norm = (data3 - data3.mean()) / data3.std()\ndata3Norm.head()\n\nX = data3Norm['Size']\ny = data3Norm['Price']\n\n# fit the data with a 2nd degree polynomial\nz2 = np.polyfit(X, y, 2) \np2 = np.poly1d(z2) # construct the polynomial (note: that's a one in \"poly1d\")\n\n# fit the data with a 3rd degree polynomial\nz3 = np.polyfit(X, y, 3) \np3 = np.poly1d(z3) # construct the polynomial\n\n# fit the data with a 4th degree polynomial\nz4 = np.polyfit(X, y, 4) \np4 = np.poly1d(z4) # construct the polynomial\n\n# fit the data with a 8th degree polynomial - just for the heck of it :-)\nz8 = np.polyfit(X, y, 8) \np8 = np.poly1d(z8) # construct the polynomial\n\n# fit the data with a 16th degree polynomial - just for the heck of it :-)\nz16 = np.polyfit(X, y, 16) \np16 = np.poly1d(z16) # construct the polynomial\n\nxx = np.linspace(-2, 4, 100)\nplt.figure(figsize=(8,5))\nplt.plot(X, y, 'o', label='data')\nplt.xlabel('Size')\nplt.ylabel('Price')\nplt.plot(xx, p2(xx), 'g-', label='2nd degree poly')\nplt.plot(xx, p3(xx), 'y-', label='3rd degree poly')\n#plt.plot(xx, p4(xx), 'r-', label='4th degree poly')\nplt.plot(xx, p8(xx), 'c-', label='8th degree poly')\n#plt.plot(xx, p16(xx), 'm-', label='16th degree poly')\nplt.legend(loc=2)\nplt.axis([-2,4,-1.5,3]) # Use for higher degrees of polynomials",
"Steps 1 and 2: Define the Inputs and the Outputs",
"# Add a column of 1s to the X input (keeps the notation simple)\ndata3Norm.insert(0,'x0',1)\ndata3Norm.head()\n\ndata3Norm.insert(2,'Size^2', np.power(data3Norm['Size'],2))\ndata3Norm.head()\n\ndata3Norm.insert(3,'Size^3', np.power(data3Norm['Size'],3))\ndata3Norm.head()\n\ndata3Norm.insert(4,'Size^4', np.power(data3Norm['Size'],4))\ndata3Norm.head()",
"We now have 4 input variables -- they're various powers of the one input variable we started with.",
"X3 = data3Norm.iloc[:, 0:5]\ny3 = data3Norm.iloc[:, 5]",
"Step 3: Define the Model\nWe're going to turn this one (dependent) variable data set consisting of Size values into a dataset that will be represented by a multi-variate, polynomial model. First let's define the kind of model we're interested in. In the expressions below $x$ represents the Size of a house and the model is saying that the price of the house is a polynomial function of size.\nHere's a second-degree polynomial model:\nModel p2 = $h_{\\theta}(x) = \\theta_{0}x_{0} + \\theta_{1}x + \\theta_{2}x^{2}$\nHere's a third-degree polynomial model:\nModel p3 = $h_{\\theta}(x) = \\theta_{0}x_{0} + \\theta_{1}x + \\theta_{2}x^{2} + \\theta_{3}x^3$\nAnd here's a fourth-degree polynomial model:\nModel p4 = $h_{\\theta}(x) = \\theta_{0}x_{0} + \\theta_{1}x + \\theta_{2}x^{2} + \\theta_{3}x^3 + \\theta_{4}x^4$\nOur models are more complicated than before, but $h_{\\theta}(x)$ is still the same calculation as before because our inputs have been transformed to represent $x^{2}$, $x^{3}$, and $x^{4}$.\nWe'll use Model p4 for the rest of the calculations. It's a legitimate question to ask how to decide which model choose. We'll answer that question a few chapters later. \nStep 4: Define the Parameters of the Model\n$\\theta_{0}$, $\\theta_{1}$, $\\theta_{2}$, $\\theta_{3}$, and $\\theta_{4}$ are the parameters of the model. Unlike our example of the boiling water in Chapter 1, these parameters can each take on an infinite number of values. $\\theta_{0}$ is called the bias value.\nWith this model, we know exactly how to transform an input into an output -- that is, once the values of the parameters are given.\nLet's pick a value of X from the dataset, fix specific values for $\\theta_{0}$, $\\theta_{1}$, $\\theta_{2}$, $\\theta_{3}$, and $\\theta_{4}$, and see what we get for the value of y.\nSpecifically, let\n$\\begin{bmatrix}\n\\theta_{0} \\\n\\theta_{1} \\\n\\theta_{2} \\\n\\theta_{3} \\\n\\theta_{4}\n\\end{bmatrix} = \n\\begin{bmatrix}\n-10 \\\n1 \\\n0 \\\n5 \\\n-1\n\\end{bmatrix}$\nThis means $\\theta_{0}$ is -10, $\\theta_{1}$ is 1, and so on.\nLet's try out X * $\\theta$ for the first few rows of X.",
"# Outputs generated by our model for the first 5 inputs with the specific theta values below\ntheta_test = np.matrix('-10;1;0;5;-1')\noutputs = np.matrix(X3.iloc[0:5, :]) * theta_test\noutputs\n\n# Compare with the first few values of the output\ny3.head()",
"That's quite a bit off from the actual values; so we know that the values for $\\theta$ in theta_test must be quite far from the optimal values for $\\theta$ -- the values that will minimize the cost of getting it wrong.\nStep 5: Define the Cost of Getting it Wrong\nOur cost function is exactly the same as it was before for the single variable case. \nThe cost of getting it wrong is defined as a function $J(\\theta)$:\n$$J(\\theta) = \\frac{1}{2m} \\sum_{i=1}^{m} (h_{\\theta}x^{(i)}) - y^{(i)})^2$$\nThe only difference from what we had before is the addition of the various $\\theta$s and $x$s\n$$h_{\\theta}(X) = \\theta_{0} * x_{0}\\ +\\ \\theta_{1} * x_{1} +\\ \\theta_{2} * x_{2} +\\ \\theta_{3} * x_{3} +\\ \\theta_{4} * x_{4}$$\nwhere $x_{2} = x_{1}^{2}$, $x_{3} = x_{1}^{3}$, and $x_{4} = x_{1}^{4}$.",
"# Compute the cost for a given set of theta values over the entire dataset\n# Get X and y in to matrix form\ncomputeCost(np.matrix(X3.values), np.matrix(y3.values), theta_test)",
"We don't know yet if this is high or low -- we'll have to try out a whole bunch of $\\theta$ values. Or better yet, we can use pick an iterative method and implement it.\nSteps 6 and 7: Pick an Iterative Method to Minimize the Cost of Getting it Wrong and Implement It\nOnce again, the method that will \"learn\" the optimal values for $\\theta$ is gradient descent. We don't have to do a thing to the function we wrote before for gradient descent. Let's use it to find the minimum cost and the values of $\\theta$ that result in that minimum cost.",
"theta_init = np.matrix('-1;0;1;0;-1')\n# Run gradient descent for a number of different learning rates\nalpha = 0.00001\niters = 5000\n\ntheta_opt, cost_min = gradientDescent(np.matrix(X3.values), np.matrix(y3.values), theta_init, alpha, iters)\n \n\n# This is the value of theta for the last iteration above -- hence for alpha = 0.1\ntheta_opt\n\n# The minimum cost\ncost_min[-1]",
"Step 8: The Results\nLet's make some predictions based on the values of $\\theta_{opt}$. We're using our 4th-order polynomial as the model.",
"size = 2\nsize_nonnorm = (size * data3.std()[0]) + data3.mean()[0]\nprice = (theta_opt[0] * 1) + (theta_opt[1] * size) + (theta_opt[2] * np.power(size,2)) + (theta_opt[3] * np.power(size,3)) + (theta_opt[4] * np.power(size,4))\n\nprice[0,0]\n\n# Transform the price into the real price (not normalized)\nprice_mean = data3.mean()\n\nprice_std = data3.std()[1]\n\nprice_pred = (price[0,0] * price_std) + price_mean\n\nprice_pred\n\nsize_nonnorm\n\ndata3.mean()[1]"
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
ceos-seo/data_cube_notebooks | notebooks/Data_Challenge/LandCover.ipynb | apache-2.0 | [
"2022 EY Challenge - Land Cover\nThis notebook can be used to create a land cover dataset. This land cover information can be used as a \"predictor variable\" to relate to species samples. For example, certain land cover classifications (e.g. water, grass, trees) may be conducive to species habitats. This dataset contains global estimates of 10-class land use/land cover for the year 2020, derived from ESA Sentinel-2 imagery at 10-meter spatial resolution. The data can be found in the MS Planetary Computer catalog: https://planetarycomputer.microsoft.com/dataset/io-lulc#overview",
"# Supress Warnings \nimport warnings\nwarnings.filterwarnings('ignore')\n\n# Import common GIS tools\nimport numpy as np\nimport xarray as xr\nimport matplotlib.pyplot as plt\nimport rioxarray as rio\nimport rasterio.features\nimport folium\nimport math\nfrom matplotlib.colors import ListedColormap\n\n# Import Planetary Computer tools\nimport stackstac\nimport pystac_client\nimport planetary_computer as pc\nfrom pystac.extensions.raster import RasterExtension as raster",
"Define the analysis region and view on a map\nFirst, we define our area of interest using latitude and longitude coordinates. Our test region is near Richmond, NSW, Australia. The first line defines the lower-left corner of the bounding box and the second line defines the upper-right corner of the bounding box. GeoJSON format uses a specific order: (longitude, latitude), so be careful when entering the coordinates.",
"# Define the bounding box using corners\nmin_lon, min_lat = (150.62, -33.69) # Lower-left corner (longitude, latitude)\nmax_lon, max_lat = (150.83, -33.48) # Upper-right corner (longitude, latitude)\n\nbbox = (min_lon, min_lat, max_lon, max_lat)\nlatitude = (min_lat, max_lat)\nlongitude = (min_lon, max_lon)\n\ndef _degree_to_zoom_level(l1, l2, margin = 0.0):\n \n degree = abs(l1 - l2) * (1 + margin)\n zoom_level_int = 0\n if degree != 0:\n zoom_level_float = math.log(360/degree)/math.log(2)\n zoom_level_int = int(zoom_level_float)\n else:\n zoom_level_int = 18\n return zoom_level_int\n\ndef display_map(latitude = None, longitude = None):\n\n margin = -0.5\n zoom_bias = 0\n lat_zoom_level = _degree_to_zoom_level(margin = margin, *latitude ) + zoom_bias\n lon_zoom_level = _degree_to_zoom_level(margin = margin, *longitude) + zoom_bias\n zoom_level = min(lat_zoom_level, lon_zoom_level) \n center = [np.mean(latitude), np.mean(longitude)]\n \n map_hybrid = folium.Map(location=center,zoom_start=zoom_level, \n tiles=\" http://mt1.google.com/vt/lyrs=y&z={z}&x={x}&y={y}\",attr=\"Google\")\n \n line_segments = [(latitude[0],longitude[0]),(latitude[0],longitude[1]),\n (latitude[1],longitude[1]),(latitude[1],longitude[0]),\n (latitude[0],longitude[0])]\n \n map_hybrid.add_child(folium.features.PolyLine(locations=line_segments,color='red',opacity=0.8))\n map_hybrid.add_child(folium.features.LatLngPopup()) \n\n return map_hybrid\n\n# Plot bounding box on a map\nf = folium.Figure(width=600, height=600)\nm = display_map(latitude,longitude)\nf.add_child(m)",
"Discover and load the data for analysis\nUsing the pystac_client we can search the Planetary Computer's STAC endpoint for items matching our query parameters. We will look for data tiles (1-degree square) that intersect our bounding box.",
"stac = pystac_client.Client.open(\"https://planetarycomputer.microsoft.com/api/stac/v1\")\nsearch = stac.search(bbox=bbox,collections=[\"io-lulc\"])\n\nitems = list(search.get_items())\nprint('Number of data tiles intersecting our bounding box:',len(items))",
"Next, we'll load the data into an xarray DataArray using stackstac and then \"clip\" the data to only the pixels within our region (bounding box). There are also several other <b>important settings for the data</b>: We have changed the projection to EPSG=4326 which is standard latitude-longitude in degrees. We have specified the spatial resolution of each pixel to be 10-meters, which is the baseline accuracy for this data. After creating the DataArray, we will need to mosaic the raster chunks across the time dimension (remember, they're all from a single synthesized \"time\" from 2020) and drop the single band dimension. Finally, we will read the actual data by calling .compute(). In the end, the dataset will include land cover classifications (10 total) at 10-meters spatial resolution.",
"item = next(search.get_items())\nitems = [pc.sign(item).to_dict() for item in search.get_items()]\nnodata = raster.ext(item.assets[\"data\"]).bands[0].nodata\n\n# Define the pixel resolution for the final product\n# Define the scale according to our selected crs, so we will use degrees\nresolution = 10 # meters per pixel \nscale = resolution / 111320.0 # degrees per pixel for crs=4326 \n\ndata = stackstac.stack(\n items, # use only the data from our search results\n epsg=4326, # use common lat-lon coordinates\n resolution=scale, # Use degrees for crs=4326\n dtype=np.ubyte, # matches the data versus default float64\n fill_value=nodata, # fills voids with no data\n bounds_latlon=bbox # clips to our bounding box\n)\n\nland_cover = stackstac.mosaic(data, dim=\"time\", axis=None).squeeze().drop(\"band\").compute()",
"Land Cover Map\nNow we will create a land cover classification map. The source GeoTIFFs contain a colormap and the STAC metadata contains the class names. We'll open one of the source files just to read this metadata and construct the right colors and names for our plot.",
"# Create a custom colormap using the file metadata\nclass_names = land_cover.coords[\"label:classes\"].item()[\"classes\"]\nclass_count = len(class_names)\n\nwith rasterio.open(pc.sign(item.assets[\"data\"].href)) as src:\n colormap_def = src.colormap(1) # get metadata colormap for band 1\n colormap = [np.array(colormap_def[i]) / 255 for i in range(class_count)\n ] # transform to matplotlib color format\n\ncmap = ListedColormap(colormap)\n\nimage = land_cover.plot(size=8,cmap=cmap,add_colorbar=False,vmin=0,vmax=class_count)\ncbar = plt.colorbar(image)\ncbar.set_ticks(range(class_count))\ncbar.set_ticklabels(class_names)\nplt.gca().set_aspect('equal')\nplt.title('Land Cover Classification')\nplt.xlabel('Longitude')\nplt.ylabel('Latitude')\nplt.show()",
"Save the output data in a GeoTIFF file",
"filename = \"Land_Cover_sample2.tiff\"\n\n# Set the dimensions of file in pixels\nheight = land_cover.shape[0]\nwidth = land_cover.shape[1]\n\n# Define the Coordinate Reference System (CRS) to be common Lat-Lon coordinates\n# Define the tranformation using our bounding box so the Lat-Lon information is written to the GeoTIFF\ngt = rasterio.transform.from_bounds(min_lon,min_lat,max_lon,max_lat,width,height)\nland_cover.rio.write_crs(\"epsg:4326\", inplace=True)\nland_cover.rio.write_transform(transform=gt, inplace=True);\n\n# Create the GeoTIFF output file using the defined parameters\nwith rasterio.open(filename,'w',driver='GTiff',width=width,height=height,\n crs='epsg:4326',transform=gt,count=1,compress='lzw',dtype=np.ubyte) as dst:\n dst.write(land_cover,1)\n dst.close()\n\n# Show the location and size of the new output file\n!ls *.tiff -lah",
"How will the participants use this data?\nThe GeoTIFF file will contain the Lat-Lon coordinates of each pixel and will also contain the land class for each pixel. Since the FrogID data is also Lat-Lon position, it is possible to find the closest pixel using code similar to what is demonstrated below. Once this pixel is found, then the corresponding land class can be used for modeling species distribution. In addition, participants may want to consider proximity to specific land classes. For example, there may be a positive correlation with land classes such as trees, grass or water and there may be a negative correlation with land classes such as built-up area or bare soil.\nThese are the possible <b>land classifications</b>, reported below:<br>\n1 = water, 2 = trees, 3 = grass, 4 = flooded vegetation, 5 = crops<br>\n6 = scrub, 7 = built-up (urban), 8 = bare soil, 9 = snow/ice, 10=clouds",
"# This is an example for a specific Lon-Lat location randomly selected within our sample region.\nvalues = land_cover.sel(x=150.71, y=-33.51, method=\"nearest\").values \nprint(\"This is the land classification for the closest pixel: \",values)"
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
wei-Z/Python-Machine-Learning | code/ch13/ch13.ipynb | mit | [
"Sebastian Raschka, 2015\nhttps://github.com/rasbt/python-machine-learning-book\nPython Machine Learning - Code Examples\nChapter 13 - Parallelizing Neural Network Training with Theano\nNote that the optional watermark extension is a small IPython notebook plugin that I developed to make the code reproducible. You can just skip the following line(s).",
"%load_ext watermark\n%watermark -a 'Sebastian Raschka' -u -d -v -p numpy,matplotlib,theano,keras\n\n# to install watermark just uncomment the following line:\n#%install_ext https://raw.githubusercontent.com/rasbt/watermark/master/watermark.py",
"Overview\n\nBuilding, compiling, and running expressions with Theano\nWhat is Theano?\nFirst steps with Theano\nConfiguring Theano\nWorking with array structures\nWrapping things up – a linear regression example\nChoosing activation functions for feedforward neural networks\nLogistic function recap\nEstimating probabilities in multi-class classification via the softmax function\nBroadening the output spectrum by using a hyperbolic tangent\nTraining neural networks efficiently using Keras\nSummary\n\n<br>\n<br>",
"from IPython.display import Image",
"Building, compiling, and running expressions with Theano\nDepending on your system setup, it is typically sufficient to install Theano via\npip install Theano\n\nFor more help with the installation, please see: http://deeplearning.net/software/theano/install.html",
"Image(filename='./images/13_01.png', width=500) ",
"<br>\n<br>\nWhat is Theano?\n...\nFirst steps with Theano\nIntroducing the TensorType variables. For a complete list, see http://deeplearning.net/software/theano/library/tensor/basic.html#all-fully-typed-constructors",
"import theano\nfrom theano import tensor as T\n\n# initialize\nx1 = T.scalar()\nw1 = T.scalar()\nw0 = T.scalar()\nz1 = w1 * x1 + w0\n\n# compile\nnet_input = theano.function(inputs=[w1, x1, w0], outputs=z1)\n\n# execute\nnet_input(2.0, 1.0, 0.5)",
"<br>\n<br>\nConfiguring Theano\nConfiguring Theano. For more options, see\n- http://deeplearning.net/software/theano/library/config.html\n- http://deeplearning.net/software/theano/library/floatX.html",
"print(theano.config.floatX)\n\ntheano.config.floatX = 'float32'",
"To change the float type globally, execute \nexport THEANO_FLAGS=floatX=float32\n\nin your bash shell. Or execute Python script as\nTHEANO_FLAGS=floatX=float32 python your_script.py\n\nRunning Theano on GPU(s). For prerequisites, please see: http://deeplearning.net/software/theano/tutorial/using_gpu.html\nNote that float32 is recommended for GPUs; float64 on GPUs is currently still relatively slow.",
"print(theano.config.device)",
"You can run a Python script on CPU via:\nTHEANO_FLAGS=device=cpu,floatX=float64 python your_script.py\n\nor GPU via\nTHEANO_FLAGS=device=gpu,floatX=float32 python your_script.py\n\nIt may also be convenient to create a .theanorc file in your home directory to make those configurations permanent. For example, to always use float32, execute\necho -e \"\\n[global]\\nfloatX=float32\\n\" >> ~/.theanorc\n\nOr, create a .theanorc file manually with the following contents\n[global]\nfloatX = float32\ndevice = gpu\n\n<br>\n<br>\nWorking with array structures",
"import numpy as np\n\n# initialize\n# if you are running Theano on 64 bit mode, \n# you need to use dmatrix instead of fmatrix\nx = T.fmatrix(name='x')\nx_sum = T.sum(x, axis=0)\n\n# compile\ncalc_sum = theano.function(inputs=[x], outputs=x_sum)\n\n# execute (Python list)\nary = [[1, 2, 3], [1, 2, 3]]\nprint('Column sum:', calc_sum(ary))\n\n# execute (NumPy array)\nary = np.array([[1, 2, 3], [1, 2, 3]], dtype=theano.config.floatX)\nprint('Column sum:', calc_sum(ary))",
"Updating shared arrays.\nMore info about memory management in Theano can be found here: http://deeplearning.net/software/theano/tutorial/aliasing.html",
"# initialize\nx = T.fmatrix(name='x')\nw = theano.shared(np.asarray([[0.0, 0.0, 0.0]], \n dtype=theano.config.floatX))\nz = x.dot(w.T)\nupdate = [[w, w + 1.0]]\n\n# compile\nnet_input = theano.function(inputs=[x], \n updates=update, \n outputs=z)\n\n# execute\ndata = np.array([[1, 2, 3]], dtype=theano.config.floatX)\nfor i in range(5):\n print('z%d:' % i, net_input(data))",
"We can use the givens variable to insert values into the graph before compiling it. Using this approach we can reduce the number of transfers from RAM (via CPUs) to GPUs to speed up learning with shared variables. If we use inputs, a datasets is transferred from the CPU to the GPU multiple times, for example, if we iterate over a dataset multiple times (epochs) during gradient descent. Via givens, we can keep the dataset on the GPU if it fits (e.g., a mini-batch).",
"# initialize\ndata = np.array([[1, 2, 3]], \n dtype=theano.config.floatX)\nx = T.fmatrix(name='x')\nw = theano.shared(np.asarray([[0.0, 0.0, 0.0]], \n dtype=theano.config.floatX))\nz = x.dot(w.T)\nupdate = [[w, w + 1.0]]\n\n# compile\nnet_input = theano.function(inputs=[], \n updates=update, \n givens={x: data},\n outputs=z)\n\n# execute\nfor i in range(5):\n print('z:', net_input())",
"<br>\n<br>\nWrapping things up: A linear regression example\nCreating some training data.",
"import numpy as np\nX_train = np.asarray([[0.0], [1.0], [2.0], [3.0], [4.0],\n [5.0], [6.0], [7.0], [8.0], [9.0]], \n dtype=theano.config.floatX)\n\ny_train = np.asarray([1.0, 1.3, 3.1, 2.0, 5.0, \n 6.3, 6.6, 7.4, 8.0, 9.0], \n dtype=theano.config.floatX)",
"Implementing the training function.",
"import theano\nfrom theano import tensor as T\nimport numpy as np\n\ndef train_linreg(X_train, y_train, eta, epochs):\n\n costs = []\n # Initialize arrays\n eta0 = T.fscalar('eta0')\n y = T.fvector(name='y') \n X = T.fmatrix(name='X') \n w = theano.shared(np.zeros(\n shape=(X_train.shape[1] + 1),\n dtype=theano.config.floatX),\n name='w')\n \n # calculate cost\n net_input = T.dot(X, w[1:]) + w[0]\n errors = y - net_input\n cost = T.sum(T.pow(errors, 2)) \n\n # perform gradient update\n gradient = T.grad(cost, wrt=w)\n update = [(w, w - eta0 * gradient)]\n\n # compile model\n train = theano.function(inputs=[eta0],\n outputs=cost,\n updates=update,\n givens={X: X_train,\n y: y_train,}) \n \n for _ in range(epochs):\n costs.append(train(eta))\n \n return costs, w",
"Plotting the sum of squared errors cost vs epochs.",
"%matplotlib inline\nimport matplotlib.pyplot as plt\n\ncosts, w = train_linreg(X_train, y_train, eta=0.001, epochs=10)\n \nplt.plot(range(1, len(costs)+1), costs)\n\nplt.tight_layout()\nplt.xlabel('Epoch')\nplt.ylabel('Cost')\nplt.tight_layout()\n# plt.savefig('./figures/cost_convergence.png', dpi=300)\nplt.show()",
"Making predictions.",
"def predict_linreg(X, w):\n Xt = T.matrix(name='X')\n net_input = T.dot(Xt, w[1:]) + w[0]\n predict = theano.function(inputs=[Xt], givens={w: w}, outputs=net_input)\n return predict(X)\n\nplt.scatter(X_train, y_train, marker='s', s=50)\nplt.plot(range(X_train.shape[0]), \n predict_linreg(X_train, w), \n color='gray', \n marker='o', \n markersize=4, \n linewidth=3)\n\nplt.xlabel('x')\nplt.ylabel('y')\n\nplt.tight_layout()\n# plt.savefig('./figures/linreg.png', dpi=300)\nplt.show()",
"<br>\n<br>\nChoosing activation functions for feedforward neural networks\n...\nLogistic function recap\nThe logistic function, often just called \"sigmoid function\" is in fact a special case of a sigmoid function.\nNet input $z$:\n$$z = w_1x_{1} + \\dots + w_mx_{m} = \\sum_{j=1}^{m} x_{j}w_{j} \\ = \\mathbf{w}^T\\mathbf{x}$$\nLogistic activation function:\n$$\\phi_{logistic}(z) = \\frac{1}{1 + e^{-z}}$$\nOutput range: (0, 1)",
"# note that first element (X[0] = 1) to denote bias unit\n\nX = np.array([[1, 1.4, 1.5]])\nw = np.array([0.0, 0.2, 0.4])\n\ndef net_input(X, w):\n z = X.dot(w)\n return z\n\ndef logistic(z):\n return 1.0 / (1.0 + np.exp(-z))\n\ndef logistic_activation(X, w):\n z = net_input(X, w)\n return logistic(z)\n\nprint('P(y=1|x) = %.3f' % logistic_activation(X, w)[0])",
"Now, imagine a MLP perceptron with 3 hidden units + 1 bias unit in the hidden unit. The output layer consists of 3 output units.",
"# W : array, shape = [n_output_units, n_hidden_units+1]\n# Weight matrix for hidden layer -> output layer.\n# note that first column (A[:][0] = 1) are the bias units\nW = np.array([[1.1, 1.2, 1.3, 0.5],\n [0.1, 0.2, 0.4, 0.1],\n [0.2, 0.5, 2.1, 1.9]])\n\n# A : array, shape = [n_hidden+1, n_samples]\n# Activation of hidden layer.\n# note that first element (A[0][0] = 1) is for the bias units\n\nA = np.array([[1.0], \n [0.1], \n [0.3], \n [0.7]])\n\n# Z : array, shape = [n_output_units, n_samples]\n# Net input of output layer.\n\nZ = W.dot(A) \ny_probas = logistic(Z)\nprint('Probabilities:\\n', y_probas)\n\ny_class = np.argmax(Z, axis=0)\nprint('predicted class label: %d' % y_class[0])",
"<br>\n<br>\nEstimating probabilities in multi-class classification via the softmax function\nThe softmax function is a generalization of the logistic function and allows us to compute meaningful class-probalities in multi-class settings (multinomial logistic regression).\n$$P(y=j|z) =\\phi_{softmax}(z) = \\frac{e^{z_j}}{\\sum_{k=1}^K e^{z_k}}$$\nthe input to the function is the result of K distinct linear functions, and the predicted probability for the j'th class given a sample vector x is:\nOutput range: (0, 1)",
"def softmax(z): \n return np.exp(z) / np.sum(np.exp(z))\n\ndef softmax_activation(X, w):\n z = net_input(X, w)\n return softmax(z)\n\ny_probas = softmax(Z)\nprint('Probabilities:\\n', y_probas)\n\ny_probas.sum()\n\ny_class = np.argmax(Z, axis=0)\ny_class",
"<br>\n<br>\nBroadening the output spectrum using a hyperbolic tangent\nAnother special case of a sigmoid function, it can be interpreted as a rescaled version of the logistic function.\n$$\\phi_{tanh}(z) = \\frac{e^{z}-e^{-z}}{e^{z}+e^{-z}}$$\nOutput range: (-1, 1)",
"def tanh(z):\n e_p = np.exp(z) \n e_m = np.exp(-z)\n return (e_p - e_m) / (e_p + e_m) \n\nimport matplotlib.pyplot as plt\n%matplotlib inline\n\nz = np.arange(-5, 5, 0.005)\nlog_act = logistic(z)\ntanh_act = tanh(z)\n\n# alternatives:\n# from scipy.special import expit\n# log_act = expit(z)\n# tanh_act = np.tanh(z)\n\nplt.ylim([-1.5, 1.5])\nplt.xlabel('net input $z$')\nplt.ylabel('activation $\\phi(z)$')\nplt.axhline(1, color='black', linestyle='--')\nplt.axhline(0.5, color='black', linestyle='--')\nplt.axhline(0, color='black', linestyle='--')\nplt.axhline(-1, color='black', linestyle='--')\n\nplt.plot(z, tanh_act, \n linewidth=2, \n color='black', \n label='tanh')\nplt.plot(z, log_act, \n linewidth=2, \n color='lightgreen', \n label='logistic')\n\nplt.legend(loc='lower right')\nplt.tight_layout()\n# plt.savefig('./figures/activation.png', dpi=300)\nplt.show()\n\nImage(filename='./images/13_05.png', width=700) ",
"<br>\n<br>\nTraining neural networks efficiently using Keras\nLoading MNIST\n1) Download the 4 MNIST datasets from http://yann.lecun.com/exdb/mnist/\n\ntrain-images-idx3-ubyte.gz: training set images (9912422 bytes) \ntrain-labels-idx1-ubyte.gz: training set labels (28881 bytes) \nt10k-images-idx3-ubyte.gz: test set images (1648877 bytes) \nt10k-labels-idx1-ubyte.gz: test set labels (4542 bytes)\n\n2) Unzip those files\n3 Copy the unzipped files to a directory ./mnist",
"import os\nimport struct\nimport numpy as np\n \ndef load_mnist(path, kind='train'):\n \"\"\"Load MNIST data from `path`\"\"\"\n labels_path = os.path.join(path, \n '%s-labels-idx1-ubyte' \n % kind)\n images_path = os.path.join(path, \n '%s-images-idx3-ubyte' \n % kind)\n \n with open(labels_path, 'rb') as lbpath:\n magic, n = struct.unpack('>II', \n lbpath.read(8))\n labels = np.fromfile(lbpath, \n dtype=np.uint8)\n\n with open(images_path, 'rb') as imgpath:\n magic, num, rows, cols = struct.unpack(\">IIII\", \n imgpath.read(16))\n images = np.fromfile(imgpath, \n dtype=np.uint8).reshape(len(labels), 784)\n \n return images, labels\n\nX_train, y_train = load_mnist('mnist', kind='train')\nprint('Rows: %d, columns: %d' % (X_train.shape[0], X_train.shape[1]))\n\nX_test, y_test = load_mnist('mnist', kind='t10k')\nprint('Rows: %d, columns: %d' % (X_test.shape[0], X_test.shape[1]))",
"Multi-layer Perceptron in Keras\nOnce you have Theano installed, Keras can be installed via\npip install Keras\n\nIn order to run the following code via GPU, you can execute the Python script that was placed in this directory via\nTHEANO_FLAGS=mode=FAST_RUN,device=gpu,floatX=float32 python mnist_keras_mlp.py",
"import theano \n\ntheano.config.floatX = 'float32'\nX_train = X_train.astype(theano.config.floatX)\nX_test = X_test.astype(theano.config.floatX)",
"One-hot encoding of the class variable:",
"from keras.utils import np_utils\n\nprint('First 3 labels: ', y_train[:3])\n\ny_train_ohe = np_utils.to_categorical(y_train) \nprint('\\nFirst 3 labels (one-hot):\\n', y_train_ohe[:3])\n\nfrom keras.models import Sequential\nfrom keras.layers.core import Dense\nfrom keras.optimizers import SGD\n\nnp.random.seed(1) \n\nmodel = Sequential()\nmodel.add(Dense(input_dim=X_train.shape[1], \n output_dim=50, \n init='uniform', \n activation='tanh'))\n\nmodel.add(Dense(input_dim=50, \n output_dim=50, \n init='uniform', \n activation='tanh'))\n\nmodel.add(Dense(input_dim=50, \n output_dim=y_train_ohe.shape[1], \n init='uniform', \n activation='softmax'))\n\nsgd = SGD(lr=0.001, decay=1e-7, momentum=.9)\nmodel.compile(loss='categorical_crossentropy', optimizer=sgd)\n\nmodel.fit(X_train, y_train_ohe, \n nb_epoch=50, \n batch_size=300, \n verbose=1, \n validation_split=0.1, \n show_accuracy=True)\n\ny_train_pred = model.predict_classes(X_train, verbose=0)\nprint('First 3 predictions: ', y_train_pred[:3])\n\ntrain_acc = np.sum(y_train == y_train_pred, axis=0) / X_train.shape[0]\nprint('Training accuracy: %.2f%%' % (train_acc * 100))\n\ny_test_pred = model.predict_classes(X_test, verbose=0)\ntest_acc = np.sum(y_test == y_test_pred, axis=0) / X_test.shape[0]\nprint('Test accuracy: %.2f%%' % (test_acc * 100))",
"<br>\n<br>\nSummary\n..."
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
tanghaibao/goatools | notebooks/goea_nbt3102.ipynb | bsd-2-clause | [
"Run a Gene Ontology Enrichment Analysis (GOEA)\nWe use data from a 2014 Nature paper: \nComputational analysis of cell-to-cell heterogeneity\nin single-cell RNA-sequencing data reveals hidden \nsubpopulations of cells\n\nNote: you must have the Python package, xlrd, installed to run this example. \nNote: To create plots, you must have:\n * Python packages: pyparsing, pydot\n * Graphviz loaded and your PATH environmental variable pointing to the Graphviz bin directory.\n1. Download Ontologies and Associations\n1a. Download Ontologies, if necessary",
"# Get http://geneontology.org/ontology/go-basic.obo\nfrom goatools.base import download_go_basic_obo\nobo_fname = download_go_basic_obo()",
"1b. Download Associations, if necessary\nThe NCBI gene2go file contains numerous species. We will select mouse shortly.",
"# Get ftp://ftp.ncbi.nlm.nih.gov/gene/DATA/gene2go.gz\nfrom goatools.base import download_ncbi_associations\nfin_gene2go = download_ncbi_associations()",
"2. Load Ontologies, Associations and Background gene set\n2a. Load Ontologies",
"from goatools.obo_parser import GODag\n\nobodag = GODag(\"go-basic.obo\")",
"2b. Load Associations",
"from __future__ import print_function\nfrom goatools.anno.genetogo_reader import Gene2GoReader\n\n# Read NCBI's gene2go. Store annotations in a list of namedtuples\nobjanno = Gene2GoReader(fin_gene2go, taxids=[10090])\n\n# Get namespace2association where:\n# namespace is:\n# BP: biological_process \n# MF: molecular_function\n# CC: cellular_component\n# assocation is a dict:\n# key: NCBI GeneID\n# value: A set of GO IDs associated with that gene\nns2assoc = objanno.get_ns2assc()\n\nfor nspc, id2gos in ns2assoc.items():\n print(\"{NS} {N:,} annotated mouse genes\".format(NS=nspc, N=len(id2gos)))",
"2c. Load Background gene set\nIn this example, the background is all mouse protein-codinge genes. \nFollow the instructions in the background_genes_ncbi notebook to download a set of background population genes from NCBI.",
"from genes_ncbi_10090_proteincoding import GENEID2NT as GeneID2nt_mus\nprint(len(GeneID2nt_mus))",
"3. Initialize a GOEA object\nThe GOEA object holds the Ontologies, Associations, and background. \nNumerous studies can then be run withough needing to re-load the above items. \nIn this case, we only run one GOEA.",
"from goatools.goea.go_enrichment_ns import GOEnrichmentStudyNS\n\ngoeaobj = GOEnrichmentStudyNS(\n GeneID2nt_mus.keys(), # List of mouse protein-coding genes\n ns2assoc, # geneid/GO associations\n obodag, # Ontologies\n propagate_counts = False,\n alpha = 0.05, # default significance cut-off\n methods = ['fdr_bh']) # defult multipletest correction method\n",
"4. Read study genes\n~400 genes from the Nature paper supplemental table 4",
"# Data will be stored in this variable\nimport os\ngeneid2symbol = {}\n# Get xlsx filename where data is stored\nROOT = os.path.dirname(os.getcwd()) # go up 1 level from current working directory\ndin_xlsx = os.path.join(ROOT, \"goatools/test_data/nbt_3102/nbt.3102-S4_GeneIDs.xlsx\")\n# Read data\nif os.path.isfile(din_xlsx): \n import xlrd\n book = xlrd.open_workbook(din_xlsx)\n pg = book.sheet_by_index(0)\n for r in range(pg.nrows):\n symbol, geneid, pval = [pg.cell_value(r, c) for c in range(pg.ncols)]\n if geneid:\n geneid2symbol[int(geneid)] = symbol\n print('{N} genes READ: {XLSX}'.format(N=len(geneid2symbol), XLSX=din_xlsx))\nelse:\n raise RuntimeError('FILE NOT FOUND: {XLSX}'.format(XLSX=din_xlsx))",
"5. Run Gene Ontology Enrichment Analysis (GOEA)\nYou may choose to keep all results or just the significant results. In this example, we choose to keep only the significant results.",
"# 'p_' means \"pvalue\". 'fdr_bh' is the multipletest method we are currently using.\ngeneids_study = geneid2symbol.keys()\ngoea_results_all = goeaobj.run_study(geneids_study)\ngoea_results_sig = [r for r in goea_results_all if r.p_fdr_bh < 0.05]",
"5a. Quietly Run Gene Ontology Enrichment Analysis (GOEA)\nGOEAs can be run quietly using prt=None:\ngoea_results = goeaobj.run_study(geneids_study, prt=None)\nNo output is printed if prt=None:",
"goea_quiet_all = goeaobj.run_study(geneids_study, prt=None)\ngoea_quiet_sig = [r for r in goea_results_all if r.p_fdr_bh < 0.05]",
"Print customized results summaries\nExample 1: Significant v All GOEA results",
"print('{N} of {M:,} results were significant'.format(\n N=len(goea_quiet_sig),\n M=len(goea_quiet_all)))",
"Example 2: Enriched v Purified GOEA results",
"print('Significant results: {E} enriched, {P} purified'.format(\n E=sum(1 for r in goea_quiet_sig if r.enrichment=='e'),\n P=sum(1 for r in goea_quiet_sig if r.enrichment=='p')))",
"Example 3: Significant GOEA results by namespace",
"import collections as cx\nctr = cx.Counter([r.NS for r in goea_quiet_sig])\nprint('Significant results[{TOTAL}] = {BP} BP + {MF} MF + {CC} CC'.format(\n TOTAL=len(goea_quiet_sig),\n BP=ctr['BP'], # biological_process\n MF=ctr['MF'], # molecular_function\n CC=ctr['CC'])) # cellular_component",
"6. Write results to an Excel file and to a text file",
"goeaobj.wr_xlsx(\"nbt3102.xlsx\", goea_results_sig)\ngoeaobj.wr_txt(\"nbt3102.txt\", goea_results_sig)",
"7. Plot all significant GO terms\nPlotting all significant GO terms produces a messy spaghetti plot. Such a plot can be useful sometimes because you can open it and zoom and scroll around. But sometimes it is just too messy to be of use.\nThe \"{NS}\" in \"nbt3102_{NS}.png\" indicates that you will see three plots, one for \"biological_process\"(BP), \"molecular_function\"(MF), and \"cellular_component\"(CC)",
"from goatools.godag_plot import plot_gos, plot_results, plot_goid2goobj\n\nplot_results(\"nbt3102_{NS}.png\", goea_results_sig)",
"7a. These plots are likely to messy\nThe Cellular Component plot is the smallest plot...\n\n7b. So make a smaller sub-plot\nThis plot contains GOEA results:\n * GO terms colored by P-value:\n * pval < 0.005 (light red)\n * pval < 0.01 (light orange)\n * pval < 0.05 (yellow)\n * pval > 0.05 (grey) Study terms that are not statistically significant\n * GO terms with study gene counts printed. e.g., \"32 genes\"",
"# Plot subset starting from these significant GO terms\ngoid_subset = [\n 'GO:0003723', # MF D04 RNA binding (32 genes)\n 'GO:0044822', # MF D05 poly(A) RNA binding (86 genes)\n 'GO:0003729', # MF D06 mRNA binding (11 genes)\n 'GO:0019843', # MF D05 rRNA binding (6 genes)\n 'GO:0003746', # MF D06 translation elongation factor activity (5 genes)\n]\nplot_gos(\"nbt3102_MF_RNA_genecnt.png\", \n goid_subset, # Source GO ids\n obodag, \n goea_results=goea_results_all) # Use pvals for coloring\n",
"7c. Add study gene Symbols to plot\ne.g., 11 genes: Calr, Eef1a1, Pabpc1",
"plot_gos(\"nbt3102_MF_RNA_Symbols.png\", \n goid_subset, # Source GO ids\n obodag,\n goea_results=goea_results_all, # use pvals for coloring\n # We can further configure the plot...\n id2symbol=geneid2symbol, # Print study gene Symbols, not Entrez GeneIDs\n study_items=6, # Only only 6 gene Symbols max on GO terms\n items_p_line=3, # Print 3 genes per line\n )",
"Copyright (C) 2016-present, DV Klopfenstein, H Tang. All rights reserved."
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
mlperf/training_results_v0.5 | v0.5.0/google/research_v3.32/gnmt-tpuv3-32/code/gnmt/model/t2t/tensor2tensor/visualization/TransformerVisualization.ipynb | apache-2.0 | [
"#@title\n# Copyright 2018 Google LLC.\n\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n\n# https://www.apache.org/licenses/LICENSE-2.0\n\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.",
"Create Your Own Visualizations!\nInstructions:\n1. Install tensor2tensor and train up a Transformer model following the instruction in the repository https://github.com/tensorflow/tensor2tensor.\n2. Update cell 3 to point to your checkpoint, it is currently set up to read from the default checkpoint location that would be created from following the instructions above.\n3. If you used custom hyper parameters then update cell 4.\n4. Run the notebook!",
"import os\n\nimport tensorflow as tf\n\nfrom tensor2tensor import problems\nfrom tensor2tensor.bin import t2t_decoder # To register the hparams set\nfrom tensor2tensor.utils import registry\nfrom tensor2tensor.utils import trainer_lib\nfrom tensor2tensor.visualization import attention\nfrom tensor2tensor.visualization import visualization\n\n%%javascript\nrequire.config({\n paths: {\n d3: '//cdnjs.cloudflare.com/ajax/libs/d3/3.4.8/d3.min'\n }\n});",
"HParams",
"# PUT THE MODEL YOU WANT TO LOAD HERE!\nCHECKPOINT = os.path.expanduser('~/t2t_train/translate_ende_wmt32k/transformer-transformer_base_single_gpu')\n\n# HParams\nproblem_name = 'translate_ende_wmt32k'\ndata_dir = os.path.expanduser('~/t2t_data/')\nmodel_name = \"transformer\"\nhparams_set = \"transformer_base_single_gpu\"",
"Visualization",
"visualizer = visualization.AttentionVisualizer(hparams_set, model_name, data_dir, problem_name, beam_size=1)\n\ntf.Variable(0, dtype=tf.int64, trainable=False, name='global_step')\n\nsess = tf.train.MonitoredTrainingSession(\n checkpoint_dir=CHECKPOINT,\n save_summaries_secs=0,\n)\n\ninput_sentence = \"I have two dogs.\"\noutput_string, inp_text, out_text, att_mats = visualizer.get_vis_data_from_string(sess, input_sentence)\nprint(output_string)",
"Interpreting the Visualizations\n\nThe layers drop down allow you to view the different Transformer layers, 0-indexed of course.\nTip: The first layer, last layer and 2nd to last layer are usually the most interpretable.\nThe attention dropdown allows you to select different pairs of encoder-decoder attentions:\nAll: Shows all types of attentions together. NOTE: There is no relation between heads of the same color - between the decoder self attention and decoder-encoder attention since they do not share parameters.\nInput - Input: Shows only the encoder self-attention.\nInput - Output: Shows the decoder’s attention on the encoder. NOTE: Every decoder layer attends to the final layer of encoder so the visualization will show the attention on the final encoder layer regardless of what layer is selected in the drop down.\nOutput - Output: Shows only the decoder self-attention. NOTE: The visualization might be slightly misleading in the first layer since the text shown is the target of the decoder, the input to the decoder at layer 0 is this text with a GO symbol prepreded.\nThe colored squares represent the different attention heads.\nYou can hide or show a given head by clicking on it’s color.\nDouble clicking a color will hide all other colors, double clicking on a color when it’s the only head showing will show all the heads again.\nYou can hover over a word to see the individual attention weights for just that position.\nHovering over the words on the left will show what that position attended to.\nHovering over the words on the right will show what positions attended to it.",
"attention.show(inp_text, out_text, *att_mats)"
] | [
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
cliburn/sta-663-2017 | homework/07_Linear_Algebra_Applications_Solutions_Explanation.ipynb | mit | [
"%matplotlib inline\nimport numpy as np\n\nnp.set_printoptions(precision=2, suppress=True)",
"Exercise 1",
"def cosine_dist(u, v, axis):\n \"\"\"Returns cosine of angle betwwen two vectors.\"\"\"\n return 1 - (u*v).sum(axis)/(np.sqrt((u**2).sum(axis))*np.sqrt((v**2).sum(axis)))\n\nu = np.array([1,2,3])\nv = np.array([4,5,6])",
"Note 1: We write the dot product as the sum of element-wise products. This allows us to generalize when u, v are matrices rather than vectors. The norms in the denominator are calculated in the same way.",
"u @ v\n\n(u * v).sum()",
"Note 2: Broadcasting",
"M = np.array([[1.,2,3],[4,5,6]])\nM.shape",
"Note 2A: Broadcasting for M as collection of row vectors. How we broadcast and which axis to broadcast over are determined by the need to end up with a 2x2 matrix.",
"M[None,:,:].shape, M[:,None,:].shape\n\n(M[None,:,:] + M[:,None,:]).shape\n\ncosine_dist(M[None,:,:], M[:,None,:], 2)",
"Note 2B: Broadcasting for M as a collection of column vectors. How we broadcast and which axis to broadcast over are determined by the need to end up with a 3x3 matrix.",
"M[:,None,:].shape, M[:,:,None].shape\n\n(M[:,None,:] + M[:,:,None]).shape\n\ncosine_dist(M[:,None,:], M[:,:,None], 0)",
"Exeercise 2\nNote 1: Using collections.Counter and pandas.DataFrame reduces the amount of code to write.\nExercise 3",
"M = np.array([[1, 0, 0, 1, 0, 0, 0, 0, 0],\n [1, 0, 1, 0, 0, 0, 0, 0, 0],\n [1, 1, 0, 0, 0, 0, 0, 0, 0],\n [0, 1, 1, 0, 1, 0, 0, 0, 0],\n [0, 1, 1, 2, 0, 0, 0, 0, 0],\n [0, 1, 0, 0, 1, 0, 0, 0, 0],\n [0, 1, 0, 0, 1, 0, 0, 0, 0],\n [0, 0, 1, 1, 0, 0, 0, 0, 0],\n [0, 1, 0, 0, 0, 0, 0, 0, 1],\n [0, 0, 0, 0, 0, 1, 1, 1, 0],\n [0, 0, 0, 0, 0, 0, 1, 1, 1],\n [0, 0, 0, 0, 0, 0, 0, 1, 1]])\n\nM.shape\n\nU, s, V = np.linalg.svd(M, full_matrices=False)\n\nU.shape, s.shape, V.shape\n\ns[2:] = 0\nM2 = U @ np.diag(s) @ V\n\nfrom scipy.stats import spearmanr\n\nr2 = spearmanr(M2)[0]\n\nr2\n\nr2[np.tril_indices_from(r2[:5, :5], -1)]\n\nr2[np.tril_indices_from(r2[5:, 5:], -1)]",
"Exercise 4\n\nPart 2 is similar to previous questions\nPart 3 is Googling\nPart 4: defining the query vector\n\nFollow explanation here\n```python\nk = 10\nT, s, D = sparsesvd(csc_matrix(df), k=100)\ndoc = {'mystery': open('mystery.txt').read()}\nterms = tf_idf(doc)\nquery_terms = df.join(terms).fillna(0)['mystery']\nq = query_terms.T.dot(T.T.dot(np.diag(1.0/s)))\nranked_docs = df.columns[np.argsort(cosine_dist(q, x))][::-1]\n```"
] | [
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
gatmeh/Udacity-deep-learning | tv-script-generation/dlnd_tv_script_generation.ipynb | mit | [
"TV Script Generation\nIn this project, you'll generate your own Simpsons TV scripts using RNNs. You'll be using part of the Simpsons dataset of scripts from 27 seasons. The Neural Network you'll build will generate a new TV script for a scene at Moe's Tavern.\nGet the Data\nThe data is already provided for you. You'll be using a subset of the original dataset. It consists of only the scenes in Moe's Tavern. This doesn't include other versions of the tavern, like \"Moe's Cavern\", \"Flaming Moe's\", \"Uncle Moe's Family Feed-Bag\", etc..",
"\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL\n\"\"\"\nimport helper\n\ndata_dir = './data/simpsons/moes_tavern_lines.txt'\ntext = helper.load_data(data_dir)\n# Ignore notice, since we don't use it for analysing the data\ntext = text[81:]",
"Explore the Data\nPlay around with view_sentence_range to view different parts of the data.",
"view_sentence_range = (0, 10)\n\n\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL\n\"\"\"\nimport numpy as np\n\nprint('Dataset Stats')\nprint('Roughly the number of unique words: {}'.format(len({word: None for word in text.split()})))\nscenes = text.split('\\n\\n')\nprint('Number of scenes: {}'.format(len(scenes)))\nsentence_count_scene = [scene.count('\\n') for scene in scenes]\nprint('Average number of sentences in each scene: {}'.format(np.average(sentence_count_scene)))\n\nsentences = [sentence for scene in scenes for sentence in scene.split('\\n')]\nprint('Number of lines: {}'.format(len(sentences)))\nword_count_sentence = [len(sentence.split()) for sentence in sentences]\nprint('Average number of words in each line: {}'.format(np.average(word_count_sentence)))\n\nprint()\nprint('The sentences {} to {}:'.format(*view_sentence_range))\nprint('\\n'.join(text.split('\\n')[view_sentence_range[0]:view_sentence_range[1]]))",
"Implement Preprocessing Functions\nThe first thing to do to any dataset is preprocessing. Implement the following preprocessing functions below:\n- Lookup Table\n- Tokenize Punctuation\nLookup Table\nTo create a word embedding, you first need to transform the words to ids. In this function, create two dictionaries:\n- Dictionary to go from the words to an id, we'll call vocab_to_int\n- Dictionary to go from the id to word, we'll call int_to_vocab\nReturn these dictionaries in the following tuple (vocab_to_int, int_to_vocab)",
"import numpy as np\nimport problem_unittests as tests\n\ndef create_lookup_tables(text):\n \"\"\"\n Create lookup tables for vocabulary\n :param text: The text of tv scripts split into words\n :return: A tuple of dicts (vocab_to_int, int_to_vocab)\n \"\"\"\n # TODO: Implement Function\n return None, None\n\n\n\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE\n\"\"\"\ntests.test_create_lookup_tables(create_lookup_tables)",
"Tokenize Punctuation\nWe'll be splitting the script into a word array using spaces as delimiters. However, punctuations like periods and exclamation marks make it hard for the neural network to distinguish between the word \"bye\" and \"bye!\".\nImplement the function token_lookup to return a dict that will be used to tokenize symbols like \"!\" into \"||Exclamation_Mark||\". Create a dictionary for the following symbols where the symbol is the key and value is the token:\n- Period ( . )\n- Comma ( , )\n- Quotation Mark ( \" )\n- Semicolon ( ; )\n- Exclamation mark ( ! )\n- Question mark ( ? )\n- Left Parentheses ( ( )\n- Right Parentheses ( ) )\n- Dash ( -- )\n- Return ( \\n )\nThis dictionary will be used to token the symbols and add the delimiter (space) around it. This separates the symbols as it's own word, making it easier for the neural network to predict on the next word. Make sure you don't use a token that could be confused as a word. Instead of using the token \"dash\", try using something like \"||dash||\".",
"def token_lookup():\n \"\"\"\n Generate a dict to turn punctuation into a token.\n :return: Tokenize dictionary where the key is the punctuation and the value is the token\n \"\"\"\n # TODO: Implement Function\n return None\n\n\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE\n\"\"\"\ntests.test_tokenize(token_lookup)",
"Preprocess all the data and save it\nRunning the code cell below will preprocess all the data and save it to file.",
"\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL\n\"\"\"\n# Preprocess Training, Validation, and Testing Data\nhelper.preprocess_and_save_data(data_dir, token_lookup, create_lookup_tables)",
"Check Point\nThis is your first checkpoint. If you ever decide to come back to this notebook or have to restart the notebook, you can start from here. The preprocessed data has been saved to disk.",
"\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL\n\"\"\"\nimport helper\nimport numpy as np\nimport problem_unittests as tests\n\nint_text, vocab_to_int, int_to_vocab, token_dict = helper.load_preprocess()",
"Build the Neural Network\nYou'll build the components necessary to build a RNN by implementing the following functions below:\n- get_inputs\n- get_init_cell\n- get_embed\n- build_rnn\n- build_nn\n- get_batches\nCheck the Version of TensorFlow and Access to GPU",
"\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL\n\"\"\"\nfrom distutils.version import LooseVersion\nimport warnings\nimport tensorflow as tf\n\n# Check TensorFlow Version\nassert LooseVersion(tf.__version__) >= LooseVersion('1.0'), 'Please use TensorFlow version 1.0 or newer'\nprint('TensorFlow Version: {}'.format(tf.__version__))\n\n# Check for a GPU\nif not tf.test.gpu_device_name():\n warnings.warn('No GPU found. Please use a GPU to train your neural network.')\nelse:\n print('Default GPU Device: {}'.format(tf.test.gpu_device_name()))",
"Input\nImplement the get_inputs() function to create TF Placeholders for the Neural Network. It should create the following placeholders:\n- Input text placeholder named \"input\" using the TF Placeholder name parameter.\n- Targets placeholder\n- Learning Rate placeholder\nReturn the placeholders in the following tuple (Input, Targets, LearningRate)",
"def get_inputs():\n \"\"\"\n Create TF Placeholders for input, targets, and learning rate.\n :return: Tuple (input, targets, learning rate)\n \"\"\"\n # TODO: Implement Function\n return None, None, None\n\n\n\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE\n\"\"\"\ntests.test_get_inputs(get_inputs)",
"Build RNN Cell and Initialize\nStack one or more BasicLSTMCells in a MultiRNNCell.\n- The Rnn size should be set using rnn_size\n- Initalize Cell State using the MultiRNNCell's zero_state() function\n - Apply the name \"initial_state\" to the initial state using tf.identity()\nReturn the cell and initial state in the following tuple (Cell, InitialState)",
"def get_init_cell(batch_size, rnn_size):\n \"\"\"\n Create an RNN Cell and initialize it.\n :param batch_size: Size of batches\n :param rnn_size: Size of RNNs\n :return: Tuple (cell, initialize state)\n \"\"\"\n # TODO: Implement Function\n return None, None\n\n\n\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE\n\"\"\"\ntests.test_get_init_cell(get_init_cell)",
"Word Embedding\nApply embedding to input_data using TensorFlow. Return the embedded sequence.",
"def get_embed(input_data, vocab_size, embed_dim):\n \"\"\"\n Create embedding for <input_data>.\n :param input_data: TF placeholder for text input.\n :param vocab_size: Number of words in vocabulary.\n :param embed_dim: Number of embedding dimensions\n :return: Embedded input.\n \"\"\"\n # TODO: Implement Function\n return None\n\n\n\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE\n\"\"\"\ntests.test_get_embed(get_embed)",
"Build RNN\nYou created a RNN Cell in the get_init_cell() function. Time to use the cell to create a RNN.\n- Build the RNN using the tf.nn.dynamic_rnn()\n - Apply the name \"final_state\" to the final state using tf.identity()\nReturn the outputs and final_state state in the following tuple (Outputs, FinalState)",
"def build_rnn(cell, inputs):\n \"\"\"\n Create a RNN using a RNN Cell\n :param cell: RNN Cell\n :param inputs: Input text data\n :return: Tuple (Outputs, Final State)\n \"\"\"\n # TODO: Implement Function\n return None, None\n\n\n\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE\n\"\"\"\ntests.test_build_rnn(build_rnn)",
"Build the Neural Network\nApply the functions you implemented above to:\n- Apply embedding to input_data using your get_embed(input_data, vocab_size, embed_dim) function.\n- Build RNN using cell and your build_rnn(cell, inputs) function.\n- Apply a fully connected layer with a linear activation and vocab_size as the number of outputs.\nReturn the logits and final state in the following tuple (Logits, FinalState)",
"def build_nn(cell, rnn_size, input_data, vocab_size, embed_dim):\n \"\"\"\n Build part of the neural network\n :param cell: RNN cell\n :param rnn_size: Size of rnns\n :param input_data: Input data\n :param vocab_size: Vocabulary size\n :param embed_dim: Number of embedding dimensions\n :return: Tuple (Logits, FinalState)\n \"\"\"\n # TODO: Implement Function\n return None, None\n\n\n\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE\n\"\"\"\ntests.test_build_nn(build_nn)",
"Batches\nImplement get_batches to create batches of input and targets using int_text. The batches should be a Numpy array with the shape (number of batches, 2, batch size, sequence length). Each batch contains two elements:\n- The first element is a single batch of input with the shape [batch size, sequence length]\n- The second element is a single batch of targets with the shape [batch size, sequence length]\nIf you can't fill the last batch with enough data, drop the last batch.\nFor exmple, get_batches([1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20], 3, 2) would return a Numpy array of the following:\n```\n[\n # First Batch\n [\n # Batch of Input\n [[ 1 2], [ 7 8], [13 14]]\n # Batch of targets\n [[ 2 3], [ 8 9], [14 15]]\n ]\n# Second Batch\n [\n # Batch of Input\n [[ 3 4], [ 9 10], [15 16]]\n # Batch of targets\n [[ 4 5], [10 11], [16 17]]\n ]\n# Third Batch\n [\n # Batch of Input\n [[ 5 6], [11 12], [17 18]]\n # Batch of targets\n [[ 6 7], [12 13], [18 1]]\n ]\n]\n```\nNotice that the last target value in the last batch is the first input value of the first batch. In this case, 1. This is a common technique used when creating sequence batches, although it is rather unintuitive.",
"def get_batches(int_text, batch_size, seq_length):\n \"\"\"\n Return batches of input and target\n :param int_text: Text with the words replaced by their ids\n :param batch_size: The size of batch\n :param seq_length: The length of sequence\n :return: Batches as a Numpy array\n \"\"\"\n # TODO: Implement Function\n return None\n\n\n\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE\n\"\"\"\ntests.test_get_batches(get_batches)",
"Neural Network Training\nHyperparameters\nTune the following parameters:\n\nSet num_epochs to the number of epochs.\nSet batch_size to the batch size.\nSet rnn_size to the size of the RNNs.\nSet embed_dim to the size of the embedding.\nSet seq_length to the length of sequence.\nSet learning_rate to the learning rate.\nSet show_every_n_batches to the number of batches the neural network should print progress.",
"# Number of Epochs\nnum_epochs = None\n# Batch Size\nbatch_size = None\n# RNN Size\nrnn_size = None\n# Embedding Dimension Size\nembed_dim = None\n# Sequence Length\nseq_length = None\n# Learning Rate\nlearning_rate = None\n# Show stats for every n number of batches\nshow_every_n_batches = None\n\n\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE\n\"\"\"\nsave_dir = './save'",
"Build the Graph\nBuild the graph using the neural network you implemented.",
"\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL\n\"\"\"\nfrom tensorflow.contrib import seq2seq\n\ntrain_graph = tf.Graph()\nwith train_graph.as_default():\n vocab_size = len(int_to_vocab)\n input_text, targets, lr = get_inputs()\n input_data_shape = tf.shape(input_text)\n cell, initial_state = get_init_cell(input_data_shape[0], rnn_size)\n logits, final_state = build_nn(cell, rnn_size, input_text, vocab_size, embed_dim)\n\n # Probabilities for generating words\n probs = tf.nn.softmax(logits, name='probs')\n\n # Loss function\n cost = seq2seq.sequence_loss(\n logits,\n targets,\n tf.ones([input_data_shape[0], input_data_shape[1]]))\n\n # Optimizer\n optimizer = tf.train.AdamOptimizer(lr)\n\n # Gradient Clipping\n gradients = optimizer.compute_gradients(cost)\n capped_gradients = [(tf.clip_by_value(grad, -1., 1.), var) for grad, var in gradients if grad is not None]\n train_op = optimizer.apply_gradients(capped_gradients)",
"Train\nTrain the neural network on the preprocessed data. If you have a hard time getting a good loss, check the forums to see if anyone is having the same problem.",
"\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL\n\"\"\"\nbatches = get_batches(int_text, batch_size, seq_length)\n\nwith tf.Session(graph=train_graph) as sess:\n sess.run(tf.global_variables_initializer())\n\n for epoch_i in range(num_epochs):\n state = sess.run(initial_state, {input_text: batches[0][0]})\n\n for batch_i, (x, y) in enumerate(batches):\n feed = {\n input_text: x,\n targets: y,\n initial_state: state,\n lr: learning_rate}\n train_loss, state, _ = sess.run([cost, final_state, train_op], feed)\n\n # Show every <show_every_n_batches> batches\n if (epoch_i * len(batches) + batch_i) % show_every_n_batches == 0:\n print('Epoch {:>3} Batch {:>4}/{} train_loss = {:.3f}'.format(\n epoch_i,\n batch_i,\n len(batches),\n train_loss))\n\n # Save Model\n saver = tf.train.Saver()\n saver.save(sess, save_dir)\n print('Model Trained and Saved')",
"Save Parameters\nSave seq_length and save_dir for generating a new TV script.",
"\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL\n\"\"\"\n# Save parameters for checkpoint\nhelper.save_params((seq_length, save_dir))",
"Checkpoint",
"\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL\n\"\"\"\nimport tensorflow as tf\nimport numpy as np\nimport helper\nimport problem_unittests as tests\n\n_, vocab_to_int, int_to_vocab, token_dict = helper.load_preprocess()\nseq_length, load_dir = helper.load_params()",
"Implement Generate Functions\nGet Tensors\nGet tensors from loaded_graph using the function get_tensor_by_name(). Get the tensors using the following names:\n- \"input:0\"\n- \"initial_state:0\"\n- \"final_state:0\"\n- \"probs:0\"\nReturn the tensors in the following tuple (InputTensor, InitialStateTensor, FinalStateTensor, ProbsTensor)",
"def get_tensors(loaded_graph):\n \"\"\"\n Get input, initial state, final state, and probabilities tensor from <loaded_graph>\n :param loaded_graph: TensorFlow graph loaded from file\n :return: Tuple (InputTensor, InitialStateTensor, FinalStateTensor, ProbsTensor)\n \"\"\"\n # TODO: Implement Function\n return None, None, None, None\n\n\n\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE\n\"\"\"\ntests.test_get_tensors(get_tensors)",
"Choose Word\nImplement the pick_word() function to select the next word using probabilities.",
"def pick_word(probabilities, int_to_vocab):\n \"\"\"\n Pick the next word in the generated text\n :param probabilities: Probabilites of the next word\n :param int_to_vocab: Dictionary of word ids as the keys and words as the values\n :return: String of the predicted word\n \"\"\"\n # TODO: Implement Function\n return None\n\n\n\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE\n\"\"\"\ntests.test_pick_word(pick_word)",
"Generate TV Script\nThis will generate the TV script for you. Set gen_length to the length of TV script you want to generate.",
"gen_length = 200\n# homer_simpson, moe_szyslak, or Barney_Gumble\nprime_word = 'moe_szyslak'\n\n\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE\n\"\"\"\nloaded_graph = tf.Graph()\nwith tf.Session(graph=loaded_graph) as sess:\n # Load saved model\n loader = tf.train.import_meta_graph(load_dir + '.meta')\n loader.restore(sess, load_dir)\n\n # Get Tensors from loaded model\n input_text, initial_state, final_state, probs = get_tensors(loaded_graph)\n\n # Sentences generation setup\n gen_sentences = [prime_word + ':']\n prev_state = sess.run(initial_state, {input_text: np.array([[1]])})\n\n # Generate sentences\n for n in range(gen_length):\n # Dynamic Input\n dyn_input = [[vocab_to_int[word] for word in gen_sentences[-seq_length:]]]\n dyn_seq_length = len(dyn_input[0])\n\n # Get Prediction\n probabilities, prev_state = sess.run(\n [probs, final_state],\n {input_text: dyn_input, initial_state: prev_state})\n \n pred_word = pick_word(probabilities[dyn_seq_length-1], int_to_vocab)\n\n gen_sentences.append(pred_word)\n \n # Remove tokens\n tv_script = ' '.join(gen_sentences)\n for key, token in token_dict.items():\n ending = ' ' if key in ['\\n', '(', '\"'] else ''\n tv_script = tv_script.replace(' ' + token.lower(), key)\n tv_script = tv_script.replace('\\n ', '\\n')\n tv_script = tv_script.replace('( ', '(')\n \n print(tv_script)",
"The TV Script is Nonsensical\nIt's ok if the TV script doesn't make any sense. We trained on less than a megabyte of text. In order to get good results, you'll have to use a smaller vocabulary or get more data. Luckly there's more data! As we mentioned in the begging of this project, this is a subset of another dataset. We didn't have you train on all the data, because that would take too long. However, you are free to train your neural network on all the data. After you complete the project, of course.\nSubmitting This Project\nWhen submitting this project, make sure to run all the cells before saving the notebook. Save the notebook file as \"dlnd_tv_script_generation.ipynb\" and save it as a HTML file under \"File\" -> \"Download as\". Include the \"helper.py\" and \"problem_unittests.py\" files in your submission."
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
rsignell-usgs/notebook | ERDDAP/GliderDAC_Search.ipynb | mit | [
"Search GliderDAC for Pioneer Glider Data\nUse ERDDAP's RESTful advanced search to try to find OOI Pioneer glider water temperatures from the IOOS GliderDAC. Use case from Stace Beaulieu ([email protected])",
"import pandas as pd",
"First try just searching for \"glider\"",
"url = 'https://data.ioos.us/gliders/erddap/search/advanced.csv?page=1&itemsPerPage=1000&searchFor={}'.format('glider')\ndft = pd.read_csv(url, usecols=['Title', 'Summary', 'Institution','Dataset ID']) \ndft.head()",
"Now search for all temperature data in specified bounding box and temporal extent",
"start = '2000-01-01T00:00:00Z'\nstop = '2017-02-22T00:00:00Z'\nlat_min = 39.\nlat_max = 41.5\nlon_min = -72.\nlon_max = -69.\nstandard_name = 'sea_water_temperature'\nendpoint = 'https://data.ioos.us/gliders/erddap/search/advanced.csv'\n\nimport pandas as pd\n\nbase = (\n '{}'\n '?page=1'\n '&itemsPerPage=1000'\n '&searchFor='\n '&protocol=(ANY)'\n '&cdm_data_type=(ANY)'\n '&institution=(ANY)'\n '&ioos_category=(ANY)'\n '&keywords=(ANY)'\n '&long_name=(ANY)'\n '&standard_name={}'\n '&variableName=(ANY)'\n '&maxLat={}'\n '&minLon={}'\n '&maxLon={}'\n '&minLat={}'\n '&minTime={}'\n '&maxTime={}').format\n\nurl = base(\n endpoint,\n standard_name,\n lat_max,\n lon_min,\n lon_max,\n lat_min,\n start,\n stop\n)\n\nprint(url)\n\ndft = pd.read_csv(url, usecols=['Title', 'Summary', 'Institution', 'Dataset ID']) \nprint('Glider Datasets Found = {}'.format(len(dft)))\ndft",
"Define a function that returns a Pandas DataFrame based on the dataset ID. The ERDDAP request variables (e.g. pressure, temperature) are hard-coded here, so this routine should be modified for other ERDDAP endpoints or datasets",
"def download_df(glider_id):\n from pandas import DataFrame, read_csv\n# from urllib.error import HTTPError\n uri = ('https://data.ioos.us/gliders/erddap/tabledap/{}.csv'\n '?trajectory,wmo_id,time,latitude,longitude,depth,pressure,temperature'\n '&time>={}'\n '&time<={}'\n '&latitude>={}'\n '&latitude<={}'\n '&longitude>={}'\n '&longitude<={}').format\n url = uri(glider_id,start,stop,lat_min,lat_max,lon_min,lon_max)\n print(url)\n # Not sure if returning an empty df is the best idea.\n try:\n df = read_csv(url, index_col='time', parse_dates=True, skiprows=[1])\n except:\n df = pd.DataFrame()\n return df\n\n# concatenate the dataframes for each dataset into one single dataframe \ndf = pd.concat(list(map(download_df, dft['Dataset ID'].values)))\n\nprint('Total Data Values Found: {}'.format(len(df)))\n\ndf.head()\n\ndf.tail()",
"plot up the trajectories with Cartopy (Basemap replacement)",
"%matplotlib inline\nimport matplotlib.pyplot as plt\nimport cartopy.crs as ccrs\nfrom cartopy.feature import NaturalEarthFeature\nbathym_1000 = NaturalEarthFeature(name='bathymetry_J_1000',\n scale='10m', category='physical')\nfig, ax = plt.subplots(\n figsize=(9, 9),\n subplot_kw=dict(projection=ccrs.PlateCarree())\n)\nax.coastlines(resolution='10m')\nax.add_feature(bathym_1000, facecolor=[0.9, 0.9, 0.9], edgecolor='none')\ndx = dy = 0.5\nax.set_extent([lon_min-dx, lon_max+dx, lat_min-dy, lat_max+dy])\n\ng = df.groupby('trajectory')\nfor glider in g.groups:\n traj = df[df['trajectory'] == glider]\n ax.plot(traj['longitude'], traj['latitude'], label=glider)\n\ngl = ax.gridlines(crs=ccrs.PlateCarree(), draw_labels=True,\n linewidth=2, color='gray', alpha=0.5, linestyle='--')\nax.legend();"
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
jbwhit/jupyter-best-practices | notebooks/05-SQL-Example.ipynb | mit | [
"SQL\nAccessing data stored in databases is a routine exercise. I demonstrate a few helpful methods in the Jupyter Notebook.",
"%load_ext sql_magic\n\nimport sqlalchemy\nimport pandas as pd\nimport sqlite3\nfrom sqlalchemy import create_engine\nsqlite_engine = create_engine('sqlite://')\n\n%config SQL.conn_name = \"sqlite_engine\"\n\n%config SQL\n\n%config SQL.output_result = False",
"SQL\nCREATE TABLE presidents (first_name, last_name, year_of_birth);\nINSERT INTO presidents VALUES ('George', 'Washington', 1732);\nINSERT INTO presidents VALUES ('John', 'Adams', 1735);\nINSERT INTO presidents VALUES ('Thomas', 'Jefferson', 1743);\nINSERT INTO presidents VALUES ('James', 'Madison', 1751);\nINSERT INTO presidents VALUES ('James', 'Monroe', 1758);\nINSERT INTO presidents VALUES ('Zachary', 'Taylor', 1784);\nINSERT INTO presidents VALUES ('Abraham', 'Lincoln', 1809);\nINSERT INTO presidents VALUES ('Theodore', 'Roosevelt', 1858);\nINSERT INTO presidents VALUES ('Richard', 'Nixon', 1913);\nINSERT INTO presidents VALUES ('Barack', 'Obama', 1961);",
"%%read_sql temp\nCREATE TABLE presidents (first_name, last_name, year_of_birth);\nINSERT INTO presidents VALUES ('George', 'Washington', 1732);\nINSERT INTO presidents VALUES ('John', 'Adams', 1735);\nINSERT INTO presidents VALUES ('Thomas', 'Jefferson', 1743);\nINSERT INTO presidents VALUES ('James', 'Madison', 1751);\nINSERT INTO presidents VALUES ('James', 'Monroe', 1758);\nINSERT INTO presidents VALUES ('Zachary', 'Taylor', 1784);\nINSERT INTO presidents VALUES ('Abraham', 'Lincoln', 1809);\nINSERT INTO presidents VALUES ('Theodore', 'Roosevelt', 1858);\nINSERT INTO presidents VALUES ('Richard', 'Nixon', 1913);\nINSERT INTO presidents VALUES ('Barack', 'Obama', 1961);\n\n%%read_sql df\nSELECT * FROM presidents\n\ndf",
"Inline magic",
"later_presidents = %read_sql SELECT * FROM presidents WHERE year_of_birth > 1825\nlater_presidents\n\n%%read_sql later_presidents\nSELECT * FROM presidents WHERE year_of_birth > 1825",
"Through pandas directly",
"birthyear = 1800\n\n%%read_sql df1\nSELECT first_name,\n last_name,\n year_of_birth\nFROM presidents\nWHERE year_of_birth > {birthyear}\n\ndf1\n\ncoal = pd.read_csv(\"../data/coal_prod_cleaned.csv\")\ncoal.head()\n\ncoal.to_sql('coal', con=sqlite_engine, if_exists='append', index=False)\n\n%%read_sql example\nSELECT * FROM coal\n\nexample.head()\n\nexample.columns"
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
fja05680/pinkfish | examples/280.pyfolio-integration/strategy.ipynb | mit | [
"pyfolio-integration\nThis example shows how to integrate pinkfish with the pyfolio library.",
"import datetime\n\nimport pyfolio\nimport matplotlib.pyplot as plt\nimport pandas as pd\n\nimport pinkfish as pf\n\nimport warnings\nwarnings.filterwarnings('ignore')\n\n# Format price data\npd.options.display.float_format = '{:0.2f}'.format\n\n%matplotlib inline\n\n# Set size of inline plots\n'''note: rcParams can't be in same cell as import matplotlib\n or %matplotlib inline\n \n %matplotlib notebook: will lead to interactive plots embedded within\n the notebook, you can zoom and resize the figure\n \n %matplotlib inline: only draw static images in the notebook\n'''\nplt.rcParams[\"figure.figsize\"] = (10, 7)",
"Some global data",
"symbol = '^GSPC'\ncapital = 10000\n#start = datetime.datetime(1900, 1, 1)\nstart = datetime.datetime(*pf.SP500_BEGIN)\nend = datetime.datetime.now()",
"Define Strategy Class - sell in may and go away",
"class Strategy:\n\n def __init__(self, symbol, capital, start, end):\n self.symbol = symbol\n self.capital = capital\n self.start = start\n self.end = end\n \n self.ts = None\n self.rlog = None\n self.tlog = None\n self.dbal = None\n self.stats = None\n\n def _algo(self):\n pf.TradeLog.cash = capital\n\n for i, row in enumerate(self.ts.itertuples()):\n\n date = row.Index.to_pydatetime()\n end_flag = pf.is_last_row(self.ts, i)\n\n # Buy (at the close on first trading day in Nov).\n if self.tlog.shares == 0:\n if row.month == 11 and row.first_dotm:\n self.tlog.buy(date, row.close)\n # Sell (at the close on first trading day in May).\n else:\n if ((row.month == 5 and row.first_dotm) or end_flag):\n self.tlog.sell(date, row.close)\n\n # Record daily balance\n self.dbal.append(date, row.close)\n\n def run(self):\n \n # Fetch and select timeseries.\n self.ts = pf.fetch_timeseries(self.symbol)\n self.ts = pf.select_tradeperiod(self.ts, self.start, self.end,\n use_adj=True)\n # Add calendar columns.\n self.ts = pf.calendar(self.ts)\n \n # Finalize timeseries.\n self.ts, self.start = pf.finalize_timeseries(self.ts, self.start,\n dropna=True, drop_columns=['open', 'high', 'low'])\n \n # Create tlog and dbal objects\n self.tlog = pf.TradeLog(symbol)\n self.dbal = pf.DailyBal()\n \n # Run algorithm, get logs\n self._algo()\n self._get_logs()\n self._get_stats()\n\n def _get_logs(self):\n self.rlog = self.tlog.get_log_raw()\n self.tlog = self.tlog.get_log()\n self.dbal = self.dbal.get_log(self.tlog)\n\n def _get_stats(self):\n s.stats = pf.stats(self.ts, self.tlog, self.dbal, self.capital)",
"Run Strategy",
"s = Strategy(symbol, capital, start, end)\ns.run()",
"Run Benchmark, Retrieve benchmark logs, and Generate benchmark stats",
"benchmark = pf.Benchmark(symbol, s.capital, s.start, s.end)\nbenchmark.run()",
"Pyfolio Returns Tear Sheet\n(create_returns_tear_sheet() seems to be a bit broke in Pyfolio, see: https://github.com/quantopian/pyfolio/issues/520)",
"# Convert pinkfish data to Empyrical format\nreturns = s.dbal['close'].pct_change()\n#returns.index = returns.index.tz_localize('UTC')\nreturns.index = returns.index.to_pydatetime()\ntype(returns.index)\n\n# Filter warnings\nimport warnings\nwarnings.simplefilter(action='ignore', category=FutureWarning)\n\n# Convert pinkfish data to Empyrical format\nreturns = s.dbal['close'].pct_change()\nreturns.index = returns.index.tz_localize('UTC')\n\nbenchmark_rets = benchmark.dbal['close'].pct_change()\nbenchmark_rets.index = benchmark_rets.index.tz_localize('UTC')\n\nlive_start_date=None\nlive_start_date='2010-01-01'\n\n# Uncomment to select the tear sheet you are interested in.\n\n#pyfolio.create_returns_tear_sheet(returns, benchmark_rets=benchmark_rets, live_start_date=live_start_date)\npyfolio.create_simple_tear_sheet(returns, benchmark_rets=benchmark_rets)\n#pyfolio.create_interesting_times_tear_sheet(returns, benchmark_rets=benchmark_rets)"
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
mne-tools/mne-tools.github.io | 0.14/_downloads/plot_cluster_stats_spatio_temporal.ipynb | bsd-3-clause | [
"%matplotlib inline",
".. _tut_stats_cluster_source_1samp:\nPermutation t-test on source data with spatio-temporal clustering\nTests if the evoked response is significantly different between\nconditions across subjects (simulated here using one subject's data).\nThe multiple comparisons problem is addressed with a cluster-level\npermutation test across space and time.",
"# Authors: Alexandre Gramfort <[email protected]>\n# Eric Larson <[email protected]>\n# License: BSD (3-clause)\n\n\nimport os.path as op\nimport numpy as np\nfrom numpy.random import randn\nfrom scipy import stats as stats\n\nimport mne\nfrom mne import (io, spatial_tris_connectivity, compute_morph_matrix,\n grade_to_tris)\nfrom mne.epochs import equalize_epoch_counts\nfrom mne.stats import (spatio_temporal_cluster_1samp_test,\n summarize_clusters_stc)\nfrom mne.minimum_norm import apply_inverse, read_inverse_operator\nfrom mne.datasets import sample\n\nprint(__doc__)",
"Set parameters",
"data_path = sample.data_path()\nraw_fname = data_path + '/MEG/sample/sample_audvis_filt-0-40_raw.fif'\nevent_fname = data_path + '/MEG/sample/sample_audvis_filt-0-40_raw-eve.fif'\nsubjects_dir = data_path + '/subjects'\n\ntmin = -0.2\ntmax = 0.3 # Use a lower tmax to reduce multiple comparisons\n\n# Setup for reading the raw data\nraw = io.Raw(raw_fname)\nevents = mne.read_events(event_fname)",
"Read epochs for all channels, removing a bad one",
"raw.info['bads'] += ['MEG 2443']\npicks = mne.pick_types(raw.info, meg=True, eog=True, exclude='bads')\nevent_id = 1 # L auditory\nreject = dict(grad=1000e-13, mag=4000e-15, eog=150e-6)\nepochs1 = mne.Epochs(raw, events, event_id, tmin, tmax, picks=picks,\n baseline=(None, 0), reject=reject, preload=True)\n\nevent_id = 3 # L visual\nepochs2 = mne.Epochs(raw, events, event_id, tmin, tmax, picks=picks,\n baseline=(None, 0), reject=reject, preload=True)\n\n# Equalize trial counts to eliminate bias (which would otherwise be\n# introduced by the abs() performed below)\nequalize_epoch_counts([epochs1, epochs2])",
"Transform to source space",
"fname_inv = data_path + '/MEG/sample/sample_audvis-meg-oct-6-meg-inv.fif'\nsnr = 3.0\nlambda2 = 1.0 / snr ** 2\nmethod = \"dSPM\" # use dSPM method (could also be MNE or sLORETA)\ninverse_operator = read_inverse_operator(fname_inv)\nsample_vertices = [s['vertno'] for s in inverse_operator['src']]\n\n# Let's average and compute inverse, resampling to speed things up\nevoked1 = epochs1.average()\nevoked1.resample(50)\ncondition1 = apply_inverse(evoked1, inverse_operator, lambda2, method)\nevoked2 = epochs2.average()\nevoked2.resample(50)\ncondition2 = apply_inverse(evoked2, inverse_operator, lambda2, method)\n\n# Let's only deal with t > 0, cropping to reduce multiple comparisons\ncondition1.crop(0, None)\ncondition2.crop(0, None)\ntmin = condition1.tmin\ntstep = condition1.tstep",
"Transform to common cortical space",
"# Normally you would read in estimates across several subjects and morph\n# them to the same cortical space (e.g. fsaverage). For example purposes,\n# we will simulate this by just having each \"subject\" have the same\n# response (just noisy in source space) here. Note that for 7 subjects\n# with a two-sided statistical test, the minimum significance under a\n# permutation test is only p = 1/(2 ** 6) = 0.015, which is large.\nn_vertices_sample, n_times = condition1.data.shape\nn_subjects = 7\nprint('Simulating data for %d subjects.' % n_subjects)\n\n# Let's make sure our results replicate, so set the seed.\nnp.random.seed(0)\nX = randn(n_vertices_sample, n_times, n_subjects, 2) * 10\nX[:, :, :, 0] += condition1.data[:, :, np.newaxis]\nX[:, :, :, 1] += condition2.data[:, :, np.newaxis]\n\n# It's a good idea to spatially smooth the data, and for visualization\n# purposes, let's morph these to fsaverage, which is a grade 5 source space\n# with vertices 0:10242 for each hemisphere. Usually you'd have to morph\n# each subject's data separately (and you might want to use morph_data\n# instead), but here since all estimates are on 'sample' we can use one\n# morph matrix for all the heavy lifting.\nfsave_vertices = [np.arange(10242), np.arange(10242)]\nmorph_mat = compute_morph_matrix('sample', 'fsaverage', sample_vertices,\n fsave_vertices, 20, subjects_dir)\nn_vertices_fsave = morph_mat.shape[0]\n\n# We have to change the shape for the dot() to work properly\nX = X.reshape(n_vertices_sample, n_times * n_subjects * 2)\nprint('Morphing data.')\nX = morph_mat.dot(X) # morph_mat is a sparse matrix\nX = X.reshape(n_vertices_fsave, n_times, n_subjects, 2)\n\n# Finally, we want to compare the overall activity levels in each condition,\n# the diff is taken along the last axis (condition). The negative sign makes\n# it so condition1 > condition2 shows up as \"red blobs\" (instead of blue).\nX = np.abs(X) # only magnitude\nX = X[:, :, :, 0] - X[:, :, :, 1] # make paired contrast",
"Compute statistic",
"# To use an algorithm optimized for spatio-temporal clustering, we\n# just pass the spatial connectivity matrix (instead of spatio-temporal)\nprint('Computing connectivity.')\nconnectivity = spatial_tris_connectivity(grade_to_tris(5))\n\n# Note that X needs to be a multi-dimensional array of shape\n# samples (subjects) x time x space, so we permute dimensions\nX = np.transpose(X, [2, 1, 0])\n\n# Now let's actually do the clustering. This can take a long time...\n# Here we set the threshold quite high to reduce computation.\np_threshold = 0.001\nt_threshold = -stats.distributions.t.ppf(p_threshold / 2., n_subjects - 1)\nprint('Clustering.')\nT_obs, clusters, cluster_p_values, H0 = clu = \\\n spatio_temporal_cluster_1samp_test(X, connectivity=connectivity, n_jobs=2,\n threshold=t_threshold)\n# Now select the clusters that are sig. at p < 0.05 (note that this value\n# is multiple-comparisons corrected).\ngood_cluster_inds = np.where(cluster_p_values < 0.05)[0]",
"Visualize the clusters",
"print('Visualizing clusters.')\n\n# Now let's build a convenient representation of each cluster, where each\n# cluster becomes a \"time point\" in the SourceEstimate\nstc_all_cluster_vis = summarize_clusters_stc(clu, tstep=tstep,\n vertices=fsave_vertices,\n subject='fsaverage')\n\n# Let's actually plot the first \"time point\" in the SourceEstimate, which\n# shows all the clusters, weighted by duration\nsubjects_dir = op.join(data_path, 'subjects')\n# blue blobs are for condition A < condition B, red for A > B\nbrain = stc_all_cluster_vis.plot(hemi='both', subjects_dir=subjects_dir,\n time_label='Duration significant (ms)')\nbrain.set_data_time_index(0)\nbrain.show_view('lateral')\nbrain.save_image('clusters.png')"
] | [
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
d-k-b/udacity-deep-learning | embeddings/Skip-Gram_word2vec.ipynb | mit | [
"Skip-gram word2vec\nIn this notebook, I'll lead you through using TensorFlow to implement the word2vec algorithm using the skip-gram architecture. By implementing this, you'll learn about embedding words for use in natural language processing. This will come in handy when dealing with things like machine translation.\nReadings\nHere are the resources I used to build this notebook. I suggest reading these either beforehand or while you're working on this material.\n\nA really good conceptual overview of word2vec from Chris McCormick \nFirst word2vec paper from Mikolov et al.\nNIPS paper with improvements for word2vec also from Mikolov et al.\nAn implementation of word2vec from Thushan Ganegedara\nTensorFlow word2vec tutorial\n\nWord embeddings\nWhen you're dealing with words in text, you end up with tens of thousands of classes to predict, one for each word. Trying to one-hot encode these words is massively inefficient, you'll have one element set to 1 and the other 50,000 set to 0. The matrix multiplication going into the first hidden layer will have almost all of the resulting values be zero. This a huge waste of computation. \n\nTo solve this problem and greatly increase the efficiency of our networks, we use what are called embeddings. Embeddings are just a fully connected layer like you've seen before. We call this layer the embedding layer and the weights are embedding weights. We skip the multiplication into the embedding layer by instead directly grabbing the hidden layer values from the weight matrix. We can do this because the multiplication of a one-hot encoded vector with a matrix returns the row of the matrix corresponding the index of the \"on\" input unit.\n\nInstead of doing the matrix multiplication, we use the weight matrix as a lookup table. We encode the words as integers, for example \"heart\" is encoded as 958, \"mind\" as 18094. Then to get hidden layer values for \"heart\", you just take the 958th row of the embedding matrix. This process is called an embedding lookup and the number of hidden units is the embedding dimension.\n<img src='assets/tokenize_lookup.png' width=500>\nThere is nothing magical going on here. The embedding lookup table is just a weight matrix. The embedding layer is just a hidden layer. The lookup is just a shortcut for the matrix multiplication. The lookup table is trained just like any weight matrix as well.\nEmbeddings aren't only used for words of course. You can use them for any model where you have a massive number of classes. A particular type of model called Word2Vec uses the embedding layer to find vector representations of words that contain semantic meaning.\nWord2Vec\nThe word2vec algorithm finds much more efficient representations by finding vectors that represent the words. These vectors also contain semantic information about the words. Words that show up in similar contexts, such as \"black\", \"white\", and \"red\" will have vectors near each other. There are two architectures for implementing word2vec, CBOW (Continuous Bag-Of-Words) and Skip-gram.\n<img src=\"assets/word2vec_architectures.png\" width=\"500\">\nIn this implementation, we'll be using the skip-gram architecture because it performs better than CBOW. Here, we pass in a word and try to predict the words surrounding it in the text. In this way, we can train the network to learn representations for words that show up in similar contexts.\nFirst up, importing packages.",
"import time\n\nimport numpy as np\nimport tensorflow as tf\n\nimport utils",
"Load the text8 dataset, a file of cleaned up Wikipedia articles from Matt Mahoney. The next cell will download the data set to the data folder. Then you can extract it and delete the archive file to save storage space.",
"from urllib.request import urlretrieve\nfrom os.path import isfile, isdir\nfrom tqdm import tqdm\nimport zipfile\n\ndataset_folder_path = 'data'\ndataset_filename = 'text8.zip'\ndataset_name = 'Text8 Dataset'\n\nclass DLProgress(tqdm):\n last_block = 0\n\n def hook(self, block_num=1, block_size=1, total_size=None):\n self.total = total_size\n self.update((block_num - self.last_block) * block_size)\n self.last_block = block_num\n\nif not isfile(dataset_filename):\n with DLProgress(unit='B', unit_scale=True, miniters=1, desc=dataset_name) as pbar:\n urlretrieve(\n 'http://mattmahoney.net/dc/text8.zip',\n dataset_filename,\n pbar.hook)\n\nif not isdir(dataset_folder_path):\n with zipfile.ZipFile(dataset_filename) as zip_ref:\n zip_ref.extractall(dataset_folder_path)\n \nwith open('data/text8') as f:\n text = f.read()",
"Preprocessing\nHere I'm fixing up the text to make training easier. This comes from the utils module I wrote. The preprocess function coverts any punctuation into tokens, so a period is changed to <PERIOD>. In this data set, there aren't any periods, but it will help in other NLP problems. I'm also removing all words that show up five or fewer times in the dataset. This will greatly reduce issues due to noise in the data and improve the quality of the vector representations. If you want to write your own functions for this stuff, go for it.",
"words = utils.preprocess(text)\nprint(words[:30])\n\nprint(\"Total words: {}\".format(len(words)))\nprint(\"Unique words: {}\".format(len(set(words))))",
"And here I'm creating dictionaries to convert words to integers and backwards, integers to words. The integers are assigned in descending frequency order, so the most frequent word (\"the\") is given the integer 0 and the next most frequent is 1 and so on. The words are converted to integers and stored in the list int_words.",
"vocab_to_int, int_to_vocab = utils.create_lookup_tables(words)\nint_words = [vocab_to_int[word] for word in words]",
"Subsampling\nWords that show up often such as \"the\", \"of\", and \"for\" don't provide much context to the nearby words. If we discard some of them, we can remove some of the noise from our data and in return get faster training and better representations. This process is called subsampling by Mikolov. For each word $w_i$ in the training set, we'll discard it with probability given by \n$$ P(w_i) = 1 - \\sqrt{\\frac{t}{f(w_i)}} $$\nwhere $t$ is a threshold parameter and $f(w_i)$ is the frequency of word $w_i$ in the total dataset.\nI'm going to leave this up to you as an exercise. This is more of a programming challenge, than about deep learning specifically. But, being able to prepare your data for your network is an important skill to have. Check out my solution to see how I did it.\n\nExercise: Implement subsampling for the words in int_words. That is, go through int_words and discard each word given the probablility $P(w_i)$ shown above. Note that $P(w_i)$ is the probability that a word is discarded. Assign the subsampled data to train_words.",
"## Your code here\ntrain_words = # The final subsampled word list",
"Making batches\nNow that our data is in good shape, we need to get it into the proper form to pass it into our network. With the skip-gram architecture, for each word in the text, we want to grab all the words in a window around that word, with size $C$. \nFrom Mikolov et al.: \n\"Since the more distant words are usually less related to the current word than those close to it, we give less weight to the distant words by sampling less from those words in our training examples... If we choose $C = 5$, for each training word we will select randomly a number $R$ in range $< 1; C >$, and then use $R$ words from history and $R$ words from the future of the current word as correct labels.\"\n\nExercise: Implement a function get_target that receives a list of words, an index, and a window size, then returns a list of words in the window around the index. Make sure to use the algorithm described above, where you choose a random number of words from the window.",
"def get_target(words, idx, window_size=5):\n ''' Get a list of words in a window around an index. '''\n \n # Your code here\n \n return",
"Here's a function that returns batches for our network. The idea is that it grabs batch_size words from a words list. Then for each of those words, it gets the target words in the window. I haven't found a way to pass in a random number of target words and get it to work with the architecture, so I make one row per input-target pair. This is a generator function by the way, helps save memory.",
"def get_batches(words, batch_size, window_size=5):\n ''' Create a generator of word batches as a tuple (inputs, targets) '''\n \n n_batches = len(words)//batch_size\n \n # only full batches\n words = words[:n_batches*batch_size]\n \n for idx in range(0, len(words), batch_size):\n x, y = [], []\n batch = words[idx:idx+batch_size]\n for ii in range(len(batch)):\n batch_x = batch[ii]\n batch_y = get_target(batch, ii, window_size)\n y.extend(batch_y)\n x.extend([batch_x]*len(batch_y))\n yield x, y\n ",
"Building the graph\nFrom Chris McCormick's blog, we can see the general structure of our network.\n\nThe input words are passed in as integers. This will go into a hidden layer of linear units, then into a softmax layer. We'll use the softmax layer to make a prediction like normal.\nThe idea here is to train the hidden layer weight matrix to find efficient representations for our words. We can discard the softmax layer becuase we don't really care about making predictions with this network. We just want the embedding matrix so we can use it in other networks we build from the dataset.\nI'm going to have you build the graph in stages now. First off, creating the inputs and labels placeholders like normal.\n\nExercise: Assign inputs and labels using tf.placeholder. We're going to be passing in integers, so set the data types to tf.int32. The batches we're passing in will have varying sizes, so set the batch sizes to [None]. To make things work later, you'll need to set the second dimension of labels to None or 1.",
"train_graph = tf.Graph()\nwith train_graph.as_default():\n inputs = \n labels = ",
"Embedding\nThe embedding matrix has a size of the number of words by the number of units in the hidden layer. So, if you have 10,000 words and 300 hidden units, the matrix will have size $10,000 \\times 300$. Remember that we're using tokenized data for our inputs, usually as integers, where the number of tokens is the number of words in our vocabulary.\n\nExercise: Tensorflow provides a convenient function tf.nn.embedding_lookup that does this lookup for us. You pass in the embedding matrix and a tensor of integers, then it returns rows in the matrix corresponding to those integers. Below, set the number of embedding features you'll use (200 is a good start), create the embedding matrix variable, and use tf.nn.embedding_lookup to get the embedding tensors. For the embedding matrix, I suggest you initialize it with a uniform random numbers between -1 and 1 using tf.random_uniform.",
"n_vocab = len(int_to_vocab)\nn_embedding = # Number of embedding features \nwith train_graph.as_default():\n embedding = # create embedding weight matrix here\n embed = # use tf.nn.embedding_lookup to get the hidden layer output",
"Negative sampling\nFor every example we give the network, we train it using the output from the softmax layer. That means for each input, we're making very small changes to millions of weights even though we only have one true example. This makes training the network very inefficient. We can approximate the loss from the softmax layer by only updating a small subset of all the weights at once. We'll update the weights for the correct label, but only a small number of incorrect labels. This is called \"negative sampling\". Tensorflow has a convenient function to do this, tf.nn.sampled_softmax_loss.\n\nExercise: Below, create weights and biases for the softmax layer. Then, use tf.nn.sampled_softmax_loss to calculate the loss. Be sure to read the documentation to figure out how it works.",
"# Number of negative labels to sample\nn_sampled = 100\nwith train_graph.as_default():\n softmax_w = # create softmax weight matrix here\n softmax_b = # create softmax biases here\n \n # Calculate the loss using negative sampling\n loss = tf.nn.sampled_softmax_loss \n \n cost = tf.reduce_mean(loss)\n optimizer = tf.train.AdamOptimizer().minimize(cost)",
"Validation\nThis code is from Thushan Ganegedara's implementation. Here we're going to choose a few common words and few uncommon words. Then, we'll print out the closest words to them. It's a nice way to check that our embedding table is grouping together words with similar semantic meanings.",
"with train_graph.as_default():\n ## From Thushan Ganegedara's implementation\n valid_size = 16 # Random set of words to evaluate similarity on.\n valid_window = 100\n # pick 8 samples from (0,100) and (1000,1100) each ranges. lower id implies more frequent \n valid_examples = np.array(random.sample(range(valid_window), valid_size//2))\n valid_examples = np.append(valid_examples, \n random.sample(range(1000,1000+valid_window), valid_size//2))\n\n valid_dataset = tf.constant(valid_examples, dtype=tf.int32)\n \n # We use the cosine distance:\n norm = tf.sqrt(tf.reduce_sum(tf.square(embedding), 1, keep_dims=True))\n normalized_embedding = embedding / norm\n valid_embedding = tf.nn.embedding_lookup(normalized_embedding, valid_dataset)\n similarity = tf.matmul(valid_embedding, tf.transpose(normalized_embedding))\n\n# If the checkpoints directory doesn't exist:\n!mkdir checkpoints",
"Training\nBelow is the code to train the network. Every 100 batches it reports the training loss. Every 1000 batches, it'll print out the validation words.",
"epochs = 10\nbatch_size = 1000\nwindow_size = 10\n\nwith train_graph.as_default():\n saver = tf.train.Saver()\n\nwith tf.Session(graph=train_graph) as sess:\n iteration = 1\n loss = 0\n sess.run(tf.global_variables_initializer())\n\n for e in range(1, epochs+1):\n batches = get_batches(train_words, batch_size, window_size)\n start = time.time()\n for x, y in batches:\n \n feed = {inputs: x,\n labels: np.array(y)[:, None]}\n train_loss, _ = sess.run([cost, optimizer], feed_dict=feed)\n \n loss += train_loss\n \n if iteration % 100 == 0: \n end = time.time()\n print(\"Epoch {}/{}\".format(e, epochs),\n \"Iteration: {}\".format(iteration),\n \"Avg. Training loss: {:.4f}\".format(loss/100),\n \"{:.4f} sec/batch\".format((end-start)/100))\n loss = 0\n start = time.time()\n \n if iteration % 1000 == 0:\n ## From Thushan Ganegedara's implementation\n # note that this is expensive (~20% slowdown if computed every 500 steps)\n sim = similarity.eval()\n for i in range(valid_size):\n valid_word = int_to_vocab[valid_examples[i]]\n top_k = 8 # number of nearest neighbors\n nearest = (-sim[i, :]).argsort()[1:top_k+1]\n log = 'Nearest to %s:' % valid_word\n for k in range(top_k):\n close_word = int_to_vocab[nearest[k]]\n log = '%s %s,' % (log, close_word)\n print(log)\n \n iteration += 1\n save_path = saver.save(sess, \"checkpoints/text8.ckpt\")\n embed_mat = sess.run(normalized_embedding)",
"Restore the trained network if you need to:",
"with train_graph.as_default():\n saver = tf.train.Saver()\n\nwith tf.Session(graph=train_graph) as sess:\n saver.restore(sess, tf.train.latest_checkpoint('checkpoints'))\n embed_mat = sess.run(embedding)",
"Visualizing the word vectors\nBelow we'll use T-SNE to visualize how our high-dimensional word vectors cluster together. T-SNE is used to project these vectors into two dimensions while preserving local stucture. Check out this post from Christopher Olah to learn more about T-SNE and other ways to visualize high-dimensional data.",
"%matplotlib inline\n%config InlineBackend.figure_format = 'retina'\n\nimport matplotlib.pyplot as plt\nfrom sklearn.manifold import TSNE\n\nviz_words = 500\ntsne = TSNE()\nembed_tsne = tsne.fit_transform(embed_mat[:viz_words, :])\n\nfig, ax = plt.subplots(figsize=(14, 14))\nfor idx in range(viz_words):\n plt.scatter(*embed_tsne[idx, :], color='steelblue')\n plt.annotate(int_to_vocab[idx], (embed_tsne[idx, 0], embed_tsne[idx, 1]), alpha=0.7)"
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
amniskin/amniskin.github.io | assets/notebooks/2017/10/07/active_portfolio_management_slides.ipynb | mit | [
"Data Science in Finance\nLousy models are great!\nWe often hear that in the world of hedge funds and seeking alpha (a term we'll go over in a bit), extremely poor models are used and hailed as great achievements. A model with an $R^2$ of 0.1 is great! A model with an $R^2$ of 0.3 is unheard of.\nIs it because the data scientists working in the field are not as good as the Physicists working at the LHC, or the engineers working on Google's search prediction algorithms?\nThe answer might surprise you!\nHungry for coin flips?\nLet's imagine that you happen to have some inside source at the mint who told you that a common quarter is actually not a fair coin. This information is only known to you and your friend. Let's pretend like the probability of getting \"heads\" is actually 0.55. So not anything you'd expect to notice on a short scale, but enough to where if you bet on coin flips enough, you might actually be able to make lots of money.\nWould you bet on those coin flips? Would you consider yourself very lucky for having such privileged information?\nWhat's your $R^2$?\nOur model:\n$$\n\\begin{align}\nY =& \\begin{cases}\n1 & \\text{ if \"heads\"} \\\n0 & \\text{ if \"tails\"}\n\\end{cases} \\\nP(Y=1) =& w \\\n\\hat Y \\equiv& 1\n\\end{align}\n$$\nFirst we need to figure out what our model is! It's not entirely clear that we're using a predictive model, but we are. Our model happens to be very simple: always pick \"heads\".\nFormally, we define a random variable $Y$ such that $Y=0$ if the coin flip results in \"tails\" and $Y=1$ if the coin flip results in \"heads\". Our model is very simple: it takes no input data (so no features), and always returns 1.\nSo let's calculate our $R^2$ value. To do this, we should first calculate SSE and SST. Let $w$ be the probability of getting \"heads\" (just for generality). In our particular case, $w=0.55$.\nNote that the mean we use for this the commonly excepted mean (not the mean your model predicts)!\n$$\n\\begin{align}\n\\text{SSE} =& \\sum\\limits_{i=0}^{n-1}\\left(y_i - \\hat y_i\\right)^2 & \\text{SST} =& \\sum\\limits_{i=0}^{n-1}\\left(y_i - \\bar y\\right)^2 \\\n=& \\sum\\limits_{i=0}^{n-1}\\left(y_i - 1\\right)^2 & =& \\sum\\limits_{i=0}^{n-1}\\left(y_i - 0.5\\right)^2 \\\n=& \\sum\\limits_{i=0}^{n-1}\\left(y_i^2 -2y_i + 1^2\\right) & =& \\sum\\limits_{i=0}^{n-1}\\left(y_i^2 - 2(0.5)y_i + 0.5^2\\right) \\\n=& \\sum\\limits_{i=0}^{n-1}\\left(-y_i + 1\\right) & =& \\sum\\limits_{i=0}^{n-1}\\left(0.25\\right) \\\n=& -nw + n & =& 0.5n \\\n=& n(1-w) & =& 0.5n \\\n\\end{align}\n$$\nSo, our $R^2$ is:\n$$\n\\begin{align}\nR^2 =& 1 - \\frac{\\text{SSE}}{\\text{SST}} \\\n=& 1 - \\frac{n(1-w)}{0.5n} \\\n=& 2w - 1 = 1.1 - 1 = 0.1\n\\end{align}\n$$\nSo we can see that one reason models with such low predictive power succeed so well in finance: it's trade-off between quality and quantity. It's also true that financial data is extremely noisy and there is very little stationarity due to an ever changing landscape of laws and company leaderships, etc.\nNot necessarily bad Data Scientists\nSo what are the bets we're making?\nNot just making money\nFinding which stocks will go up is pretty much a solved problem. Most \"secure\" stocks will rise in price on a long enough time-line. But just because the total price of stocks in your account has risen doesn't mean the value has risen. 
You have to account for the value of money (which is constantly dropping -- inflation).\nTo illustrate this: we could make money by investing in General Electric in 2010 and holding our stock.",
"import numpy as np\nimport pandas as pd\nimport matplotlib.pyplot as plt\n%matplotlib inline\n\nfrom pandas_datareader import DataReader\n\nreader = DataReader([\"AAPL\", \"SPY\", \"GOOG\", \"GE\"], data_source=\"yahoo\")\n\nfig = plt.figure()\nax = reader[\"Adj Close\", :, \"GE\"].plot(label=\"GE\")\nax.legend()\nax.set_title(\"Stock Adjusted Closing Price\")\nplt.savefig(\"img/close_price_GE.png\")\n\ntmp = reader[\"Adj Close\", :, \"GE\"]\na,b = tmp.iloc[0], tmp.iloc[-1]\nprint(a,b)\n\nc = (a - b) / b\nprint(c)\n\nprint((1+c)**(1.0/11) - 1)",
"If we'd done that, we would have seen on average about 6% return per year! That's over the average inflation of somewhere between 3-5 percent, so we're looking pretty good, right?\nLooking good?\nWell, yes and no. On the one hand, we did make money (at the expense of some risk, of course). But what if we'd chosen a better company to invest in like Apple? Or what if we'd invested instead in an index fund like Spyder?",
"fig = plt.figure()\nax = reader[\"Adj Close\", :, \"SPY\"].plot(label=\"SPY\")\nax = reader[\"Adj Close\", :, \"AAPL\"].plot(label=\"AAPL\", ax=ax)\nax = reader[\"Adj Close\", :, \"GE\"].plot(label=\"GE\", ax=ax)\nax.legend()\nax.set_title(\"Stock Adjusted Closing Price\")\nplt.savefig(\"img/close_price_3.png\")",
"Better yet!\nWe can continue this thought process ad infinitum. For instance, we could've invested in Google. Or done something even crazier (a plot I won't show for simplicity reasons) -- volatility trading.",
"fig = plt.figure()\nax = reader[\"Adj Close\", :, \"SPY\"].plot(label=\"SPY\")\nax = reader[\"Adj Close\", :, \"AAPL\"].plot(label=\"AAPL\", ax=ax)\nax = reader[\"Adj Close\", :, \"GOOG\"].plot(label=\"GOOG\", ax=ax)\nax = reader[\"Adj Close\", :, \"GE\"].plot(label=\"GE\", ax=ax)\nax.legend()\nax.set_title(\"Stock Adjusted Closing Price\")\nplt.savefig(\"img/close_price_all_4.png\")",
"So what's the game?\nActive Portfolio Management\nRichard C. Grinold, Ronald N. Kahn\nThe CAPM\nThe Capital Asset Pricing Model\n$$ r_i = \\alpha_i + \\beta_i r_m $$\n$$ E[\\alpha_i] = 0 $$\n$$ r_p = \\alpha_p + \\beta_p r_m $$\nRoughly speaking, this is the $\\alpha$ we've all heard so much about. One problem is that a linear model is not the best model for predicting stock returns.\nThe Hedge Fund Mission\nMake money\nLike Roulette\n\nA paraphrasing\nRisk\nWhat is it?\nVariance?\nExceptional Returns"
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
permamodel/permamodel | notebooks/Ku_2D.ipynb | mit | [
"This model was developed by Permamodel workgroup.\nBasic theory is Kudryavtsev's method.\nReference:\n Anisimov, O. A., Shiklomanov, N. I., & Nelson, F. E. (1997).\n Global warming and active-layer thickness: results from transient general circulation models.\n Global and Planetary Change, 15(3), 61-77.",
"import os,sys\n\nsys.path.append('../../permamodel/')\n\nfrom permamodel.components import bmi_Ku_component\nfrom permamodel import examples_directory\nimport numpy as np\n%matplotlib inline\nimport matplotlib.pyplot as plt\nfrom mpl_toolkits.basemap import Basemap, addcyclic\nimport matplotlib as mpl\n\nprint examples_directory\n\ncfg_file = os.path.join(examples_directory, 'Ku_method_2D.cfg')\nx = bmi_Ku_component.BmiKuMethod()\n\nx.initialize(cfg_file)\ny0 = x.get_value('datetime__start')\ny1 = x.get_value('datetime__end')\n\nfor i in np.linspace(y0,y1,y1-y0+1):\n \n x.update()\n print i\n\nx.finalize()\n\nALT = x.get_value('soil__active_layer_thickness')\nTTOP = x.get_value('soil__temperature')\nLAT = x.get_value('latitude')\nLON = x.get_value('longitude')\nSND = x.get_value('snowpack__depth')\n\nLONS, LATS = np.meshgrid(LON, LAT)\n\n#print np.shape(ALT)\n#print np.shape(LONS)",
"Spatially visualize active layer thickness:",
"fig=plt.figure(figsize=(8,4.5))\n\nax = fig.add_axes([0.05,0.05,0.9,0.85])\n\nm = Basemap(llcrnrlon=-145.5,llcrnrlat=1.,urcrnrlon=-2.566,urcrnrlat=46.352,\\\n rsphere=(6378137.00,6356752.3142),\\\n resolution='l',area_thresh=1000.,projection='lcc',\\\n lat_1=50.,lon_0=-107.,ax=ax)\n\nX, Y = m(LONS, LATS)\n\nm.drawcoastlines(linewidth=1.25)\n# m.fillcontinents(color='0.8')\nm.drawparallels(np.arange(-80,81,20),labels=[1,1,0,0])\nm.drawmeridians(np.arange(0,360,60),labels=[0,0,0,1])\n\nclev = np.array([0.5, 1.0, 1.5, 2.0, 2.5, 3.0])\ncs = m.contourf(X, Y, ALT, clev, cmap=plt.cm.PuBu_r, extend='both')\n\ncbar = m.colorbar(cs)\ncbar.set_label('m')\n\nplt.show()\n\n# print x._values[\"ALT\"][:]\nALT2 = np.reshape(ALT, np.size(ALT))\nALT2 = ALT2[np.where(~np.isnan(ALT2))]\n\nprint 'Simulated ALT:'\nprint 'Max:', np.nanmax(ALT2),'m', '75% = ', np.percentile(ALT2, 75)\nprint 'Min:', np.nanmin(ALT2),'m', '25% = ', np.percentile(ALT2, 25)\n\nplt.hist(ALT2)",
"Spatially visualize mean annual ground temperature:",
"fig2=plt.figure(figsize=(8,4.5))\n\nax2 = fig2.add_axes([0.05,0.05,0.9,0.85])\n\nm2 = Basemap(llcrnrlon=-145.5,llcrnrlat=1.,urcrnrlon=-2.566,urcrnrlat=46.352,\\\n rsphere=(6378137.00,6356752.3142),\\\n resolution='l',area_thresh=1000.,projection='lcc',\\\n lat_1=50.,lon_0=-107.,ax=ax2)\n\nX, Y = m2(LONS, LATS)\n\nm2.drawcoastlines(linewidth=1.25)\n# m.fillcontinents(color='0.8')\nm2.drawparallels(np.arange(-80,81,20),labels=[1,1,0,0])\nm2.drawmeridians(np.arange(0,360,60),labels=[0,0,0,1])\n\nclev = np.linspace(start=-10, stop=0, num =11)\ncs2 = m2.contourf(X, Y, TTOP, clev, cmap=plt.cm.seismic, extend='both')\n\ncbar2 = m2.colorbar(cs2)\ncbar2.set_label('Ground Temperature ($^\\circ$C)')\n\nplt.show()\n\n# # print x._values[\"ALT\"][:]\nTTOP2 = np.reshape(TTOP, np.size(TTOP))\nTTOP2 = TTOP2[np.where(~np.isnan(TTOP2))]\n\n# Hist plot:\nplt.hist(TTOP2)\n\nmask = x._model.mask\n\nprint np.shape(mask)\n\nplt.imshow(mask)\n\nprint np.nanmin(x._model.tot_percent)"
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
msampathkumar/data_science_sessions | Session-2-Hands-Experience-for-ML/DataScience_Presentation2-LR3.ipynb | mit | [
"Linear Regression - Part 3\nIn this section, we shall try to apply machine learning to a well known boston dataset.\nReal World Examples",
"from sklearn.datasets import load_boston\n\ndata = load_boston()\n\nprint(data.DESCR)\n\ndata.data[1]\n\ndata.target[1]\n\nX = data.data\nY = data.target",
"Dummy Classifier\nhttp://scikit-learn.org/stable/modules/generated/sklearn.dummy.DummyRegressor.html",
"from sklearn.dummy import DummyRegressor\ndummy_regr = DummyRegressor(strategy=\"median\")\ndummy_regr.fit(X, Y)\n\nfrom sklearn.metrics import mean_squared_error\nmean_squared_error(Y, dummy_regr.predict(X))",
"Linear Regression\nhttp://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LinearRegression.html",
"from sklearn import linear_model\nregr = linear_model.LinearRegression()\nregr.fit(X, Y)\n\nmean_squared_error(Y, regr.predict(X))",
"Linear SVR\nhttp://scikit-learn.org/stable/auto_examples/svm/plot_svm_regression.html",
"from sklearn.svm import LinearSVR\n# Step1: create an instance class as `regr`\n# Step2: fit the data into class instance\n\n# score\nmean_squared_error(Y, regr.predict(X))",
"Random Forest Regressor\nhttp://scikit-learn.org/stable/modules/generated/sklearn.ensemble.RandomForestRegressor.html",
"from sklearn.ensemble import RandomForestRegressor\n# set random_state as zero\n\n# score",
"K-NearestNeighbors"
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
Uberi/zen-and-the-art-of-telemetry | Moon Phase Correlation Analysis.ipynb | mit | [
"Moon Phase Correlation Analysis",
"import ujson as json\nimport matplotlib.pyplot as plt\nimport pandas as pd\nimport numpy as np\nimport plotly.plotly as py\n\nfrom moztelemetry import get_pings, get_pings_properties, get_one_ping_per_client\nfrom moztelemetry.histogram import Histogram\n\nimport datetime as dt\n\n%pylab inline",
"This Wikipedia article has a nice description of how to calculate the current phase of the moon. In code, that looks like this:",
"def approximate_moon_visibility(current_date):\n days_per_synodic_month = 29.530588853 # change this if the moon gets towed away\n days_since_known_new_moon = (current_date - dt.date(2015, 7, 16)).days\n phase_fraction = (days_since_known_new_moon % days_per_synodic_month) / days_per_synodic_month\n return (1 - phase_fraction if phase_fraction > 0.5 else phase_fraction) * 2\n\ndef date_string_to_date(date_string):\n return dt.datetime.strptime(date_string, \"%Y%m%d\").date()",
"Let's randomly sample 10% of pings for nightly submissions made from 2015-07-05 to 2015-08-05:",
"pings = get_pings(sc, app=\"Firefox\", channel=\"nightly\", submission_date=(\"20150705\", \"20150805\"), fraction=0.1, schema=\"v4\")",
"Extract the startup time metrics with their submission date and make sure we only consider one submission per user:",
"subset = get_pings_properties(pings, [\"clientId\", \"meta/submissionDate\", \"payload/simpleMeasurements/firstPaint\"])\nsubset = get_one_ping_per_client(subset)\ncached = subset.cache()",
"Obtain an array of pairs, each containing the moon visibility and the startup time:",
"pairs = cached.map(lambda p: (approximate_moon_visibility(date_string_to_date(p[\"meta/submissionDate\"])), p[\"payload/simpleMeasurements/firstPaint\"]))\npairs = np.asarray(pairs.filter(lambda p: p[1] != None and p[1] < 100000000).collect())",
"Let's see what this data looks like:",
"plt.figure(figsize=(15, 7))\nplt.scatter(pairs.T[0], pairs.T[1])\nplt.xlabel(\"Moon visibility ratio\")\nplt.ylabel(\"Startup time (ms)\")\nplt.show()",
"The correlation coefficient is now easy to calculate:",
"np.corrcoef(pairs.T)[0, 1]"
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
UWSEDS/LectureNotes | week_4/unit-tests.ipynb | bsd-2-clause | [
"import numpy as np\nimport pandas as pd",
"Unit Tests\nOverview and Principles\nTesting is the process by which you exercise your code to determine if it performs as expected. The code you are testing is referred to as the code under test. \nThere are two parts to writing tests.\n1. invoking the code under test so that it is exercised in a particular way;\n1. evaluating the results of executing code under test to determine if it behaved as expected.\nThe collection of tests performed are referred to as the test cases. The fraction of the code under test that is executed as a result of running the test cases is referred to as test coverage.\nFor dynamical languages such as Python, it's extremely important to have a high test coverage. In fact, you should try to get 100% coverage. This is because little checking is done when the source code is read by the Python interpreter. For example, the code under test might contain a line that has a function that is undefined. This would not be detected until that line of code is executed.\nTest cases can be of several types. Below are listed some common classifications of test cases.\n- Smoke test. This is an invocation of the code under test to see if there is an unexpected exception. It's useful as a starting point, but this doesn't tell you anything about the correctness of the results of a computation.\n- One-shot test. In this case, you call the code under test with arguments for which you know the expected result.\n- Edge test. The code under test is invoked with arguments that should cause an exception, and you evaluate if the expected exception occurrs.\n- Pattern test - Based on your knowledge of the calculation (not implementation) of the code under test, you construct a suite of test cases for which the results are known or there are known patterns in these results that are used to evaluate the results returned.\nAnother principle of testing is to limit what is done in a single test case. Generally, a test case should focus on one use of one function. Sometimes, this is a challenge since the function being tested may call other functions that you are testing. This means that bugs in the called functions may cause failures in the tests of the calling functions. Often, you sort this out by knowing the structure of the code and focusing first on failures in lower level tests. In other situations, you may use more advanced techniques called mocking. A discussion of mocking is beyond the scope of this course.\nA best practice is to develop your tests while you are developing your code. Indeed, one school of thought in software engineering, called test-driven development, advocates that you write the tests before you implement the code under test so that the test cases become a kind of specification for what the code under test should do.\nExamples of Test Cases\nThis section presents examples of test cases. The code under test is the calculation of entropy.\nEntropy of a set of probabilities\n$$\nH = -\\sum_i p_i \\log(p_i)\n$$\nwhere $\\sum_i p_i = 1$.",
"import numpy as np\n# Code Under Test\ndef entropy(ps):\n if any([(p < 0.0) or (p > 1.0) for p in ps]):\n raise ValueError(\"Bad input.\")\n if sum(ps) > 1:\n raise ValueError(\"Bad input.\")\n items = ps * np.log(ps)\n new_items = []\n for item in items:\n if np.isnan(item):\n new_items.append(0)\n else:\n new_items.append(item)\n return np.abs(-np.sum(new_items))\n\n# Smoke test\ndef smoke_test(ps):\n try:\n entropy(ps)\n return True\n except:\n return False\n \nsmoke_test([0.5, 0.5])\n\n# One shot test\n0.0 == entropy([1, 0, 0, 0])\n\n# Edge tests\ndef edge_test(ps):\n try:\n entropy(ps)\n except ValueError:\n return True\n return False\n\nedge_test([-1, 2])",
"Suppose that all of the probability of a distribution is at one point. An example of this is a coin with two heads. Whenever you flip it, you always get heads. That is, the probability of a head is 1.\nWhat is the entropy of such a distribution? From the calculation above, we see that the entropy should be $log(1)$, which is 0. This means that we have a test case where we know the result!",
"# One-shot test. Need to know the correct answer.\nentries = [\n [0, [1]],\n]\n\nfor entry in entries:\n ans = entry[0]\n prob = entry[1]\n if not np.isclose(entropy(prob), ans):\n print(\"Test failed!\")\nprint (\"Test completed!\")",
"Question: What is an example of another one-shot test? (Hint: You need to know the expected result.)\nOne edge test of interest is to provide an input that is not a distribution in that probabilities don't sum to 1.",
"# Edge test. This is something that should cause an exception.\n#entropy([-0.5])",
"Now let's consider a pattern test. Examining the structure of the calculation of $H$, we consider a situation in which there are $n$ equal probabilities. That is, $p_i = \\frac{1}{n}$.\n$$\nH = -\\sum_{i=1}^{n} p_i \\log(p_i) \n= -\\sum_{i=1}^{n} \\frac{1}{n} \\log(\\frac{1}{n}) \n= n (-\\frac{1}{n} \\log(\\frac{1}{n}) )\n= -\\log(\\frac{1}{n})\n$$\nFor example, entropy([0.5, 0.5]) should be $-log(0.5)$.",
"# Pattern test\ndef test_equal_probabilities(n):\n prob = 1.0/n\n ps = np.repeat(prob , n)\n if np.isclose(entropy(ps), -np.log(prob)):\n print(\"Worked!\")\n else:\n import pdb; pdb.set_trace()\n print (\"Bad result.\")\n \n \n# Run a test\ntest_equal_probabilities(100000)",
"You see that there are many, many cases to test. So far, we've been writing special codes for each test case. We can do better.\nUnittest Infrastructure\nThere are several reasons to use a test infrastructure:\n- If you have many test cases (which you should!), the test infrastructure will save you from writing a lot of code.\n- The infrastructure provides a uniform way to report test results, and to handle test failures.\n- A test infrastructure can tell you about coverage so you know what tests to add.\nWe'll be using the unittest framework. This is a separate Python package. Using this infrastructure, requires the following:\n1. import the unittest module\n1. define a class that inherits from unittest.TestCase\n1. write methods that run the code to be tested and check the outcomes.\nThe last item has two subparts. First, we must identify which methods in the class inheriting from unittest.TestCase are tests. You indicate that a method is to be run as a test by having the method name begin with \"test\".\nSecond, the \"test methods\" should communicate with the infrastructure the results of evaluating output from the code under test. This is done by using assert statements. For example, self.assertEqual takes two arguments. If these are objects for which == returns True, then the test passes. Otherwise, the test fails.",
"import unittest\n\n# Define a class in which the tests will run\nclass UnitTests(unittest.TestCase):\n\n # Each method in the class to execute a test\n def test_success(self):\n self.assertEqual(1, 2)\n \n def test_success1(self):\n self.assertTrue(1 == 1)\n\n def test_failure(self):\n self.assertLess(1, 2)\n \nsuite = unittest.TestLoader().loadTestsFromTestCase(UnitTests)\n_ = unittest.TextTestRunner().run(suite)\n\n\n# Function the handles test loading\n#def test_setup(argument ?):\n \n",
"Code for homework or your work should use test files. In this lesson, we'll show how to write test codes in a Jupyter notebook. This is done for pedidogical reasons. It is NOT not something you should do in practice, except as an intermediate exploratory approach. \nAs expected, the first test passes, but the second test fails.\nExercise\n\nRewrite the above one-shot test for entropy using the unittest infrastructure.",
"# Implementating a pattern test. Use functions in the test.\nimport unittest\n\n# Define a class in which the tests will run\nclass TestEntropy(unittest.TestCase):\n \n def test_equal_probability(self):\n def test(count):\n \"\"\"\n Invokes the entropy function for a number of values equal to count\n that have the same probability.\n :param int count:\n \"\"\"\n raise RuntimeError (\"Not implemented.\")\n #\n test(2)\n test(20)\n test(200)\n\n#test_setup(TestEntropy)\n\nimport unittest\n\n# Define a class in which the tests will run\nclass TestEntropy(unittest.TestCase):\n \"\"\"Write the full set of tests.\"\"\"",
"Testing For Exceptions\nEdge test cases often involves handling exceptions. One approach is to code this directly.",
"import unittest\n\n# Define a class in which the tests will run\nclass TestEntropy(unittest.TestCase):\n \n def test_invalid_probability(self):\n try:\n entropy([0.1, 0.5])\n self.assertTrue(False)\n except ValueError:\n self.assertTrue(True)\n \n#test_setup(TestEntropy)",
"unittest provides help with testing exceptions.",
"import unittest\n\n# Define a class in which the tests will run\nclass TestEntropy(unittest.TestCase):\n \n def test_invalid_probability(self):\n with self.assertRaises(ValueError):\n entropy([0.1, 0.5])\n \nsuite = unittest.TestLoader().loadTestsFromTestCase(TestEntropy)\n_ = unittest.TextTestRunner().run(suite)\n",
"Test Files\nAlthough I presented the elements of unittest in a notebook. your tests should be in a file. If the name of module with the code under test is foo.py, then the name of the test file should be test_foo.py.\nThe structure of the test file will be very similar to cells above. You will import unittest. You must also import the module with the code under test. Take a look at test_prime.py in this directory to see an example.\nDiscussion\nQuestion: What tests would you write for a plotting function?\nTest Driven Development\nStart by writing the tests. Then write the code.\nWe illustrate this by considering a function geomean that takes a list of numbers as input and produces the geometric mean on output.",
"import unittest\n\n# Define a class in which the tests will run\nclass TestEntryopy(unittest.TestCase):\n \n def test_oneshot(self):\n self.assertEqual(geomean([1,1]), 1)\n \n def test_oneshot2(self):\n self.assertEqual(geomean([3, 3, 3]), 3)\n \n#test_setup(TestGeomean)\n\n#def geomean(argument?):\n# return ?",
"Other infrastructures\n\npytest\nnose\nUse binary functions that being with \"test\"\n\nReferences\nhttps://www.youtube.com/watch?v=GEqM9uJi64Q (Pydata 2015)\nhttps://www.youtube.com/watch?v=yACtdj1_IxE (Pycon 2017)\nThe first talk mentions some packages:\nengarde - https://github.com/TomAugspurger/engarde\nHypothesis - https://hypothesis.readthedocs.io/en/latest/\nFeature Forge - https://github.com/machinalis/featureforge\nDetlef Nauck talk: \nhttp://ukkdd.org.uk/2017/info/talks/nauck.pdf\nHe also had a list of R tools but I could not find the slides form the talk I saw.\nTest Driven Data Analysis:\nhttps://www.youtube.com/watch?v=TGwZnZYg0jw\nProfiling for Pandas:\nhttps://github.com/pandas-profiling/pandas-profiling"
] | [
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
mathemage/h2o-3 | h2o-py/demos/H2O_tutorial_breast_cancer_classification.ipynb | apache-2.0 | [
"H2O Tutorial: Breast Cancer Classification\nAuthor: Erin LeDell\nContact: [email protected]\nThis tutorial steps through a quick introduction to H2O's Python API. The goal of this tutorial is to introduce through a complete example H2O's capabilities from Python. Also, to help those that are accustomed to Scikit Learn and Pandas, the demo will be specific call outs for differences between H2O and those packages; this is intended to help anyone that needs to do machine learning on really Big Data make the transition. It is not meant to be a tutorial on machine learning or algorithms.\nDetailed documentation about H2O's and the Python API is available at http://docs.h2o.ai.\nInstall H2O in Python\nPrerequisites\nThis tutorial assumes you have Python 2.7 installed. The h2o Python package has a few dependencies which can be installed using pip. The packages that are required are (which also have their own dependencies):\nbash\npip install requests\npip install tabulate\npip install scikit-learn\nIf you have any problems (for example, installing the scikit-learn package), check out this page for tips.\nInstall h2o\nOnce the dependencies are installed, you can install H2O. We will use the latest stable version of the h2o package, which is called \"Tibshirani-3.\" The installation instructions are on the \"Install in Python\" tab on this page.\n```bash\nThe following command removes the H2O module for Python (if it already exists).\npip uninstall h2o\nNext, use pip to install this version of the H2O Python module.\npip install http://h2o-release.s3.amazonaws.com/h2o/rel-tibshirani/3/Python/h2o-3.6.0.3-py2.py3-none-any.whl\n```\nStart up an H2O cluster\nIn a Python terminal, we can import the h2o package and start up an H2O cluster.",
"import h2o\n\n# Start an H2O Cluster on your local machine\nh2o.init()",
"If you already have an H2O cluster running that you'd like to connect to (for example, in a multi-node Hadoop environment), then you can specify the IP and port of that cluster as follows:",
"# This will not actually do anything since it's a fake IP address\n# h2o.init(ip=\"123.45.67.89\", port=54321)",
"Download Data\nThe following code downloads a copy of the Wisconsin Diagnostic Breast Cancer dataset.\nWe can import the data directly into H2O using the Python API.",
"csv_url = \"https://h2o-public-test-data.s3.amazonaws.com/smalldata/wisc/wisc-diag-breast-cancer-shuffled.csv\"\ndata = h2o.import_file(csv_url)",
"Explore Data\nOnce we have loaded the data, let's take a quick look. First the dimension of the frame:",
"data.shape\n",
"Now let's take a look at the top of the frame:",
"data.head()",
"The first two columns contain an ID and the resposne. The \"diagnosis\" column is the response. Let's take a look at the column names. The data contains derived features from the medical images of the tumors.",
"data.columns",
"To select a subset of the columns to look at, typical Pandas indexing applies:",
"columns = [\"id\", \"diagnosis\", \"area_mean\"]\ndata[columns].head()",
"Now let's select a single column, for example -- the response column, and look at the data more closely:",
"data['diagnosis']",
"It looks like a binary response, but let's validate that assumption:",
"data['diagnosis'].unique()\n\ndata['diagnosis'].nlevels()",
"We can query the categorical \"levels\" as well ('B' and 'M' stand for \"Benign\" and \"Malignant\" diagnosis):",
"data['diagnosis'].levels()",
"Since \"diagnosis\" column is the response we would like to predict, we may want to check if there are any missing values, so let's look for NAs. To figure out which, if any, values are missing, we can use the isna method on the diagnosis column. The columns in an H2O Frame are also H2O Frames themselves, so all the methods that apply to a Frame also apply to a single column.",
"data.isna()\n\ndata['diagnosis'].isna()",
"The isna method doesn't directly answer the question, \"Does the diagnosis column contain any NAs?\", rather it returns a 0 if that cell is not missing (Is NA? FALSE == 0) and a 1 if it is missing (Is NA? TRUE == 1). So if there are no missing values, then summing over the whole column should produce a summand equal to 0.0. Let's take a look:",
"data['diagnosis'].isna().sum()",
"Great, no missing labels. \nOut of curiosity, let's see if there is any missing data in this frame:",
"data.isna().sum()",
"The next thing I may wonder about in a binary classification problem is the distribution of the response in the training data. Is one of the two outcomes under-represented in the training set? Many real datasets have what's called an \"imbalanace\" problem, where one of the classes has far fewer training examples than the other class. Let's take a look at the distribution, both visually and numerically.",
"# TO DO: Insert a bar chart or something showing the proportion of M to B in the response.\n\n\ndata['diagnosis'].table()",
"Ok, the data is not exactly evenly distributed between the two classes -- there are almost twice as many Benign samples as there are Malicious samples. However, this level of imbalance shouldn't be much of an issue for the machine learning algos. (We will revisit this later in the modeling section below).",
"n = data.shape[0] # Total number of training samples\ndata['diagnosis'].table()['Count']/n",
"Machine Learning in H2O\nWe will do a quick demo of the H2O software -- trying to predict malignant tumors using various machine learning algorithms.\nSpecify the predictor set and response\nThe response, y, is the 'diagnosis' column, and the predictors, x, are all the columns aside from the first two columns ('id' and 'diagnosis').",
"y = 'diagnosis'\n\nx = data.columns\ndel x[0:1]\nx",
"Split H2O Frame into a train and test set",
"train, test = data.split_frame(ratios=[0.75], seed=1)\n\ntrain.shape\n\n\ntest.shape",
"Train and Test a GBM model",
"# Import H2O GBM:\nfrom h2o.estimators.gbm import H2OGradientBoostingEstimator\n",
"We first create a model object of class, \"H2OGradientBoostingEstimator\". This does not actually do any training, it just sets the model up for training by specifying model parameters.",
"model = H2OGradientBoostingEstimator(distribution='bernoulli',\n ntrees=100,\n max_depth=4,\n learn_rate=0.1)",
"The model object, like all H2O estimator objects, has a train method, which will actually perform model training. At this step we specify the training and (optionally) a validation set, along with the response and predictor variables.",
"model.train(x=x, y=y, training_frame=train, validation_frame=test)",
"Inspect Model\nThe type of results shown when you print a model, are determined by the following:\n- Model class of the estimator (e.g. GBM, RF, GLM, DL)\n- The type of machine learning problem (e.g. binary classification, multiclass classification, regression)\n- The data you specify (e.g. training_frame only, training_frame and validation_frame, or training_frame and nfolds)\nBelow, we see a GBM Model Summary, as well as training and validation metrics since we supplied a validation_frame. Since this a binary classification task, we are shown the relevant performance metrics, which inclues: MSE, R^2, LogLoss, AUC and Gini. Also, we are shown a Confusion Matrix, where the threshold for classification is chosen automatically (by H2O) as the threshold which maximizes the F1 score.\nThe scoring history is also printed, which shows the performance metrics over some increment such as \"number of trees\" in the case of GBM and RF.\nLastly, for tree-based methods (GBM and RF), we also print variable importance.",
"print(model)",
"Model Performance on a Test Set\nOnce a model has been trained, you can also use it to make predictions on a test set. In the case above, we passed the test set as the validation_frame in training, so we have technically already created test set predictions and performance. \nHowever, when performing model selection over a variety of model parameters, it is common for users to break their dataset into three pieces: Training, Validation and Test.\nAfter training a variety of models using different parameters (and evaluating them on a validation set), the user may choose a single model and then evaluate model performance on a separate test set. This is when the model_performance method, shown below, is most useful.",
"perf = model.model_performance(test)\nperf.auc()",
"Cross-validated Performance\nTo perform k-fold cross-validation, you use the same code as above, but you specify nfolds as an integer greater than 1, or add a \"fold_column\" to your H2O Frame which indicates a fold ID for each row.\nUnless you have a specific reason to manually assign the observations to folds, you will find it easiest to simply use the nfolds argument.\nWhen performing cross-validation, you can still pass a validation_frame, but you can also choose to use the original dataset that contains all the rows. We will cross-validate a model below using the original H2O Frame which we call data.",
"cvmodel = H2OGradientBoostingEstimator(distribution='bernoulli',\n ntrees=100,\n max_depth=4,\n learn_rate=0.1,\n nfolds=5)\n\ncvmodel.train(x=x, y=y, training_frame=data)\n",
"Grid Search\nOne way of evaluting models with different parameters is to perform a grid search over a set of parameter values. For example, in GBM, here are three model parameters that may be useful to search over:\n- ntrees: Number of trees\n- max_depth: Maximum depth of a tree\n- learn_rate: Learning rate in the GBM\nWe will define a grid as follows:",
"ntrees_opt = [5,50,100]\nmax_depth_opt = [2,3,5]\nlearn_rate_opt = [0.1,0.2]\n\nhyper_params = {'ntrees': ntrees_opt, \n 'max_depth': max_depth_opt,\n 'learn_rate': learn_rate_opt}",
"Define an \"H2OGridSearch\" object by specifying the algorithm (GBM) and the hyper parameters:",
"from h2o.grid.grid_search import H2OGridSearch\n\ngs = H2OGridSearch(H2OGradientBoostingEstimator, hyper_params = hyper_params)",
"An \"H2OGridSearch\" object also has a train method, which is used to train all the models in the grid.",
"gs.train(x=x, y=y, training_frame=train, validation_frame=test)",
"Compare Models",
"print(gs)\n\n# print out the auc for all of the models\nfor g in gs:\n print(g.model_id + \" auc: \" + str(g.auc()))\n\n#TO DO: Compare grid search models"
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
YuriyGuts/kaggle-quora-question-pairs | notebooks/feature-wm-intersect.ipynb | mit | [
"Feature: Intersections Weighted by Word Match\nQuestion intersections weighted by word match ratio (based on the kernel by @skihikingkevin).\nImports\nThis utility package imports numpy, pandas, matplotlib and a helper kg module into the root namespace.",
"from pygoose import *\n\nfrom collections import defaultdict\n\nimport seaborn as sns\n\nimport nltk\n\nnltk.download('stopwords')",
"Config\nAutomatically discover the paths to various data folders and compose the project structure.",
"project = kg.Project.discover()",
"Identifier for storing these features on disk and referring to them later.",
"feature_list_id = 'wm_intersect'",
"Load Data\nOriginal question datasets.",
"df_train = pd.read_csv(project.data_dir + 'train.csv').fillna('none')\ndf_test = pd.read_csv(project.data_dir + 'test.csv').fillna('none')",
"Build features",
"df_all_pairs = pd.concat([\n df_train[['question1', 'question2']],\n df_test[['question1', 'question2']]\n], axis=0).reset_index(drop='index')\n\nstops = set(nltk.corpus.stopwords.words('english'))\n\ndef word_match_share(pair):\n q1 = str(pair[0]).lower().split()\n q2 = str(pair[1]).lower().split()\n q1words = {}\n q2words = {}\n \n for word in q1:\n if word not in stops:\n q1words[word] = 1\n for word in q2:\n if word not in stops:\n q2words[word] = 1\n \n if len(q1words) == 0 or len(q2words) == 0:\n # The computer-generated chaff includes a few questions that are nothing but stopwords\n return 0\n \n shared_words_in_q1 = [w for w in q1words.keys() if w in q2words]\n shared_words_in_q2 = [w for w in q2words.keys() if w in q1words]\n R = (len(shared_words_in_q1) + len(shared_words_in_q2)) / (len(q1words) + len(q2words))\n\n return R\n\nwms = kg.jobs.map_batch_parallel(\n df_all_pairs[['question1', 'question2']].as_matrix(),\n item_mapper=word_match_share,\n batch_size=1000,\n)\n\nq_dict = defaultdict(dict)\nfor i in progressbar(range(len(wms))):\n q_dict[df_all_pairs.question1[i]][df_all_pairs.question2[i]] = wms[i]\n q_dict[df_all_pairs.question2[i]][df_all_pairs.question1[i]] = wms[i]\n\ndef q1_q2_intersect(row):\n return len(set(q_dict[row['question1']]).intersection(set(q_dict[row['question2']])))\n\ndef q1_q2_wm_ratio(row):\n q1 = q_dict[row['question1']]\n q2 = q_dict[row['question2']]\n \n inter_keys = set(q1.keys()).intersection(set(q2.keys()))\n if len(inter_keys) == 0:\n return 0\n \n inter_wm = 0\n total_wm = 0\n \n for q, wm in q1.items():\n if q in inter_keys:\n inter_wm += wm\n total_wm += wm\n \n for q, wm in q2.items():\n if q in inter_keys:\n inter_wm += wm\n total_wm += wm\n \n if total_wm == 0:\n return 0\n \n return inter_wm / total_wm\n\ndf_train['q1_q2_wm_ratio'] = df_train.apply(q1_q2_wm_ratio, axis=1, raw=True)\ndf_test['q1_q2_wm_ratio'] = df_test.apply(q1_q2_wm_ratio, axis=1, raw=True)\n\ndf_train['q1_q2_intersect'] = df_train.apply(q1_q2_intersect, axis=1, raw=True)\ndf_test['q1_q2_intersect'] = df_test.apply(q1_q2_intersect, axis=1, raw=True)",
"Visualize",
"plt.figure(figsize=(12, 6))\n\nplt.subplot(1, 2, 1)\nintersect_counts = df_train.q1_q2_intersect.value_counts()\nsns.barplot(intersect_counts.index[:20], intersect_counts.values[:20])\n\nplt.subplot(1, 2, 2)\ndf_train['q1_q2_wm_ratio'].plot.hist()\n\nplt.figure(figsize=(12, 6))\n\nplt.subplot(1, 2, 1)\nsns.violinplot(x='is_duplicate', y='q1_q2_wm_ratio', data=df_train)\n\nplt.subplot(1, 2, 2)\nsns.violinplot(x='is_duplicate', y='q1_q2_intersect', data=df_train)\n\ndf_train.plot.scatter(x='q1_q2_intersect', y='q1_q2_wm_ratio', figsize=(12, 6))\nprint(df_train[['q1_q2_intersect', 'q1_q2_wm_ratio']].corr())",
"Build final features",
"columns_to_keep = [\n 'q1_q2_intersect',\n 'q1_q2_wm_ratio',\n]\n\nX_train = df_train[columns_to_keep].values\nX_test = df_test[columns_to_keep].values",
"Save features",
"feature_names = columns_to_keep\n\nproject.save_features(X_train, X_test, feature_names, feature_list_id)"
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
vipmunot/Data-Science-Course | Data Visualization/Project/predictive modal - xgboost/kobe_sim_xgboost.ipynb | mit | [
"Loading necessary library",
"import numpy as np\nimport pandas as pd\nfrom sklearn import preprocessing\nfrom sklearn import metrics\nfrom sklearn.metrics import accuracy_score\nfrom sklearn.ensemble import AdaBoostClassifier\nfrom sklearn.neighbors import KNeighborsClassifier\nimport xgboost as xgb\nimport numpy as np",
"Loading data\ndeleting irrelevant features",
"kobe = pd.read_csv('data.csv', sep=',') \nkobe= kobe[np.isfinite(kobe['shot_made_flag'])]\ndel kobe['lat']\ndel kobe['lon']\ndel kobe['game_id']\ndel kobe['team_id']\ndel kobe['team_name']\n\nkobe_2 = pd.read_csv('data.csv', sep=',') \nkobe_2= kobe_2[np.isfinite(kobe_2['shot_made_flag'])]\ndel kobe_2['lat']\ndel kobe_2['lon']\ndel kobe_2['game_id']\ndel kobe_2['team_id']\ndel kobe_2['team_name']\n",
"encoding catagorical features",
"mt_up = preprocessing.LabelEncoder()\nkobe.matchup = mt_up.fit_transform(kobe.matchup )\n#kobe_2.matchup = mt_up.fit_transform(kobe.matchup )\n\nopp = preprocessing.LabelEncoder()\nkobe.opponent = opp.fit_transform(kobe.opponent )\n#kobe_2.opponent = opp.fit_transform(kobe.opponent )\n\ndt = preprocessing.LabelEncoder()\nkobe.game_date = dt.fit_transform(kobe.game_date )\n#kobe_2.game_date = dt.fit_transform(kobe.game_date )\n\nat = preprocessing.LabelEncoder()\nkobe.action_type = at.fit_transform(kobe.action_type )\n#kobe_2.action_type = at.fit_transform(kobe.action_type )\n\ncst = preprocessing.LabelEncoder()\nkobe.combined_shot_type = cst.fit_transform(kobe.combined_shot_type )\n#kobe_2.combined_shot_type = cst.fit_transform(kobe.combined_shot_type )\n\nseson = preprocessing.LabelEncoder()\nkobe.season = seson.fit_transform(kobe.season )\n#kobe_2.season = seson.fit_transform(kobe.season )\n\nst = preprocessing.LabelEncoder()\nkobe.shot_type = st.fit_transform(kobe.shot_type )\n#kobe_2.shot_type = st.fit_transform(kobe.shot_type )\n\nsza = preprocessing.LabelEncoder()\nkobe.shot_zone_area = sza.fit_transform(kobe.shot_zone_area )\n#kobe_2.shot_zone_area = sza.fit_transform(kobe.shot_zone_area )\n\nszb = preprocessing.LabelEncoder()\nkobe.shot_zone_basic = szb.fit_transform(kobe.shot_zone_basic )\n#kobe_2.shot_zone_basic = szb.fit_transform(kobe.shot_zone_basic )\n\nszr = preprocessing.LabelEncoder()\nkobe.shot_zone_range = szr.fit_transform(kobe.shot_zone_range )\n#kobe_2.shot_zone_range = szr.fit_transform(kobe.shot_zone_range )\n\n\n",
"splitting data into test and train",
"from sklearn.cross_validation import train_test_split\n# Generate the training set. Set random_state to be able to replicate results.\ntrain = kobe.sample(frac=0.6, random_state=1)\ntrain_2 = kobe_2.sample(frac=0.6, random_state=1)\n# Select anything not in the training set and put it in the testing set.\ntest = kobe.loc[~kobe.index.isin(train.index)] \ntest_2 = kobe_2.loc[~kobe_2.index.isin(train_2.index)] ",
"seperating features and class in both test and train sets",
"columns = kobe.columns.tolist()\ncolumns = [c for c in columns if c not in [\"shot_made_flag\",\"team_id\",\"team_name\"]]\nkobe_train_x =train[columns]\nkobe_test_x =test[columns]\nkobe_train_y=train['shot_made_flag']\nkobe_test_y=test['shot_made_flag']\nprint(kobe_train_x.shape)\nprint(kobe_test_x.shape)\nprint(kobe_train_y.shape)\nprint(kobe_test_y.shape)",
"getting best parameters\ndo not run this section as the best set of parameters is already found",
"def optimization(depth, n_est,l_r):\n maxacc=0\n best_depth=0\n best_n_est=0\n best_l_r=0\n for i in range(1,depth):\n for j in n_est:\n for k in l_r: \n gbm = xgb.XGBClassifier(max_depth=i, n_estimators=j, learning_rate=k).fit(kobe_train_x, kobe_train_y)\n predicted = gbm.predict(kobe_test_x)\n key=str(i)+\"_\"+str(j)+\"_\"+str(k)\n accu=accuracy_score(kobe_test_y, predicted)\n if(accu>maxacc):\n maxacc=accu\n best_depth=i\n best_n_est=j\n best_l_r=k\n print(maxkey+\" \"+str(maxacc))\n return(best_depth,best_n_est,best_l_r)\n\nn_est=[5,10,20,50,100,150,200,250,300,350,400,450,500,550,600,650,700,750,800,850,900,950,1000]\ndepth=10\nl_r = [0.0001, 0.001, 0.01,0.05, 0.1, 0.2, 0.3]\nbest_depth,best_n_est,best_l_r=optimization(depth,n_est,l_r)",
"creating model with best parameter combination and reporting metrics",
"#hard coded the best features\ngbm = xgb.XGBClassifier(max_depth=4, n_estimators=600, learning_rate=0.01).fit(kobe_train_x, kobe_train_y) \npredicted = gbm.predict(kobe_test_x)\n# summarize the fit of the model\nprint(metrics.classification_report(kobe_test_y, predicted))\nprint(\"Confusion Matrix\")\nprint(metrics.confusion_matrix(kobe_test_y, predicted))\naccuracy=accuracy_score(kobe_test_y, predicted)\nprint(\"Accuracy: %.2f%%\" % (accuracy * 100.0))",
"creating a test file with predicted results to visualize",
"test_2['predicted']=predicted\ntest_2.to_csv(path_or_buf='test_with_predictions.csv', sep=',')\ntest_2.head(10)"
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
Dataweekends/pyladies_intro_to_data_science | Iris Flowers Workshop.ipynb | mit | [
"Separating Flowers\nThis notebook explores a classic Machine Learning Dataset: the Iris flower dataset\nTutorial goals\n\nExplore the dataset\nBuild a simple predictive modeling\nIterate and improve your score\n\nHow to follow along:\ngit clone https://github.com/dataweekends/pyladies_intro_to_data_science\n\ncd pyladies_intro_to_data_science\n\nipython notebook\n\nWe start by importing the necessary libraries:",
"import pandas as pd\nimport numpy as np\nimport matplotlib.pyplot as plt\n\n%matplotlib inline",
"1) Explore the dataset\nNumerical exploration\n\nLoad the csv file into memory using Pandas\nDescribe each attribute\nis it discrete?\nis it continuous?\nis it a number?\n\n\nIdentify the target\nCheck if any values are missing\n\nLoad the csv file into memory using Pandas",
"df = pd.read_csv('iris-2-classes.csv')",
"What's the content of df ?",
"df.iloc[[0,1,98,99]]",
"Describe each attribute (is it discrete? is it continuous? is it a number? is it text?)",
"df.info()",
"Quick stats on the features",
"df.describe()",
"Identify the target\nWhat are we trying to predict?\nah, yes... the type of Iris flower!",
"df['iris_type'].value_counts()",
"Check if any values are missing",
"df.info()",
"Mental notes so far:\n\nDataset contains 100 entries\n1 Target column (iris_type)\n4 Numerical Features\nNo missing values\n\nVisual exploration\n\nDistribution of Sepal Length, influence on target:",
"df[df['iris_type']=='virginica']['sepal_length_cm'].plot(kind='hist', bins = 10, range = (4,7),\n alpha = 0.3, color = 'b')\ndf[df['iris_type']=='versicolor']['sepal_length_cm'].plot(kind='hist', bins = 10, range = (4,7),\n alpha = 0.3, color = 'g')\nplt.title('Distribution of Sepal Length', size = '20')\nplt.xlabel('Sepal Length (cm)', size = '20')\nplt.ylabel('Number of flowers', size = '20')\nplt.legend(['Virginica', 'Versicolor'])\nplt.show()",
"Two features combined, scatter plot:",
"plt.scatter(df[df['iris_type']== 'virginica']['petal_length_cm'].values,\n df[df['iris_type']== 'virginica']['sepal_length_cm'].values, label = 'Virginica', c = 'b', s = 40)\nplt.scatter(df[df['iris_type']== 'versicolor']['petal_length_cm'].values,\n df[df['iris_type']== 'versicolor']['sepal_length_cm'].values, label = 'Versicolor', c = 'r', marker='s',s = 40)\nplt.legend(['virginica', 'versicolor'], loc = 2)\nplt.title('Iris Flowers', size = '20')\nplt.xlabel('Petal Length (cm)', size = '20')\nplt.ylabel('Sepal Length (cm)', size = '20')\nplt.show()",
"Ok, so, the flowers seem to have different characteristics\nLet's build a simple model to test that\nDefine a new target column called target like this:\n- if iris_type = 'virginica' ===> target = 1\n- otherwise target = 0",
"df['target'] = df['iris_type'].map({'virginica': 1, 'versicolor': 0})\n\nprint df[['iris_type', 'target']].head(2)\nprint\nprint df[['iris_type', 'target']].tail(2)",
"Define simplest model as benchmark\nThe simplest model is a model that predicts 0 for everybody, i.e. all versicolor.\nHow good is it?",
"df['target'].value_counts()",
"If I predict every flower is Versicolor, I'm correct 50% of the time\nWe need to do better than that\nDefine features (X) and target (y) variables",
"X = df[['sepal_length_cm', 'sepal_width_cm',\n 'petal_length_cm', 'petal_width_cm']]\ny = df['target']",
"Initialize a decision Decision Tree model",
"from sklearn.tree import DecisionTreeClassifier\n\nmodel = DecisionTreeClassifier(random_state=0)\nmodel ",
"Split the features and the target into a Train and a Test subsets.\nRatio should be 70/30",
"from sklearn.cross_validation import train_test_split\n\nX_train, X_test, y_train, y_test = train_test_split(X, y, \n test_size = 0.3, random_state=0)",
"Train the model",
"model.fit(X_train, y_train)",
"Calculate the model score",
"my_score = model.score(X_test, y_test)\n\nprint \"Classification Score: %0.2f\" % my_score",
"Print the confusion matrix",
"from sklearn.metrics import confusion_matrix\n\ny_pred = model.predict(X_test)\n\nprint \"\\n=======confusion matrix==========\"\nprint confusion_matrix(y_test, y_pred)",
"3) Iterate and improve\nStart from:\n> python iris_starter_script.py\n\nIt's a basic pipeline. How can you improve the score? Try:\n- Changing the model parameters\n\nUsing a different model\n\nNext Steps: try separating 3 classes instead of 2 (iris.csv provided)"
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
jArumugam/python-notes | P12Advanced Python Objects - Test.ipynb | mit | [
"Advanced Python Objects Test\nAdvanced Numbers\nProblem 1: Convert 1024 to binary and hexadecimal representation:",
"print hex(1024)",
"Problem 2: Round 5.23222 to two decimal places",
"print round(5.23222,2)",
"Advanced Strings\nProblem 3: Check if every letter in the string s is lower case",
"s = 'hello how are you Mary, are you feeling okay?'\n\nretVal = 1\nfor word in s.split(): \n print word\n for item in word:\n # print item\n if not item.islower():\n # print item\n print 'The string has Uppercase characters'\n retVal = 0\n break\nprint retVal\n\ns.islower()",
"Problem 4: How many times does the letter 'w' show up in the string below?",
"s = 'twywywtwywbwhsjhwuwshshwuwwwjdjdid'\ns.count('w')",
"Advanced Sets\nProblem 5: Find the elements in set1 that are not in set2:",
"set1 = {2,3,1,5,6,8}\nset2 = {3,1,7,5,6,8}\n\nset1.difference(set2)",
"Problem 6: Find all elements that are in either set:",
"set1.intersection(set2)",
"Advanced Dictionaries\nProblem 7: Create this dictionary:\n{0: 0, 1: 1, 2: 8, 3: 27, 4: 64}\n using dictionary comprehension.",
"{ val:val**3 for val in xrange(0,5)}",
"Advanced Lists\nProblem 8: Reverse the list below:",
"l = [1,2,3,4] \nl[::-1]",
"Problem 9: Sort the list below",
"l = [3,4,2,5,1]\nsorted(l)",
"Great Job!"
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
jegibbs/phys202-2015-work | assignments/assignment04/TheoryAndPracticeEx02.ipynb | mit | [
"Theory and Practice of Visualization Exercise 2\nImports",
"from IPython.display import Image",
"Violations of graphical excellence and integrity\nFind a data-focused visualization on one of the following websites that is a negative example of the principles that Tufte describes in The Visual Display of Quantitative Information.\n\nCNN\nFox News\nTime\n\nUpload the image for the visualization to this directory and display the image inline in this notebook.",
"# Add your filename and uncomment the following line:\nImage(filename='TheoryAndPracticeEx02graph.png')",
"Describe in detail the ways in which the visualization violates graphical integrity and excellence:\nLooking at this graph, I have no idea what it is trying to say; only one axis is labeled. According to the very small print at the bottom, it is showing \"GDP % quarterly change.\"\nThe title of the article is \"U.S. economy looks weaker, as GDP data is revised.\" I do not get that from this graph. In fact, it shows growth from last quarter.\nThis graph has so few data points that it cannot possibly have graphical integrity.\nIt clearly violates graphical excellence because it takes a lot to figure out what it is trying to say."
] | [
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
nikbearbrown/Deep_Learning | NEU/Sai_Raghuram_Kothapalli_DL/Creditcard Fraud detection using Autoencoders.ipynb | mit | [
"Credit Card fraud detection using Autoencoders\nIt's Sunday morning, it's quiet and you wake up with a big smile on your face. Today is going to be a great day! Except, your phone rings, rather \"internationally\". You pick it up slowly and hear something really bizarre - \"Bonjour, je suis Michele. Oops, sorry. I am Michele, your personal bank agent.\". What could possibly be so urgent for someone from Switzerland to call you at this hour? \"Did you authorize a transaction for $3,358.65 for 100 copies of Diablo 3?\" Immediately, you start thinking of ways to explain why you did that to your loved one. \"No, I didn't !?\". Michele's answer is quick and to the point - \"Thank you, we're on it\". Whew, that was close! But how did Michele knew that this transaction was suspicious? After all, you did order 10 new smartphones from that same bank account, last week - Michele didn't call then.\nAnnual global fraud losses reached 21.8 billion dollars in 2015, according to Nilson Report.\nProbably you feel very lucky if you are a fraud. About every 12 cents per $100 were stolen in the US during the same year. Our friend Michele might have a serious problem to solve here.\nHere, we will train an Autoencoder Neural Network (implemented in Keras) in unsupervised (or semi-supervised) fashion for Anomaly Detection in credit card transaction data. The trained model will be evaluated on pre-labeled and anonymized dataset.\nSetup",
"import pandas as pd\nimport numpy as np\nimport pickle\nimport matplotlib.pyplot as plt\nfrom scipy import stats\nimport tensorflow as tf\nimport seaborn as sns\nfrom pylab import rcParams\nfrom sklearn.model_selection import train_test_split\nfrom keras.models import Model, load_model\nfrom keras.layers import Input, Dense\nfrom keras.callbacks import ModelCheckpoint, TensorBoard\nfrom keras import regularizers\n\n%matplotlib inline\n\nsns.set(style='whitegrid', palette='muted', font_scale=1.5)\n\nrcParams['figure.figsize'] = 14, 8\n\nRANDOM_SEED = 42\nLABELS = [\"Normal\", \"Fraud\"]\n",
"Loading the data\nThe dataset we're going to use can be downloaded from Kaggle. It contains data about credit card transactions that occurred during a period of two days, with 492 frauds out of 284,807 transactions.\nAll variables in the dataset are numerical. The data has been transformed using PCA transformation(s) due to privacy reasons. The two features that haven't been changed are Time and Amount. Time contains the seconds elapsed between each transaction and the first transaction in the dataset.",
"df = pd.read_csv(\"creditcard.csv\")\n",
"Exploration",
"df.shape",
"31 columns, 2 of which are Time and Amount. The rest are output from the PCA transformation. Let's check for missing values:",
"df.isnull().values.any()\n\ncount_classes = pd.value_counts(df['Class'], sort = True)\ncount_classes.plot(kind = 'bar', rot=0)\nplt.title(\"Transaction class distribution\")\nplt.xticks(range(2), LABELS)\nplt.xlabel(\"Class\")\nplt.ylabel(\"Frequency\");\n",
"We have a highly imbalanced dataset on our hands. Normal transactions overwhelm the fraudulent ones by a large margin. Let's look at the two types of transactions:",
"frauds = df[df.Class == 1]\nnormal = df[df.Class == 0]\n\nfrauds.shape \n\nplt.hist(normal.Amount, bins = 100)\nplt.xlim([0,20000])\nplt.ylim([0,10000])\nplt.tight_layout()",
"Let's have a more graphical representation:",
"f, axes = plt.subplots(nrows = 2, ncols = 1, sharex = True)\n\naxes[0].hist(normal.Amount, bins = 100)\naxes[0].set_xlim([0,20000])\naxes[0].set_ylim([0,10000])\naxes[0].set_title('Normal')\n\n\naxes[1].hist(frauds.Amount, bins = 50)\naxes[1].set_xlim([0,10000])\naxes[1].set_ylim([0,200])\naxes[1].set_title('Frauds')\n",
"Autoencoders\nLets get started with autoencoders and we optimize the parameters of our Autoencoder model in such way that a special kind of error -Rreconstruction Error is minimized. In practice, the traditional squared error is often used:\n$$\\textstyle L(x,x') = ||\\, x - x'||^2$$\nPreparing the data\nFirst, let's drop the Time column (not going to use it) and use the scikit's StandardScaler on the Amount. The scaler removes the mean and scales the values to unit variance:",
"from sklearn.preprocessing import StandardScaler\n\ndata = df.drop(['Time'], axis=1)\n\ndata['Amount'] = StandardScaler().fit_transform(data['Amount'].values.reshape(-1, 1))\n",
"Training our Autoencoder is gonna be a bit different from what we are used to. Let's say you have a dataset containing a lot of non fraudulent transactions at hand. You want to detect any anomaly on new transactions. We will create this situation by training our model on the normal transactions, only. Reserving the correct class on the test set will give us a way to evaluate the performance of our model. We will reserve 20% of our data for testing:",
"X_train, X_test = train_test_split(data, test_size=0.2, random_state=RANDOM_SEED)\nX_train = X_train[X_train.Class == 0]\nX_train = X_train.drop(['Class'], axis=1)\n\ny_test = X_test['Class']\nX_test = X_test.drop(['Class'], axis=1)\n\nX_train = X_train.values\nX_test = X_test.values\n\n\nX_train.shape",
"Building the model\nOur Autoencoder uses 4 fully connected layers with 14, 7, 7 and 29 neurons respectively. The first two layers are used for our encoder, the last two go for the decoder. Additionally, L1 regularization will be used during training:",
"input_dim = X_train.shape[1]\nencoding_dim = 32\n\n\ninput_layer = Input(shape=(input_dim, ))\n\nencoder = Dense(encoding_dim, activation=\"relu\", \n activity_regularizer=regularizers.l1(10e-5))(input_layer)\nencoder = Dense(int(encoding_dim / 2), activation=\"sigmoid\")(encoder)\n\ndecoder = Dense(int(encoding_dim / 2), activation='sigmoid')(encoder)\ndecoder = Dense(input_dim, activation='relu')(decoder)\n\nautoencoder = Model(inputs=input_layer, outputs=decoder)",
"Let's train our model for 200 epochs with a batch size of 32 samples and save the best performing model to a file. The ModelCheckpoint provided by Keras is really handy for such tasks. Additionally, the training progress will be exported in a format that TensorBoard understands.",
"import h5py as h5py\n\n\n\nnb_epoch = 100\nbatch_size = 32\n\nautoencoder.compile(optimizer='adam', \n loss='mean_squared_error', \n metrics=['accuracy'])\n\ncheckpointer = ModelCheckpoint(filepath=\"model.h5\",\n verbose=0,\n save_best_only=True)\ntensorboard = TensorBoard(log_dir='./logs',\n histogram_freq=0,\n write_graph=True,\n write_images=True)\n\nhistory = autoencoder.fit(X_train, X_train,\n epochs=nb_epoch,\n batch_size=batch_size,\n shuffle=True,\n validation_data=(X_test, X_test),\n verbose=1, callbacks=[checkpointer, tensorboard]).history\n\nautoencoder = load_model('model.h5')",
"Evaluation",
"plt.plot(history['loss'])\nplt.plot(history['val_loss'])\nplt.title('model loss')\nplt.ylabel('loss')\nplt.xlabel('epoch')\nplt.legend(['train', 'test'], loc='upper right');",
"The reconstruction error on our training and test data seems to converge nicely. Is it low enough? Let's have a closer look at the error distribution:",
"predictions = autoencoder.predict(X_test)\n\n\nmse = np.mean(np.power(X_test - predictions, 2), axis=1)\nerror_df = pd.DataFrame({'reconstruction_error': mse,\n 'true_class': y_test})\nerror_df.describe()",
"Reconstruction error without fraud",
"fig = plt.figure()\nax = fig.add_subplot(111)\nnormal_error_df = error_df[(error_df['true_class']== 0) & (error_df['reconstruction_error'] < 10)]\n_ = ax.hist(normal_error_df.reconstruction_error.values, bins=10)",
"Reconstruction error with fraud",
"fig = plt.figure()\nax = fig.add_subplot(111)\nfraud_error_df = error_df[error_df['true_class'] == 1]\n_ = ax.hist(fraud_error_df.reconstruction_error.values, bins=10)",
"Prediction\nOur model is a bit different this time. It doesn't know how to predict new values. But we don't need that. In order to predict whether or not a new/unseen transaction is normal or fraudulent, we'll calculate the reconstruction error from the transaction data itself. If the error is larger than a predefined threshold, we'll mark it as a fraud (since our model should have a low error on normal transactions). Let's pick that value:",
"threshold = 2.9",
"And see how well we're dividing the two types of transactions:",
"groups = error_df.groupby('true_class')\nfig, ax = plt.subplots()\n\nfor name, group in groups:\n ax.plot(group.index, group.reconstruction_error, marker='o', ms=3.5, linestyle='',\n label= \"Fraud\" if name == 1 else \"Normal\")\nax.hlines(threshold, ax.get_xlim()[0], ax.get_xlim()[1], colors=\"r\", zorder=100, label='Threshold')\nax.legend()\nplt.title(\"Reconstruction error for different classes\")\nplt.ylabel(\"Reconstruction error\")\nplt.xlabel(\"Data point index\")\nplt.show();",
"That chart might be a bit deceiving. Let's have a look at the confusion matrix:",
"from sklearn.metrics import (confusion_matrix)\n\ny_pred = [1 if e > threshold else 0 for e in error_df.reconstruction_error.values]\nconf_matrix = confusion_matrix(error_df.true_class, y_pred)\n\nplt.figure(figsize=(12, 12))\nsns.heatmap(conf_matrix, xticklabels=LABELS, yticklabels=LABELS, annot=True, fmt=\"d\");\nplt.title(\"Confusion matrix\")\nplt.ylabel('True class')\nplt.xlabel('Predicted class')\nplt.show()",
"Conclusion\nWe've created a very simple Deep Autoencoder in Keras that can reconstruct what non fraudulent transactions looks like. Think about it, we gave a lot of one-class examples (normal transactions) to a model and it learned (somewhat) how to discriminate whether or not new examples belong to that same class. Isn't that cool? Our dataset was kind of magical, though. We really don't know what the original features look like.\nKeras gave us very clean and easy to use API to build a non-trivial Deep Autoencoder."
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
ES-DOC/esdoc-jupyterhub | notebooks/hammoz-consortium/cmip6/models/sandbox-3/land.ipynb | gpl-3.0 | [
"ES-DOC CMIP6 Model Properties - Land\nMIP Era: CMIP6\nInstitute: HAMMOZ-CONSORTIUM\nSource ID: SANDBOX-3\nTopic: Land\nSub-Topics: Soil, Snow, Vegetation, Energy Balance, Carbon Cycle, Nitrogen Cycle, River Routing, Lakes. \nProperties: 154 (96 required)\nModel descriptions: Model description details\nInitialized From: -- \nNotebook Help: Goto notebook help page\nNotebook Initialised: 2018-02-15 16:54:03\nDocument Setup\nIMPORTANT: to be executed each time you run the notebook",
"# DO NOT EDIT ! \nfrom pyesdoc.ipython.model_topic import NotebookOutput \n\n# DO NOT EDIT ! \nDOC = NotebookOutput('cmip6', 'hammoz-consortium', 'sandbox-3', 'land')",
"Document Authors\nSet document authors",
"# Set as follows: DOC.set_author(\"name\", \"email\") \n# TODO - please enter value(s)",
"Document Contributors\nSpecify document contributors",
"# Set as follows: DOC.set_contributor(\"name\", \"email\") \n# TODO - please enter value(s)",
"Document Publication\nSpecify document publication status",
"# Set publication status: \n# 0=do not publish, 1=publish. \nDOC.set_publication_status(0)",
"Document Table of Contents\n1. Key Properties\n2. Key Properties --> Conservation Properties\n3. Key Properties --> Timestepping Framework\n4. Key Properties --> Software Properties\n5. Grid\n6. Grid --> Horizontal\n7. Grid --> Vertical\n8. Soil\n9. Soil --> Soil Map\n10. Soil --> Snow Free Albedo\n11. Soil --> Hydrology\n12. Soil --> Hydrology --> Freezing\n13. Soil --> Hydrology --> Drainage\n14. Soil --> Heat Treatment\n15. Snow\n16. Snow --> Snow Albedo\n17. Vegetation\n18. Energy Balance\n19. Carbon Cycle\n20. Carbon Cycle --> Vegetation\n21. Carbon Cycle --> Vegetation --> Photosynthesis\n22. Carbon Cycle --> Vegetation --> Autotrophic Respiration\n23. Carbon Cycle --> Vegetation --> Allocation\n24. Carbon Cycle --> Vegetation --> Phenology\n25. Carbon Cycle --> Vegetation --> Mortality\n26. Carbon Cycle --> Litter\n27. Carbon Cycle --> Soil\n28. Carbon Cycle --> Permafrost Carbon\n29. Nitrogen Cycle\n30. River Routing\n31. River Routing --> Oceanic Discharge\n32. Lakes\n33. Lakes --> Method\n34. Lakes --> Wetlands \n1. Key Properties\nLand surface key properties\n1.1. Model Overview\nIs Required: TRUE Type: STRING Cardinality: 1.1\nOverview of land surface model.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.key_properties.model_overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"1.2. Model Name\nIs Required: TRUE Type: STRING Cardinality: 1.1\nName of land surface model code (e.g. MOSES2.2)",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.key_properties.model_name') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"1.3. Description\nIs Required: TRUE Type: STRING Cardinality: 1.1\nGeneral description of the processes modelled (e.g. dymanic vegation, prognostic albedo, etc.)",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.key_properties.description') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"1.4. Land Atmosphere Flux Exchanges\nIs Required: FALSE Type: ENUM Cardinality: 0.N\nFluxes exchanged with the atmopshere.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.key_properties.land_atmosphere_flux_exchanges') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"water\" \n# \"energy\" \n# \"carbon\" \n# \"nitrogen\" \n# \"phospherous\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"1.5. Atmospheric Coupling Treatment\nIs Required: TRUE Type: STRING Cardinality: 1.1\nDescribe the treatment of land surface coupling with the Atmosphere model component, which may be different for different quantities (e.g. dust: semi-implicit, water vapour: explicit)",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.key_properties.atmospheric_coupling_treatment') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"1.6. Land Cover\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nTypes of land cover defined in the land surface model",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.key_properties.land_cover') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"bare soil\" \n# \"urban\" \n# \"lake\" \n# \"land ice\" \n# \"lake ice\" \n# \"vegetated\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"1.7. Land Cover Change\nIs Required: FALSE Type: STRING Cardinality: 0.1\nDescribe how land cover change is managed (e.g. the use of net or gross transitions)",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.key_properties.land_cover_change') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"1.8. Tiling\nIs Required: TRUE Type: STRING Cardinality: 1.1\nDescribe the general tiling procedure used in the land surface (if any). Include treatment of physiography, land/sea, (dynamic) vegetation coverage and orography/roughness",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.key_properties.tiling') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"2. Key Properties --> Conservation Properties\nTODO\n2.1. Energy\nIs Required: FALSE Type: STRING Cardinality: 0.1\nDescribe if/how energy is conserved globally and to what level (e.g. within X [units]/year)",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.key_properties.conservation_properties.energy') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"2.2. Water\nIs Required: FALSE Type: STRING Cardinality: 0.1\nDescribe if/how water is conserved globally and to what level (e.g. within X [units]/year)",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.key_properties.conservation_properties.water') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"2.3. Carbon\nIs Required: FALSE Type: STRING Cardinality: 0.1\nDescribe if/how carbon is conserved globally and to what level (e.g. within X [units]/year)",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.key_properties.conservation_properties.carbon') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"3. Key Properties --> Timestepping Framework\nTODO\n3.1. Timestep Dependent On Atmosphere\nIs Required: TRUE Type: BOOLEAN Cardinality: 1.1\nIs a time step dependent on the frequency of atmosphere coupling?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.key_properties.timestepping_framework.timestep_dependent_on_atmosphere') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n",
"3.2. Time Step\nIs Required: TRUE Type: INTEGER Cardinality: 1.1\nOverall timestep of land surface model (i.e. time between calls)",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.key_properties.timestepping_framework.time_step') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"3.3. Timestepping Method\nIs Required: TRUE Type: STRING Cardinality: 1.1\nGeneral description of time stepping method and associated time step(s)",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.key_properties.timestepping_framework.timestepping_method') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"4. Key Properties --> Software Properties\nSoftware properties of land surface code\n4.1. Repository\nIs Required: FALSE Type: STRING Cardinality: 0.1\nLocation of code for this component.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.key_properties.software_properties.repository') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"4.2. Code Version\nIs Required: FALSE Type: STRING Cardinality: 0.1\nCode version identifier.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.key_properties.software_properties.code_version') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"4.3. Code Languages\nIs Required: FALSE Type: STRING Cardinality: 0.N\nCode language(s).",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.key_properties.software_properties.code_languages') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"5. Grid\nLand surface grid\n5.1. Overview\nIs Required: TRUE Type: STRING Cardinality: 1.1\nOverview of the grid in the land surface",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.grid.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"6. Grid --> Horizontal\nThe horizontal grid in the land surface\n6.1. Description\nIs Required: TRUE Type: STRING Cardinality: 1.1\nDescribe the general structure of the horizontal grid (not including any tiling)",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.grid.horizontal.description') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"6.2. Matches Atmosphere Grid\nIs Required: TRUE Type: BOOLEAN Cardinality: 1.1\nDoes the horizontal grid match the atmosphere?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.grid.horizontal.matches_atmosphere_grid') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n",
"7. Grid --> Vertical\nThe vertical grid in the soil\n7.1. Description\nIs Required: TRUE Type: STRING Cardinality: 1.1\nDescribe the general structure of the vertical grid in the soil (not including any tiling)",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.grid.vertical.description') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"7.2. Total Depth\nIs Required: TRUE Type: INTEGER Cardinality: 1.1\nThe total depth of the soil (in metres)",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.grid.vertical.total_depth') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"8. Soil\nLand surface soil\n8.1. Overview\nIs Required: TRUE Type: STRING Cardinality: 1.1\nOverview of soil in the land surface",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"8.2. Heat Water Coupling\nIs Required: TRUE Type: STRING Cardinality: 1.1\nDescribe the coupling between heat and water in the soil",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.heat_water_coupling') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"8.3. Number Of Soil layers\nIs Required: TRUE Type: INTEGER Cardinality: 1.1\nThe number of soil layers",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.number_of_soil layers') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"8.4. Prognostic Variables\nIs Required: TRUE Type: STRING Cardinality: 1.1\nList the prognostic variables of the soil scheme",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.prognostic_variables') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"9. Soil --> Soil Map\nKey properties of the land surface soil map\n9.1. Description\nIs Required: TRUE Type: STRING Cardinality: 1.1\nGeneral description of soil map",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.soil_map.description') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"9.2. Structure\nIs Required: FALSE Type: STRING Cardinality: 0.1\nDescribe the soil structure map",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.soil_map.structure') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"9.3. Texture\nIs Required: FALSE Type: STRING Cardinality: 0.1\nDescribe the soil texture map",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.soil_map.texture') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"9.4. Organic Matter\nIs Required: FALSE Type: STRING Cardinality: 0.1\nDescribe the soil organic matter map",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.soil_map.organic_matter') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"9.5. Albedo\nIs Required: FALSE Type: STRING Cardinality: 0.1\nDescribe the soil albedo map",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.soil_map.albedo') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"9.6. Water Table\nIs Required: FALSE Type: STRING Cardinality: 0.1\nDescribe the soil water table map, if any",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.soil_map.water_table') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"9.7. Continuously Varying Soil Depth\nIs Required: TRUE Type: BOOLEAN Cardinality: 1.1\nDoes the soil properties vary continuously with depth?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.soil_map.continuously_varying_soil_depth') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n",
"9.8. Soil Depth\nIs Required: FALSE Type: STRING Cardinality: 0.1\nDescribe the soil depth map",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.soil_map.soil_depth') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"10. Soil --> Snow Free Albedo\nTODO\n10.1. Prognostic\nIs Required: TRUE Type: BOOLEAN Cardinality: 1.1\nIs snow free albedo prognostic?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.snow_free_albedo.prognostic') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n",
"10.2. Functions\nIs Required: FALSE Type: ENUM Cardinality: 0.N\nIf prognostic, describe the dependancies on snow free albedo calculations",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.snow_free_albedo.functions') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"vegetation type\" \n# \"soil humidity\" \n# \"vegetation state\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"10.3. Direct Diffuse\nIs Required: FALSE Type: ENUM Cardinality: 0.1\nIf prognostic, describe the distinction between direct and diffuse albedo",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.snow_free_albedo.direct_diffuse') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"distinction between direct and diffuse albedo\" \n# \"no distinction between direct and diffuse albedo\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"10.4. Number Of Wavelength Bands\nIs Required: FALSE Type: INTEGER Cardinality: 0.1\nIf prognostic, enter the number of wavelength bands used",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.snow_free_albedo.number_of_wavelength_bands') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"11. Soil --> Hydrology\nKey properties of the land surface soil hydrology\n11.1. Description\nIs Required: TRUE Type: STRING Cardinality: 1.1\nGeneral description of the soil hydrological model",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.hydrology.description') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"11.2. Time Step\nIs Required: TRUE Type: INTEGER Cardinality: 1.1\nTime step of river soil hydrology in seconds",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.hydrology.time_step') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"11.3. Tiling\nIs Required: FALSE Type: STRING Cardinality: 0.1\nDescribe the soil hydrology tiling, if any.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.hydrology.tiling') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"11.4. Vertical Discretisation\nIs Required: TRUE Type: STRING Cardinality: 1.1\nDescribe the typical vertical discretisation",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.hydrology.vertical_discretisation') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"11.5. Number Of Ground Water Layers\nIs Required: TRUE Type: INTEGER Cardinality: 1.1\nThe number of soil layers that may contain water",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.hydrology.number_of_ground_water_layers') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"11.6. Lateral Connectivity\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nDescribe the lateral connectivity between tiles",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.hydrology.lateral_connectivity') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"perfect connectivity\" \n# \"Darcian flow\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"11.7. Method\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nThe hydrological dynamics scheme in the land surface model",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.hydrology.method') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Bucket\" \n# \"Force-restore\" \n# \"Choisnel\" \n# \"Explicit diffusion\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"12. Soil --> Hydrology --> Freezing\nTODO\n12.1. Number Of Ground Ice Layers\nIs Required: TRUE Type: INTEGER Cardinality: 1.1\nHow many soil layers may contain ground ice",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.hydrology.freezing.number_of_ground_ice_layers') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"12.2. Ice Storage Method\nIs Required: TRUE Type: STRING Cardinality: 1.1\nDescribe the method of ice storage",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.hydrology.freezing.ice_storage_method') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"12.3. Permafrost\nIs Required: TRUE Type: STRING Cardinality: 1.1\nDescribe the treatment of permafrost, if any, within the land surface scheme",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.hydrology.freezing.permafrost') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"13. Soil --> Hydrology --> Drainage\nTODO\n13.1. Description\nIs Required: TRUE Type: STRING Cardinality: 1.1\nGeneral describe how drainage is included in the land surface scheme",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.hydrology.drainage.description') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"13.2. Types\nIs Required: FALSE Type: ENUM Cardinality: 0.N\nDifferent types of runoff represented by the land surface model",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.hydrology.drainage.types') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Gravity drainage\" \n# \"Horton mechanism\" \n# \"topmodel-based\" \n# \"Dunne mechanism\" \n# \"Lateral subsurface flow\" \n# \"Baseflow from groundwater\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"14. Soil --> Heat Treatment\nTODO\n14.1. Description\nIs Required: TRUE Type: STRING Cardinality: 1.1\nGeneral description of how heat treatment properties are defined",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.heat_treatment.description') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"14.2. Time Step\nIs Required: TRUE Type: INTEGER Cardinality: 1.1\nTime step of soil heat scheme in seconds",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.heat_treatment.time_step') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"14.3. Tiling\nIs Required: FALSE Type: STRING Cardinality: 0.1\nDescribe the soil heat treatment tiling, if any.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.heat_treatment.tiling') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"14.4. Vertical Discretisation\nIs Required: TRUE Type: STRING Cardinality: 1.1\nDescribe the typical vertical discretisation",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.heat_treatment.vertical_discretisation') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"14.5. Heat Storage\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nSpecify the method of heat storage",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.heat_treatment.heat_storage') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Force-restore\" \n# \"Explicit diffusion\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"14.6. Processes\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nDescribe processes included in the treatment of soil heat",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.heat_treatment.processes') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"soil moisture freeze-thaw\" \n# \"coupling with snow temperature\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"15. Snow\nLand surface snow\n15.1. Overview\nIs Required: TRUE Type: STRING Cardinality: 1.1\nOverview of snow in the land surface",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.snow.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"15.2. Tiling\nIs Required: FALSE Type: STRING Cardinality: 0.1\nDescribe the snow tiling, if any.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.snow.tiling') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"15.3. Number Of Snow Layers\nIs Required: TRUE Type: INTEGER Cardinality: 1.1\nThe number of snow levels used in the land surface scheme/model",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.snow.number_of_snow_layers') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"15.4. Density\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nDescription of the treatment of snow density",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.snow.density') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"prognostic\" \n# \"constant\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"15.5. Water Equivalent\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nDescription of the treatment of the snow water equivalent",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.snow.water_equivalent') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"prognostic\" \n# \"diagnostic\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"15.6. Heat Content\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nDescription of the treatment of the heat content of snow",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.snow.heat_content') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"prognostic\" \n# \"diagnostic\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"15.7. Temperature\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nDescription of the treatment of snow temperature",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.snow.temperature') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"prognostic\" \n# \"diagnostic\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"15.8. Liquid Water Content\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nDescription of the treatment of snow liquid water",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.snow.liquid_water_content') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"prognostic\" \n# \"diagnostic\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"15.9. Snow Cover Fractions\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nSpecify cover fractions used in the surface snow scheme",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.snow.snow_cover_fractions') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"ground snow fraction\" \n# \"vegetation snow fraction\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"15.10. Processes\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nSnow related processes in the land surface scheme",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.snow.processes') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"snow interception\" \n# \"snow melting\" \n# \"snow freezing\" \n# \"blowing snow\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"15.11. Prognostic Variables\nIs Required: TRUE Type: STRING Cardinality: 1.1\nList the prognostic variables of the snow scheme",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.snow.prognostic_variables') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"16. Snow --> Snow Albedo\nTODO\n16.1. Type\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nDescribe the treatment of snow-covered land albedo",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.snow.snow_albedo.type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"prognostic\" \n# \"prescribed\" \n# \"constant\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"16.2. Functions\nIs Required: FALSE Type: ENUM Cardinality: 0.N\n*If prognostic, *",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.snow.snow_albedo.functions') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"vegetation type\" \n# \"snow age\" \n# \"snow density\" \n# \"snow grain type\" \n# \"aerosol deposition\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"17. Vegetation\nLand surface vegetation\n17.1. Overview\nIs Required: TRUE Type: STRING Cardinality: 1.1\nOverview of vegetation in the land surface",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.vegetation.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"17.2. Time Step\nIs Required: TRUE Type: INTEGER Cardinality: 1.1\nTime step of vegetation scheme in seconds",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.vegetation.time_step') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"17.3. Dynamic Vegetation\nIs Required: TRUE Type: BOOLEAN Cardinality: 1.1\nIs there dynamic evolution of vegetation?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.vegetation.dynamic_vegetation') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n",
"17.4. Tiling\nIs Required: FALSE Type: STRING Cardinality: 0.1\nDescribe the vegetation tiling, if any.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.vegetation.tiling') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"17.5. Vegetation Representation\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nVegetation classification used",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.vegetation.vegetation_representation') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"vegetation types\" \n# \"biome types\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"17.6. Vegetation Types\nIs Required: FALSE Type: ENUM Cardinality: 0.N\nList of vegetation types in the classification, if any",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.vegetation.vegetation_types') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"broadleaf tree\" \n# \"needleleaf tree\" \n# \"C3 grass\" \n# \"C4 grass\" \n# \"vegetated\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"17.7. Biome Types\nIs Required: FALSE Type: ENUM Cardinality: 0.N\nList of biome types in the classification, if any",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.vegetation.biome_types') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"evergreen needleleaf forest\" \n# \"evergreen broadleaf forest\" \n# \"deciduous needleleaf forest\" \n# \"deciduous broadleaf forest\" \n# \"mixed forest\" \n# \"woodland\" \n# \"wooded grassland\" \n# \"closed shrubland\" \n# \"opne shrubland\" \n# \"grassland\" \n# \"cropland\" \n# \"wetlands\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"17.8. Vegetation Time Variation\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nHow the vegetation fractions in each tile are varying with time",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.vegetation.vegetation_time_variation') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"fixed (not varying)\" \n# \"prescribed (varying from files)\" \n# \"dynamical (varying from simulation)\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"17.9. Vegetation Map\nIs Required: FALSE Type: STRING Cardinality: 0.1\nIf vegetation fractions are not dynamically updated , describe the vegetation map used (common name and reference, if possible)",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.vegetation.vegetation_map') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"17.10. Interception\nIs Required: TRUE Type: BOOLEAN Cardinality: 1.1\nIs vegetation interception of rainwater represented?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.vegetation.interception') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n",
"17.11. Phenology\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nTreatment of vegetation phenology",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.vegetation.phenology') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"prognostic\" \n# \"diagnostic (vegetation map)\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"17.12. Phenology Description\nIs Required: FALSE Type: STRING Cardinality: 0.1\nGeneral description of the treatment of vegetation phenology",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.vegetation.phenology_description') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"17.13. Leaf Area Index\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nTreatment of vegetation leaf area index",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.vegetation.leaf_area_index') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"prescribed\" \n# \"prognostic\" \n# \"diagnostic\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"17.14. Leaf Area Index Description\nIs Required: FALSE Type: STRING Cardinality: 0.1\nGeneral description of the treatment of leaf area index",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.vegetation.leaf_area_index_description') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"17.15. Biomass\nIs Required: TRUE Type: ENUM Cardinality: 1.1\n*Treatment of vegetation biomass *",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.vegetation.biomass') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"prognostic\" \n# \"diagnostic\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"17.16. Biomass Description\nIs Required: FALSE Type: STRING Cardinality: 0.1\nGeneral description of the treatment of vegetation biomass",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.vegetation.biomass_description') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"17.17. Biogeography\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nTreatment of vegetation biogeography",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.vegetation.biogeography') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"prognostic\" \n# \"diagnostic\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"17.18. Biogeography Description\nIs Required: FALSE Type: STRING Cardinality: 0.1\nGeneral description of the treatment of vegetation biogeography",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.vegetation.biogeography_description') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"17.19. Stomatal Resistance\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nSpecify what the vegetation stomatal resistance depends on",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.vegetation.stomatal_resistance') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"light\" \n# \"temperature\" \n# \"water availability\" \n# \"CO2\" \n# \"O3\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"17.20. Stomatal Resistance Description\nIs Required: FALSE Type: STRING Cardinality: 0.1\nGeneral description of the treatment of vegetation stomatal resistance",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.vegetation.stomatal_resistance_description') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"17.21. Prognostic Variables\nIs Required: TRUE Type: STRING Cardinality: 1.1\nList the prognostic variables of the vegetation scheme",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.vegetation.prognostic_variables') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"18. Energy Balance\nLand surface energy balance\n18.1. Overview\nIs Required: TRUE Type: STRING Cardinality: 1.1\nOverview of energy balance in land surface",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.energy_balance.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"18.2. Tiling\nIs Required: FALSE Type: STRING Cardinality: 0.1\nDescribe the energy balance tiling, if any.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.energy_balance.tiling') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"18.3. Number Of Surface Temperatures\nIs Required: TRUE Type: INTEGER Cardinality: 1.1\nThe maximum number of distinct surface temperatures in a grid cell (for example, each subgrid tile may have its own temperature)",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.energy_balance.number_of_surface_temperatures') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"18.4. Evaporation\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nSpecify the formulation method for land surface evaporation, from soil and vegetation",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.energy_balance.evaporation') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"alpha\" \n# \"beta\" \n# \"combined\" \n# \"Monteith potential evaporation\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"18.5. Processes\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nDescribe which processes are included in the energy balance scheme",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.energy_balance.processes') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"transpiration\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"19. Carbon Cycle\nLand surface carbon cycle\n19.1. Overview\nIs Required: TRUE Type: STRING Cardinality: 1.1\nOverview of carbon cycle in land surface",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.carbon_cycle.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"19.2. Tiling\nIs Required: FALSE Type: STRING Cardinality: 0.1\nDescribe the carbon cycle tiling, if any.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.carbon_cycle.tiling') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"19.3. Time Step\nIs Required: TRUE Type: INTEGER Cardinality: 1.1\nTime step of carbon cycle in seconds",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.carbon_cycle.time_step') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"19.4. Anthropogenic Carbon\nIs Required: FALSE Type: ENUM Cardinality: 0.N\nDescribe the treament of the anthropogenic carbon pool",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.carbon_cycle.anthropogenic_carbon') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"grand slam protocol\" \n# \"residence time\" \n# \"decay time\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"19.5. Prognostic Variables\nIs Required: TRUE Type: STRING Cardinality: 1.1\nList the prognostic variables of the carbon scheme",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.carbon_cycle.prognostic_variables') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"20. Carbon Cycle --> Vegetation\nTODO\n20.1. Number Of Carbon Pools\nIs Required: TRUE Type: INTEGER Cardinality: 1.1\nEnter the number of carbon pools used",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.carbon_cycle.vegetation.number_of_carbon_pools') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"20.2. Carbon Pools\nIs Required: FALSE Type: STRING Cardinality: 0.1\nList the carbon pools used",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.carbon_cycle.vegetation.carbon_pools') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"20.3. Forest Stand Dynamics\nIs Required: FALSE Type: STRING Cardinality: 0.1\nDescribe the treatment of forest stand dyanmics",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.carbon_cycle.vegetation.forest_stand_dynamics') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"21. Carbon Cycle --> Vegetation --> Photosynthesis\nTODO\n21.1. Method\nIs Required: FALSE Type: STRING Cardinality: 0.1\nDescribe the general method used for photosynthesis (e.g. type of photosynthesis, distinction between C3 and C4 grasses, Nitrogen depencence, etc.)",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.carbon_cycle.vegetation.photosynthesis.method') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"22. Carbon Cycle --> Vegetation --> Autotrophic Respiration\nTODO\n22.1. Maintainance Respiration\nIs Required: FALSE Type: STRING Cardinality: 0.1\nDescribe the general method used for maintainence respiration",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.carbon_cycle.vegetation.autotrophic_respiration.maintainance_respiration') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"22.2. Growth Respiration\nIs Required: FALSE Type: STRING Cardinality: 0.1\nDescribe the general method used for growth respiration",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.carbon_cycle.vegetation.autotrophic_respiration.growth_respiration') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"23. Carbon Cycle --> Vegetation --> Allocation\nTODO\n23.1. Method\nIs Required: TRUE Type: STRING Cardinality: 1.1\nDescribe the general principle behind the allocation scheme",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.carbon_cycle.vegetation.allocation.method') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"23.2. Allocation Bins\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nSpecify distinct carbon bins used in allocation",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.carbon_cycle.vegetation.allocation.allocation_bins') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"leaves + stems + roots\" \n# \"leaves + stems + roots (leafy + woody)\" \n# \"leaves + fine roots + coarse roots + stems\" \n# \"whole plant (no distinction)\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"23.3. Allocation Fractions\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nDescribe how the fractions of allocation are calculated",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.carbon_cycle.vegetation.allocation.allocation_fractions') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"fixed\" \n# \"function of vegetation type\" \n# \"function of plant allometry\" \n# \"explicitly calculated\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"24. Carbon Cycle --> Vegetation --> Phenology\nTODO\n24.1. Method\nIs Required: TRUE Type: STRING Cardinality: 1.1\nDescribe the general principle behind the phenology scheme",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.carbon_cycle.vegetation.phenology.method') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"25. Carbon Cycle --> Vegetation --> Mortality\nTODO\n25.1. Method\nIs Required: TRUE Type: STRING Cardinality: 1.1\nDescribe the general principle behind the mortality scheme",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.carbon_cycle.vegetation.mortality.method') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"26. Carbon Cycle --> Litter\nTODO\n26.1. Number Of Carbon Pools\nIs Required: TRUE Type: INTEGER Cardinality: 1.1\nEnter the number of carbon pools used",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.carbon_cycle.litter.number_of_carbon_pools') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"26.2. Carbon Pools\nIs Required: FALSE Type: STRING Cardinality: 0.1\nList the carbon pools used",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.carbon_cycle.litter.carbon_pools') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"26.3. Decomposition\nIs Required: FALSE Type: STRING Cardinality: 0.1\nList the decomposition methods used",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.carbon_cycle.litter.decomposition') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"26.4. Method\nIs Required: FALSE Type: STRING Cardinality: 0.1\nList the general method used",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.carbon_cycle.litter.method') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"27. Carbon Cycle --> Soil\nTODO\n27.1. Number Of Carbon Pools\nIs Required: TRUE Type: INTEGER Cardinality: 1.1\nEnter the number of carbon pools used",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.carbon_cycle.soil.number_of_carbon_pools') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"27.2. Carbon Pools\nIs Required: FALSE Type: STRING Cardinality: 0.1\nList the carbon pools used",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.carbon_cycle.soil.carbon_pools') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"27.3. Decomposition\nIs Required: FALSE Type: STRING Cardinality: 0.1\nList the decomposition methods used",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.carbon_cycle.soil.decomposition') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"27.4. Method\nIs Required: FALSE Type: STRING Cardinality: 0.1\nList the general method used",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.carbon_cycle.soil.method') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"28. Carbon Cycle --> Permafrost Carbon\nTODO\n28.1. Is Permafrost Included\nIs Required: TRUE Type: BOOLEAN Cardinality: 1.1\nIs permafrost included?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.carbon_cycle.permafrost_carbon.is_permafrost_included') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n",
"28.2. Emitted Greenhouse Gases\nIs Required: FALSE Type: STRING Cardinality: 0.1\nList the GHGs emitted",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.carbon_cycle.permafrost_carbon.emitted_greenhouse_gases') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"28.3. Decomposition\nIs Required: FALSE Type: STRING Cardinality: 0.1\nList the decomposition methods used",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.carbon_cycle.permafrost_carbon.decomposition') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"28.4. Impact On Soil Properties\nIs Required: FALSE Type: STRING Cardinality: 0.1\nDescribe the impact of permafrost on soil properties",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.carbon_cycle.permafrost_carbon.impact_on_soil_properties') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"29. Nitrogen Cycle\nLand surface nitrogen cycle\n29.1. Overview\nIs Required: TRUE Type: STRING Cardinality: 1.1\nOverview of the nitrogen cycle in the land surface",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.nitrogen_cycle.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"29.2. Tiling\nIs Required: FALSE Type: STRING Cardinality: 0.1\nDescribe the notrogen cycle tiling, if any.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.nitrogen_cycle.tiling') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"29.3. Time Step\nIs Required: TRUE Type: INTEGER Cardinality: 1.1\nTime step of nitrogen cycle in seconds",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.nitrogen_cycle.time_step') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"29.4. Prognostic Variables\nIs Required: TRUE Type: STRING Cardinality: 1.1\nList the prognostic variables of the nitrogen scheme",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.nitrogen_cycle.prognostic_variables') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"30. River Routing\nLand surface river routing\n30.1. Overview\nIs Required: TRUE Type: STRING Cardinality: 1.1\nOverview of river routing in the land surface",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.river_routing.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"30.2. Tiling\nIs Required: FALSE Type: STRING Cardinality: 0.1\nDescribe the river routing, if any.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.river_routing.tiling') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"30.3. Time Step\nIs Required: TRUE Type: INTEGER Cardinality: 1.1\nTime step of river routing scheme in seconds",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.river_routing.time_step') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"30.4. Grid Inherited From Land Surface\nIs Required: TRUE Type: BOOLEAN Cardinality: 1.1\nIs the grid inherited from land surface?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.river_routing.grid_inherited_from_land_surface') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n",
"30.5. Grid Description\nIs Required: FALSE Type: STRING Cardinality: 0.1\nGeneral description of grid, if not inherited from land surface",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.river_routing.grid_description') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"30.6. Number Of Reservoirs\nIs Required: TRUE Type: INTEGER Cardinality: 1.1\nEnter the number of reservoirs",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.river_routing.number_of_reservoirs') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"30.7. Water Re Evaporation\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nTODO",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.river_routing.water_re_evaporation') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"flood plains\" \n# \"irrigation\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"30.8. Coupled To Atmosphere\nIs Required: FALSE Type: BOOLEAN Cardinality: 0.1\nIs river routing coupled to the atmosphere model component?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.river_routing.coupled_to_atmosphere') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n",
"30.9. Coupled To Land\nIs Required: FALSE Type: STRING Cardinality: 0.1\nDescribe the coupling between land and rivers",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.river_routing.coupled_to_land') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"30.10. Quantities Exchanged With Atmosphere\nIs Required: FALSE Type: ENUM Cardinality: 0.N\nIf couple to atmosphere, which quantities are exchanged between river routing and the atmosphere model components?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.river_routing.quantities_exchanged_with_atmosphere') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"heat\" \n# \"water\" \n# \"tracers\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"30.11. Basin Flow Direction Map\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nWhat type of basin flow direction map is being used?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.river_routing.basin_flow_direction_map') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"present day\" \n# \"adapted for other periods\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"30.12. Flooding\nIs Required: FALSE Type: STRING Cardinality: 0.1\nDescribe the representation of flooding, if any",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.river_routing.flooding') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"30.13. Prognostic Variables\nIs Required: TRUE Type: STRING Cardinality: 1.1\nList the prognostic variables of the river routing",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.river_routing.prognostic_variables') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"31. River Routing --> Oceanic Discharge\nTODO\n31.1. Discharge Type\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nSpecify how rivers are discharged to the ocean",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.river_routing.oceanic_discharge.discharge_type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"direct (large rivers)\" \n# \"diffuse\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"31.2. Quantities Transported\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nQuantities that are exchanged from river-routing to the ocean model component",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.river_routing.oceanic_discharge.quantities_transported') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"heat\" \n# \"water\" \n# \"tracers\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"32. Lakes\nLand surface lakes\n32.1. Overview\nIs Required: TRUE Type: STRING Cardinality: 1.1\nOverview of lakes in the land surface",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.lakes.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"32.2. Coupling With Rivers\nIs Required: TRUE Type: BOOLEAN Cardinality: 1.1\nAre lakes coupled to the river routing model component?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.lakes.coupling_with_rivers') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n",
"32.3. Time Step\nIs Required: TRUE Type: INTEGER Cardinality: 1.1\nTime step of lake scheme in seconds",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.lakes.time_step') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"32.4. Quantities Exchanged With Rivers\nIs Required: FALSE Type: ENUM Cardinality: 0.N\nIf coupling with rivers, which quantities are exchanged between the lakes and rivers",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.lakes.quantities_exchanged_with_rivers') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"heat\" \n# \"water\" \n# \"tracers\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"32.5. Vertical Grid\nIs Required: FALSE Type: STRING Cardinality: 0.1\nDescribe the vertical grid of lakes",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.lakes.vertical_grid') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"32.6. Prognostic Variables\nIs Required: TRUE Type: STRING Cardinality: 1.1\nList the prognostic variables of the lake scheme",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.lakes.prognostic_variables') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"33. Lakes --> Method\nTODO\n33.1. Ice Treatment\nIs Required: TRUE Type: BOOLEAN Cardinality: 1.1\nIs lake ice included?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.lakes.method.ice_treatment') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n",
"33.2. Albedo\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nDescribe the treatment of lake albedo",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.lakes.method.albedo') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"prognostic\" \n# \"diagnostic\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"33.3. Dynamics\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nWhich dynamics of lakes are treated? horizontal, vertical, etc.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.lakes.method.dynamics') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"No lake dynamics\" \n# \"vertical\" \n# \"horizontal\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"33.4. Dynamic Lake Extent\nIs Required: TRUE Type: BOOLEAN Cardinality: 1.1\nIs a dynamic lake extent scheme included?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.lakes.method.dynamic_lake_extent') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n",
"33.5. Endorheic Basins\nIs Required: TRUE Type: BOOLEAN Cardinality: 1.1\nBasins not flowing to ocean included?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.lakes.method.endorheic_basins') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n",
"34. Lakes --> Wetlands\nTODO\n34.1. Description\nIs Required: FALSE Type: STRING Cardinality: 0.1\nDescribe the treatment of wetlands, if any",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.lakes.wetlands.description') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"©2017 ES-DOC"
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
CyberCRI/dataanalysis-herocoli-redmetrics | v1.52.2/Data mining/ruleAssociationMining.ipynb | cc0-1.0 | [
"%run dataFormating.ipynb",
"What subsets of scientific questions tend to be answered correctly by the same subjects?\nMining",
"from orangecontrib.associate.fpgrowth import * \nimport pandas as pd\nfrom numpy import *\n\nquestions = correctedScientific.columns\ncorrectedScientificText = [[] for _ in range(correctedScientific.shape[0])]\nfor q in questions:\n for index in range(correctedScientific.shape[0]):\n r = correctedScientific.index[index]\n if correctedScientific.loc[r, q]:\n correctedScientificText[index].append(q)\n#correctedScientificText\n\nlen(correctedScientificText)\n\n# Get frequent itemsets with support > 25%\n# run time < 1 min\nsupport = 0.20\nitemsets = frequent_itemsets(correctedScientificText, math.floor(len(correctedScientificText) * support))\n#dict(itemsets)\n\n# Generate rules according to confidence, confidence > 85 %\n# run time < 5 min\nconfidence = 0.80\nrules = association_rules(dict(itemsets), confidence)\n#list(rules)\n\n# Transform rules generator into a Dataframe\nrulesDataframe = pd.DataFrame([(ant, cons, supp, conf) for ant, cons, supp, conf in rules])\nrulesDataframe.rename(columns = {0:\"antecedants\", 1:\"consequents\", 2:\"support\", 3:\"confidence\"}, inplace=True)\nrulesDataframe.head()\n\n# Save the mined rules to file\nrulesDataframe.to_csv(\"results/associationRulesMiningSupport\"+str(support)+\"percentsConfidence\"+str(confidence)+\"percents.csv\")",
"Search for interesting rules\nInteresting rules are more likely to be the ones with highest confidence, the highest lift or with a bigger consequent set. Pairs can also be especially interesting",
"# Sort rules by confidence\nconfidenceSortedRules = rulesDataframe.sort_values(by = [\"confidence\", \"support\"], ascending=[False, False])\nconfidenceSortedRules.head(50)\n\n# Sort rules by size of consequent set\nrulesDataframe[\"consequentSize\"] = rulesDataframe[\"consequents\"].apply(lambda x: len(x))\nconsequentSortedRules = rulesDataframe.sort_values(by = [\"consequentSize\", \"confidence\", \"support\"], ascending=[False, False, False])\nconsequentSortedRules.head(50)\n\n# Select only pairs (rules with antecedent and consequent of size one)\n# Sort pairs according to confidence\nrulesDataframe[\"fusedRule\"] = rulesDataframe[[\"antecedants\", \"consequents\"]].apply(lambda x: frozenset().union(*x), axis=1)\nrulesDataframe[\"ruleSize\"] = rulesDataframe[\"fusedRule\"].apply(lambda x: len(x))\npairRules = rulesDataframe.sort_values(by=[\"ruleSize\", \"confidence\", \"support\"], ascending=[True, False, False])\npairRules.head(30)\n\ncorrectedScientific.columns\n\n# Sort questions by number of apparition in consequents\nfor q in scientificQuestions:\n rulesDataframe[q+\"c\"] = rulesDataframe[\"consequents\"].apply(lambda x: 1 if q in x else 0)\noccurenceInConsequents = rulesDataframe.loc[:,scientificQuestions[0]+\"c\":scientificQuestions[-1]+\"c\"].sum(axis=0)\n\noccurenceInConsequents.sort_values(inplace=True, ascending=False)\noccurenceInConsequents\n\n# Sort questions by number of apparition in antecedants\nfor q in scientificQuestions:\n rulesDataframe[q+\"a\"] = rulesDataframe[\"antecedants\"].apply(lambda x: 1 if q in x else 0)\noccurenceInAntecedants = rulesDataframe.loc[:,scientificQuestions[0]+\"a\":scientificQuestions[-1]+\"a\"].sum(axis=0)\noccurenceInAntecedants.sort_values(inplace=True, ascending=False)\noccurenceInAntecedants\n\nsortedPrePostProgression = pd.read_csv(\"../../data/sortedPrePostProgression.csv\")\nsortedPrePostProgression.index = sortedPrePostProgression.iloc[:,0]\nsortedPrePostProgression = sortedPrePostProgression.drop(sortedPrePostProgression.columns[0], axis = 1)\ndel sortedPrePostProgression.index.name\nsortedPrePostProgression.loc['occ_ant',:] = 0\nsortedPrePostProgression.loc['occ_csq',:] = 0\nsortedPrePostProgression\n\nfor questionA, occsA in enumerate(occurenceInAntecedants):\n questionVariableName = occurenceInAntecedants.index[questionA][:-1]\n question = globals()[questionVariableName]\n questionC = questionVariableName + \"c\"\n sortedPrePostProgression.loc['occ_ant',question] = occsA\n occsC = occurenceInConsequents.loc[questionC]\n sortedPrePostProgression.loc['occ_csq',question] = occsC\n #print(questionVariableName+\"='\"+question+\"'\")\n #print(\"\\t\"+questionVariableName+\"a=\"+str(occsA)+\",\"+questionC+\"=\"+str(occsC))\n #print()\nsortedPrePostProgression.T"
] | [
"code",
"markdown",
"code",
"markdown",
"code"
] |
raoyvn/deep-learning | first-neural-network/Your_first_neural_network.ipynb | mit | [
"Your first neural network\nIn this project, you'll build your first neural network and use it to predict daily bike rental ridership. We've provided some of the code, but left the implementation of the neural network up to you (for the most part). After you've submitted this project, feel free to explore the data and the model more.",
"%matplotlib inline\n%config InlineBackend.figure_format = 'retina'\n\nimport numpy as np\nimport pandas as pd\nimport matplotlib.pyplot as plt",
"Load and prepare the data\nA critical step in working with neural networks is preparing the data correctly. Variables on different scales make it difficult for the network to efficiently learn the correct weights. Below, we've written the code to load and prepare the data. You'll learn more about this soon!",
"data_path = 'Bike-Sharing-Dataset/hour.csv'\n\nrides = pd.read_csv(data_path)\n\nrides.head()",
"Checking out the data\nThis dataset has the number of riders for each hour of each day from January 1 2011 to December 31 2012. The number of riders is split between casual and registered, summed up in the cnt column. You can see the first few rows of the data above.\nBelow is a plot showing the number of bike riders over the first 10 days or so in the data set. (Some days don't have exactly 24 entries in the data set, so it's not exactly 10 days.) You can see the hourly rentals here. This data is pretty complicated! The weekends have lower over all ridership and there are spikes when people are biking to and from work during the week. Looking at the data above, we also have information about temperature, humidity, and windspeed, all of these likely affecting the number of riders. You'll be trying to capture all this with your model.",
"rides[:24*10].plot(x='dteday', y='cnt')",
"Dummy variables\nHere we have some categorical variables like season, weather, month. To include these in our model, we'll need to make binary dummy variables. This is simple to do with Pandas thanks to get_dummies().",
"dummy_fields = ['season', 'weathersit', 'mnth', 'hr', 'weekday']\nfor each in dummy_fields:\n dummies = pd.get_dummies(rides[each], prefix=each, drop_first=False)\n rides = pd.concat([rides, dummies], axis=1)\n\nfields_to_drop = ['instant', 'dteday', 'season', 'weathersit', \n 'weekday', 'atemp', 'mnth', 'workingday', 'hr']\ndata = rides.drop(fields_to_drop, axis=1)\ndata.head()",
"Scaling target variables\nTo make training the network easier, we'll standardize each of the continuous variables. That is, we'll shift and scale the variables such that they have zero mean and a standard deviation of 1.\nThe scaling factors are saved so we can go backwards when we use the network for predictions.",
"quant_features = ['casual', 'registered', 'cnt', 'temp', 'hum', 'windspeed']\n# Store scalings in a dictionary so we can convert back later\nscaled_features = {}\nfor each in quant_features:\n mean, std = data[each].mean(), data[each].std()\n scaled_features[each] = [mean, std]\n data.loc[:, each] = (data[each] - mean)/std",
"Splitting the data into training, testing, and validation sets\nWe'll save the data for the last approximately 21 days to use as a test set after we've trained the network. We'll use this set to make predictions and compare them with the actual number of riders.",
"# Save data for approximately the last 21 days \ntest_data = data[-21*24:]\n\n# Now remove the test data from the data set \ndata = data[:-21*24]\n\n# Separate the data into features and targets\ntarget_fields = ['cnt', 'casual', 'registered']\nfeatures, targets = data.drop(target_fields, axis=1), data[target_fields]\ntest_features, test_targets = test_data.drop(target_fields, axis=1), test_data[target_fields]",
"We'll split the data into two sets, one for training and one for validating as the network is being trained. Since this is time series data, we'll train on historical data, then try to predict on future data (the validation set).",
"# Hold out the last 60 days or so of the remaining data as a validation set\ntrain_features, train_targets = features[:-60*24], targets[:-60*24]\nval_features, val_targets = features[-60*24:], targets[-60*24:]",
"Time to build the network\nBelow you'll build your network. We've built out the structure and the backwards pass. You'll implement the forward pass through the network. You'll also set the hyperparameters: the learning rate, the number of hidden units, and the number of training passes.\n<img src=\"assets/neural_network.png\" width=300px>\nThe network has two layers, a hidden layer and an output layer. The hidden layer will use the sigmoid function for activations. The output layer has only one node and is used for the regression, the output of the node is the same as the input of the node. That is, the activation function is $f(x)=x$. A function that takes the input signal and generates an output signal, but takes into account the threshold, is called an activation function. We work through each layer of our network calculating the outputs for each neuron. All of the outputs from one layer become inputs to the neurons on the next layer. This process is called forward propagation.\nWe use the weights to propagate signals forward from the input to the output layers in a neural network. We use the weights to also propagate error backwards from the output back into the network to update our weights. This is called backpropagation.\n\nHint: You'll need the derivative of the output activation function ($f(x) = x$) for the backpropagation implementation. If you aren't familiar with calculus, this function is equivalent to the equation $y = x$. What is the slope of that equation? That is the derivative of $f(x)$.\n\nBelow, you have these tasks:\n1. Implement the sigmoid function to use as the activation function. Set self.activation_function in __init__ to your sigmoid function.\n2. Implement the forward pass in the train method.\n3. Implement the backpropagation algorithm in the train method, including calculating the output error.\n4. Implement the forward pass in the run method.",
"class NeuralNetwork(object):\n def __init__(self, input_nodes, hidden_nodes, output_nodes, learning_rate):\n # Set number of nodes in input, hidden and output layers.\n self.input_nodes = input_nodes\n self.hidden_nodes = hidden_nodes\n self.output_nodes = output_nodes\n\n # Initialize weights\n self.weights_input_to_hidden = np.random.normal(0.0, self.input_nodes**-0.5, \n (self.input_nodes, self.hidden_nodes))\n\n self.weights_hidden_to_output = np.random.normal(0.0, self.hidden_nodes**-0.5, \n (self.hidden_nodes, self.output_nodes))\n self.lr = learning_rate\n \n #### TODO: Set self.activation_function to your implemented sigmoid function ####\n #\n # Note: in Python, you can define a function with a lambda expression,\n # as shown below.\n # self.activation_function = sigmoid # Replace 0 with your sigmoid calculation.\n \n ### If the lambda code above is not something you're familiar with,\n # You can uncomment out the following three lines and put your \n # implementation there instead.\n #\n def sigmoid(x):\n return 1/(1+np.exp(-x)) # Replace 0 with your sigmoid calculation here\n self.activation_function = sigmoid\n \n def train(self, features, targets):\n ''' Train the network on batch of features and targets. \n \n Arguments\n ---------\n \n features: 2D array, each row is one data record, each column is a feature\n targets: 1D array of target values\n \n '''\n learnrate = self.lr\n n_records = features.shape[0]\n delta_weights_i_h = np.zeros(self.weights_input_to_hidden.shape)\n delta_weights_h_o = np.zeros(self.weights_hidden_to_output.shape)\n for X, y in zip(features, targets):\n #### Implement the forward pass here ####\n ### Forward pass ###\n # TODO: Hidden layer - Replace these values with your calculations.\n hidden_inputs = np.dot(X, self.weights_input_to_hidden) # signals into hidden layer\n hidden_outputs = self.activation_function(hidden_inputs) # signals from hidden layer\n\n # TODO: Output layer - Replace these values with your calculations.\n final_inputs = np.dot(hidden_outputs, self.weights_hidden_to_output) # signals into final output layer\n final_outputs = final_inputs # signals from final output layer, since activation function is f(x) = x\n \n #### Implement the backward pass here ####\n ### Backward pass ###\n\n # TODO: Output error - Replace this value with your calculations.\n error = y-final_outputs # Output layer error is the difference between desired target and actual output.\n \n # TODO: Backpropagated error terms - Replace these values with your calculations.\n output_error_term = error # since derivative for identity function is 1\n \n # TODO: Calculate the hidden layer's contribution to the error\n hidden_error = np.dot( self.weights_hidden_to_output,output_error_term)\n\n hidden_error_term = hidden_error * hidden_outputs * (1 - hidden_outputs)\n\n # Weight step (input to hidden)\n delta_weights_i_h += hidden_error_term * X[:, None]\n # Weight step (hidden to output)\n delta_weights_h_o += output_error_term * hidden_outputs[: , None]\n\n # TODO: Update the weights - Replace these values with your calculations.\n self.weights_hidden_to_output += learnrate * delta_weights_h_o / n_records # update hidden-to-output weights with gradient descent step\n self.weights_input_to_hidden += learnrate * delta_weights_i_h / n_records # update input-to-hidden weights with gradient descent step\n \n def run(self, features):\n ''' Run a forward pass through the network with input features \n \n Arguments\n ---------\n features: 1D array of feature values\n '''\n 
\n #### Implement the forward pass here ####\n # TODO: Hidden layer - replace these values with the appropriate calculations.\n hidden_inputs = np.dot(features, self.weights_input_to_hidden) # signals into hidden layer\n hidden_outputs = self.activation_function(hidden_inputs) # signals from hidden layer\n \n # TODO: Output layer - Replace these values with the appropriate calculations.\n final_inputs = np.dot(hidden_outputs, self.weights_hidden_to_output) # signals into final output layer\n final_outputs = final_inputs # signals from final output layer \n \n return final_outputs\n\ndef MSE(y, Y):\n return np.mean((y-Y)**2)",
"Unit tests\nRun these unit tests to check the correctness of your network implementation. This will help you be sure your network was implemented correctly befor you starting trying to train it. These tests must all be successful to pass the project.",
"import unittest\n\ninputs = np.array([[0.5, -0.2, 0.1]])\ntargets = np.array([[0.4]])\ntest_w_i_h = np.array([[0.1, -0.2],\n [0.4, 0.5],\n [-0.3, 0.2]])\ntest_w_h_o = np.array([[0.3],\n [-0.1]])\n\nclass TestMethods(unittest.TestCase):\n \n ##########\n # Unit tests for data loading\n ##########\n \n def test_data_path(self):\n # Test that file path to dataset has been unaltered\n self.assertTrue(data_path.lower() == 'bike-sharing-dataset/hour.csv')\n \n def test_data_loaded(self):\n # Test that data frame loaded\n self.assertTrue(isinstance(rides, pd.DataFrame))\n \n ##########\n # Unit tests for network functionality\n ##########\n\n def test_activation(self):\n network = NeuralNetwork(3, 2, 1, 0.5)\n # Test that the activation function is a sigmoid\n self.assertTrue(np.all(network.activation_function(0.5) == 1/(1+np.exp(-0.5))))\n\n def test_train(self):\n # Test that weights are updated correctly on training\n network = NeuralNetwork(3, 2, 1, 0.5)\n network.weights_input_to_hidden = test_w_i_h.copy()\n network.weights_hidden_to_output = test_w_h_o.copy()\n \n network.train(inputs, targets)\n self.assertTrue(np.allclose(network.weights_hidden_to_output, \n np.array([[ 0.37275328], \n [-0.03172939]])))\n self.assertTrue(np.allclose(network.weights_input_to_hidden,\n np.array([[ 0.10562014, -0.20185996], \n [0.39775194, 0.50074398], \n [-0.29887597, 0.19962801]])))\n\n def test_run(self):\n # Test correctness of run method\n network = NeuralNetwork(3, 2, 1, 0.5)\n network.weights_input_to_hidden = test_w_i_h.copy()\n network.weights_hidden_to_output = test_w_h_o.copy()\n\n self.assertTrue(np.allclose(network.run(inputs), 0.09998924))\n\nsuite = unittest.TestLoader().loadTestsFromModule(TestMethods())\nunittest.TextTestRunner().run(suite)",
"Training the network\nHere you'll set the hyperparameters for the network. The strategy here is to find hyperparameters such that the error on the training set is low, but you're not overfitting to the data. If you train the network too long or have too many hidden nodes, it can become overly specific to the training set and will fail to generalize to the validation set. That is, the loss on the validation set will start increasing as the training set loss drops.\nYou'll also be using a method know as Stochastic Gradient Descent (SGD) to train the network. The idea is that for each training pass, you grab a random sample of the data instead of using the whole data set. You use many more training passes than with normal gradient descent, but each pass is much faster. This ends up training the network more efficiently. You'll learn more about SGD later.\nChoose the number of iterations\nThis is the number of batches of samples from the training data we'll use to train the network. The more iterations you use, the better the model will fit the data. However, if you use too many iterations, then the model with not generalize well to other data, this is called overfitting. You want to find a number here where the network has a low training loss, and the validation loss is at a minimum. As you start overfitting, you'll see the training loss continue to decrease while the validation loss starts to increase.\nChoose the learning rate\nThis scales the size of weight updates. If this is too big, the weights tend to explode and the network fails to fit the data. A good choice to start at is 0.1. If the network has problems fitting the data, try reducing the learning rate. Note that the lower the learning rate, the smaller the steps are in the weight updates and the longer it takes for the neural network to converge.\nChoose the number of hidden nodes\nThe more hidden nodes you have, the more accurate predictions the model will make. Try a few different numbers and see how it affects the performance. You can look at the losses dictionary for a metric of the network performance. If the number of hidden units is too low, then the model won't have enough space to learn and if it is too high there are too many options for the direction that the learning can take. The trick here is to find the right balance in number of hidden units you choose.",
"import sys\n\n### Set the hyperparameters here ###\niterations = 10000\nlearning_rate = 0.3\nhidden_nodes = 12\noutput_nodes = 1\n\nN_i = train_features.shape[1]\nnetwork = NeuralNetwork(N_i, hidden_nodes, output_nodes, learning_rate)\n\nlosses = {'train':[], 'validation':[]}\nfor ii in range(iterations):\n # Go through a random batch of 128 records from the training data set\n batch = np.random.choice(train_features.index, size=128)\n X, y = train_features.ix[batch].values, train_targets.ix[batch]['cnt']\n \n network.train(X, y)\n \n # Printing out the training progress\n train_loss = MSE(network.run(train_features).T, train_targets['cnt'].values)\n val_loss = MSE(network.run(val_features).T, val_targets['cnt'].values)\n sys.stdout.write(\"\\rProgress: {:2.1f}\".format(100 * ii/float(iterations)) \\\n + \"% ... Training loss: \" + str(train_loss)[:5] \\\n + \" ... Validation loss: \" + str(val_loss)[:5])\n sys.stdout.flush()\n \n losses['train'].append(train_loss)\n losses['validation'].append(val_loss)\n\nplt.plot(losses['train'], label='Training loss')\nplt.plot(losses['validation'], label='Validation loss')\nplt.legend()\n_ = plt.ylim()",
"Check out your predictions\nHere, use the test data to view how well your network is modeling the data. If something is completely wrong here, make sure each step in your network is implemented correctly.",
"fig, ax = plt.subplots(figsize=(8,4))\n\nmean, std = scaled_features['cnt']\npredictions = network.run(test_features).T*std + mean\nax.plot(predictions[0], label='Prediction')\nax.plot((test_targets['cnt']*std + mean).values, label='Data')\nax.set_xlim(right=len(predictions))\nax.legend()\n\ndates = pd.to_datetime(rides.ix[test_data.index]['dteday'])\ndates = dates.apply(lambda d: d.strftime('%b %d'))\nax.set_xticks(np.arange(len(dates))[12::24])\n_ = ax.set_xticklabels(dates[12::24], rotation=45)",
"OPTIONAL: Thinking about your results(this question will not be evaluated in the rubric).\nAnswer these questions about your results. How well does the model predict the data? Where does it fail? Why does it fail where it does?\n\nNote: You can edit the text in this cell by double clicking on it. When you want to render the text, press control + enter\n\nYour answer below\nModel is failing to predict accurately around holidays"
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
CoreSecurity/pysap | docs/fileformats/SAPCAR.ipynb | gpl-2.0 | [
"SAP CAR\nThe following subsections show a graphical representation of the file format portions and how to generate them.\nFirst we need to perform some setup to import the packet classes:",
"from pysap.SAPCAR import *\nfrom IPython.display import display",
"SAPCAR Archive version 2.00\nWe first create a temporary file and compress it inside an archive file:",
"with open(\"some_file\", \"w\") as fd:\n fd.write(\"Some string to compress\")\n\nf0 = SAPCARArchive(\"archive_file.car\", mode=\"wb\", version=SAPCAR_VERSION_200)\nf0.add_file(\"some_file\")",
"The file is comprised of the following main structures:\nSAPCAR Archive Header",
"f0._sapcar.canvas_dump()",
"SAPCAR Entry Header",
"f0._sapcar.files0[0].canvas_dump()",
"SAPCAR Data Block",
"f0._sapcar.files0[0].blocks[0].canvas_dump()",
"SAPCAR Compressed Data",
"f0._sapcar.files0[0].blocks[0].compressed.canvas_dump()",
"SAPCAR Archive version 2.01",
"f1 = SAPCARArchive(\"archive_file.car\", mode=\"wb\", version=SAPCAR_VERSION_201)\nf1.add_file(\"some_file\")",
"The file is comprised of the following main structures:\nSAPCAR Archive Header",
"f1._sapcar.canvas_dump()",
"SAPCAR Entry Header",
"f1._sapcar.files1[0].canvas_dump()",
"SAPCAR Data Block",
"f1._sapcar.files1[0].blocks[0].canvas_dump()",
"SAPCAR Compressed data",
"f1._sapcar.files1[0].blocks[0].compressed.canvas_dump()\n\nfrom os import remove\nremove(\"some_file\")\nremove(\"archive_file.car\")"
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
random-forests/tensorflow-workshop | archive/zurich/solutions/02_quickdraw_solution.ipynb | apache-2.0 | [
"from __future__ import absolute_import\nfrom __future__ import division\nfrom __future__ import print_function\n\nimport os\nimport numpy as np\nimport tensorflow as tf",
"Boilerplate for graph visualization",
"# This is for graph visualization.\n\nfrom IPython.display import clear_output, Image, display, HTML\n\ndef strip_consts(graph_def, max_const_size=32):\n \"\"\"Strip large constant values from graph_def.\"\"\"\n strip_def = tf.GraphDef()\n for n0 in graph_def.node:\n n = strip_def.node.add() \n n.MergeFrom(n0)\n if n.op == 'Const':\n tensor = n.attr['value'].tensor\n size = len(tensor.tensor_content)\n if size > max_const_size:\n tensor.tensor_content = \"<stripped %d bytes>\"%size\n return strip_def\n\ndef show_graph(graph_def, max_const_size=32):\n \"\"\"Visualize TensorFlow graph.\"\"\"\n if hasattr(graph_def, 'as_graph_def'):\n graph_def = graph_def.as_graph_def()\n strip_def = strip_consts(graph_def, max_const_size=max_const_size)\n code = \"\"\"\n <script>\n function load() {{\n document.getElementById(\"{id}\").pbtxt = {data};\n }}\n </script>\n <link rel=\"import\" href=\"https://tensorboard.appspot.com/tf-graph-basic.build.html\" onload=load()>\n <div style=\"height:600px\">\n <tf-graph-basic id=\"{id}\"></tf-graph-basic>\n </div>\n \"\"\".format(data=repr(str(strip_def)), id='graph'+str(np.random.rand()))\n\n iframe = \"\"\"\n <iframe seamless style=\"width:1200px;height:620px;border:0\" srcdoc=\"{}\"></iframe>\n \"\"\".format(code.replace('\"', '"'))\n display(HTML(iframe))",
"Load the data\nRun 00_download_data.ipynb if you haven't already",
"DATA_DIR = '../data/'\ndata_filename = os.path.join(DATA_DIR, \"zoo.npz\")\ndata = np.load(open(data_filename))\n\ntrain_data = data['arr_0']\ntrain_labels = data['arr_1']\ntest_data = data['arr_2']\ntest_labels = data['arr_3']\ndel data\nprint(\"Data shapes: \", test_data.shape, test_labels.shape, train_data.shape, train_labels.shape)",
"Create a simple classifier with low-level TF Ops",
"tf.reset_default_graph()\n\ninput_dimension = train_data.shape[1] # 784 = 28*28 pixels\noutput_dimension = train_labels.shape[1] # 10 classes\n\nbatch_size = 32\nhidden1_units = 128\n\ndata_batch = tf.placeholder(\"float\", shape=[None, input_dimension], name=\"data\")\nlabel_batch = tf.placeholder(\"float\", shape=[None, output_dimension], name=\"labels\")\n\nweights_1 = tf.Variable(\n tf.truncated_normal(\n [input_dimension, hidden1_units], \n stddev=1.0 / np.sqrt(float(input_dimension))),\n name='weights_1')\n\n# Task: Add Bias to first layer\n# Task: Use Cross-Entropy instead of Squared Loss\n\n# SOLUTION: Create biases variable.\nbiases_1 = tf.Variable(\n tf.truncated_normal(\n [hidden1_units], \n stddev=1.0 / np.sqrt(float(hidden1_units))),\n name='biases_1')\n\nweights_2 = tf.Variable(\n tf.truncated_normal(\n [hidden1_units, output_dimension], \n stddev=1.0 / np.sqrt(float(hidden1_units))),\n name='weights_2')\n\n# SOLUTION: Add the bias term to the first layer\nwx_b = tf.add(tf.matmul(data_batch, weights_1), biases_1)\nhidden_activations = tf.nn.relu(wx_b)\noutput_activations = tf.nn.tanh(tf.matmul(hidden_activations, weights_2))\n\n# SOLUTION: Replace the l2 loss with softmax cross entropy.\nwith tf.name_scope(\"loss\"):\n # loss = tf.nn.l2_loss(label_batch - output_activations)\n loss = tf.reduce_mean(\n tf.nn.softmax_cross_entropy_with_logits(\n labels=label_batch, \n logits=output_activations))\n\nshow_graph(tf.get_default_graph().as_graph_def())\n",
"We can run this graph by feeding in batches of examples using a feed_dict. The keys of the feed_dict are placeholders we've defined previously.\nThe first argument of session.run is the tensor that we're computing. Only parts of the graph required to produce this value will be executed.",
"with tf.Session() as sess:\n init = tf.global_variables_initializer()\n sess.run(init)\n \n random_indices = np.random.permutation(train_data.shape[0])\n for i in range(1000):\n batch_start_idx = (i % (train_data.shape[0] // batch_size)) * batch_size\n batch_indices = random_indices[batch_start_idx:batch_start_idx + batch_size]\n batch_loss = sess.run(\n loss, \n feed_dict = {\n data_batch : train_data[batch_indices,:],\n label_batch : train_labels[batch_indices,:]\n })\n if (i + 1) % 100 == 0:\n print(\"Loss at iteration {}: {}\".format(i+1, batch_loss))",
"No learning yet but we get the losses per batch.\nWe need to add an optimizer to the graph.",
"# Task: Replace GradientDescentOptimizer with AdagradOptimizer and a 0.1 learning rate.\n# learning_rate = 0.005\n# updates = tf.train.GradientDescentOptimizer(learning_rate).minimize(loss)\n\n# SOLUTION: Replace GradientDescentOptimizer\nlearning_rate = 0.1\nupdates = tf.train.AdagradOptimizer(learning_rate).minimize(loss)\n\nwith tf.Session() as sess:\n init = tf.global_variables_initializer()\n sess.run(init)\n \n random_indices = np.random.permutation(train_data.shape[0])\n n_epochs = 10 # how often do to go through the training data\n max_steps = train_data.shape[0]*n_epochs // batch_size\n for i in range(max_steps):\n batch_start_idx = (i % (train_data.shape[0] // batch_size)) * batch_size\n batch_indices = random_indices[batch_start_idx:batch_start_idx+batch_size]\n batch_loss, _ = sess.run(\n [loss, updates], \n feed_dict = {\n data_batch : train_data[batch_indices,:],\n label_batch : train_labels[batch_indices,:]\n })\n\n if i % 200 == 0 or i == max_steps - 1:\n random_indices = np.random.permutation(train_data.shape[0])\n print(\"Batch-Loss at iteration {}: {}\".format(i, batch_loss))\n\n test_predictions = sess.run(\n output_activations, \n feed_dict = {\n data_batch : test_data,\n label_batch : test_labels\n })\n wins = np.argmax(test_predictions, axis=1) == np.argmax(test_labels, axis=1)\n print(\"Accuracy on test: {}%\".format(100*np.mean(wins)))",
"Loss going down, Accuracy going up! \\o/\nNotice how batch loss differs between batches.\nModel wrapped in a custom estimator\nIn TensorFlow, we can make it easier to experiment with different models when we separately define a model_fn and an input_fn.",
"tf.reset_default_graph()\n\n# Model parameters.\nbatch_size = 32\nhidden1_units = 128\nlearning_rate = 0.005\ninput_dimension = train_data.shape[1] # 784 = 28*28 pixels\noutput_dimension = train_labels.shape[1] # 6 classes\nn_epochs = 10 # how often do to go through the training data\n\n\ndef input_fn(data, labels):\n input_images = tf.constant(data, shape=data.shape, verify_shape=True, dtype=tf.float32)\n input_labels = tf.constant(labels, shape=labels.shape, verify_shape=True, dtype=tf.float32)\n image, label = tf.train.slice_input_producer(\n [input_images, input_labels],\n num_epochs=n_epochs)\n dataset_dict = dict(images=image, labels=label)\n batch_dict = tf.train.batch(\n dataset_dict, batch_size, allow_smaller_final_batch=True)\n batch_labels = batch_dict.pop('labels')\n return batch_dict, batch_labels\n\n\ndef model_fn(features, targets, mode, params):\n # 1. Configure the model via TensorFlow operations (same as above)\n weights_1 = tf.Variable(\n tf.truncated_normal(\n [input_dimension, hidden1_units],\n stddev=1.0 / np.sqrt(float(input_dimension))))\n weights_2 = tf.Variable(\n tf.truncated_normal(\n [hidden1_units, output_dimension],\n stddev=1.0 / np.sqrt(float(hidden1_units))))\n hidden_activations = tf.nn.relu(tf.matmul(features['images'], weights_1))\n output_activations = tf.matmul(hidden_activations, weights_2)\n \n # 2. Define the loss function for training/evaluation\n loss = tf.reduce_mean(tf.nn.l2_loss(targets - output_activations))\n \n # 3. Define the training operation/optimizer\n train_op = tf.contrib.layers.optimize_loss(\n loss=loss,\n global_step=tf.contrib.framework.get_global_step(),\n learning_rate=learning_rate,\n optimizer=\"SGD\")\n \n # 4. Generate predictions\n predictions_dict = {\n \"classes\": tf.argmax(input=output_activations, axis=1),\n \"probabilities\": tf.nn.softmax(output_activations, name=\"softmax_tensor\"), \n \"logits\": output_activations,\n }\n \n # Optional: Define eval metric ops; here we add an accuracy metric.\n is_correct = tf.equal(tf.argmax(input=targets, axis=1),\n tf.argmax(input=output_activations, axis=1))\n accuracy = tf.reduce_mean(tf.cast(is_correct, tf.float32))\n eval_metric_ops = { \"accuracy\": accuracy}\n\n # 5. Return predictions/loss/train_op/eval_metric_ops in ModelFnOps object\n return tf.contrib.learn.ModelFnOps(\n mode=mode,\n predictions=predictions_dict,\n loss=loss,\n train_op=train_op,\n eval_metric_ops=eval_metric_ops)\n\n\ncustom_model = tf.contrib.learn.Estimator(model_fn=model_fn)\n\n# Train and evaluate the model.\ndef evaluate_model(model, input_fn):\n for i in range(6):\n max_steps = train_data.shape[0]*n_epochs // batch_size\n model.fit(input_fn=lambda: input_fn(train_data, train_labels), steps=max_steps)\n print(model.evaluate(input_fn=lambda: input_fn(test_data, test_labels),\n steps=150))\n\n\nevaluate_model(custom_model, input_fn)",
"Custom model, simplified with tf.layers\nInstead of doing the matrix multiplications and everything ourselves, we can use tf.layers to simplify the definition.",
"tf.reset_default_graph()\n\n# Model parameters.\nbatch_size = 32\nhidden1_units = 128\nlearning_rate = 0.005\ninput_dimension = train_data.shape[1] # 784 = 28*28 pixels\noutput_dimension = train_labels.shape[1] # 6 classes\n\ndef layers_custom_model_fn(features, targets, mode, params):\n # 1. Configure the model via TensorFlow operations (using tf.layers). Note how\n # much simpler this is compared to defining the weight matrices and matrix\n # multiplications by hand.\n hidden_layer = tf.layers.dense(inputs=features['images'], units=hidden1_units, activation=tf.nn.relu)\n output_layer = tf.layers.dense(inputs=hidden_layer, units=output_dimension, activation=tf.nn.relu)\n \n # 2. Define the loss function for training/evaluation\n loss = tf.losses.mean_squared_error(labels=targets, predictions=output_layer)\n \n # 3. Define the training operation/optimizer\n train_op = tf.contrib.layers.optimize_loss(\n loss=loss,\n global_step=tf.contrib.framework.get_global_step(),\n learning_rate=learning_rate,\n optimizer=\"SGD\")\n \n # 4. Generate predictions\n predictions_dict = {\n \"classes\": tf.argmax(input=output_layer, axis=1),\n \"probabilities\": tf.nn.softmax(output_layer, name=\"softmax_tensor\"), \n \"logits\": output_layer,\n }\n \n # Define eval metric ops; we can also use a pre-defined function here.\n accuracy = tf.metrics.accuracy(\n labels=tf.argmax(input=targets, axis=1),\n predictions=tf.argmax(input=output_layer, axis=1))\n eval_metric_ops = {\"accuracy\": accuracy}\n\n # 5. Return predictions/loss/train_op/eval_metric_ops in ModelFnOps object\n return tf.contrib.learn.ModelFnOps(\n mode=mode,\n predictions=predictions_dict,\n loss=loss,\n train_op=train_op,\n eval_metric_ops=eval_metric_ops)\n\n\nlayers_custom_model = tf.contrib.learn.Estimator(\n model_fn=layers_custom_model_fn)\n\n# Train and evaluate the model.\nevaluate_model(layers_custom_model, input_fn)",
"Model using canned estimators\nInstead of defining our own DNN classifier, TensorFlow supplies a number of canned estimators that can save a lot of work.",
"tf.reset_default_graph()\n\n# Model parameters.\nhidden1_units = 128\nlearning_rate = 0.005\ninput_dimension = train_data.shape[1] # 784 = 28*28 pixels\noutput_dimension = train_labels.shape[1] # 6 classes\n\n# Our model can be defined using just three simple lines...\noptimizer = tf.train.GradientDescentOptimizer(learning_rate=learning_rate)\nimages_column = tf.contrib.layers.real_valued_column(\"images\")\n# Task: Use the DNNClassifier Estimator to create the model in 1 line.\n# SOLUTION: DNNClassifier can be used to efficiently (in lines of code) create the model.\ncanned_model = tf.contrib.learn.DNNClassifier(\n feature_columns=[images_column],\n hidden_units=[hidden1_units],\n n_classes=output_dimension,\n activation_fn=tf.nn.relu,\n optimizer=optimizer)\n\n# Potential exercises: play with model parameters, e.g. add dropout\n\n# We need to change the input_fn so that it returns integers representing the classes instead of one-hot vectors.\ndef class_input_fn(data, labels):\n input_images = tf.constant(\n data, shape=data.shape, verify_shape=True, dtype=tf.float32)\n # The next two lines are different.\n class_labels = np.argmax(labels, axis=1)\n input_labels = tf.constant(\n class_labels, shape=class_labels.shape, verify_shape=True, dtype=tf.int32)\n image, label = tf.train.slice_input_producer(\n [input_images, input_labels], num_epochs=n_epochs)\n dataset_dict = dict(images=image, labels=label)\n batch_dict = tf.train.batch(\n dataset_dict, batch_size, allow_smaller_final_batch=True)\n batch_labels = batch_dict.pop('labels')\n return batch_dict, batch_labels\n\n# Train and evaluate the model.\nevaluate_model(canned_model, class_input_fn)",
"Using Convolutions",
"import tensorflow as tf\ntf.reset_default_graph()\n\ninput_dimension = train_data.shape[1] # 784 = 28*28 pixels\noutput_dimension = train_labels.shape[1] # 6 classes\nbatch_size = 32\n\ndata_batch = tf.placeholder(\"float\", shape=[None, input_dimension])\nlabel_batch = tf.placeholder(\"float\", shape=[None, output_dimension])\n\ndef weight_variable(shape):\n initial = tf.truncated_normal(shape, stddev=0.1)\n return tf.Variable(initial)\n\ndef bias_variable(shape):\n initial = tf.constant(0.1, shape=shape)\n return tf.Variable(initial)\n\ndef conv2d(x, W):\n return tf.nn.conv2d(x, W, strides=[1, 1, 1, 1], padding='SAME')\n\ndef max_pool_2x2(x):\n return tf.nn.max_pool(x, ksize=[1, 2, 2, 1],\n strides=[1, 2, 2, 1], padding='SAME')\n\n# Task: convert the batch_size x num_pixels (784) input to batch_size, height (28), width(28), channels\n# SOLUTION: reshape the input. We only have a single color channel.\nimage_batch = tf.reshape(data_batch, [-1, 28, 28, 1])\n\nW_conv1 = weight_variable([5, 5, 1, 32])\nb_conv1 = bias_variable([32])\n\nh_conv1 = tf.nn.relu(conv2d(image_batch, W_conv1) + b_conv1)\nh_pool1 = max_pool_2x2(h_conv1)\n\nW_conv2 = weight_variable([5, 5, 32, 48])\nb_conv2 = bias_variable([48])\n\nh_conv2 = tf.nn.relu(conv2d(h_pool1, W_conv2) + b_conv2)\nh_pool2 = max_pool_2x2(h_conv2)\n\nW_fc1 = weight_variable([7 * 7 * 48, 256])\nb_fc1 = bias_variable([256])\n\nh_pool2_flat = tf.reshape(h_pool2, [-1, 7*7*48])\nh_fc1 = tf.nn.relu(tf.matmul(h_pool2_flat, W_fc1) + b_fc1)\n\n# Task: add dropout to fully connected layer. Add a variable to turn dropout off in eval.\n# SOLUTION: add placeholder variable to deactivate dropout (keep_prob=1.0) in eval.\nkeep_prob = tf.placeholder(tf.float32)\n# SOLUTION: add dropout to fully connected layer.\nh_fc1_drop = tf.nn.dropout(h_fc1, keep_prob)\n\nW_fc2 = weight_variable([256, output_dimension])\nb_fc2 = bias_variable([output_dimension])\n\noutput_activations = tf.matmul(h_fc1_drop, W_fc2) + b_fc2\n\nloss = tf.reduce_mean(\n tf.nn.softmax_cross_entropy_with_logits(labels=label_batch, \n logits=output_activations))\n\n# Solution: Switch from GradientDescentOptimizer to AdamOptimizer\n# learning_rate = 0.001\n# updates = tf.train.GradientDescentOptimizer(learning_rate).minimize(loss)\nlearning_rate = 0.001\nupdates = tf.train.AdamOptimizer(learning_rate).minimize(loss)\n\nwith tf.Session() as sess:\n init = tf.global_variables_initializer()\n sess.run(init)\n \n random_indices = np.random.permutation(train_data.shape[0])\n n_epochs = 5 # how often to go through the training data\n max_steps = train_data.shape[0]*n_epochs // batch_size\n for i in range(max_steps):\n batch_start_idx = (i % (train_data.shape[0] // batch_size)) * batch_size\n batch_indices = random_indices[batch_start_idx:batch_start_idx+batch_size]\n batch_loss, _ = sess.run(\n [loss, updates], \n feed_dict = {\n data_batch : train_data[batch_indices,:],\n label_batch : train_labels[batch_indices,:],\n # SOLUTION: Dropout active during training\n keep_prob : 0.5})\n if i % 100 == 0 or i == max_steps - 1:\n random_indices = np.random.permutation(train_data.shape[0])\n print(\"Batch-Loss at iteration {}/{}: {}\".format(i, max_steps-1, batch_loss))\n \n test_predictions = sess.run(\n output_activations,\n feed_dict = {\n data_batch : test_data,\n label_batch : test_labels,\n # SOLUTION: No dropout during eval\n keep_prob : 1.0\n })\n wins = np.argmax(test_predictions, axis=1) == np.argmax(test_labels, axis=1)\n print(\"Accuracy on test: {}%\".format(100*np.mean(wins)))"
] | [
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
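The workshop entry above explains how `session.run` evaluates only the subgraph needed for the requested tensor, with `feed_dict` keyed by previously defined placeholders. A minimal, self-contained sketch of that mechanic, assuming a TensorFlow 1.x environment like the one the notebook uses:

```python
import numpy as np
import tensorflow as tf  # assumes TF 1.x, where tf.placeholder / tf.Session exist

x = tf.placeholder(tf.float32, shape=[None, 2], name="x")
doubled = 2.0 * x  # a tiny graph: only this op is needed to produce `doubled`

with tf.Session() as sess:
    # keys of feed_dict are the placeholders defined above
    print(sess.run(doubled, feed_dict={x: np.array([[1.0, 2.0]])}))  # [[2. 4.]]
```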
sampathweb/movie-sentiment-analysis | 02-logisitc-regression-intro.ipynb | mit | [
"Objective\n\nOverview of ML Model Build Process\nLogistic Regression Introduction\nModel Evaluations",
"from __future__ import print_function # Python 2/3 compatibility\n\nfrom IPython.display import Image\n\nimport numpy as np\nimport pandas as pd\nimport matplotlib.pyplot as plt\n\n%matplotlib inline",
"Model Building Process",
"Image(\"images/model-pipeline.png\")",
"Dataset",
"centers = np.array([[0, 0]] * 100 + [[1, 1]] * 100)\nnp.random.seed(42)\nX = np.random.normal(0, 0.2, (200, 2)) + centers\ny = np.array([0] * 100 + [1] * 100)\n\nplt.scatter(X[:,0], X[:,1], c=y, cmap=plt.cm.RdYlBu)\nplt.colorbar();\n\nX[:5]\n\ny[:5], y[-5:]",
"Logistic Regression - Model\nTake a weighted sum of the features and add a bias term to get the logit.\nSqash this weighted sum to arange between 0-1 via a Sigmoid function.\n\nSigmoid Function\n\n<img src=\"images/sigmoid.png\",width=500>\n$$f(x) = \\frac{e^x}{1+e^x}$$",
"Image(\"images/logistic-regression.png\")\n\n## Build the Model\n\nfrom sklearn.linear_model import LogisticRegression\n\n## Step 1 - Instantiate the Model with Hyper Parameters (We don't have any here)\nmodel = LogisticRegression()\n\n## Step 2 - Fit the Model\nmodel.fit(X, y)\n\n## Step 3 - Evaluate the Model\nmodel.score(X, y)\n\ndef plot_decision_boundaries(model, X, y):\n pred_labels = model.predict(X)\n plt.scatter(X[:,0], X[:,1], c=y, cmap=plt.cm.RdYlBu,\n vmin=0.0, vmax=1)\n xx = np.linspace(-1, 2, 100)\n\n w0, w1 = model.coef_[0]\n bias = model.intercept_\n yy = -w0 / w1 * xx - bias / w1\n plt.plot(xx, yy, 'k')\n plt.axis((-1,2,-1,2))\n plt.colorbar()\n\nplot_decision_boundaries(model, X, y)",
"Dataset - Take 2",
"centers = np.array([[0, 0]] * 100 + [[1, 1]] * 100)\nnp.random.seed(42)\nX = np.random.normal(0, 0.5, (200, 2)) + centers\ny = np.array([0] * 100 + [1] * 100)\n\nplt.scatter(X[:,0], X[:,1], c=y, cmap=plt.cm.RdYlBu)\nplt.colorbar();\n\n# Instantiate, Fit, Evalaute\nmodel = LogisticRegression()\nmodel.fit(X, y)\nprint(model.score(X, y))\n\ny_pred = model.predict(X)\n\nplot_decision_boundaries(model, X, y)",
"Other Evaluation Methods\n\nConfusion Matrix",
"from sklearn.metrics import confusion_matrix\n\ncm = confusion_matrix(y, y_pred)\ncm\n\npd.crosstab(y, y_pred, rownames=['Actual'], colnames=['Predicted'], margins=True)"
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
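The logistic regression entry above gives the sigmoid formula and fits an sklearn model. As an illustrative sketch (not part of the original notebook), the class-1 probabilities can be recomputed by hand from the fitted coefficients and checked against `predict_proba`:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# synthetic two-blob data, mirroring the notebook's setup
rng = np.random.RandomState(42)
centers = np.array([[0, 0]] * 100 + [[1, 1]] * 100)
X = rng.normal(0, 0.5, (200, 2)) + centers
y = np.array([0] * 100 + [1] * 100)

model = LogisticRegression().fit(X, y)

# weighted sum + bias -> logit, squashed by the sigmoid
probs = sigmoid(X @ model.coef_[0] + model.intercept_[0])
assert np.allclose(probs, model.predict_proba(X)[:, 1])
```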
jsmanrique/grimoirelab-personal-utils | Light Git index generator.ipynb | mit | [
"Git Index Generator\nThis notebook generates a ElasticSearch (ES) index with information about git (commits, files, lines added, lines removed, commit authors) for a given list of git repositories defined in a settings.yml file.\nLet's start by importing the utils python script, setting up the connection to the ES server and defining some variables",
"import utils\nutils.logging.basicConfig(level=utils.logging.INFO)\n\"\"\" You can comment previous line if you don't want logging information\n\"\"\"\n\nsettings = utils.read_config_file('settings.yml')\nes = utils.establish_connection(settings['es_host'])",
"Let's define an ES index mapping for the data that will be uploaded to the ES server",
"MAPPING_GIT = {\n \"mappings\": {\n \"item\": {\n \"properties\": {\n \"date\": {\n \"type\": \"date\",\n \"format\" : \"E MMM d HH:mm:ss yyyy Z\",\n \"locale\" : \"US\"\n },\n \"commit\": {\"type\": \"keyword\"},\n \"author\": {\"type\": \"keyword\"},\n \"domain\": {\"type\": \"keyword\"},\n \"file\": {\"type\": \"keyword\"},\n \"added\": {\"type\": \"integer\"},\n \"removed\": {\"type\": \"integer\"},\n \"repository\": {\"type\": \"keyword\"}\n }\n }\n }\n}",
"Let's give a name to the index to be created, and create it.\nNote: utils.create_ES_index() removes any existing index with the given name before creating it",
"index_name = 'git'\nutils.create_ES_index(es, index_name, MAPPING_GIT)",
"Let's import the git backend from Perceval",
"from perceval.backends.core.git import Git",
"For each repository in the settings file, let's get its data, create a summary object with the desired information and upload data to the ES server using ES bulk API.",
"for repo_url in settings['git']:\n \n repo_name = repo_url.split('/')[-1]\n repo = Git(uri=repo_url, gitpath='/tmp/'+repo_name)\n \n utils.logging.info('Go for {}'.format(repo_name))\n \n items = []\n bulk_size = 10000\n \n for commit in repo.fetch():\n \n author_name = commit['data']['Author'].split('<')[0][:-1]\n author_domain = commit['data']['Author'].split('@')[-1][:-1]\n \n for file in commit['data']['files']:\n if 'added' not in file.keys() or file['added'] == '-':\n file['added'] = 0\n if 'removed' not in file.keys() or file['removed'] == '-':\n file['removed'] = 0\n\n summary = {\n 'date': commit['data']['AuthorDate'],\n 'commit': commit['data']['commit'],\n 'author': author_name,\n 'domain': author_domain,\n 'file': file['file'],\n 'added': file['added'],\n 'removed': file['removed'],\n 'repository': repo_name\n }\n \n items.append({'_index': index_name, '_type': 'item', '_source': summary})\n \n if len(items) > bulk_size:\n utils.helpers.bulk(es, items)\n items = []\n utils.logging.info('{} items uploaded'.format(bulk_size))\n \n if len(items) != 0:\n utils.helpers.bulk(es, items)\n utils.logging.info('Remaining {} items uploaded'.format(len(items)))"
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
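The indexing loop above relies only on `settings['es_host']` and `settings['git']`; the exact layout of `settings.yml` is not shown in the entry. A hypothetical example of a file that would satisfy those two keys (the host and repository URLs below are invented purely for illustration):

```python
import yaml  # PyYAML

example_yaml = """
es_host: http://localhost:9200
git:
  - https://github.com/example-org/example-repo-1.git
  - https://github.com/example-org/example-repo-2.git
"""

settings = yaml.safe_load(example_yaml)
assert 'es_host' in settings and isinstance(settings['git'], list)
```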
materialsvirtuallab/ceng114 | grades/statsw2016.ipynb | bsd-2-clause | [
"Overview\nThis is a generalized notebook for computing grade statistics from the Ted Grade Center.",
"#The usual imports\nfrom __future__ import division\nimport math\nfrom collections import OrderedDict\n\nfrom pandas import read_csv\nimport numpy as np\n\nfrom pymatgen.util.plotting_utils import get_publication_quality_plot\nfrom monty.string import remove_non_ascii\n\nimport prettyplotlib as ppl\nfrom prettyplotlib import brewer2mpl\nimport matplotlib.pyplot as plt\n\ncolors = brewer2mpl.get_map('Set1', 'qualitative', 8).mpl_colors\n\n%matplotlib inline\n\n# Define lower grade cutoffs in terms of number of standard deviations from mean.\ngrade_cutoffs = OrderedDict()\n#grade_cutoffs[\"A+\"] = 1.5\n#grade_cutoffs[\"A\"] = 1\ngrade_cutoffs[\"A\"] = 0.75\ngrade_cutoffs[\"B+\"] = 0.5\ngrade_cutoffs[\"B\"] = -0.25\ngrade_cutoffs[\"B-\"] = -0.5\ngrade_cutoffs[\"C+\"] = -0.75\ngrade_cutoffs[\"C\"] = -1\ngrade_cutoffs[\"C-\"] = -2\ngrade_cutoffs[\"F\"] = float(\"-inf\")",
"Load data from exported CSV from Ted Full Grade Center. Some sanitization is performed to remove non-ascii characters and cruft",
"def load_data(filename):\n d = read_csv(filename)\n d.columns = [remove_non_ascii(c) for c in d.columns]\n d.columns = [c.split(\"[\")[0].strip().strip(\"\\\"\") for c in d.columns]\n d[\"Weighted Total\"] = [float(i.strip(\"%\")) for i in d[\"Weighted Total\"]]\n print(d.columns)\n return d\n\nd = load_data(\"gc_CENG114_WI16_Ong_fullgc_2016-03-15-19-58-36.csv\")\n\ndef bar_plot(dframe, data_key, offset=0):\n \"\"\"\n Creates a historgram of the results.\n \n Args:\n dframe: DataFrame which is imported from CSV.\n data_key: Specific column to plot\n offset: Allows an offset for each grade. Defaults to 0.\n \n Returns:\n dict of cutoffs, {grade: (lower, upper)}\n \"\"\"\n data = dframe[data_key]\n d = filter(lambda x: (not np.isnan(x)), list(data))\n N = len(d)\n print N\n heights, bins = np.histogram(d, bins=20, range=(0, 100))\n bins = list(bins)\n bins.pop(-1)\n import matplotlib.pyplot as plt\n fig, ax = plt.subplots(1)\n ppl.bar(ax, bins, heights, width=5, color=colors[0], grid='y')\n plt = get_publication_quality_plot(12, 8, plt)\n plt.xlabel(\"Score\")\n plt.ylabel(\"Number of students\")\n #print len([d for d in data if d > 90])\n mean = data.mean(0)\n sigma = data.std()\n maxy = np.max(heights)\n prev_cutoff = 100\n cutoffs = {}\n grade = [\"A\", \"B+\", \"B\", \"B-\", \"C+\", \"C\", \"C-\", \"F\"]\n for grade, cutoff in grade_cutoffs.items():\n if cutoff == float(\"-inf\"):\n cutoff = 0\n else:\n cutoff = max(0, mean + cutoff * sigma) + offset\n plt.plot([cutoff] * 2, [0, maxy], 'k--')\n plt.annotate(\"%.1f\" % cutoff, [cutoff, maxy - 1], fontsize=18, horizontalalignment='left', rotation=45)\n n = len([d for d in data if cutoff <= d < prev_cutoff])\n print \"Grade %s (%.1f-%.1f): %d (%.2f%%)\" % (grade, cutoff, prev_cutoff, n, n*1.0/N*100)\n plt.annotate(grade, [(cutoff + prev_cutoff) / 2, maxy], fontsize=18, horizontalalignment='center')\n cutoffs[grade] = (cutoff, prev_cutoff)\n prev_cutoff = cutoff\n \n plt.ylim([0, maxy * 1.1])\n plt.annotate(\"$\\mu = %.1f$\\n$\\sigma = %.1f$\\n$max=%.1f$\" % (mean, sigma, data.max()), xy=(10, 7), fontsize=30)\n title = data_key.split(\"[\")[0].strip()\n plt.title(title, fontsize=30)\n plt.tight_layout()\n plt.savefig(\"%s.png\" % title)\n return cutoffs\n\nfor c in d.columns:\n if \"PS\" in c or \"Midterm\" in c or \"Final\" in c:\n if not all(np.isnan(d[c])):\n print c\n bar_plot(d, c)",
"Overall grade\nOverall points and assign overall grade.",
"cutoffs = bar_plot(d, \"Weighted Total\", offset=-2)\n\nprint cutoffs\n\ndef assign_grade(pts):\n for g, c in cutoffs.items():\n if c[0] < pts <= c[1]:\n return g\n\n#d = load_data(\"gc_CENG114_WI16_Ong_fullgc_2016-03-21-15-47-06.csv\") #use revised gc\n \nd[\"Final_Assigned_Egrade\"] = map(assign_grade, d[\"Weighted Total\"])\nd.to_csv(\"Overall grades_OLD.csv\")\nprint(\"Written!\")"
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
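The grading entry above derives each lower cutoff as `mean + k * sigma` (clipped at zero and optionally shifted by an offset). A toy, self-contained illustration of that rule, with invented scores:

```python
import numpy as np
from collections import OrderedDict

scores = np.array([55.0, 62.5, 68.0, 74.5, 81.0, 86.5, 93.0])  # invented data
mean, sigma = scores.mean(), scores.std()

k_values = OrderedDict([("A", 0.75), ("B+", 0.5), ("B", -0.25)])
lower_bounds = {grade: max(0.0, mean + k * sigma) for grade, k in k_values.items()}
print(lower_bounds)
```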
ML4DS/ML4all | P5.Data preprocessing/Intro5_DataNormalization_professor.ipynb | mit | [
"Data preprocessing methods: Normalization\nNotebook version:\n\n* 1.0 (Sep 15, 2020) - First version\n* 1.1 (Sep 15, 2021) - Exercises\n\nAuthors: Jesús Cid Sueiro ([email protected])",
"# Some libraries that will be used along the notebook.\nimport numpy as np\nimport matplotlib.pyplot as plt",
"1. Data preprocessing\n1.1. The dataset.\nA key component of any data processing method or any machine learning algorithm is the dataset, i.e., the set of data that will be the input to the method or algorithm. \nThe dataset collects information extracted from a population (of objects, entities, individuals,...). For instance, we can measure the weight and height of students from a class and collect this information in a dataset ${\\cal S} = {{\\bf x}k, k=0, \\ldots, K-1}$ where $K$ is the number of students, and each sample is a 2 dimensional vector, ${\\bf x}_k= (x{k0}, x_{k1})$, with the height and the weight in the first and the second component, respectively. These components are usually called features. In other datasets, the number of features can be arbitrarily large.\n1.1. Data preprocessing\nThe aim of data preprocessing methods is to transform the data into a form that is ready to apply machine learning algorithms. This may include:\n\nData normalization: transform the individual features to ensure a proper range of variation\nData imputation: assign values to features that may be missed for some data samples\nFeature extraction: transform the original data to compute new features that are more appropiate for a specific prediction task\nDimensionality reduction: remove features that are not relevant for the prediction task.\nOutlier removal: remove samples that may contain errors and are not reliable for the prediction task.\nClustering: partition the data into smaller subsets, that could be easier to process.\n\nIn this notebook we will focus on data normalization.\n2. Data normalization\nAll samples in the dataset can be arranged by rows in a $K \\times m$ data matrix ${\\bf X}$, where $m$ is the number of features (i.e. the dimension of the vector space containing the data). Each one of the $m$ data features may represent variables of very different nature (e.g. time, distance, price, volume, pixel intensity,...). Thus, the scale and the range of variation of each feature can be completely different.\nAs an illustration, consider the 2-dimensional dataset in the figure",
"from sklearn.datasets import make_blobs\nX, y = make_blobs(n_samples=300, centers=4, random_state=0, cluster_std=0.60)\nX = X @ np.array([[30, 4], [-8, 1]]) + np.array([90, 10])\n\nplt.figure(figsize=(12, 3))\nplt.scatter(X[:, 0], X[:, 1], s=50);\nplt.axis('equal')\nplt.xlabel('$x_0$')\nplt.ylabel('$x_1$')\nplt.show()",
"We can see that the first data feature ($x_0$) has a much large range of variation than the second ($x_1$). In practice, this may be problematic: the convergence properties of some machine learning algorithms may depend critically on the feature distributions and, in general, features sets ranging over similar scales use to offer a better performance.\nFor this reason, transforming the data in order to get similar range of variations for all features is desirable. This can be done in several ways.\n2.1. Standard scaling.\nA common normalization method consists on applying an affine transformation\n$$\n{\\bf t}_k = {\\bf D}({\\bf x}_k - {\\bf m})\n$$\nwhere ${\\bf D}$ is a diagonal matrix, in such a way that the transformed dataset ${\\cal S}' = {{\\bf t}_k, k=0, \\ldots, K-1}$ has zero sample mean, i.e.,\n$$\n\\frac{1}{K} \\sum_{k=0}^{K-1} {\\bf t}_k = 0\n$$\nand unit sample variance, i.e., \n$$\n\\frac{1}{K} \\sum_{k=0}^{K-1} t_{ki}^2 = 1\n$$\nIt is not difficult to verify that this can be done by taking ${\\bf m}$ equal to the sample mean\n$$\n{\\bf m} = \\frac{1}{K} \\sum_{k=0}^{K-1} {\\bf x}_k\n$$\nand taking the diagonal components of ${\\bf D}$ equal to the inverse of the standard deviation of each feature, i.e.,\n$$\nd_{ii} = \\frac{1}{\\sqrt{\\frac{1}{K} \\sum_{k=0}^{K-1} (x_{ki} - m_i)^2}}\n$$\nUsing the data matrix ${\\bf X}$ and the broadcasting property of the basic mathematical operators in Python, the implementation of this normalization is straightforward.\nExercise 1: Apply a standard scaling to the data matrix. To do so:\n\nCompute the mean, and store it in variable m (you can use method mean from numpy)\nCompute the standard deviation of each feature, and store the result in variable s (you can use method std from numpy)\nTake advangate of the broadcasting property to normalize the data matrix in a single line of code. Save the result in variable T.",
"# Compute the sample mean\n# m = <FILL IN>\nm = np.mean(X, axis=0) # Compute the sample mean\nprint(f'The sample mean is m = {m}')\n\n# Compute the standard deviation of each feature\n# s = <FILL IN>\ns = np.std(X, axis=0) # Compute the standard deviation of each feature\n\n# Normalize de data matrix\n# T = <FILL IN>\nT = (X-m)/s # Normalize",
"We can test if the transformed features have zero-mean and unit variance:",
"# Testing mean\nprint(f\"- The mean of the transformed features are: {np.mean(T, axis=0)}\")\nprint(f\"- The standard deviation of the transformed features are: {np.std(T, axis=0)}\")",
"(note that the results can deviate from 0 or 1 due to finite precision errors)",
"# Now you can verify if your solution satisfies\nplt.figure(figsize=(4, 4))\nplt.scatter(T[:, 0], T[:, 1], s=50);\nplt.axis('equal')\nplt.xlabel('$x_0$')\nplt.ylabel('$x_1$')\nplt.show()",
"2.1.1. Implementation in sklearn\nThe sklearn package contains a method to perform the standard scaling over a given data matrix.",
"from sklearn.preprocessing import StandardScaler\nscaler = StandardScaler()\nscaler.fit(X)\nprint(f'The sample mean is m = {scaler.mean_}')\n\nT2 = scaler.transform(X)\nplt.figure(figsize=(4, 4))\nplt.scatter(T2[:, 0], T2[:, 1], s=50);\nplt.axis('equal')\nplt.xlabel('$x_0$')\nplt.ylabel('$x_1$')\nplt.show()",
"Note that, once we have defined the scaler object in Python, you can apply the scaling transformation to other datasets. This will be useful in further topics, when the dataset may be split in several matrices and we may be interested in defining the transformation using some matrix, and apply it to others\n2.2. Other normalizations.\nThe are some alternatives to the standard scaling that may be interesting for some datasets. Here we show some of them, available at the preprocessing module in sklearn:\n\npreprocessing.MaxAbsScaler: Scale each feature by its maximum absolute value. As a result, all feature values will lie in the interval [-1, 1].\npreprocessing.MinMaxScaler: Transform features by scaling each feature to a given range. Also, all feature values will lie in the specified interval.\npreprocessing.Normalizer: Normalize samples individually to unit norm. That is, it applies the transformation ${\\bf t}_k = \\frac{1}{\\|{\\bf x}_k\\|} {\\bf x}_k$\npreprocessing.PowerTransformer: Apply a power transform featurewise to make data more Gaussian-like.\npreprocessing.QuantileTransformer: Transform features using quantile information. The transformed features follow a specific target distribution (uniform or normal). \npreprocessing.RobustScaler: Scale features using statistics that are robust to outliers. This way, anomalous values in one or very few samples cannot have a strong influence in the normalization.\n\nYou can find more detailed explanation of these transformations sklearn documentation.\nExercise 2: Use sklearn to transform the data matrix X into a matrix T24such that the minimum feature value is 2 and the maximum is 4.\n(Hint: select and import the appropriate preprocessing module from sklearn an follow the same steps used in the code cell above for the scandard scaler)",
"# Write your solution here\n# <SOL>\nfrom sklearn.preprocessing import MinMaxScaler\nscaler = MinMaxScaler(feature_range=(2, 4))\nscaler.fit(X)\nT24 = scaler.transform(X)\n# </SOL>\n\n# We can visually check that the transformed data features lie in the selected range.\nplt.figure(figsize=(4, 4))\nplt.scatter(T24[:, 0], T24[:, 1], s=50);\nplt.axis('equal')\nplt.xlabel('$x_0$')\nplt.ylabel('$x_1$')\nplt.show()"
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
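One point the normalization entry above states but does not demonstrate is that a fitted scaler can be reused on other data matrices. A small sketch with synthetic data standing in for the notebook's X:

```python
import numpy as np
from sklearn.preprocessing import StandardScaler

rng = np.random.RandomState(0)
X_fit = rng.normal(loc=[90, 10], scale=[30, 2], size=(300, 2))  # matrix used for fitting
X_new = rng.normal(loc=[90, 10], scale=[30, 2], size=(50, 2))   # a different matrix

scaler = StandardScaler().fit(X_fit)   # mean/std learned once here...
T_new = scaler.transform(X_new)        # ...and applied to the other matrix
```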
ES-DOC/esdoc-jupyterhub | notebooks/test-institute-1/cmip6/models/sandbox-3/toplevel.ipynb | gpl-3.0 | [
"ES-DOC CMIP6 Model Properties - Toplevel\nMIP Era: CMIP6\nInstitute: TEST-INSTITUTE-1\nSource ID: SANDBOX-3\nSub-Topics: Radiative Forcings. \nProperties: 85 (42 required)\nModel descriptions: Model description details\nInitialized From: -- \nNotebook Help: Goto notebook help page\nNotebook Initialised: 2018-02-15 16:54:43\nDocument Setup\nIMPORTANT: to be executed each time you run the notebook",
"# DO NOT EDIT ! \nfrom pyesdoc.ipython.model_topic import NotebookOutput \n\n# DO NOT EDIT ! \nDOC = NotebookOutput('cmip6', 'test-institute-1', 'sandbox-3', 'toplevel')",
"Document Authors\nSet document authors",
"# Set as follows: DOC.set_author(\"name\", \"email\") \n# TODO - please enter value(s)",
"Document Contributors\nSpecify document contributors",
"# Set as follows: DOC.set_contributor(\"name\", \"email\") \n# TODO - please enter value(s)",
"Document Publication\nSpecify document publication status",
"# Set publication status: \n# 0=do not publish, 1=publish. \nDOC.set_publication_status(0)",
"Document Table of Contents\n1. Key Properties\n2. Key Properties --> Flux Correction\n3. Key Properties --> Genealogy\n4. Key Properties --> Software Properties\n5. Key Properties --> Coupling\n6. Key Properties --> Tuning Applied\n7. Key Properties --> Conservation --> Heat\n8. Key Properties --> Conservation --> Fresh Water\n9. Key Properties --> Conservation --> Salt\n10. Key Properties --> Conservation --> Momentum\n11. Radiative Forcings\n12. Radiative Forcings --> Greenhouse Gases --> CO2\n13. Radiative Forcings --> Greenhouse Gases --> CH4\n14. Radiative Forcings --> Greenhouse Gases --> N2O\n15. Radiative Forcings --> Greenhouse Gases --> Tropospheric O3\n16. Radiative Forcings --> Greenhouse Gases --> Stratospheric O3\n17. Radiative Forcings --> Greenhouse Gases --> CFC\n18. Radiative Forcings --> Aerosols --> SO4\n19. Radiative Forcings --> Aerosols --> Black Carbon\n20. Radiative Forcings --> Aerosols --> Organic Carbon\n21. Radiative Forcings --> Aerosols --> Nitrate\n22. Radiative Forcings --> Aerosols --> Cloud Albedo Effect\n23. Radiative Forcings --> Aerosols --> Cloud Lifetime Effect\n24. Radiative Forcings --> Aerosols --> Dust\n25. Radiative Forcings --> Aerosols --> Tropospheric Volcanic\n26. Radiative Forcings --> Aerosols --> Stratospheric Volcanic\n27. Radiative Forcings --> Aerosols --> Sea Salt\n28. Radiative Forcings --> Other --> Land Use\n29. Radiative Forcings --> Other --> Solar \n1. Key Properties\nKey properties of the model\n1.1. Model Overview\nIs Required: TRUE Type: STRING Cardinality: 1.1\nTop level overview of coupled model",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.key_properties.model_overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"1.2. Model Name\nIs Required: TRUE Type: STRING Cardinality: 1.1\nName of coupled model.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.key_properties.model_name') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"2. Key Properties --> Flux Correction\nFlux correction properties of the model\n2.1. Details\nIs Required: TRUE Type: STRING Cardinality: 1.1\nDescribe if/how flux corrections are applied in the model",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.key_properties.flux_correction.details') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"3. Key Properties --> Genealogy\nGenealogy and history of the model\n3.1. Year Released\nIs Required: TRUE Type: STRING Cardinality: 1.1\nYear the model was released",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.key_properties.genealogy.year_released') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"3.2. CMIP3 Parent\nIs Required: FALSE Type: STRING Cardinality: 0.1\nCMIP3 parent if any",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.key_properties.genealogy.CMIP3_parent') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"3.3. CMIP5 Parent\nIs Required: FALSE Type: STRING Cardinality: 0.1\nCMIP5 parent if any",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.key_properties.genealogy.CMIP5_parent') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"3.4. Previous Name\nIs Required: FALSE Type: STRING Cardinality: 0.1\nPreviously known as",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.key_properties.genealogy.previous_name') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"4. Key Properties --> Software Properties\nSoftware properties of model\n4.1. Repository\nIs Required: FALSE Type: STRING Cardinality: 0.1\nLocation of code for this component.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.key_properties.software_properties.repository') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"4.2. Code Version\nIs Required: FALSE Type: STRING Cardinality: 0.1\nCode version identifier.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.key_properties.software_properties.code_version') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"4.3. Code Languages\nIs Required: FALSE Type: STRING Cardinality: 0.N\nCode language(s).",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.key_properties.software_properties.code_languages') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"4.4. Components Structure\nIs Required: FALSE Type: STRING Cardinality: 0.1\nDescribe how model realms are structured into independent software components (coupled via a coupler) and internal software components.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.key_properties.software_properties.components_structure') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"4.5. Coupler\nIs Required: FALSE Type: ENUM Cardinality: 0.1\nOverarching coupling framework for model.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.key_properties.software_properties.coupler') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"OASIS\" \n# \"OASIS3-MCT\" \n# \"ESMF\" \n# \"NUOPC\" \n# \"Bespoke\" \n# \"Unknown\" \n# \"None\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"5. Key Properties --> Coupling\n**\n5.1. Overview\nIs Required: TRUE Type: STRING Cardinality: 1.1\nOverview of coupling in the model",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.key_properties.coupling.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"5.2. Atmosphere Double Flux\nIs Required: TRUE Type: BOOLEAN Cardinality: 1.1\nIs the atmosphere passing a double flux to the ocean and sea ice (as opposed to a single one)?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.key_properties.coupling.atmosphere_double_flux') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n",
"5.3. Atmosphere Fluxes Calculation Grid\nIs Required: FALSE Type: ENUM Cardinality: 0.1\nWhere are the air-sea fluxes calculated",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.key_properties.coupling.atmosphere_fluxes_calculation_grid') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Atmosphere grid\" \n# \"Ocean grid\" \n# \"Specific coupler grid\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"5.4. Atmosphere Relative Winds\nIs Required: TRUE Type: BOOLEAN Cardinality: 1.1\nAre relative or absolute winds used to compute the flux? I.e. do ocean surface currents enter the wind stress calculation?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.key_properties.coupling.atmosphere_relative_winds') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n",
"6. Key Properties --> Tuning Applied\nTuning methodology for model\n6.1. Description\nIs Required: TRUE Type: STRING Cardinality: 1.1\nGeneral overview description of tuning: explain and motivate the main targets and metrics/diagnostics retained. Document the relative weight given to climate performance metrics/diagnostics versus process oriented metrics/diagnostics, and on the possible conflicts with parameterization level tuning. In particular describe any struggle with a parameter value that required pushing it to its limits to solve a particular model deficiency.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.key_properties.tuning_applied.description') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"6.2. Global Mean Metrics Used\nIs Required: FALSE Type: STRING Cardinality: 0.N\nList set of metrics/diagnostics of the global mean state used in tuning model",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.key_properties.tuning_applied.global_mean_metrics_used') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"6.3. Regional Metrics Used\nIs Required: FALSE Type: STRING Cardinality: 0.N\nList of regional metrics/diagnostics of mean state (e.g THC, AABW, regional means etc) used in tuning model/component",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.key_properties.tuning_applied.regional_metrics_used') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"6.4. Trend Metrics Used\nIs Required: FALSE Type: STRING Cardinality: 0.N\nList observed trend metrics/diagnostics used in tuning model/component (such as 20th century)",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.key_properties.tuning_applied.trend_metrics_used') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"6.5. Energy Balance\nIs Required: TRUE Type: STRING Cardinality: 1.1\nDescribe how energy balance was obtained in the full system: in the various components independently or at the components coupling stage?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.key_properties.tuning_applied.energy_balance') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"6.6. Fresh Water Balance\nIs Required: TRUE Type: STRING Cardinality: 1.1\nDescribe how fresh_water balance was obtained in the full system: in the various components independently or at the components coupling stage?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.key_properties.tuning_applied.fresh_water_balance') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"7. Key Properties --> Conservation --> Heat\nGlobal heat convervation properties of the model\n7.1. Global\nIs Required: TRUE Type: STRING Cardinality: 1.1\nDescribe if/how heat is conserved globally",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.key_properties.conservation.heat.global') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"7.2. Atmos Ocean Interface\nIs Required: FALSE Type: STRING Cardinality: 0.1\nDescribe if/how heat is conserved at the atmosphere/ocean coupling interface",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.key_properties.conservation.heat.atmos_ocean_interface') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"7.3. Atmos Land Interface\nIs Required: TRUE Type: STRING Cardinality: 1.1\nDescribe if/how heat is conserved at the atmosphere/land coupling interface",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.key_properties.conservation.heat.atmos_land_interface') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"7.4. Atmos Sea-ice Interface\nIs Required: FALSE Type: STRING Cardinality: 0.1\nDescribe if/how heat is conserved at the atmosphere/sea-ice coupling interface",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.key_properties.conservation.heat.atmos_sea-ice_interface') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"7.5. Ocean Seaice Interface\nIs Required: FALSE Type: STRING Cardinality: 0.1\nDescribe if/how heat is conserved at the ocean/sea-ice coupling interface",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.key_properties.conservation.heat.ocean_seaice_interface') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"7.6. Land Ocean Interface\nIs Required: FALSE Type: STRING Cardinality: 0.1\nDescribe if/how heat is conserved at the land/ocean coupling interface",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.key_properties.conservation.heat.land_ocean_interface') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"8. Key Properties --> Conservation --> Fresh Water\nGlobal fresh water convervation properties of the model\n8.1. Global\nIs Required: TRUE Type: STRING Cardinality: 1.1\nDescribe if/how fresh_water is conserved globally",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.global') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"8.2. Atmos Ocean Interface\nIs Required: FALSE Type: STRING Cardinality: 0.1\nDescribe if/how fresh_water is conserved at the atmosphere/ocean coupling interface",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.atmos_ocean_interface') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"8.3. Atmos Land Interface\nIs Required: TRUE Type: STRING Cardinality: 1.1\nDescribe if/how fresh water is conserved at the atmosphere/land coupling interface",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.atmos_land_interface') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"8.4. Atmos Sea-ice Interface\nIs Required: FALSE Type: STRING Cardinality: 0.1\nDescribe if/how fresh water is conserved at the atmosphere/sea-ice coupling interface",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.atmos_sea-ice_interface') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"8.5. Ocean Seaice Interface\nIs Required: FALSE Type: STRING Cardinality: 0.1\nDescribe if/how fresh water is conserved at the ocean/sea-ice coupling interface",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.ocean_seaice_interface') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"8.6. Runoff\nIs Required: FALSE Type: STRING Cardinality: 0.1\nDescribe how runoff is distributed and conserved",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.runoff') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"8.7. Iceberg Calving\nIs Required: FALSE Type: STRING Cardinality: 0.1\nDescribe if/how iceberg calving is modeled and conserved",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.iceberg_calving') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"8.8. Endoreic Basins\nIs Required: FALSE Type: STRING Cardinality: 0.1\nDescribe if/how endoreic basins (no ocean access) are treated",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.endoreic_basins') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"8.9. Snow Accumulation\nIs Required: FALSE Type: STRING Cardinality: 0.1\nDescribe how snow accumulation over land and over sea-ice is treated",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.snow_accumulation') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"9. Key Properties --> Conservation --> Salt\nGlobal salt convervation properties of the model\n9.1. Ocean Seaice Interface\nIs Required: FALSE Type: STRING Cardinality: 0.1\nDescribe if/how salt is conserved at the ocean/sea-ice coupling interface",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.key_properties.conservation.salt.ocean_seaice_interface') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"10. Key Properties --> Conservation --> Momentum\nGlobal momentum convervation properties of the model\n10.1. Details\nIs Required: FALSE Type: STRING Cardinality: 0.1\nDescribe if/how momentum is conserved in the model",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.key_properties.conservation.momentum.details') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"11. Radiative Forcings\nRadiative forcings of the model for historical and scenario (aka Table 12.1 IPCC AR5)\n11.1. Overview\nIs Required: TRUE Type: STRING Cardinality: 1.1\nOverview of radiative forcings (GHG and aerosols) implementation in model",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.radiative_forcings.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"12. Radiative Forcings --> Greenhouse Gases --> CO2\nCarbon dioxide forcing\n12.1. Provision\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nHow this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CO2.provision') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"N/A\" \n# \"M\" \n# \"Y\" \n# \"E\" \n# \"ES\" \n# \"C\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"12.2. Additional Information\nIs Required: FALSE Type: STRING Cardinality: 0.1\nAdditional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CO2.additional_information') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"13. Radiative Forcings --> Greenhouse Gases --> CH4\nMethane forcing\n13.1. Provision\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nHow this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CH4.provision') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"N/A\" \n# \"M\" \n# \"Y\" \n# \"E\" \n# \"ES\" \n# \"C\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"13.2. Additional Information\nIs Required: FALSE Type: STRING Cardinality: 0.1\nAdditional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CH4.additional_information') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"14. Radiative Forcings --> Greenhouse Gases --> N2O\nNitrous oxide forcing\n14.1. Provision\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nHow this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.N2O.provision') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"N/A\" \n# \"M\" \n# \"Y\" \n# \"E\" \n# \"ES\" \n# \"C\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"14.2. Additional Information\nIs Required: FALSE Type: STRING Cardinality: 0.1\nAdditional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.N2O.additional_information') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"15. Radiative Forcings --> Greenhouse Gases --> Tropospheric O3\nTroposheric ozone forcing\n15.1. Provision\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nHow this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.tropospheric_O3.provision') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"N/A\" \n# \"M\" \n# \"Y\" \n# \"E\" \n# \"ES\" \n# \"C\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"15.2. Additional Information\nIs Required: FALSE Type: STRING Cardinality: 0.1\nAdditional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.tropospheric_O3.additional_information') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"16. Radiative Forcings --> Greenhouse Gases --> Stratospheric O3\nStratospheric ozone forcing\n16.1. Provision\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nHow this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.stratospheric_O3.provision') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"N/A\" \n# \"M\" \n# \"Y\" \n# \"E\" \n# \"ES\" \n# \"C\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"16.2. Additional Information\nIs Required: FALSE Type: STRING Cardinality: 0.1\nAdditional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.stratospheric_O3.additional_information') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"17. Radiative Forcings --> Greenhouse Gases --> CFC\nOzone-depleting and non-ozone-depleting fluorinated gases forcing\n17.1. Provision\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nHow this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CFC.provision') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"N/A\" \n# \"M\" \n# \"Y\" \n# \"E\" \n# \"ES\" \n# \"C\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"17.2. Equivalence Concentration\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nDetails of any equivalence concentrations used",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CFC.equivalence_concentration') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"N/A\" \n# \"Option 1\" \n# \"Option 2\" \n# \"Option 3\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"17.3. Additional Information\nIs Required: FALSE Type: STRING Cardinality: 0.1\nAdditional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CFC.additional_information') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"18. Radiative Forcings --> Aerosols --> SO4\nSO4 aerosol forcing\n18.1. Provision\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nHow this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.SO4.provision') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"N/A\" \n# \"M\" \n# \"Y\" \n# \"E\" \n# \"ES\" \n# \"C\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"18.2. Additional Information\nIs Required: FALSE Type: STRING Cardinality: 0.1\nAdditional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.SO4.additional_information') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"19. Radiative Forcings --> Aerosols --> Black Carbon\nBlack carbon aerosol forcing\n19.1. Provision\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nHow this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.black_carbon.provision') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"N/A\" \n# \"M\" \n# \"Y\" \n# \"E\" \n# \"ES\" \n# \"C\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"19.2. Additional Information\nIs Required: FALSE Type: STRING Cardinality: 0.1\nAdditional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.black_carbon.additional_information') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"20. Radiative Forcings --> Aerosols --> Organic Carbon\nOrganic carbon aerosol forcing\n20.1. Provision\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nHow this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.organic_carbon.provision') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"N/A\" \n# \"M\" \n# \"Y\" \n# \"E\" \n# \"ES\" \n# \"C\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"20.2. Additional Information\nIs Required: FALSE Type: STRING Cardinality: 0.1\nAdditional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.organic_carbon.additional_information') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"21. Radiative Forcings --> Aerosols --> Nitrate\nNitrate forcing\n21.1. Provision\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nHow this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.nitrate.provision') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"N/A\" \n# \"M\" \n# \"Y\" \n# \"E\" \n# \"ES\" \n# \"C\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"21.2. Additional Information\nIs Required: FALSE Type: STRING Cardinality: 0.1\nAdditional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.nitrate.additional_information') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"22. Radiative Forcings --> Aerosols --> Cloud Albedo Effect\nCloud albedo effect forcing (RFaci)\n22.1. Provision\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nHow this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_albedo_effect.provision') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"N/A\" \n# \"M\" \n# \"Y\" \n# \"E\" \n# \"ES\" \n# \"C\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"22.2. Aerosol Effect On Ice Clouds\nIs Required: TRUE Type: BOOLEAN Cardinality: 1.1\nRadiative effects of aerosols on ice clouds are represented?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_albedo_effect.aerosol_effect_on_ice_clouds') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n",
"22.3. Additional Information\nIs Required: FALSE Type: STRING Cardinality: 0.1\nAdditional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_albedo_effect.additional_information') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"23. Radiative Forcings --> Aerosols --> Cloud Lifetime Effect\nCloud lifetime effect forcing (ERFaci)\n23.1. Provision\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nHow this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_lifetime_effect.provision') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"N/A\" \n# \"M\" \n# \"Y\" \n# \"E\" \n# \"ES\" \n# \"C\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"23.2. Aerosol Effect On Ice Clouds\nIs Required: TRUE Type: BOOLEAN Cardinality: 1.1\nRadiative effects of aerosols on ice clouds are represented?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_lifetime_effect.aerosol_effect_on_ice_clouds') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n",
"23.3. RFaci From Sulfate Only\nIs Required: TRUE Type: BOOLEAN Cardinality: 1.1\nRadiative forcing from aerosol cloud interactions from sulfate aerosol only?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_lifetime_effect.RFaci_from_sulfate_only') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n",
"23.4. Additional Information\nIs Required: FALSE Type: STRING Cardinality: 0.1\nAdditional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_lifetime_effect.additional_information') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"24. Radiative Forcings --> Aerosols --> Dust\nDust forcing\n24.1. Provision\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nHow this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.dust.provision') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"N/A\" \n# \"M\" \n# \"Y\" \n# \"E\" \n# \"ES\" \n# \"C\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"24.2. Additional Information\nIs Required: FALSE Type: STRING Cardinality: 0.1\nAdditional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.dust.additional_information') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"25. Radiative Forcings --> Aerosols --> Tropospheric Volcanic\nTropospheric volcanic forcing\n25.1. Provision\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nHow this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.tropospheric_volcanic.provision') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"N/A\" \n# \"M\" \n# \"Y\" \n# \"E\" \n# \"ES\" \n# \"C\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"25.2. Historical Explosive Volcanic Aerosol Implementation\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nHow explosive volcanic aerosol is implemented in historical simulations",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.tropospheric_volcanic.historical_explosive_volcanic_aerosol_implementation') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Type A\" \n# \"Type B\" \n# \"Type C\" \n# \"Type D\" \n# \"Type E\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"25.3. Future Explosive Volcanic Aerosol Implementation\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nHow explosive volcanic aerosol is implemented in future simulations",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.tropospheric_volcanic.future_explosive_volcanic_aerosol_implementation') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Type A\" \n# \"Type B\" \n# \"Type C\" \n# \"Type D\" \n# \"Type E\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"25.4. Additional Information\nIs Required: FALSE Type: STRING Cardinality: 0.1\nAdditional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.tropospheric_volcanic.additional_information') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"26. Radiative Forcings --> Aerosols --> Stratospheric Volcanic\nStratospheric volcanic forcing\n26.1. Provision\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nHow this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.stratospheric_volcanic.provision') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"N/A\" \n# \"M\" \n# \"Y\" \n# \"E\" \n# \"ES\" \n# \"C\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"26.2. Historical Explosive Volcanic Aerosol Implementation\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nHow explosive volcanic aerosol is implemented in historical simulations",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.stratospheric_volcanic.historical_explosive_volcanic_aerosol_implementation') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Type A\" \n# \"Type B\" \n# \"Type C\" \n# \"Type D\" \n# \"Type E\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"26.3. Future Explosive Volcanic Aerosol Implementation\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nHow explosive volcanic aerosol is implemented in future simulations",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.stratospheric_volcanic.future_explosive_volcanic_aerosol_implementation') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Type A\" \n# \"Type B\" \n# \"Type C\" \n# \"Type D\" \n# \"Type E\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"26.4. Additional Information\nIs Required: FALSE Type: STRING Cardinality: 0.1\nAdditional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.stratospheric_volcanic.additional_information') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"27. Radiative Forcings --> Aerosols --> Sea Salt\nSea salt forcing\n27.1. Provision\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nHow this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.sea_salt.provision') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"N/A\" \n# \"M\" \n# \"Y\" \n# \"E\" \n# \"ES\" \n# \"C\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"27.2. Additional Information\nIs Required: FALSE Type: STRING Cardinality: 0.1\nAdditional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.sea_salt.additional_information') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"28. Radiative Forcings --> Other --> Land Use\nLand use forcing\n28.1. Provision\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nHow this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.radiative_forcings.other.land_use.provision') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"N/A\" \n# \"M\" \n# \"Y\" \n# \"E\" \n# \"ES\" \n# \"C\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"28.2. Crop Change Only\nIs Required: TRUE Type: BOOLEAN Cardinality: 1.1\nLand use change represented via crop change only?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.radiative_forcings.other.land_use.crop_change_only') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n",
"28.3. Additional Information\nIs Required: FALSE Type: STRING Cardinality: 0.1\nAdditional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.radiative_forcings.other.land_use.additional_information') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"29. Radiative Forcings --> Other --> Solar\nSolar forcing\n29.1. Provision\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nHow solar forcing is provided",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.radiative_forcings.other.solar.provision') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"N/A\" \n# \"irradiance\" \n# \"proton\" \n# \"electron\" \n# \"cosmic ray\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"29.2. Additional Information\nIs Required: FALSE Type: STRING Cardinality: 0.1\nAdditional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.radiative_forcings.other.solar.additional_information') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"©2017 ES-DOC"
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
LSSTC-DSFP/LSSTC-DSFP-Sessions | Sessions/Session10/Day0/TooBriefVisualization.ipynb | mit | [
"import numpy as np\nimport matplotlib.pyplot as plt\n\n%matplotlib notebook",
"Introduction to Visualization:\nDensity Estimation and Data Exploration\nVersion 0.1\nThere are many flavors of data analysis that fall under the \"visualization\" umbrella in astronomy. Today, by way of example, we will focus on 2 basic problems.\n\nBy AA Miller \n16 September 2017\nProblem 1) Density Estimation\nStarting with 2MASS and SDSS and extending through LSST, we are firmly in an era where data and large statistical samples are cheap. With this explosion in data volume comes a problem: we do not know the underlying probability density function (PDF) of the random variables measured via our observations. Hence - density estimation: an attempt to recover the unknown PDF from observations. In some cases theory can guide us to a parametric form for the PDF, but more often than not such guidance is not available. \nThere is a common, simple, and very familiar tool for density estimation: histograms. \nBut there is also a problem:\nHISTOGRAMS LIE!\nWe will \"prove\" this to be the case in a series of examples. For this exercise, we will load the famous Linnerud data set, which tested 20 middle aged men by measuring the number of chinups, situps, and jumps they could do in order to compare these numbers to their weight, pulse, and waist size. To load the data (just chinups for now) we will run the following:\nfrom sklearn.datasets import load_linnerud\nlinnerud = load_linnerud()\nchinups = linnerud.data[:,0]",
"from sklearn.datasets import load_linnerud\n\nlinnerud = load_linnerud()\nchinups = linnerud.data[:,0]",
"Problem 1a \nPlot the histogram for the number of chinups using the default settings in pyplot.",
"plt.hist( # complete",
"Already with this simple plot we see a problem - the choice of bin centers and number of bins suggest that there is a 0% probability that middle aged men can do 10 chinups. Intuitively this seems incorrect, so lets examine how the histogram changes if we change the number of bins or the bin centers.\nProblem 1b \nUsing the same data make 2 new histograms: (i) one with 5 bins (bins = 5), and (ii) one with the bars centered on the left bin edges (align = \"left\").\nHint - if overplotting the results, you may find it helpful to use the histtype = \"step\" option",
"plt.hist( # complete\n# complete",
"These small changes significantly change the output PDF. With fewer bins we get something closer to a continuous distribution, while shifting the bin centers reduces the probability to zero at 9 chinups. \nWhat if we instead allow the bin width to vary and require the same number of points in each bin? You can determine the bin edges for bins with 5 sources using the following command:\nbins = np.append(np.sort(chinups)[::5], np.max(chinups))\n\nProblem 1c \nPlot a histogram with variable width bins, each with the same number of points.\nHint - setting normed = True will normalize the bin heights so that the PDF integrates to 1.",
"# complete\nplt.hist(# complete",
"Ending the lie \nEarlier I stated that histograms lie. One simple way to combat this lie: show all the data. Displaying the original data points allows viewers to somewhat intuit the effects of the particular bin choices that have been made (though this can also be cumbersome for very large data sets, which these days is essentially all data sets). The standard for showing individual observations relative to a histogram is a \"rug plot,\" which shows a vertical tick (or other symbol) at the location of each source used to estimate the PDF.\nProblem 1d Execute the cell below to see an example of a rug plot.",
"plt.hist(chinups, histtype = 'step')\n\n# this is the code for the rug plot\nplt.plot(chinups, np.zeros_like(chinups), '|', color='k', ms = 25, mew = 4)",
"Of course, even rug plots are not a perfect solution. Many of the chinup measurements are repeated, and those instances cannot be easily isolated above. One (slightly) better solution is to vary the transparency of the rug \"whiskers\" using alpha = 0.3 in the whiskers plot call. But this too is far from perfect. \nTo recap, histograms are not ideal for density estimation for the following reasons: \n\nThey introduce discontinuities that are not present in the data\nThey are strongly sensitive to user choices ($N_\\mathrm{bins}$, bin centering, bin grouping), without any mathematical guidance to what these choices should be\nThey are difficult to visualize in higher dimensions\n\nHistograms are useful for generating a quick representation of univariate data, but for the reasons listed above they should never be used for analysis. Most especially, functions should not be fit to histograms given how greatly the number of bins and bin centering affects the output histogram.\nOkay - so if we are going to rail on histograms this much, there must be a better option. There is: Kernel Density Estimation (KDE), a nonparametric form of density estimation whereby a normalized kernel function is convolved with the discrete data to obtain a continuous estimate of the underlying PDF. As a rule, the kernel must integrate to 1 over the interval $-\\infty$ to $\\infty$ and be symmetric. There are many possible kernels (gaussian is highly popular, though Epanechnikov, an inverted parabola, produces the minimal mean square error). \nKDE is not completely free of the problems we illustrated for histograms above (in particular, both a kernel and the width of the kernel need to be selected), but it does manage to correct a number of the ills. We will now demonstrate this via a few examples using the scikit-learn implementation of KDE: KernelDensity, which is part of the sklearn.neighbors module. \nNote There are many implementations of KDE in Python, and Jake VanderPlas has put together an excellent description of the strengths and weaknesses of each. We will use the scitkit-learn version as it is in many cases the fastest implementation.\nTo demonstrate the basic idea behind KDE, we will begin by representing each point in the dataset as a block (i.e. we will adopt the tophat kernel). Borrowing some code from Jake, we can estimate the KDE using the following code:\nfrom sklearn.neighbors import KernelDensity\ndef kde_sklearn(data, grid, bandwidth = 1.0, **kwargs):\n kde_skl = KernelDensity(bandwidth = bandwidth, **kwargs)\n kde_skl.fit(data[:, np.newaxis])\n log_pdf = kde_skl.score_samples(grid[:, np.newaxis]) # sklearn returns log(density)\n\n return np.exp(log_pdf)\n\nThe two main options to set are the bandwidth and the kernel.",
"# execute this cell\nfrom sklearn.neighbors import KernelDensity\ndef kde_sklearn(data, grid, bandwidth = 1.0, **kwargs):\n kde_skl = KernelDensity(bandwidth = bandwidth, **kwargs)\n kde_skl.fit(data[:, np.newaxis])\n log_pdf = kde_skl.score_samples(grid[:, np.newaxis]) # sklearn returns log(density)\n\n return np.exp(log_pdf)",
"Problem 1e \nPlot the KDE of the PDF for the number of chinups middle aged men can do using a bandwidth of 0.1 and a tophat kernel.\nHint - as a general rule, the grid should be smaller than the bandwidth when plotting the PDF.",
"grid = # complete\nPDFtophat = kde_sklearn( # complete\nplt.plot( # complete",
"In this representation, each \"block\" has a height of 0.25. The bandwidth is too narrow to provide any overlap between the blocks. This choice of kernel and bandwidth produces an estimate that is essentially a histogram with a large number of bins. It gives no sense of continuity for the distribution. Now, we examine the difference (relative to histograms) upon changing the the width (i.e. kernel) of the blocks. \nProblem 1f \nPlot the KDE of the PDF for the number of chinups middle aged men can do using bandwidths of 1 and 5 and a tophat kernel. How do the results differ from the histogram plots above?",
"PDFtophat1 = # complete\n\n# complete\n# complete\n# complete",
"It turns out blocks are not an ideal representation for continuous data (see discussion on histograms above). Now we will explore the resulting PDF from other kernels. \nProblem 1g Plot the KDE of the PDF for the number of chinups middle aged men can do using a gaussian and Epanechnikov kernel. How do the results differ from the histogram plots above? \nHint - you will need to select the bandwidth. The examples above should provide insight into the useful range for bandwidth selection. You may need to adjust the values to get an answer you \"like.\"",
"PDFgaussian = # complete\nPDFepanechnikov = # complete",
"So, what is the optimal choice of bandwidth and kernel? Unfortunately, there is no hard and fast rule, as every problem will likely have a different optimization. Typically, the choice of bandwidth is far more important than the choice of kernel. In the case where the PDF is likely to be gaussian (or close to gaussian), then Silverman's rule of thumb can be used: \n$$h = 1.059 \\sigma n^{-1/5}$$\nwhere $h$ is the bandwidth, $\\sigma$ is the standard deviation of the samples, and $n$ is the total number of samples. Note - in situations with bimodal or more complicated distributions, this rule of thumb can lead to woefully inaccurate PDF estimates. The most general way to estimate the choice of bandwidth is via cross validation (we will cover cross-validation later today). \nWhat about multidimensional PDFs? It is possible using many of the Python implementations of KDE to estimate multidimensional PDFs, though it is very very important to beware the curse of dimensionality in these circumstances.\nProblem 2) Data Exploration\nNow a more open ended topic: data exploration. In brief, data exploration encompases a large suite of tools (including those discussed above) to examine data that live in large dimensional spaces. There is no single best method or optimal direction for data exploration. Instead, today we will introduce some of the tools available via python. \nAs an example we will start with a basic line plot - and examine tools beyond matplotlib.",
"x = np.arange(0, 6*np.pi, 0.1)\ny = np.cos(x)\n\nplt.plot(x,y, lw = 2)\nplt.xlabel('X')\nplt.ylabel('Y')\nplt.xlim(0, 6*np.pi)",
"Seaborn\nSeaborn is a plotting package that enables many useful features for exploration. In fact, a lot of the functionality that we developed above can readily be handled with seaborn.\nTo begin, we will make the same plot that we created in matplotlib.",
"import seaborn as sns\n\nfig = plt.figure()\nax = fig.add_subplot(111)\n\nax.plot(x,y, lw = 2)\nax.set_xlabel('X')\nax.set_ylabel('Y')\nax.set_xlim(0, 6*np.pi)",
"These plots look identical, but it is possible to change the style with seaborn. \nseaborn has 5 style presets: darkgrid, whitegrid, dark, white, and ticks. You can change the preset using the following: \nsns.set_style(\"whitegrid\")\n\nwhich will change the output for all subsequent plots. Note - if you want to change the style for only a single plot, that can be accomplished with the following: \nwith sns.axes_style(\"dark\"):\n\nwith all ploting commands inside the with statement. \nProblem 3a \nRe-plot the sine curve using each seaborn preset to see which you like best - then adopt this for the remainder of the notebook.",
"sns.set_style( # complete\n# complete",
"The folks behind seaborn have thought a lot about color palettes, which is a good thing. Remember - the choice of color for plots is one of the most essential aspects of visualization. A poor choice of colors can easily mask interesting patterns or suggest structure that is not real. To learn more about what is available, see the seaborn color tutorial. \nHere we load the default:",
"# default color palette\n\ncurrent_palette = sns.color_palette()\nsns.palplot(current_palette)",
"which we will now change to colorblind, which is clearer to those that are colorblind.",
"# set palette to colorblind\nsns.set_palette(\"colorblind\")\n\ncurrent_palette = sns.color_palette()\nsns.palplot(current_palette)",
"Now that we have covered the basics of seaborn (and the above examples truly only scratch the surface of what is possible), we will explore the power of seaborn for higher dimension data sets. We will load the famous Iris data set, which measures 4 different features of 3 different types of Iris flowers. There are 150 different flowers in the data set.\nNote - for those familiar with pandas seaborn is designed to integrate easily and directly with pandas DataFrame objects. In the example below the Iris data are loaded into a DataFrame. iPython notebooks also display the DataFrame data in a nice readable format.",
"iris = sns.load_dataset(\"iris\")\niris",
"Now that we have a sense of the data structure, it is useful to examine the distribution of features. Above, we went to great pains to produce histograms, KDEs, and rug plots. seaborn handles all of that effortlessly with the distplot function.\nProblem 3b \nPlot the distribution of petal lengths for the Iris data set.",
"# note - hist, kde, and rug all set to True, set to False to turn them off \nwith sns.axes_style(\"dark\"):\n sns.distplot(iris['petal_length'], bins=20, hist=True, kde=True, rug=True)",
"Of course, this data set lives in a 4D space, so plotting more than univariate distributions is important (and as we will see tomorrow this is particularly useful for visualizing classification results). Fortunately, seaborn makes it very easy to produce handy summary plots. \nAt this point, we are familiar with basic scatter plots in matplotlib.\nProblem 3c \nMake a matplotlib scatter plot showing the Iris petal length against the Iris petal width.",
"plt.scatter( # complete",
"Of course, when there are many many data points, scatter plots become difficult to interpret. As in the example below:",
"with sns.axes_style(\"darkgrid\"):\n xexample = np.random.normal(loc = 0.2, scale = 1.1, size = 10000)\n yexample = np.random.normal(loc = -0.1, scale = 0.9, size = 10000)\n\n plt.scatter(xexample, yexample)",
"Here, we see that there are many points, clustered about the origin, but we have no sense of the underlying density of the distribution. 2D histograms, such as plt.hist2d(), can alleviate this problem. I prefer to use plt.hexbin() which is a little easier on the eyes (though note - these histograms are just as subject to the same issues discussed above).",
"# hexbin w/ bins = \"log\" returns the log of counts/bin\n# mincnt = 1 displays only hexpix with at least 1 source present\nwith sns.axes_style(\"darkgrid\"):\n plt.hexbin(xexample, yexample, bins = \"log\", cmap = \"viridis\", mincnt = 1)\n plt.colorbar()",
"While the above plot provides a significant improvement over the scatter plot by providing a better sense of the density near the center of the distribution, the binedge effects are clearly present. An even better solution, like before, is a density estimate, which is easily built into seaborn via the kdeplot function.",
"with sns.axes_style(\"darkgrid\"):\n sns.kdeplot(xexample, yexample,shade=False)",
"This plot is much more appealing (and informative) than the previous two. For the first time we can clearly see that the distribution is not actually centered on the origin. Now we will move back to the Iris data set. \nSuppose we want to see univariate distributions in addition to the scatter plot? This is certainly possible with matplotlib and you can find examples on the web, however, with seaborn this is really easy.",
"sns.jointplot(x=iris['petal_length'], y=iris['petal_width'])",
"But! Histograms and scatter plots can be problematic as we have discussed many times before. \nProblem 3d \nRe-create the plot above but set kind='kde' to produce density estimates of the distributions.",
"sns.jointplot( # complete",
"That is much nicer than what was presented above. However - we still have a problem in that our data live in 4D, but we are (mostly) limited to 2D projections of that data. One way around this is via the seaborn version of a pairplot, which plots the distribution of every variable in the data set against each other. (Here is where the integration with pandas DataFrames becomes so powerful.)",
"sns.pairplot(iris[[\"sepal_length\", \"sepal_width\", \"petal_length\", \"petal_width\"]])",
"For data sets where we have classification labels, we can even color the various points using the hue option, and produce KDEs along the diagonal with diag_type = 'kde'.",
"sns.pairplot(iris, vars = [\"sepal_length\", \"sepal_width\", \"petal_length\", \"petal_width\"],\n hue = \"species\", diag_kind = 'kde')",
"Even better - there is an option to create a PairGrid which allows fine tuned control of the data as displayed above, below, and along the diagonal. In this way it becomes possible to avoid having symmetric redundancy, which is not all that informative. In the example below, we will show scatter plots and contour plots simultaneously.",
"g = sns.PairGrid(iris, vars = [\"sepal_length\", \"sepal_width\", \"petal_length\", \"petal_width\"],\n hue = \"species\", diag_sharey=False)\ng.map_lower(sns.kdeplot)\ng.map_upper(plt.scatter, edgecolor='white')\ng.map_diag(sns.kdeplot, lw=3)",
"Note - one disadvantage to the plot above is that the contours do not share the same color scheme as the KDE estimates and the scatter plot. I have not been able to figure out how to change this in a satisfactory way. (One potential solution is detailed here, however, it is worth noting that this solution restricts your color choices to a maximum of ~5 unless you are a colormaps wizard, and I am not.)"
] | [
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
nwhidden/ND101-Deep-Learning | batch-norm/Batch_Normalization_Exercises.ipynb | mit | [
"Batch Normalization – Practice\nBatch normalization is most useful when building deep neural networks. To demonstrate this, we'll create a convolutional neural network with 20 convolutional layers, followed by a fully connected layer. We'll use it to classify handwritten digits in the MNIST dataset, which should be familiar to you by now.\nThis is not a good network for classfying MNIST digits. You could create a much simpler network and get better results. However, to give you hands-on experience with batch normalization, we had to make an example that was:\n1. Complicated enough that training would benefit from batch normalization.\n2. Simple enough that it would train quickly, since this is meant to be a short exercise just to give you some practice adding batch normalization.\n3. Simple enough that the architecture would be easy to understand without additional resources.\nThis notebook includes two versions of the network that you can edit. The first uses higher level functions from the tf.layers package. The second is the same network, but uses only lower level functions in the tf.nn package.\n\nBatch Normalization with tf.layers.batch_normalization\nBatch Normalization with tf.nn.batch_normalization\n\nThe following cell loads TensorFlow, downloads the MNIST dataset if necessary, and loads it into an object named mnist. You'll need to run this cell before running anything else in the notebook.",
"import tensorflow as tf\nfrom tensorflow.examples.tutorials.mnist import input_data\nmnist = input_data.read_data_sets(\"MNIST_data/\", one_hot=True, reshape=False)",
"Batch Normalization using tf.layers.batch_normalization<a id=\"example_1\"></a>\nThis version of the network uses tf.layers for almost everything, and expects you to implement batch normalization using tf.layers.batch_normalization \nWe'll use the following function to create fully connected layers in our network. We'll create them with the specified number of neurons and a ReLU activation function.\nThis version of the function does not include batch normalization.",
"\"\"\"\nDO NOT MODIFY THIS CELL\n\"\"\"\ndef fully_connected(prev_layer, num_units):\n \"\"\"\n Create a fully connectd layer with the given layer as input and the given number of neurons.\n \n :param prev_layer: Tensor\n The Tensor that acts as input into this layer\n :param num_units: int\n The size of the layer. That is, the number of units, nodes, or neurons.\n :returns Tensor\n A new fully connected layer\n \"\"\"\n layer = tf.layers.dense(prev_layer, num_units, activation=tf.nn.relu)\n return layer",
"We'll use the following function to create convolutional layers in our network. They are very basic: we're always using a 3x3 kernel, ReLU activation functions, strides of 1x1 on layers with odd depths, and strides of 2x2 on layers with even depths. We aren't bothering with pooling layers at all in this network.\nThis version of the function does not include batch normalization.",
"\"\"\"\nDO NOT MODIFY THIS CELL\n\"\"\"\ndef conv_layer(prev_layer, layer_depth):\n \"\"\"\n Create a convolutional layer with the given layer as input.\n \n :param prev_layer: Tensor\n The Tensor that acts as input into this layer\n :param layer_depth: int\n We'll set the strides and number of feature maps based on the layer's depth in the network.\n This is *not* a good way to make a CNN, but it helps us create this example with very little code.\n :returns Tensor\n A new convolutional layer\n \"\"\"\n strides = 2 if layer_depth % 3 == 0 else 1\n conv_layer = tf.layers.conv2d(prev_layer, layer_depth*4, 3, strides, 'same', activation=tf.nn.relu)\n return conv_layer",
"Run the following cell, along with the earlier cells (to load the dataset and define the necessary functions). \nThis cell builds the network without batch normalization, then trains it on the MNIST dataset. It displays loss and accuracy data periodically while training.",
"\"\"\"\nDO NOT MODIFY THIS CELL\n\"\"\"\ndef train(num_batches, batch_size, learning_rate):\n # Build placeholders for the input samples and labels \n inputs = tf.placeholder(tf.float32, [None, 28, 28, 1])\n labels = tf.placeholder(tf.float32, [None, 10])\n \n # Feed the inputs into a series of 20 convolutional layers \n layer = inputs\n for layer_i in range(1, 20):\n layer = conv_layer(layer, layer_i)\n\n # Flatten the output from the convolutional layers \n orig_shape = layer.get_shape().as_list()\n layer = tf.reshape(layer, shape=[-1, orig_shape[1] * orig_shape[2] * orig_shape[3]])\n\n # Add one fully connected layer\n layer = fully_connected(layer, 100)\n\n # Create the output layer with 1 node for each \n logits = tf.layers.dense(layer, 10)\n \n # Define loss and training operations\n model_loss = tf.reduce_mean(tf.nn.sigmoid_cross_entropy_with_logits(logits=logits, labels=labels))\n train_opt = tf.train.AdamOptimizer(learning_rate).minimize(model_loss)\n \n # Create operations to test accuracy\n correct_prediction = tf.equal(tf.argmax(logits,1), tf.argmax(labels,1))\n accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))\n \n # Train and test the network\n with tf.Session() as sess:\n sess.run(tf.global_variables_initializer())\n for batch_i in range(num_batches):\n batch_xs, batch_ys = mnist.train.next_batch(batch_size)\n\n # train this batch\n sess.run(train_opt, {inputs: batch_xs, labels: batch_ys})\n \n # Periodically check the validation or training loss and accuracy\n if batch_i % 100 == 0:\n loss, acc = sess.run([model_loss, accuracy], {inputs: mnist.validation.images,\n labels: mnist.validation.labels})\n print('Batch: {:>2}: Validation loss: {:>3.5f}, Validation accuracy: {:>3.5f}'.format(batch_i, loss, acc))\n elif batch_i % 25 == 0:\n loss, acc = sess.run([model_loss, accuracy], {inputs: batch_xs, labels: batch_ys})\n print('Batch: {:>2}: Training loss: {:>3.5f}, Training accuracy: {:>3.5f}'.format(batch_i, loss, acc))\n\n # At the end, score the final accuracy for both the validation and test sets\n acc = sess.run(accuracy, {inputs: mnist.validation.images,\n labels: mnist.validation.labels})\n print('Final validation accuracy: {:>3.5f}'.format(acc))\n acc = sess.run(accuracy, {inputs: mnist.test.images,\n labels: mnist.test.labels})\n print('Final test accuracy: {:>3.5f}'.format(acc))\n \n # Score the first 100 test images individually. This won't work if batch normalization isn't implemented correctly.\n correct = 0\n for i in range(100):\n correct += sess.run(accuracy,feed_dict={inputs: [mnist.test.images[i]],\n labels: [mnist.test.labels[i]]})\n\n print(\"Accuracy on 100 samples:\", correct/100)\n\n\nnum_batches = 800\nbatch_size = 64\nlearning_rate = 0.002\n\ntf.reset_default_graph()\nwith tf.Graph().as_default():\n train(num_batches, batch_size, learning_rate)",
"With this many layers, it's going to take a lot of iterations for this network to learn. By the time you're done training these 800 batches, your final test and validation accuracies probably won't be much better than 10%. (It will be different each time, but will most likely be less than 15%.)\nUsing batch normalization, you'll be able to train this same network to over 90% in that same number of batches.\nAdd batch normalization\nWe've copied the previous three cells to get you started. Edit these cells to add batch normalization to the network. For this exercise, you should use tf.layers.batch_normalization to handle most of the math, but you'll need to make a few other changes to your network to integrate batch normalization. You may want to refer back to the lesson notebook to remind yourself of important things, like how your graph operations need to know whether or not you are performing training or inference. \nIf you get stuck, you can check out the Batch_Normalization_Solutions notebook to see how we did things.\nTODO: Modify fully_connected to add batch normalization to the fully connected layers it creates. Feel free to change the function's parameters if it helps.",
"def fully_connected(prev_layer, num_units, is_training):\n \"\"\"\n Create a fully connectd layer with the given layer as input and the given number of neurons.\n \n :param prev_layer: Tensor\n The Tensor that acts as input into this layer\n :param num_units: int\n The size of the layer. That is, the number of units, nodes, or neurons.\n :returns Tensor\n A new fully connected layer\n \"\"\"\n layer = tf.layers.dense(prev_layer, num_units, use_bias = False, activation=None)\n layer = tf.layers.batch_normalization(layer, training = is_training)\n layer = tf.nn.relu(layer)\n return layer",
"TODO: Modify conv_layer to add batch normalization to the convolutional layers it creates. Feel free to change the function's parameters if it helps.",
"def conv_layer(prev_layer, layer_depth, is_training):\n \"\"\"\n Create a convolutional layer with the given layer as input.\n \n :param prev_layer: Tensor\n The Tensor that acts as input into this layer\n :param layer_depth: int\n We'll set the strides and number of feature maps based on the layer's depth in the network.\n This is *not* a good way to make a CNN, but it helps us create this example with very little code.\n :returns Tensor\n A new convolutional layer\n \"\"\"\n strides = 2 if layer_depth % 3 == 0 else 1\n conv_layer = tf.layers.conv2d(prev_layer, layer_depth*4, 3, strides, 'same', use_bias = False, activation=None)\n conv_layer = tf.layers.batch_normalization(conv_layer, training = is_training)\n conv_layer = tf.nn.relu(conv_layer)\n return conv_layer",
"TODO: Edit the train function to support batch normalization. You'll need to make sure the network knows whether or not it is training, and you'll need to make sure it updates and uses its population statistics correctly.",
"def train(num_batches, batch_size, learning_rate):\n # Build placeholders for the input samples and labels \n inputs = tf.placeholder(tf.float32, [None, 28, 28, 1])\n labels = tf.placeholder(tf.float32, [None, 10])\n \n # training boolean\n is_training = tf.placeholder(tf.bool)\n \n # Feed the inputs into a series of 20 convolutional layers \n layer = inputs\n for layer_i in range(1, 20):\n layer = conv_layer(layer, layer_i, is_training)\n\n # Flatten the output from the convolutional layers \n orig_shape = layer.get_shape().as_list()\n layer = tf.reshape(layer, shape=[-1, orig_shape[1] * orig_shape[2] * orig_shape[3]])\n\n # Add one fully connected layer\n layer = fully_connected(layer, 100, is_training)\n\n # Create the output layer with 1 node for each \n logits = tf.layers.dense(layer, 10)\n \n # Define loss and training operations\n model_loss = tf.reduce_mean(tf.nn.sigmoid_cross_entropy_with_logits(logits=logits, labels=labels))\n # Tell TensorFlow to update the population statistics while training\n with tf.control_dependencies(tf.get_collection(tf.GraphKeys.UPDATE_OPS)):\n train_opt = tf.train.AdamOptimizer(learning_rate).minimize(model_loss)\n \n # Create operations to test accuracy\n correct_prediction = tf.equal(tf.argmax(logits,1), tf.argmax(labels,1))\n accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))\n \n # Train and test the network\n with tf.Session() as sess:\n sess.run(tf.global_variables_initializer())\n for batch_i in range(num_batches):\n batch_xs, batch_ys = mnist.train.next_batch(batch_size)\n\n # train this batch\n sess.run(train_opt, {inputs: batch_xs, labels: batch_ys, is_training: True})\n \n # Periodically check the validation or training loss and accuracy\n if batch_i % 100 == 0:\n loss, acc = sess.run([model_loss, accuracy], {inputs: mnist.validation.images,\n labels: mnist.validation.labels,\n is_training: False})\n print('Batch: {:>2}: Validation loss: {:>3.5f}, Validation accuracy: {:>3.5f}'.format(batch_i, loss, acc))\n elif batch_i % 25 == 0:\n loss, acc = sess.run([model_loss, accuracy], {inputs: batch_xs, labels: batch_ys, is_training: False})\n print('Batch: {:>2}: Training loss: {:>3.5f}, Training accuracy: {:>3.5f}'.format(batch_i, loss, acc))\n\n # At the end, score the final accuracy for both the validation and test sets\n acc = sess.run(accuracy, {inputs: mnist.validation.images,\n labels: mnist.validation.labels,\n is_training: False})\n print('Final validation accuracy: {:>3.5f}'.format(acc))\n acc = sess.run(accuracy, {inputs: mnist.test.images,\n labels: mnist.test.labels,\n is_training: False})\n print('Final test accuracy: {:>3.5f}'.format(acc))\n \n # Score the first 100 test images individually. This won't work if batch normalization isn't implemented correctly.\n correct = 0\n for i in range(100):\n correct += sess.run(accuracy,feed_dict={inputs: [mnist.test.images[i]],\n labels: [mnist.test.labels[i]],\n is_training: False})\n\n print(\"Accuracy on 100 samples:\", correct/100)\n\n\nnum_batches = 800\nbatch_size = 64\nlearning_rate = 0.002\n\ntf.reset_default_graph()\nwith tf.Graph().as_default():\n train(num_batches, batch_size, learning_rate)",
"With batch normalization, you should now get an accuracy over 90%. Notice also the last line of the output: Accuracy on 100 samples. If this value is low while everything else looks good, that means you did not implement batch normalization correctly. Specifically, it means you either did not calculate the population mean and variance while training, or you are not using those values during inference.\nBatch Normalization using tf.nn.batch_normalization<a id=\"example_2\"></a>\nMost of the time you will be able to use higher level functions exclusively, but sometimes you may want to work at a lower level. For example, if you ever want to implement a new feature – something new enough that TensorFlow does not already include a high-level implementation of it, like batch normalization in an LSTM – then you may need to know these sorts of things.\nThis version of the network uses tf.nn for almost everything, and expects you to implement batch normalization using tf.nn.batch_normalization.\nOptional TODO: You can run the next three cells before you edit them just to see how the network performs without batch normalization. However, the results should be pretty much the same as you saw with the previous example before you added batch normalization. \nTODO: Modify fully_connected to add batch normalization to the fully connected layers it creates. Feel free to change the function's parameters if it helps.\nNote: For convenience, we continue to use tf.layers.dense for the fully_connected layer. By this point in the class, you should have no problem replacing that with matrix operations between the prev_layer and explicit weights and biases variables.",
"def fully_connected(prev_layer, num_units):\n \"\"\"\n Create a fully connectd layer with the given layer as input and the given number of neurons.\n \n :param prev_layer: Tensor\n The Tensor that acts as input into this layer\n :param num_units: int\n The size of the layer. That is, the number of units, nodes, or neurons.\n :returns Tensor\n A new fully connected layer\n \"\"\"\n layer = tf.layers.dense(prev_layer, num_units, activation=tf.nn.relu)\n return layer",
"TODO: Modify conv_layer to add batch normalization to the fully connected layers it creates. Feel free to change the function's parameters if it helps.\nNote: Unlike in the previous example that used tf.layers, adding batch normalization to these convolutional layers does require some slight differences to what you did in fully_connected.",
"def conv_layer(prev_layer, layer_depth):\n \"\"\"\n Create a convolutional layer with the given layer as input.\n \n :param prev_layer: Tensor\n The Tensor that acts as input into this layer\n :param layer_depth: int\n We'll set the strides and number of feature maps based on the layer's depth in the network.\n This is *not* a good way to make a CNN, but it helps us create this example with very little code.\n :returns Tensor\n A new convolutional layer\n \"\"\"\n strides = 2 if layer_depth % 3 == 0 else 1\n\n in_channels = prev_layer.get_shape().as_list()[3]\n out_channels = layer_depth*4\n \n weights = tf.Variable(\n tf.truncated_normal([3, 3, in_channels, out_channels], stddev=0.05))\n \n bias = tf.Variable(tf.zeros(out_channels))\n\n conv_layer = tf.nn.conv2d(prev_layer, weights, strides=[1,strides, strides, 1], padding='SAME')\n conv_layer = tf.nn.bias_add(conv_layer, bias)\n conv_layer = tf.nn.relu(conv_layer)\n\n return conv_layer",
"TODO: Edit the train function to support batch normalization. You'll need to make sure the network knows whether or not it is training.",
"def train(num_batches, batch_size, learning_rate):\n # Build placeholders for the input samples and labels \n inputs = tf.placeholder(tf.float32, [None, 28, 28, 1])\n labels = tf.placeholder(tf.float32, [None, 10])\n \n # Feed the inputs into a series of 20 convolutional layers \n layer = inputs\n for layer_i in range(1, 20):\n layer = conv_layer(layer, layer_i)\n\n # Flatten the output from the convolutional layers \n orig_shape = layer.get_shape().as_list()\n layer = tf.reshape(layer, shape=[-1, orig_shape[1] * orig_shape[2] * orig_shape[3]])\n\n # Add one fully connected layer\n layer = fully_connected(layer, 100)\n\n # Create the output layer with 1 node for each \n logits = tf.layers.dense(layer, 10)\n \n # Define loss and training operations\n model_loss = tf.reduce_mean(tf.nn.sigmoid_cross_entropy_with_logits(logits=logits, labels=labels))\n train_opt = tf.train.AdamOptimizer(learning_rate).minimize(model_loss)\n \n # Create operations to test accuracy\n correct_prediction = tf.equal(tf.argmax(logits,1), tf.argmax(labels,1))\n accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))\n \n # Train and test the network\n with tf.Session() as sess:\n sess.run(tf.global_variables_initializer())\n for batch_i in range(num_batches):\n batch_xs, batch_ys = mnist.train.next_batch(batch_size)\n\n # train this batch\n sess.run(train_opt, {inputs: batch_xs, labels: batch_ys})\n \n # Periodically check the validation or training loss and accuracy\n if batch_i % 100 == 0:\n loss, acc = sess.run([model_loss, accuracy], {inputs: mnist.validation.images,\n labels: mnist.validation.labels})\n print('Batch: {:>2}: Validation loss: {:>3.5f}, Validation accuracy: {:>3.5f}'.format(batch_i, loss, acc))\n elif batch_i % 25 == 0:\n loss, acc = sess.run([model_loss, accuracy], {inputs: batch_xs, labels: batch_ys})\n print('Batch: {:>2}: Training loss: {:>3.5f}, Training accuracy: {:>3.5f}'.format(batch_i, loss, acc))\n\n # At the end, score the final accuracy for both the validation and test sets\n acc = sess.run(accuracy, {inputs: mnist.validation.images,\n labels: mnist.validation.labels})\n print('Final validation accuracy: {:>3.5f}'.format(acc))\n acc = sess.run(accuracy, {inputs: mnist.test.images,\n labels: mnist.test.labels})\n print('Final test accuracy: {:>3.5f}'.format(acc))\n \n # Score the first 100 test images individually. This won't work if batch normalization isn't implemented correctly.\n correct = 0\n for i in range(100):\n correct += sess.run(accuracy,feed_dict={inputs: [mnist.test.images[i]],\n labels: [mnist.test.labels[i]]})\n\n print(\"Accuracy on 100 samples:\", correct/100)\n\n\nnum_batches = 800\nbatch_size = 64\nlearning_rate = 0.002\n\ntf.reset_default_graph()\nwith tf.Graph().as_default():\n train(num_batches, batch_size, learning_rate)",
"Once again, the model with batch normalization should reach an accuracy over 90%. There are plenty of details that can go wrong when implementing at this low level, so if you got it working - great job! If not, do not worry, just look at the Batch_Normalization_Solutions notebook to see what went wrong."
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
ES-DOC/esdoc-jupyterhub | notebooks/messy-consortium/cmip6/models/sandbox-1/atmoschem.ipynb | gpl-3.0 | [
"ES-DOC CMIP6 Model Properties - Atmoschem\nMIP Era: CMIP6\nInstitute: MESSY-CONSORTIUM\nSource ID: SANDBOX-1\nTopic: Atmoschem\nSub-Topics: Transport, Emissions Concentrations, Gas Phase Chemistry, Stratospheric Heterogeneous Chemistry, Tropospheric Heterogeneous Chemistry, Photo Chemistry. \nProperties: 84 (39 required)\nModel descriptions: Model description details\nInitialized From: -- \nNotebook Help: Goto notebook help page\nNotebook Initialised: 2018-02-15 16:54:10\nDocument Setup\nIMPORTANT: to be executed each time you run the notebook",
"# DO NOT EDIT ! \nfrom pyesdoc.ipython.model_topic import NotebookOutput \n\n# DO NOT EDIT ! \nDOC = NotebookOutput('cmip6', 'messy-consortium', 'sandbox-1', 'atmoschem')",
"Document Authors\nSet document authors",
"# Set as follows: DOC.set_author(\"name\", \"email\") \n# TODO - please enter value(s)",
"Document Contributors\nSpecify document contributors",
"# Set as follows: DOC.set_contributor(\"name\", \"email\") \n# TODO - please enter value(s)",
"Document Publication\nSpecify document publication status",
"# Set publication status: \n# 0=do not publish, 1=publish. \nDOC.set_publication_status(0)",
"Document Table of Contents\n1. Key Properties\n2. Key Properties --> Software Properties\n3. Key Properties --> Timestep Framework\n4. Key Properties --> Timestep Framework --> Split Operator Order\n5. Key Properties --> Tuning Applied\n6. Grid\n7. Grid --> Resolution\n8. Transport\n9. Emissions Concentrations\n10. Emissions Concentrations --> Surface Emissions\n11. Emissions Concentrations --> Atmospheric Emissions\n12. Emissions Concentrations --> Concentrations\n13. Gas Phase Chemistry\n14. Stratospheric Heterogeneous Chemistry\n15. Tropospheric Heterogeneous Chemistry\n16. Photo Chemistry\n17. Photo Chemistry --> Photolysis \n1. Key Properties\nKey properties of the atmospheric chemistry\n1.1. Model Overview\nIs Required: TRUE Type: STRING Cardinality: 1.1\nOverview of atmospheric chemistry model.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.key_properties.model_overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"1.2. Model Name\nIs Required: TRUE Type: STRING Cardinality: 1.1\nName of atmospheric chemistry model code.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.key_properties.model_name') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"1.3. Chemistry Scheme Scope\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nAtmospheric domains covered by the atmospheric chemistry model",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.key_properties.chemistry_scheme_scope') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"troposhere\" \n# \"stratosphere\" \n# \"mesosphere\" \n# \"mesosphere\" \n# \"whole atmosphere\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"1.4. Basic Approximations\nIs Required: TRUE Type: STRING Cardinality: 1.1\nBasic approximations made in the atmospheric chemistry model",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.key_properties.basic_approximations') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"1.5. Prognostic Variables Form\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nForm of prognostic variables in the atmospheric chemistry component.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.key_properties.prognostic_variables_form') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"3D mass/mixing ratio for gas\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"1.6. Number Of Tracers\nIs Required: TRUE Type: INTEGER Cardinality: 1.1\nNumber of advected tracers in the atmospheric chemistry model",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.key_properties.number_of_tracers') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"1.7. Family Approach\nIs Required: TRUE Type: BOOLEAN Cardinality: 1.1\nAtmospheric chemistry calculations (not advection) generalized into families of species?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.key_properties.family_approach') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n",
"1.8. Coupling With Chemical Reactivity\nIs Required: TRUE Type: BOOLEAN Cardinality: 1.1\nAtmospheric chemistry transport scheme turbulence is couple with chemical reactivity?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.key_properties.coupling_with_chemical_reactivity') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n",
"2. Key Properties --> Software Properties\nSoftware properties of aerosol code\n2.1. Repository\nIs Required: FALSE Type: STRING Cardinality: 0.1\nLocation of code for this component.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.key_properties.software_properties.repository') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"2.2. Code Version\nIs Required: FALSE Type: STRING Cardinality: 0.1\nCode version identifier.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.key_properties.software_properties.code_version') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"2.3. Code Languages\nIs Required: FALSE Type: STRING Cardinality: 0.N\nCode language(s).",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.key_properties.software_properties.code_languages') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"3. Key Properties --> Timestep Framework\nTimestepping in the atmospheric chemistry model\n3.1. Method\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nMathematical method deployed to solve the evolution of a given variable",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.method') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Operator splitting\" \n# \"Integrated\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"3.2. Split Operator Advection Timestep\nIs Required: FALSE Type: INTEGER Cardinality: 0.1\nTimestep for chemical species advection (in seconds)",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_advection_timestep') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"3.3. Split Operator Physical Timestep\nIs Required: FALSE Type: INTEGER Cardinality: 0.1\nTimestep for physics (in seconds).",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_physical_timestep') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"3.4. Split Operator Chemistry Timestep\nIs Required: FALSE Type: INTEGER Cardinality: 0.1\nTimestep for chemistry (in seconds).",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_chemistry_timestep') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"3.5. Split Operator Alternate Order\nIs Required: FALSE Type: BOOLEAN Cardinality: 0.1\n?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_alternate_order') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n",
"3.6. Integrated Timestep\nIs Required: TRUE Type: INTEGER Cardinality: 1.1\nTimestep for the atmospheric chemistry model (in seconds)",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.integrated_timestep') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"3.7. Integrated Scheme Type\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nSpecify the type of timestep scheme",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.integrated_scheme_type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Explicit\" \n# \"Implicit\" \n# \"Semi-implicit\" \n# \"Semi-analytic\" \n# \"Impact solver\" \n# \"Back Euler\" \n# \"Newton Raphson\" \n# \"Rosenbrock\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"4. Key Properties --> Timestep Framework --> Split Operator Order\n**\n4.1. Turbulence\nIs Required: FALSE Type: INTEGER Cardinality: 0.1\nCall order for turbulence scheme. This should be an integer greater than zero, and may be the same value as for another process if they are calculated at the same time.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.turbulence') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"4.2. Convection\nIs Required: FALSE Type: INTEGER Cardinality: 0.1\nCall order for convection scheme This should be an integer greater than zero, and may be the same value as for another process if they are calculated at the same time.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.convection') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"4.3. Precipitation\nIs Required: FALSE Type: INTEGER Cardinality: 0.1\nCall order for precipitation scheme. This should be an integer greater than zero, and may be the same value as for another process if they are calculated at the same time.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.precipitation') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"4.4. Emissions\nIs Required: FALSE Type: INTEGER Cardinality: 0.1\nCall order for emissions scheme. This should be an integer greater than zero, and may be the same value as for another process if they are calculated at the same time.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.emissions') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"4.5. Deposition\nIs Required: FALSE Type: INTEGER Cardinality: 0.1\nCall order for deposition scheme. This should be an integer greater than zero, and may be the same value as for another process if they are calculated at the same time.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.deposition') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"4.6. Gas Phase Chemistry\nIs Required: FALSE Type: INTEGER Cardinality: 0.1\nCall order for gas phase chemistry scheme. This should be an integer greater than zero, and may be the same value as for another process if they are calculated at the same time.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.gas_phase_chemistry') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"4.7. Tropospheric Heterogeneous Phase Chemistry\nIs Required: FALSE Type: INTEGER Cardinality: 0.1\nCall order for tropospheric heterogeneous phase chemistry scheme. This should be an integer greater than zero, and may be the same value as for another process if they are calculated at the same time.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.tropospheric_heterogeneous_phase_chemistry') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"4.8. Stratospheric Heterogeneous Phase Chemistry\nIs Required: FALSE Type: INTEGER Cardinality: 0.1\nCall order for stratospheric heterogeneous phase chemistry scheme. This should be an integer greater than zero, and may be the same value as for another process if they are calculated at the same time.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.stratospheric_heterogeneous_phase_chemistry') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"4.9. Photo Chemistry\nIs Required: FALSE Type: INTEGER Cardinality: 0.1\nCall order for photo chemistry scheme. This should be an integer greater than zero, and may be the same value as for another process if they are calculated at the same time.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.photo_chemistry') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"4.10. Aerosols\nIs Required: FALSE Type: INTEGER Cardinality: 0.1\nCall order for aerosols scheme. This should be an integer greater than zero, and may be the same value as for another process if they are calculated at the same time.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.aerosols') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"5. Key Properties --> Tuning Applied\nTuning methodology for atmospheric chemistry component\n5.1. Description\nIs Required: TRUE Type: STRING Cardinality: 1.1\nGeneral overview description of tuning: explain and motivate the main targets and metrics retained. &Document the relative weight given to climate performance metrics versus process oriented metrics, &and on the possible conflicts with parameterization level tuning. In particular describe any struggle &with a parameter value that required pushing it to its limits to solve a particular model deficiency.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.key_properties.tuning_applied.description') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"5.2. Global Mean Metrics Used\nIs Required: FALSE Type: STRING Cardinality: 0.N\nList set of metrics of the global mean state used in tuning model/component",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.key_properties.tuning_applied.global_mean_metrics_used') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"5.3. Regional Metrics Used\nIs Required: FALSE Type: STRING Cardinality: 0.N\nList of regional metrics of mean state used in tuning model/component",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.key_properties.tuning_applied.regional_metrics_used') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"5.4. Trend Metrics Used\nIs Required: FALSE Type: STRING Cardinality: 0.N\nList observed trend metrics used in tuning model/component",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.key_properties.tuning_applied.trend_metrics_used') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"6. Grid\nAtmospheric chemistry grid\n6.1. Overview\nIs Required: TRUE Type: STRING Cardinality: 1.1\nDescribe the general structure of the atmopsheric chemistry grid",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.grid.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"6.2. Matches Atmosphere Grid\nIs Required: TRUE Type: BOOLEAN Cardinality: 1.1\n* Does the atmospheric chemistry grid match the atmosphere grid?*",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.grid.matches_atmosphere_grid') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n",
"7. Grid --> Resolution\nResolution in the atmospheric chemistry grid\n7.1. Name\nIs Required: TRUE Type: STRING Cardinality: 1.1\nThis is a string usually used by the modelling group to describe the resolution of this grid, e.g. ORCA025, N512L180, T512L70 etc.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.grid.resolution.name') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"7.2. Canonical Horizontal Resolution\nIs Required: FALSE Type: STRING Cardinality: 0.1\nExpression quoted for gross comparisons of resolution, eg. 50km or 0.1 degrees etc.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.grid.resolution.canonical_horizontal_resolution') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"7.3. Number Of Horizontal Gridpoints\nIs Required: FALSE Type: INTEGER Cardinality: 0.1\nTotal number of horizontal (XY) points (or degrees of freedom) on computational grid.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.grid.resolution.number_of_horizontal_gridpoints') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"7.4. Number Of Vertical Levels\nIs Required: FALSE Type: INTEGER Cardinality: 0.1\nNumber of vertical levels resolved on computational grid.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.grid.resolution.number_of_vertical_levels') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"7.5. Is Adaptive Grid\nIs Required: FALSE Type: BOOLEAN Cardinality: 0.1\nDefault is False. Set true if grid resolution changes during execution.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.grid.resolution.is_adaptive_grid') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n",
"8. Transport\nAtmospheric chemistry transport\n8.1. Overview\nIs Required: TRUE Type: STRING Cardinality: 1.1\nGeneral overview of transport implementation",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.transport.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"8.2. Use Atmospheric Transport\nIs Required: TRUE Type: BOOLEAN Cardinality: 1.1\nIs transport handled by the atmosphere, rather than within atmospheric cehmistry?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.transport.use_atmospheric_transport') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n",
"8.3. Transport Details\nIs Required: FALSE Type: STRING Cardinality: 0.1\nIf transport is handled within the atmospheric chemistry scheme, describe it.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.transport.transport_details') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"9. Emissions Concentrations\nAtmospheric chemistry emissions\n9.1. Overview\nIs Required: TRUE Type: STRING Cardinality: 1.1\nOverview atmospheric chemistry emissions",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.emissions_concentrations.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"10. Emissions Concentrations --> Surface Emissions\n**\n10.1. Sources\nIs Required: FALSE Type: ENUM Cardinality: 0.N\nSources of the chemical species emitted at the surface that are taken into account in the emissions scheme",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.emissions_concentrations.surface_emissions.sources') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Vegetation\" \n# \"Soil\" \n# \"Sea surface\" \n# \"Anthropogenic\" \n# \"Biomass burning\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"10.2. Method\nIs Required: FALSE Type: ENUM Cardinality: 0.N\nMethods used to define chemical species emitted directly into model layers above the surface (several methods allowed because the different species may not use the same method).",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.emissions_concentrations.surface_emissions.method') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Climatology\" \n# \"Spatially uniform mixing ratio\" \n# \"Spatially uniform concentration\" \n# \"Interactive\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"10.3. Prescribed Climatology Emitted Species\nIs Required: FALSE Type: STRING Cardinality: 0.1\nList of chemical species emitted at the surface and prescribed via a climatology, and the nature of the climatology (E.g. CO (monthly), C2H6 (constant))",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.emissions_concentrations.surface_emissions.prescribed_climatology_emitted_species') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"10.4. Prescribed Spatially Uniform Emitted Species\nIs Required: FALSE Type: STRING Cardinality: 0.1\nList of chemical species emitted at the surface and prescribed as spatially uniform",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.emissions_concentrations.surface_emissions.prescribed_spatially_uniform_emitted_species') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"10.5. Interactive Emitted Species\nIs Required: FALSE Type: STRING Cardinality: 0.1\nList of chemical species emitted at the surface and specified via an interactive method",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.emissions_concentrations.surface_emissions.interactive_emitted_species') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"10.6. Other Emitted Species\nIs Required: FALSE Type: STRING Cardinality: 0.1\nList of chemical species emitted at the surface and specified via any other method",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.emissions_concentrations.surface_emissions.other_emitted_species') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"11. Emissions Concentrations --> Atmospheric Emissions\nTO DO\n11.1. Sources\nIs Required: FALSE Type: ENUM Cardinality: 0.N\nSources of chemical species emitted in the atmosphere that are taken into account in the emissions scheme.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.emissions_concentrations.atmospheric_emissions.sources') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Aircraft\" \n# \"Biomass burning\" \n# \"Lightning\" \n# \"Volcanos\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"11.2. Method\nIs Required: FALSE Type: ENUM Cardinality: 0.N\nMethods used to define the chemical species emitted in the atmosphere (several methods allowed because the different species may not use the same method).",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.emissions_concentrations.atmospheric_emissions.method') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Climatology\" \n# \"Spatially uniform mixing ratio\" \n# \"Spatially uniform concentration\" \n# \"Interactive\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"11.3. Prescribed Climatology Emitted Species\nIs Required: FALSE Type: STRING Cardinality: 0.1\nList of chemical species emitted in the atmosphere and prescribed via a climatology (E.g. CO (monthly), C2H6 (constant))",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.emissions_concentrations.atmospheric_emissions.prescribed_climatology_emitted_species') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"11.4. Prescribed Spatially Uniform Emitted Species\nIs Required: FALSE Type: STRING Cardinality: 0.1\nList of chemical species emitted in the atmosphere and prescribed as spatially uniform",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.emissions_concentrations.atmospheric_emissions.prescribed_spatially_uniform_emitted_species') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"11.5. Interactive Emitted Species\nIs Required: FALSE Type: STRING Cardinality: 0.1\nList of chemical species emitted in the atmosphere and specified via an interactive method",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.emissions_concentrations.atmospheric_emissions.interactive_emitted_species') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"11.6. Other Emitted Species\nIs Required: FALSE Type: STRING Cardinality: 0.1\nList of chemical species emitted in the atmosphere and specified via an "other method"",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.emissions_concentrations.atmospheric_emissions.other_emitted_species') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"12. Emissions Concentrations --> Concentrations\nTO DO\n12.1. Prescribed Lower Boundary\nIs Required: FALSE Type: STRING Cardinality: 0.1\nList of species prescribed at the lower boundary.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.emissions_concentrations.concentrations.prescribed_lower_boundary') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"12.2. Prescribed Upper Boundary\nIs Required: FALSE Type: STRING Cardinality: 0.1\nList of species prescribed at the upper boundary.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.emissions_concentrations.concentrations.prescribed_upper_boundary') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"13. Gas Phase Chemistry\nAtmospheric chemistry transport\n13.1. Overview\nIs Required: TRUE Type: STRING Cardinality: 1.1\nOverview gas phase atmospheric chemistry",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.gas_phase_chemistry.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"13.2. Species\nIs Required: FALSE Type: ENUM Cardinality: 0.N\nSpecies included in the gas phase chemistry scheme.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.gas_phase_chemistry.species') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"HOx\" \n# \"NOy\" \n# \"Ox\" \n# \"Cly\" \n# \"HSOx\" \n# \"Bry\" \n# \"VOCs\" \n# \"isoprene\" \n# \"H2O\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"13.3. Number Of Bimolecular Reactions\nIs Required: TRUE Type: INTEGER Cardinality: 1.1\nThe number of bi-molecular reactions in the gas phase chemistry scheme.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.gas_phase_chemistry.number_of_bimolecular_reactions') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"13.4. Number Of Termolecular Reactions\nIs Required: TRUE Type: INTEGER Cardinality: 1.1\nThe number of ter-molecular reactions in the gas phase chemistry scheme.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.gas_phase_chemistry.number_of_termolecular_reactions') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"13.5. Number Of Tropospheric Heterogenous Reactions\nIs Required: TRUE Type: INTEGER Cardinality: 1.1\nThe number of reactions in the tropospheric heterogeneous chemistry scheme.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.gas_phase_chemistry.number_of_tropospheric_heterogenous_reactions') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"13.6. Number Of Stratospheric Heterogenous Reactions\nIs Required: TRUE Type: INTEGER Cardinality: 1.1\nThe number of reactions in the stratospheric heterogeneous chemistry scheme.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.gas_phase_chemistry.number_of_stratospheric_heterogenous_reactions') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"13.7. Number Of Advected Species\nIs Required: TRUE Type: INTEGER Cardinality: 1.1\nThe number of advected species in the gas phase chemistry scheme.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.gas_phase_chemistry.number_of_advected_species') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"13.8. Number Of Steady State Species\nIs Required: TRUE Type: INTEGER Cardinality: 1.1\nThe number of gas phase species for which the concentration is updated in the chemical solver assuming photochemical steady state",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.gas_phase_chemistry.number_of_steady_state_species') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"13.9. Interactive Dry Deposition\nIs Required: TRUE Type: BOOLEAN Cardinality: 1.1\nIs dry deposition interactive (as opposed to prescribed)? Dry deposition describes the dry processes by which gaseous species deposit themselves on solid surfaces thus decreasing their concentration in the air.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.gas_phase_chemistry.interactive_dry_deposition') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n",
"13.10. Wet Deposition\nIs Required: TRUE Type: BOOLEAN Cardinality: 1.1\nIs wet deposition included? Wet deposition describes the moist processes by which gaseous species deposit themselves on solid surfaces thus decreasing their concentration in the air.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.gas_phase_chemistry.wet_deposition') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n",
"13.11. Wet Oxidation\nIs Required: TRUE Type: BOOLEAN Cardinality: 1.1\nIs wet oxidation included? Oxidation describes the loss of electrons or an increase in oxidation state by a molecule",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.gas_phase_chemistry.wet_oxidation') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n",
"14. Stratospheric Heterogeneous Chemistry\nAtmospheric chemistry startospheric heterogeneous chemistry\n14.1. Overview\nIs Required: TRUE Type: STRING Cardinality: 1.1\nOverview stratospheric heterogenous atmospheric chemistry",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.stratospheric_heterogeneous_chemistry.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"14.2. Gas Phase Species\nIs Required: FALSE Type: ENUM Cardinality: 0.N\nGas phase species included in the stratospheric heterogeneous chemistry scheme.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.stratospheric_heterogeneous_chemistry.gas_phase_species') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Cly\" \n# \"Bry\" \n# \"NOy\" \n# TODO - please enter value(s)\n",
"14.3. Aerosol Species\nIs Required: FALSE Type: ENUM Cardinality: 0.N\nAerosol species included in the stratospheric heterogeneous chemistry scheme.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.stratospheric_heterogeneous_chemistry.aerosol_species') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Sulphate\" \n# \"Polar stratospheric ice\" \n# \"NAT (Nitric acid trihydrate)\" \n# \"NAD (Nitric acid dihydrate)\" \n# \"STS (supercooled ternary solution aerosol particule))\" \n# TODO - please enter value(s)\n",
"14.4. Number Of Steady State Species\nIs Required: TRUE Type: INTEGER Cardinality: 1.1\nThe number of steady state species in the stratospheric heterogeneous chemistry scheme.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.stratospheric_heterogeneous_chemistry.number_of_steady_state_species') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"14.5. Sedimentation\nIs Required: TRUE Type: BOOLEAN Cardinality: 1.1\nIs sedimentation is included in the stratospheric heterogeneous chemistry scheme or not?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.stratospheric_heterogeneous_chemistry.sedimentation') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n",
"14.6. Coagulation\nIs Required: TRUE Type: BOOLEAN Cardinality: 1.1\nIs coagulation is included in the stratospheric heterogeneous chemistry scheme or not?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.stratospheric_heterogeneous_chemistry.coagulation') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n",
"15. Tropospheric Heterogeneous Chemistry\nAtmospheric chemistry tropospheric heterogeneous chemistry\n15.1. Overview\nIs Required: TRUE Type: STRING Cardinality: 1.1\nOverview tropospheric heterogenous atmospheric chemistry",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.tropospheric_heterogeneous_chemistry.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"15.2. Gas Phase Species\nIs Required: FALSE Type: STRING Cardinality: 0.1\nList of gas phase species included in the tropospheric heterogeneous chemistry scheme.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.tropospheric_heterogeneous_chemistry.gas_phase_species') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"15.3. Aerosol Species\nIs Required: FALSE Type: ENUM Cardinality: 0.N\nAerosol species included in the tropospheric heterogeneous chemistry scheme.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.tropospheric_heterogeneous_chemistry.aerosol_species') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Sulphate\" \n# \"Nitrate\" \n# \"Sea salt\" \n# \"Dust\" \n# \"Ice\" \n# \"Organic\" \n# \"Black carbon/soot\" \n# \"Polar stratospheric ice\" \n# \"Secondary organic aerosols\" \n# \"Particulate organic matter\" \n# TODO - please enter value(s)\n",
"15.4. Number Of Steady State Species\nIs Required: TRUE Type: INTEGER Cardinality: 1.1\nThe number of steady state species in the tropospheric heterogeneous chemistry scheme.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.tropospheric_heterogeneous_chemistry.number_of_steady_state_species') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"15.5. Interactive Dry Deposition\nIs Required: TRUE Type: BOOLEAN Cardinality: 1.1\nIs dry deposition interactive (as opposed to prescribed)? Dry deposition describes the dry processes by which gaseous species deposit themselves on solid surfaces thus decreasing their concentration in the air.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.tropospheric_heterogeneous_chemistry.interactive_dry_deposition') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n",
"15.6. Coagulation\nIs Required: TRUE Type: BOOLEAN Cardinality: 1.1\nIs coagulation is included in the tropospheric heterogeneous chemistry scheme or not?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.tropospheric_heterogeneous_chemistry.coagulation') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n",
"16. Photo Chemistry\nAtmospheric chemistry photo chemistry\n16.1. Overview\nIs Required: TRUE Type: STRING Cardinality: 1.1\nOverview atmospheric photo chemistry",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.photo_chemistry.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"16.2. Number Of Reactions\nIs Required: TRUE Type: INTEGER Cardinality: 1.1\nThe number of reactions in the photo-chemistry scheme.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.photo_chemistry.number_of_reactions') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"17. Photo Chemistry --> Photolysis\nPhotolysis scheme\n17.1. Method\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nPhotolysis scheme",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.photo_chemistry.photolysis.method') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Offline (clear sky)\" \n# \"Offline (with clouds)\" \n# \"Online\" \n# TODO - please enter value(s)\n",
"17.2. Environmental Conditions\nIs Required: FALSE Type: STRING Cardinality: 0.1\nDescribe any environmental conditions taken into account by the photolysis scheme (e.g. whether pressure- and temperature-sensitive cross-sections and quantum yields in the photolysis calculations are modified to reflect the modelled conditions.)",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.photo_chemistry.photolysis.environmental_conditions') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"©2017 ES-DOC"
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
searchs/bigdatabox | pulsar_stars.ipynb | mit | [
"Data Science samples",
"import pandas as pd\nimport numpy as np\nfrom sklearn.linear_model import LinearRegression\nfrom sklearn.metrics import mean_squared_error\nfrom sklearn.model_selection import train_test_split, cross_val_score\n\ndef cleanup_names(name):\n return str(name).strip().replace(\" \",\"_\")",
"Classifiers and Regressors\nPulsar stars",
"col_names = [\"Mean of the integrated profile\",\"Standard deviation of the integrated profile\",\n\"Excess kurtosis of the integrated profile\",\n\"Skewness of the integrated profile\",\n\"Mean of the DM-SNR curve\",\n\"Standard deviation of the DM-SNR curve\",\n\"Excess kurtosis of the DM-SNR curve\",\n\"Skewness of the DM-SNR curve\",\n\"Class\"]\n\ncol_names = [cleanup_names(col) for col in col_names]\n\nprint(col_names)\n\ndf = pd.read_csv(\"HTRU_2.csv\", sep=\",\", names=col_names)\n\ndf.info()",
"Mean of the integrated profile.\nStandard deviation of the integrated profile.\nExcess kurtosis of the integrated profile.\nSkewness of the integrated profile.\nMean of the DM-SNR curve.\nStandard deviation of the DM-SNR curve.\nExcess kurtosis of the DM-SNR curve.\nSkewness of the DM-SNR curve.\nClass",
"df.head(3)\n\n# df.info()\nlen(df)\n\nfrom sklearn.linear_model import LogisticRegression\n\nX = df.iloc[:, 0:8]\ny = df.iloc[:,8]\n\ndef clf_model(model):\n clf = model\n \n scores = cross_val_score(clf, X, y)\n print(f\"Scores: {scores}\")\n print(f\"Mean Score: {scores.mean()}\")\n\nclf_model(LogisticRegression())\n\nfrom sklearn.naive_bayes import GaussianNB\n\nclf_model(GaussianNB())\n\nfrom sklearn.neighbors import KNeighborsClassifier\nclf_model(KNeighborsClassifier())\n\nfrom sklearn.tree import DecisionTreeClassifier\nclf_model(DecisionTreeClassifier())\n\nfrom sklearn.ensemble import RandomForestClassifier\nclf_model(RandomForestClassifier())\n\ndf.Mean_of_the_integrated_profile.count()\n\ndf.Class.count()\n\ndf[df.Class == 1].Class.count()\n\ndf[df.Class == 1].Class.count()/df.Class.count()\n\nfrom sklearn.metrics import classification_report\nfrom sklearn.metrics import confusion_matrix\n\nX_train, X_test, y_train, y_test = train_test_split(X,y, test_size = 0.25)\n\ndef confusion(model):\n clf = model\n clf.fit(X_train, y_train)\n y_pred = clf.predict(X_test)\n print(f'Confusion Matrix: {y_test, y_pred}')\n print(f'Classification Report: {classification_report(y_test,y_pred)}')\n return clf\n\nconfusion(LogisticRegression())\n\nconfusion(RandomForestClassifier())\n\nfrom sklearn.ensemble import AdaBoostClassifier\nclf_model(AdaBoostClassifier())\n\nconfusion(AdaBoostClassifier())",
"Customer Churn",
"churnDF = pd.read_csv(\"CHURN.csv\")\n\nchurnDF.Churn.head(3)\n\nchurnDF['Churn'] = churnDF['Churn']. \\\nreplace(to_replace=['No', 'Yes'], value=[0,1])\n\nchurnDF.Churn.head()\n\nlen(churnDF.columns)\n\nX = churnDF.iloc[:, 0:20]\ny = churnDF.iloc[:,20]\n\nX = pd.get_dummies(X)\n\ndef clf_models(model, cv=3):\n clf = model\n \n scores = cross_val_score(clf, X, y, cv=cv)\n print(f\"Scores: {scores}\")\n print(f\"Mean Score: {scores.mean()}\")\n\nclf_models(RandomForestClassifier())\n\nclf_models(KNeighborsClassifier())\n\nclf_models(LinearRegression())\n\nclf_models(AdaBoostClassifier())\n# GaussianNB\n# DecisionTree\n# X_train, X_test, y_train, y_test = train_test_split(X,y, test_size = 0.25)\n\nclf_models(GaussianNB())\n\nclf_models(DecisionTreeClassifier())\n\nX_train, X_test, y_train, y_test = train_test_split(X,y, test_size = 0.25)\n\n# def confusion_churn(model):\n# clf = model\n# clf.fit(X_train, y_train)\n# y_pred = clf.predict(X_test)\n# print(f'Confusion Matrix: {y_test, y_pred}')\n# print(f'Classification Report: {classification_report(y_test,y_pred)}')\n# return clf\n\nconfusion(AdaBoostClassifier(n_estimators=250))\n\nconfusion(RandomForestClassifier())\n\nclf_models(LogisticRegression())\n\nconfusion(LogisticRegression())"
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
transcranial/keras-js | notebooks/layers/convolutional/Conv2DTranspose.ipynb | mit | [
"import numpy as np\nfrom keras.models import Model\nfrom keras.layers import Input\nfrom keras.layers.convolutional import Conv2DTranspose\nfrom keras import backend as K\nimport json\nfrom collections import OrderedDict\n\ndef format_decimal(arr, places=6):\n return [round(x * 10**places) / 10**places for x in arr]\n\nDATA = OrderedDict()",
"Conv2DTranspose\n[convolutional.Conv2DTranspose.0] 4 3x3 filters on 4x4x2 input, strides=(1,1), padding='valid', data_format='channels_last', activation='linear', use_bias=False",
"data_in_shape = (4, 4, 2)\nconv = Conv2DTranspose(4, (3,3), strides=(1,1), \n padding='valid', data_format='channels_last',\n activation='linear', use_bias=False)\n\nlayer_0 = Input(shape=data_in_shape)\nlayer_1 = conv(layer_0)\nmodel = Model(inputs=layer_0, outputs=layer_1)\n\n# set weights to random (use seed for reproducibility)\nweights = []\nfor w in model.get_weights():\n np.random.seed(150)\n weights.append(2 * np.random.random(w.shape) - 1)\nmodel.set_weights(weights)\nprint('W shape:', weights[0].shape)\nprint('W:', format_decimal(weights[0].ravel().tolist()))\n# print('b shape:', weights[1].shape)\n# print('b:', format_decimal(weights[1].ravel().tolist()))\n\ndata_in = 2 * np.random.random(data_in_shape) - 1\nresult = model.predict(np.array([data_in]))\ndata_out_shape = result[0].shape\ndata_in_formatted = format_decimal(data_in.ravel().tolist())\ndata_out_formatted = format_decimal(result[0].ravel().tolist())\nprint('')\nprint('in shape:', data_in_shape)\nprint('in:', data_in_formatted)\nprint('out shape:', data_out_shape)\nprint('out:', data_out_formatted)\n\nDATA['convolutional.Conv2DTranspose.0'] = {\n 'input': {'data': data_in_formatted, 'shape': data_in_shape},\n 'weights': [{'data': format_decimal(w.ravel().tolist()), 'shape': w.shape} for w in weights],\n 'expected': {'data': data_out_formatted, 'shape': data_out_shape}\n}",
"[convolutional.Conv2DTranspose.1] 4 3x3 filters on 4x4x2 input, strides=(1,1), padding='valid', data_format='channels_last', activation='linear', use_bias=True",
"data_in_shape = (4, 4, 2)\nconv = Conv2DTranspose(4, (3,3), strides=(1,1), \n padding='valid', data_format='channels_last',\n activation='linear', use_bias=True)\n\nlayer_0 = Input(shape=data_in_shape)\nlayer_1 = conv(layer_0)\nmodel = Model(inputs=layer_0, outputs=layer_1)\n\n# set weights to random (use seed for reproducibility)\nweights = []\nfor w in model.get_weights():\n np.random.seed(151)\n weights.append(2 * np.random.random(w.shape) - 1)\nmodel.set_weights(weights)\nprint('W shape:', weights[0].shape)\nprint('W:', format_decimal(weights[0].ravel().tolist()))\nprint('b shape:', weights[1].shape)\nprint('b:', format_decimal(weights[1].ravel().tolist()))\n\ndata_in = 2 * np.random.random(data_in_shape) - 1\nresult = model.predict(np.array([data_in]))\ndata_out_shape = result[0].shape\ndata_in_formatted = format_decimal(data_in.ravel().tolist())\ndata_out_formatted = format_decimal(result[0].ravel().tolist())\nprint('')\nprint('in shape:', data_in_shape)\nprint('in:', data_in_formatted)\nprint('out shape:', data_out_shape)\nprint('out:', data_out_formatted)\n\nDATA['convolutional.Conv2DTranspose.1'] = {\n 'input': {'data': data_in_formatted, 'shape': data_in_shape},\n 'weights': [{'data': format_decimal(w.ravel().tolist()), 'shape': w.shape} for w in weights],\n 'expected': {'data': data_out_formatted, 'shape': data_out_shape}\n}",
"[convolutional.Conv2DTranspose.2] 4 3x3 filters on 4x4x2 input, strides=(2,2), padding='valid', data_format='channels_last', activation='relu', use_bias=True",
"data_in_shape = (4, 4, 2)\nconv = Conv2DTranspose(4, (3,3), strides=(2,2), \n padding='valid', data_format='channels_last',\n activation='relu', use_bias=True)\n\nlayer_0 = Input(shape=data_in_shape)\nlayer_1 = conv(layer_0)\nmodel = Model(inputs=layer_0, outputs=layer_1)\n\n# set weights to random (use seed for reproducibility)\nweights = []\nfor w in model.get_weights():\n np.random.seed(152)\n weights.append(2 * np.random.random(w.shape) - 1)\nmodel.set_weights(weights)\nprint('W shape:', weights[0].shape)\nprint('W:', format_decimal(weights[0].ravel().tolist()))\nprint('b shape:', weights[1].shape)\nprint('b:', format_decimal(weights[1].ravel().tolist()))\n\ndata_in = 2 * np.random.random(data_in_shape) - 1\nresult = model.predict(np.array([data_in]))\ndata_out_shape = result[0].shape\ndata_in_formatted = format_decimal(data_in.ravel().tolist())\ndata_out_formatted = format_decimal(result[0].ravel().tolist())\nprint('')\nprint('in shape:', data_in_shape)\nprint('in:', data_in_formatted)\nprint('out shape:', data_out_shape)\nprint('out:', data_out_formatted)\n\nDATA['convolutional.Conv2DTranspose.2'] = {\n 'input': {'data': data_in_formatted, 'shape': data_in_shape},\n 'weights': [{'data': format_decimal(w.ravel().tolist()), 'shape': w.shape} for w in weights],\n 'expected': {'data': data_out_formatted, 'shape': data_out_shape}\n}",
"[convolutional.Conv2DTranspose.3] 4 3x3 filters on 4x4x2 input, strides=(1,1), padding='same', data_format='channels_last', activation='relu', use_bias=True",
"data_in_shape = (4, 4, 2)\nconv = Conv2DTranspose(4, (3,3), strides=(1,1), \n padding='same', data_format='channels_last',\n activation='relu', use_bias=True)\n\nlayer_0 = Input(shape=data_in_shape)\nlayer_1 = conv(layer_0)\nmodel = Model(inputs=layer_0, outputs=layer_1)\n\n# set weights to random (use seed for reproducibility)\nweights = []\nfor w in model.get_weights():\n np.random.seed(153)\n weights.append(2 * np.random.random(w.shape) - 1)\nmodel.set_weights(weights)\nprint('W shape:', weights[0].shape)\nprint('W:', format_decimal(weights[0].ravel().tolist()))\nprint('b shape:', weights[1].shape)\nprint('b:', format_decimal(weights[1].ravel().tolist()))\n\ndata_in = 2 * np.random.random(data_in_shape) - 1\nresult = model.predict(np.array([data_in]))\ndata_out_shape = result[0].shape\ndata_in_formatted = format_decimal(data_in.ravel().tolist())\ndata_out_formatted = format_decimal(result[0].ravel().tolist())\nprint('')\nprint('in shape:', data_in_shape)\nprint('in:', data_in_formatted)\nprint('out shape:', data_out_shape)\nprint('out:', data_out_formatted)\n\nDATA['convolutional.Conv2DTranspose.3'] = {\n 'input': {'data': data_in_formatted, 'shape': data_in_shape},\n 'weights': [{'data': format_decimal(w.ravel().tolist()), 'shape': w.shape} for w in weights],\n 'expected': {'data': data_out_formatted, 'shape': data_out_shape}\n}",
"[convolutional.Conv2DTranspose.4] 5 3x3 filters on 4x4x2 input, strides=(2,2), padding='same', data_format='channels_last', activation='relu', use_bias=True",
"data_in_shape = (4, 4, 2)\nconv = Conv2DTranspose(5, (3,3), strides=(2,2), \n padding='same', data_format='channels_last',\n activation='relu', use_bias=True)\n\nlayer_0 = Input(shape=data_in_shape)\nlayer_1 = conv(layer_0)\nmodel = Model(inputs=layer_0, outputs=layer_1)\n\n# set weights to random (use seed for reproducibility)\nweights = []\nfor w in model.get_weights():\n np.random.seed(154)\n weights.append(2 * np.random.random(w.shape) - 1)\nmodel.set_weights(weights)\nprint('W shape:', weights[0].shape)\nprint('W:', format_decimal(weights[0].ravel().tolist()))\nprint('b shape:', weights[1].shape)\nprint('b:', format_decimal(weights[1].ravel().tolist()))\n\ndata_in = 2 * np.random.random(data_in_shape) - 1\nresult = model.predict(np.array([data_in]))\ndata_out_shape = result[0].shape\ndata_in_formatted = format_decimal(data_in.ravel().tolist())\ndata_out_formatted = format_decimal(result[0].ravel().tolist())\nprint('')\nprint('in shape:', data_in_shape)\nprint('in:', data_in_formatted)\nprint('out shape:', data_out_shape)\nprint('out:', data_out_formatted)\n\nDATA['convolutional.Conv2DTranspose.4'] = {\n 'input': {'data': data_in_formatted, 'shape': data_in_shape},\n 'weights': [{'data': format_decimal(w.ravel().tolist()), 'shape': w.shape} for w in weights],\n 'expected': {'data': data_out_formatted, 'shape': data_out_shape}\n}",
"[convolutional.Conv2DTranspose.5] 3 2x3 filters on 4x4x2 input, strides=(1,1), padding='same', data_format='channels_last', activation='relu', use_bias=True",
"data_in_shape = (4, 4, 2)\nconv = Conv2DTranspose(3, (2,3), strides=(1,1), \n padding='same', data_format='channels_last',\n activation='relu', use_bias=True)\n\nlayer_0 = Input(shape=data_in_shape)\nlayer_1 = conv(layer_0)\nmodel = Model(inputs=layer_0, outputs=layer_1)\n\n# set weights to random (use seed for reproducibility)\nweights = []\nfor w in model.get_weights():\n np.random.seed(155)\n weights.append(2 * np.random.random(w.shape) - 1)\nmodel.set_weights(weights)\nprint('W shape:', weights[0].shape)\nprint('W:', format_decimal(weights[0].ravel().tolist()))\nprint('b shape:', weights[1].shape)\nprint('b:', format_decimal(weights[1].ravel().tolist()))\n\ndata_in = 2 * np.random.random(data_in_shape) - 1\nresult = model.predict(np.array([data_in]))\ndata_out_shape = result[0].shape\ndata_in_formatted = format_decimal(data_in.ravel().tolist())\ndata_out_formatted = format_decimal(result[0].ravel().tolist())\nprint('')\nprint('in shape:', data_in_shape)\nprint('in:', data_in_formatted)\nprint('out shape:', data_out_shape)\nprint('out:', data_out_formatted)\n\nDATA['convolutional.Conv2DTranspose.5'] = {\n 'input': {'data': data_in_formatted, 'shape': data_in_shape},\n 'weights': [{'data': format_decimal(w.ravel().tolist()), 'shape': w.shape} for w in weights],\n 'expected': {'data': data_out_formatted, 'shape': data_out_shape}\n}",
"export for Keras.js tests",
"import os\n\nfilename = '../../../test/data/layers/convolutional/Conv2DTranspose.json'\nif not os.path.exists(os.path.dirname(filename)):\n os.makedirs(os.path.dirname(filename))\nwith open(filename, 'w') as f:\n json.dump(DATA, f)\n\nprint(json.dumps(DATA))"
] | [
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
Kaggle/learntools | notebooks/ethics/raw/ex4.ipynb | apache-2.0 | [
"In the tutorial, you learned about different ways of measuring fairness of a machine learning model. In this exercise, you'll train a few models to approve (or deny) credit card applications and analyze fairness. Don't worry if you're new to coding: this exercise assumes no programming knowledge.\nIntroduction\nWe work with a synthetic dataset of information submitted by credit card applicants. \nTo load and preview the data, run the next code cell. When the code finishes running, you should see a message saying the data was successfully loaded, along with a preview of the first five rows of the data.",
"# Set up feedback system\nfrom learntools.core import binder\nbinder.bind(globals())\nfrom learntools.ethics.ex4 import *\nimport pandas as pd\nfrom sklearn.model_selection import train_test_split\n\n# Load the data, separate features from target\ndata = pd.read_csv(\"../input/synthetic-credit-card-approval/synthetic_credit_card_approval.csv\")\nX = data.drop([\"Target\"], axis=1)\ny = data[\"Target\"]\n\n# Break into training and test sets\nX_train, X_test, y_train, y_test = train_test_split(X, y, train_size=0.8, test_size=0.2, random_state=0)\n\n# Preview the data\nprint(\"Data successfully loaded!\\n\")\nX_train.head()",
"The dataset contains, for each applicant:\n- income (in the Income column),\n- the number of children (in the Num_Children column),\n- whether the applicant owns a car (in the Own_Car column, the value is 1 if the applicant owns a car, and is else 0), and\n- whether the applicant owns a home (in the Own_Housing column, the value is 1 if the applicant owns a home, and is else 0)\nWhen evaluating fairness, we'll check how the model performs for users in different groups, as identified by the Group column: \n- The Group column breaks the users into two groups (where each group corresponds to either 0 or 1).\n- For instance, you can think of the column as breaking the users into two different races, ethnicities, or gender groupings. If the column breaks users into different ethnicities, 0 could correspond to a non-Hispanic user, while 1 corresponds to a Hispanic user. \nRun the next code cell without changes to train a simple model to approve or deny individuals for a credit card. The output shows the performance of the model.",
"from sklearn import tree\nfrom sklearn.metrics import confusion_matrix, ConfusionMatrixDisplay\nimport matplotlib.pyplot as plt\n\n# Train a model and make predictions\nmodel_baseline = tree.DecisionTreeClassifier(random_state=0, max_depth=3)\nmodel_baseline.fit(X_train, y_train)\npreds_baseline = model_baseline.predict(X_test)\n\n# Function to plot confusion matrix\ndef plot_confusion_matrix(estimator, X, y_true, y_pred, display_labels=[\"Deny\", \"Approve\"],\n include_values=True, xticks_rotation='horizontal', values_format='',\n normalize=None, cmap=plt.cm.Blues):\n cm = confusion_matrix(y_true, y_pred, normalize=normalize)\n disp = ConfusionMatrixDisplay(confusion_matrix=cm, display_labels=display_labels)\n return cm, disp.plot(include_values=include_values, cmap=cmap, xticks_rotation=xticks_rotation,\n values_format=values_format)\n\n# Function to evaluate the fairness of the model\ndef get_stats(X, y, model, group_one, preds):\n \n y_zero, preds_zero, X_zero = y[group_one==False], preds[group_one==False], X[group_one==False]\n y_one, preds_one, X_one = y[group_one], preds[group_one], X[group_one]\n \n print(\"Total approvals:\", preds.sum())\n print(\"Group A:\", preds_zero.sum(), \"({}% of approvals)\".format(round(preds_zero.sum()/sum(preds)*100, 2)))\n print(\"Group B:\", preds_one.sum(), \"({}% of approvals)\".format(round(preds_one.sum()/sum(preds)*100, 2)))\n \n print(\"\\nOverall accuracy: {}%\".format(round((preds==y).sum()/len(y)*100, 2)))\n print(\"Group A: {}%\".format(round((preds_zero==y_zero).sum()/len(y_zero)*100, 2)))\n print(\"Group B: {}%\".format(round((preds_one==y_one).sum()/len(y_one)*100, 2)))\n \n cm_zero, disp_zero = plot_confusion_matrix(model, X_zero, y_zero, preds_zero)\n disp_zero.ax_.set_title(\"Group A\")\n cm_one, disp_one = plot_confusion_matrix(model, X_one, y_one, preds_one)\n disp_one.ax_.set_title(\"Group B\")\n \n print(\"\\nSensitivity / True positive rate:\")\n print(\"Group A: {}%\".format(round(cm_zero[1,1] / cm_zero[1].sum()*100, 2)))\n print(\"Group B: {}%\".format(round(cm_one[1,1] / cm_one[1].sum()*100, 2)))\n \n# Evaluate the model \nget_stats(X_test, y_test, model_baseline, X_test[\"Group\"]==1, preds_baseline)",
"The confusion matrices above show how the model performs on some test data. We also print additional information (calculated from the confusion matrices) to assess fairness of the model. For instance,\n- The model approved 38246 people for a credit card. Of these individuals, 8028 belonged to Group A, and 30218 belonged to Group B.\n- The model is 94.56% accurate for Group A, and 95.02% accurate for Group B. These percentages can be calculated directly from the confusion matrix; for instance, for Group A, the accuracy is (39723+7528)/(39723+500+2219+7528).\n- The true positive rate (TPR) for Group A is 77.23%, and the TPR for Group B is 98.03%. These percentages can be calculated directly from the confusion matrix; for instance, for Group A, the TPR is 7528/(7528+2219).\n1) Varieties of fairness\nConsider three different types of fairness covered in the tutorial:\n- Demographic parity: Which group has an unfair advantage, with more representation in the group of approved applicants? (Roughly 50% of applicants are from Group A, and 50% of applicants are from Group B.)\n- Equal accuracy: Which group has an unfair advantage, where applicants are more likely to be correctly classified? \n- Equal opportunity: Which group has an unfair advantage, with a higher true positive rate?",
"# Check your answer (Run this code cell to get credit!)\nq_1.check()",
"Run the next code cell without changes to visualize the model.",
"def visualize_model(model, feature_names, class_names=[\"Deny\", \"Approve\"], impurity=False):\n plot_list = tree.plot_tree(model, feature_names=feature_names, class_names=class_names, impurity=impurity)\n [process_plot_item(item) for item in plot_list]\n\ndef process_plot_item(item):\n split_string = item.get_text().split(\"\\n\")\n if split_string[0].startswith(\"samples\"):\n item.set_text(split_string[-1])\n else:\n item.set_text(split_string[0])\n\nplt.figure(figsize=(20, 6))\nplot_list = visualize_model(model_baseline, feature_names=X_train.columns)",
"The flowchart shows how the model makes decisions:\n- Group <= 0.5 checks what group the applicant belongs to: if the applicant belongs to Group A, then Group <= 0.5 is true.\n- Entries like Income <= 80210.5 check the applicant's income.\nTo follow the flow chart, we start at the top and trace a path depending on the details of the applicant. If the condition is true at a split, then we move down and to the left branch. If it is false, then we move to the right branch.\nFor instance, consider an applicant in Group B, who has an income of 75k. Then, \n- We start at the top of the flow chart. the applicant has an income of 75k, so Income <= 80210.5 is true, and we move to the left.\n- Next, we check the income again. Since Income <= 71909.5 is false, we move to the right.\n- The last thing to check is what group the applicant belongs to. The applicant belongs to Group B, so Group <= 0.5 is false, and we move to the right, where the model has decided to approve the applicant.\n2) Understand the baseline model\nBased on the visualization, how can you explain one source of unfairness in the model?\nHint: Consider the example applicant, but change the group membership from Group B to Group A (leaving all other characteristics the same). Is this slightly different applicant approved or denied by the model?",
"# Check your answer (Run this code cell to get credit!)\nq_2.check()",
"Next, you decide to remove group membership from the training data and train a new model. Do you think this will make the model treat the groups more equally?\nRun the next code cell to see how this new group unaware model performs.",
"# Create new dataset with gender removed\nX_train_unaware = X_train.drop([\"Group\"],axis=1)\nX_test_unaware = X_test.drop([\"Group\"],axis=1)\n\n# Train new model on new dataset\nmodel_unaware = tree.DecisionTreeClassifier(random_state=0, max_depth=3)\nmodel_unaware.fit(X_train_unaware, y_train)\n\n# Evaluate the model\npreds_unaware = model_unaware.predict(X_test_unaware)\nget_stats(X_test_unaware, y_test, model_unaware, X_test[\"Group\"]==1, preds_unaware)",
"3) Varieties of fairness, part 2\nHow does this model compare to the first model you trained, when you consider demographic parity, equal accuracy, and equal opportunity? Once you have an answer, run the next code cell.",
"# Check your answer (Run this code cell to get credit!)\nq_3.check()",
"You decide to train a third potential model, this time with the goal of having each group have even representation in the group of approved applicants. (This is an implementation of group thresholds, which you can optionally read more about here.) \nRun the next code cell without changes to evaluate this new model.",
"# Change the value of zero_threshold to hit the objective\nzero_threshold = 0.11\none_threshold = 0.99\n\n# Evaluate the model\ntest_probs = model_unaware.predict_proba(X_test_unaware)[:,1]\npreds_approval = (((test_probs>zero_threshold)*1)*[X_test[\"Group\"]==0] + ((test_probs>one_threshold)*1)*[X_test[\"Group\"]==1])[0]\nget_stats(X_test, y_test, model_unaware, X_test[\"Group\"]==1, preds_approval)",
"4) Varieties of fairness, part 3\nHow does this final model compare to the previous models, when you consider demographic parity, equal accuracy, and equal opportunity?",
"# Check your answer (Run this code cell to get credit!)\nq_4.check()",
"This is only a short exercise to explore different types of fairness, and to illustrate the tradeoff that can occur when you optimize for one type of fairness over another. We have focused on model training here, but in practice, to really mitigate bias, or to make ML systems fair, we need to take a close look at every step in the process, from data collection to releasing a final product to users. \nFor instance, if you take a close look at the data, you'll notice that on average, individuals from Group B tend to have higher income than individuals from Group A, and are also more likely to own a home or a car. Knowing this will prove invaluable to deciding what fairness criterion you should use, and to inform ways to achieve fairness. (For instance, it would likely be a bad aproach, if you did not remove the historical bias in the data and then train the model to get equal accuracy for each group.)\nIn this course, we intentionally avoid taking an opinionated stance on how exactly to minimize bias and ensure fairness in specific projects. This is because the correct answers continue to evolve, since AI fairness is an active area of research. This lesson was a hands-on introduction to the topic, and you can continue your learning by reading blog posts from the Partnership on AI or by following conferences like the ACM Conference on Fairness, Accountability, and Transparency (ACM FAccT).\nKeep going\nContinue to learn how to use model cards to make machine learning models transparent to large audiences."
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
JoseGuzman/myIPythonNotebooks | Optimization/Polynomial regression.ipynb | gpl-2.0 | [
"%pylab inline",
"<H1>Polynomial fit</H1>",
"# generate some data\nnp.random.seed(2)\nxdata = np.random.normal(3.0, 1.0, 100)\nydata = np.random.normal(50.0, 30.0, 100) / xdata\n\n\nplt.plot(xdata, ydata, 'ko',ms=2);\n\n# lets fit to a polynomial function of degree 8\n# and plot all together\nf = np.poly1d( np.polyfit(xdata, ydata, 8) )\nx = np.linspace(np.min(xdata), np.max(xdata), 100)\nplt.plot(xdata, ydata, 'ko', ms=2)\nplt.plot(x,f(x), 'red');\n\n# compute r**2\nfrom sklearn.metrics import r2_score\nr2_score(ydata, f(xdata))",
"The r2_score is not very good and the large degree of the polynomial suggest an overfitting. The r2_score alone\ncannot not say which fitting is the best.",
"# find the best polynomial\nmypoly = dict()\nfor n in range(1, 10):\n f = np.poly1d( np.polyfit(xdata, ydata, n) )\n mypoly[n] = r2_score(ydata, f(xdata))\n print 'Pol. deg %d -> %f' %(n, mypoly[n])",
"<H2>Trial and test method</H2>\nTo avoid overfitting, we'll split the data in two - 80% of it will be used for \"training\" our model, and the other 20% for testing it.",
"#we'll select 80% of the data to train\nxtrain = xdata[:80]\nytrain = ydata[:80]\n\nxtest = xdata[80:]\nytest = ydata[80:]\nprint(len(xtrain), len(xtest))\nplt.plot(xtrain, ytrain, 'ro', ms=2)\nplt.xlim(0,7), plt.ylim(0,200)\nplt.title('Train');\n\nplt.plot(xtest, ytest, 'bo', ms=2)\nplt.xlim(0,7), plt.ylim(0,200)\nplt.title('Test');\n\nf = np.poly1d(np.polyfit(xtrain, ytrain, 8))\n\nplt.plot(xtrain, ytrain, 'ko', ms=2)\nplt.plot(x, f(x),'r')\nplt.title('Train');\n\nplt.plot(xtest, ytest, 'ko', ms=2)\nplt.plot(x, f(x),'r')\nplt.title('Test');\n\nr2_score(ytrain, f(xtrain)), r2_score(ytest, f(xtest))",
"the r2_score value of the test value is telling us that this fit is not very good",
"# let's compute train and test for all polynomial\n# find the best polynomial\nr2_test, r2_train = list(), list()\npolydeg = range(1,15)\nfor n in polydeg:\n f = np.poly1d( np.polyfit(xtrain, ytrain, n) )\n r2train = r2_score(ytrain, f(xtrain))\n r2_train.append(r2train)\n r2test = r2_score(ytest, f(xtest))\n r2_test.append(r2test)\n print 'Pol. deg %2d -> r2(train) = %2.4f, r2(test) = %2.4f' %(n,r2train, r2test)\n \n\n",
"Looking at the r2_scores of the test value, we can resolve that a fitting with a polynomial degree of six is the best",
"plt.plot(polydeg, r2_train, color='gray')\nplt.bar(polydeg, r2_test, color='red', alpha=.4)\nplt.xlim(1, 15);\n\n# the best is to fit with a polynomial of degree 6\nf = np.poly1d(np.polyfit(xdata,ydata,6))\n\nplt.plot(xdata,ydata, 'ko', ms=2)\nplt.plot(x,f(x),'red');"
] | [
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
sujitpal/polydlot | src/pytorch/10-cumsum-prediction.ipynb | apache-2.0 | [
"Cumulative Sum Prediction\nThis is the fifth toy example from Jason Brownlee's Long Short Term Memory Networks with Python. It demonstrates the solution to a sequence-to-sequence (aka seq2seq) prediction problem. Per section 10.2 of the book:\n\nThe problem is defined as a sequence of random values between 0 and 1. This sequence is taken as input for the problem with each number provided once per time step. A binary label (0 or 1) is associated with each input. The output values are initially all 0. Once the cumulative sum of the input values in the sequence exceeds a threshold, then the output value flips from 0 to 1. A threshold of one quarter (1/4) of the sequence length is used, so for a sequence of length 10, the threshold is 2.5.\nWe will frame the problem to make the best use of the Bidirectional LSTM architecture.\nThe output sequence will be produced after the entire input sequence has been fed into the\nmodel. Technically, this means this is a sequence-to-sequence prediction problem that requires\na many-to-many prediction model. It is also the case that the input and output sequences have\nthe same number of time steps (length).",
"from __future__ import division, print_function\nfrom sklearn.metrics import accuracy_score, confusion_matrix\nimport torch\nimport torch.nn as nn\nimport torch.nn.functional as F\nfrom torch.autograd import Variable\nimport numpy as np\nimport matplotlib.pyplot as plt\nimport os\nimport shutil\n%matplotlib inline\n\nDATA_DIR = \"../../data\"\nMODEL_FILE = os.path.join(DATA_DIR, \"torch-10-cumsum-predict-{:d}.model\")\n\nTRAIN_SIZE = 7500\nVAL_SIZE = 100\nTEST_SIZE = 500\n\nSEQ_LENGTH = 10\nEMBED_SIZE = 1\n\nBATCH_SIZE = 32\nNUM_EPOCHS = 10\nLEARNING_RATE = 1e-3",
"Prepare Data",
"def generate_sequence(seq_len):\n xs = np.random.random(seq_len)\n ys = np.array([0 if x < 2.5 else 1 for x in np.cumsum(xs).tolist()])\n return xs, ys\n\nX, Y = generate_sequence(SEQ_LENGTH)\nprint(X)\nprint(Y)\n\ndef generate_data(seq_len, num_seqs):\n xseq, yseq = [], []\n for i in range(num_seqs):\n X, Y = generate_sequence(seq_len)\n xseq.append(X)\n yseq.append(Y)\n return np.expand_dims(np.array(xseq), axis=2), np.array(yseq)\n\nXtrain, Ytrain = generate_data(SEQ_LENGTH, TRAIN_SIZE)\nXval, Yval = generate_data(SEQ_LENGTH, VAL_SIZE)\nXtest, Ytest = generate_data(SEQ_LENGTH, TEST_SIZE)\n\nprint(Xtrain.shape, Ytrain.shape, Xval.shape, Yval.shape, Xtest.shape, Ytest.shape)",
"Define Network\nThe sequence length for the input and output sequences are the same size. Our network follows the model built (using Keras) in the book. Unlike the typical encoder-decoder LSTM architecture that is used for most seq2seq problems, here we have a single LSTM followed by a FCN layer at each timestep of its output. Each FCN returns a binary 0/1 output, which is concatenated to produce the predicted result.",
"class CumSumPredictor(nn.Module):\n \n def __init__(self, seq_len, input_dim, hidden_dim, output_dim):\n super(CumSumPredictor, self).__init__()\n self.seq_len = seq_len\n self.hidden_dim = hidden_dim\n self.output_dim = output_dim\n # network layers\n self.enc_lstm = nn.LSTM(input_dim, hidden_dim, 1, batch_first=True, \n bidirectional=True)\n self.fcn = nn.Linear(hidden_dim * 2, output_dim) # bidirectional input\n self.fcn_relu = nn.ReLU()\n self.fcn_softmax = nn.Softmax()\n \n def forward(self, x):\n if torch.cuda.is_available():\n h = (Variable(torch.randn(2, x.size(0), self.hidden_dim).cuda()),\n Variable(torch.randn(2, x.size(0), self.hidden_dim).cuda()))\n else:\n h = (Variable(torch.randn(2, x.size(0), self.hidden_dim)),\n Variable(torch.randn(2, x.size(0), self.hidden_dim)))\n\n x, h = self.enc_lstm(x, h) # encoder LSTM\n x_fcn = Variable(torch.zeros(x.size(0), self.seq_len, self.output_dim))\n for i in range(self.seq_len): # decoder LSTM -> fcn for each timestep\n x_fcn[:, i, :] = self.fcn_softmax(self.fcn_relu(self.fcn(x[:, i, :])))\n x = x_fcn \n return x\n \nmodel = CumSumPredictor(SEQ_LENGTH, EMBED_SIZE, 50, 2)\nif torch.cuda.is_available():\n model.cuda()\nprint(model)\n\n# size debugging\nprint(\"--- size debugging ---\")\ninp = Variable(torch.randn(BATCH_SIZE, SEQ_LENGTH, EMBED_SIZE))\noutp = model(inp)\nprint(outp.size())\n\nloss_fn = nn.CrossEntropyLoss()\noptimizer = torch.optim.Adam(model.parameters(), lr=LEARNING_RATE)",
"Train Network",
"def compute_accuracy(pred_var, true_var):\n if torch.cuda.is_available():\n ypred = pred_var.cpu().data.numpy()\n ytrue = true_var.cpu().data.numpy()\n else:\n ypred = pred_var.data.numpy()\n ytrue = true_var.data.numpy()\n pred_nums, true_nums = [], []\n for i in range(pred_var.size(0)): # for each row of output\n pred_nums.append(int(\"\".join([str(x) for x in ypred[i].tolist()]), 2))\n true_nums.append(int(\"\".join([str(x) for x in ytrue[i].tolist()]), 2))\n return pred_nums, true_nums, accuracy_score(pred_nums, true_nums)\n\n\nhistory = []\nfor epoch in range(NUM_EPOCHS):\n \n num_batches = Xtrain.shape[0] // BATCH_SIZE\n shuffled_indices = np.random.permutation(np.arange(Xtrain.shape[0]))\n train_loss, train_acc = 0., 0.\n \n for bid in range(num_batches):\n \n # extract one batch of data\n Xbatch_data = Xtrain[shuffled_indices[bid * BATCH_SIZE : (bid + 1) * BATCH_SIZE]]\n Ybatch_data = Ytrain[shuffled_indices[bid * BATCH_SIZE : (bid + 1) * BATCH_SIZE]]\n Xbatch = Variable(torch.from_numpy(Xbatch_data).float())\n Ybatch = Variable(torch.from_numpy(Ybatch_data).long())\n if torch.cuda.is_available():\n Xbatch = Xbatch.cuda()\n Ybatch = Ybatch.cuda()\n \n # initialize gradients\n optimizer.zero_grad()\n\n # forward\n loss = 0.\n Ybatch_ = model(Xbatch)\n for i in range(Ybatch.size(1)):\n loss += loss_fn(Ybatch_[:, i, :], Ybatch[:, i])\n \n # backward\n loss.backward()\n\n train_loss += loss.data[0]\n \n _, ybatch_ = Ybatch_.max(2)\n _, _, acc = compute_accuracy(ybatch_, Ybatch)\n train_acc += acc\n \n optimizer.step()\n \n # compute training loss and accuracy\n train_loss /= num_batches\n train_acc /= num_batches\n \n # compute validation loss and accuracy\n val_loss, val_acc = 0., 0.\n num_val_batches = Xval.shape[0] // BATCH_SIZE\n for bid in range(num_val_batches):\n # data\n Xbatch_data = Xval[bid * BATCH_SIZE : (bid + 1) * BATCH_SIZE]\n Ybatch_data = Yval[bid * BATCH_SIZE : (bid + 1) * BATCH_SIZE]\n Xbatch = Variable(torch.from_numpy(Xbatch_data).float())\n Ybatch = Variable(torch.from_numpy(Ybatch_data).long())\n if torch.cuda.is_available():\n Xbatch = Xbatch.cuda()\n Ybatch = Ybatch.cuda()\n\n loss = 0.\n Ybatch_ = model(Xbatch)\n for i in range(Ybatch.size(1)):\n loss += loss_fn(Ybatch_[:, i, :], Ybatch[:, i])\n val_loss += loss.data[0]\n\n _, ybatch_ = Ybatch_.max(2)\n _, _, acc = compute_accuracy(ybatch_, Ybatch)\n val_acc += acc\n \n val_loss /= num_val_batches\n val_acc /= num_val_batches\n \n torch.save(model.state_dict(), MODEL_FILE.format(epoch+1))\n print(\"Epoch {:2d}/{:d}: loss={:.3f}, acc={:.3f}, val_loss={:.3f}, val_acc={:.3f}\"\n .format((epoch+1), NUM_EPOCHS, train_loss, train_acc, val_loss, val_acc))\n \n history.append((train_loss, val_loss, train_acc, val_acc))\n\nlosses = [x[0] for x in history]\nval_losses = [x[1] for x in history]\naccs = [x[2] for x in history]\nval_accs = [x[3] for x in history]\n\nplt.subplot(211)\nplt.title(\"Accuracy\")\nplt.plot(accs, color=\"r\", label=\"train\")\nplt.plot(val_accs, color=\"b\", label=\"valid\")\nplt.legend(loc=\"best\")\n\nplt.subplot(212)\nplt.title(\"Loss\")\nplt.plot(losses, color=\"r\", label=\"train\")\nplt.plot(val_losses, color=\"b\", label=\"valid\")\nplt.legend(loc=\"best\")\n\nplt.tight_layout()\nplt.show()",
"Evaluate Network",
"saved_model = CumSumPredictor(SEQ_LENGTH, EMBED_SIZE, 50, 2)\nsaved_model.load_state_dict(torch.load(MODEL_FILE.format(NUM_EPOCHS)))\nif torch.cuda.is_available():\n saved_model.cuda()\n\nylabels, ypreds = [], []\nnum_test_batches = Xtest.shape[0] // BATCH_SIZE\nfor bid in range(num_test_batches):\n Xbatch_data = Xtest[bid * BATCH_SIZE : (bid + 1) * BATCH_SIZE]\n Ybatch_data = Ytest[bid * BATCH_SIZE : (bid + 1) * BATCH_SIZE]\n Xbatch = Variable(torch.from_numpy(Xbatch_data).float())\n Ybatch = Variable(torch.from_numpy(Ybatch_data).long())\n if torch.cuda.is_available():\n Xbatch = Xbatch.cuda()\n Ybatch = Ybatch.cuda()\n\n Ybatch_ = saved_model(Xbatch)\n _, ybatch_ = Ybatch_.max(2)\n\n pred_nums, true_nums, _ = compute_accuracy(ybatch_, Ybatch)\n ylabels.extend(true_nums)\n ypreds.extend(pred_nums)\n\nprint(\"Test accuracy: {:.3f}\".format(accuracy_score(ylabels, ypreds)))\n\nXbatch_data = Xtest[0:10]\nYbatch_data = Ytest[0:10]\nXbatch = Variable(torch.from_numpy(Xbatch_data).float())\nYbatch = Variable(torch.from_numpy(Ybatch_data).long())\nif torch.cuda.is_available():\n Xbatch = Xbatch.cuda()\n Ybatch = Ybatch.cuda()\n\nYbatch_ = saved_model(Xbatch)\n_, ybatch_ = Ybatch_.max(2)\n\nif torch.cuda.is_available():\n ybatch__data = ybatch_.cpu().data.numpy()\nelse:\n ybatch__data = ybatch_.data.numpy()\n\nfor i in range(Ybatch_data.shape[0]):\n label = Ybatch_data[i]\n pred = ybatch__data[i]\n correct = \"True\" if np.array_equal(label, pred) else \"False\"\n print(\"y={:s}, yhat={:s}, correct={:s}\".format(str(label), str(pred), correct))\n\nfor i in range(NUM_EPOCHS):\n os.remove(MODEL_FILE.format(i + 1))"
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
mklokocka/seminator | notebooks/bSCC.ipynb | gpl-3.0 | [
"import spot\nfrom spot.jupyter import display_inline\nfrom spot.seminator import seminator, ViaTGBA\nspot.setup()",
"Effect of the Bottom-SCC optimisation on semi-deterministic automata\nThe orange states below form deterministic bottom SCCs. After processing by Seminator, they are both in the 1st (violet) and 2nd (green) component. Simplifications cannot merge these duplicates as one is accepting and one is not. In fact, we do not need the copy in the first component as there is no non-determinism and so there is nothing to wait for. We have to make every edge entering such SCC as a cut-edge.",
"def example(**opts):\n in_a = spot.translate(\"(FGp2 R !p2) | GFp1\")\n in_a.highlight_states([3,4], 2).set_name(\"input\")\n # Note: the pure=True option disables all optimizations that are usually on by default.\n out_a = seminator(in_a, pure=True, postprocess=False, highlight=True, **opts)\n out_a.set_name(\"output\")\n simp_a = seminator(in_a, pure=True, postprocess=True, highlight=True, **opts)\n simp_a.set_name(\"simplified output\")\n display_inline(in_a, out_a, simp_a, per_row=3, show=\".vn\")\n \nexample()",
"Enabling the bottom-SCC optimization simplifies the output automata as follows:",
"example(bscc_avoid=True)",
"Cut-deterministic automata\nThe same idea can be applied to cut-deterministic automata. Removing the states 3 and 4 from the fist part of the cut-deterministic automaton would remove state ${3}$ and would merge the states ${1,3,4}$ and ${1,3}$.",
"example(cut_det=True)\n\nexample(cut_det=True, bscc_avoid=True)",
"Exension to semi-deterministic SCCs\nWe can avoid more than bottom SCC. In fact, we can avoid all SCCs that are already good for semi-deterministic automata (semi-deterministic SCC). SCC $C$ is semi-deterministic if $C$ and all successors of $C$ are deterministic. This is ilustrated on the following example and states 1 and 5.",
"def example2(**opts):\n in_a = spot.translate('G((((a & b) | (!a & !b)) & (GF!b U !c)) | (((!a & b) | (a & !b)) & (FGb R c)))')\n spot.highlight_nondet_states(in_a, 1)\n in_a.set_name(\"input\")\n options = { \"cut_det\": True, \"highlight\": True, \"jobs\": ViaTGBA, \"skip_levels\": True, \"pure\": True, **opts}\n out_a = seminator(in_a, **options, postprocess=False)\n out_a.set_name(\"output\")\n simp_a = seminator(in_a, **options, postprocess=True)\n simp_a.set_name(\"simplified output\")\n display_inline(in_a, out_a, simp_a, show=\".nhs\")\n\nexample2()\n\nexample2(bscc_avoid=True)",
"Reusing the semi-deterministic components with TGBA acceptance\nIn the previous example we have saved several states by not including the semi-deterministic components in the 1st part of the result. However, we still got 6 (and 5 after postprocessing) states out of the 3 deterministic states $1, 5$, and $6$. This can be tackled by reusing the semi-deterministic components as they are. This immediately leads to a TGBA on the output and we have to adress this in the parts which still rely on breakpoint construction. The edges that are accepting will now carry all the marks that are needed (as they do in the original automaton anyway).",
"example2(powerset_on_cut=True, reuse_deterministic=True)"
] | [
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
DigitalSlideArchive/HistomicsTK | docs/examples/polygon_merger_using_rtree.ipynb | apache-2.0 | [
"Merging polygons (general purpose)\nOverview:\nThis notebook describes how to merge annotations that are generated in piecewise when annotating a large structure, or that arise in an annotation study when one user adds annotations to another user's work as corrections. In these cases there is a collection of annotations that overlap and need to be merged without any regular or predictable interfaces.\nThe example presented below addresses this case using an R-tree algorithm that identifies merging candidates without exhuastive search. While this approach can also merge annotations generated by tiled analysis it is slower than the alternative.\nThis extends on some of the work described in Amgad et al, 2019:\nMohamed Amgad, Habiba Elfandy, Hagar Hussein, ..., Jonathan Beezley, Deepak R Chittajallu, David Manthey, David A Gutman, Lee A D Cooper, Structured crowdsourcing enables convolutional segmentation of histology images, Bioinformatics, 2019, btz083\nHere is a sample result:\n\nImplementation summary\nThis algorithm merges annotations in coordinate space, which means it can merge very large structures without encountering memory issues. The algorithm works as follows:\n\n\nIdentify contours that that have the same label (e.g. tumor)\n\n\nAdd bounding boxes from these contours to an R-tree. The R-tree implementation used here is modified from here and uses k-means clustering to balance the tree.\n\n\nStarting from the bottom of the tree, merge all contours from leafs that belong to the same nodes.\n\n\nMove one level up the hierarchy, each time incorporating merged contours from nodes that share a common parent. This is repeated until there is one merged contour at the root node. The contours are first dilated slightly to make sure any small gaps are filled in the merged result, then are eroded by the same factor after merging.\n\n\nSave the coordinates from each merged polygon in a new pandas DataFrame.\n\n\nThis process ensures that the number of comparisons is << n^2. This is very important since algorithm complexity plays a key role as whole slide images may contain tens of thousands of annotated structures.\nWhere to look?\n|_ histomicstk/\n |_annotations_and_masks/\n |_polygon_merger_v2.py\n |_tests/\n |_ test_polygon_merger.py",
"import os\nimport sys\nCWD = os.getcwd()\nsys.path.append(os.path.join(CWD, '..', '..', 'histomicstk', 'annotations_and_masks'))\nimport girder_client\nfrom histomicstk.annotations_and_masks.polygon_merger_v2 import Polygon_merger_v2\nfrom histomicstk.annotations_and_masks.masks_to_annotations_handler import (\n get_annotation_documents_from_contours, _discard_nonenclosed_background_group)\nfrom histomicstk.annotations_and_masks.annotation_and_mask_utils import parse_slide_annotations_into_tables\n\n## 1. Connect girder client and set parameters",
"1. Connect girder client and set parameters",
"APIURL = 'http://candygram.neurology.emory.edu:8080/api/v1/'\nSOURCE_SLIDE_ID = '5d5d6910bd4404c6b1f3d893'\nPOST_SLIDE_ID = '5d586d76bd4404c6b1f286ae'\n\ngc = girder_client.GirderClient(apiUrl=APIURL)\n# gc.authenticate(interactive=True)\ngc.authenticate(apiKey='kri19nTIGOkWH01TbzRqfohaaDWb6kPecRqGmemb')\n\n# get and parse slide annotations into dataframe\nslide_annotations = gc.get('/annotation/item/' + SOURCE_SLIDE_ID)\n_, contours_df = parse_slide_annotations_into_tables(slide_annotations)",
"2. Polygon merger\nThe Polygon_merger_v2() is the top level function for performing the merging.",
"print(Polygon_merger_v2.__doc__)\n\nprint(Polygon_merger_v2.__init__.__doc__)",
"Required arguments for initialization\nThe only required argument is a dataframe of contours merge.",
"contours_df.head()",
"3. Initialize and run the merger",
"# init & run polygon merger\npm = Polygon_merger_v2(contours_df, verbose=1)\npm.unique_groups.remove(\"roi\")\npm.run()",
"NOTE:\nThe following steps are only \"aesthetic\", and just ensure the contours look nice when posted to Digital Slide Archive for viewing with GeoJS.",
"# add colors (aesthetic)\nfor group in pm.unique_groups:\n cs = contours_df.loc[contours_df.loc[:, \"group\"] == group, \"color\"]\n pm.new_contours.loc[\n pm.new_contours.loc[:, \"group\"] == group, \"color\"] = cs.iloc[0]\n\n# get rid of nonenclosed stroma (aesthetic)\npm.new_contours = _discard_nonenclosed_background_group(\n pm.new_contours, background_group=\"mostly_stroma\")",
"This is the result",
"pm.new_contours.head()",
"4. Visualize results on HistomicsTK",
"# deleting existing annotations in target slide (if any)\nexisting_annotations = gc.get('/annotation/item/' + POST_SLIDE_ID)\nfor ann in existing_annotations:\n gc.delete('/annotation/%s' % ann['_id'])\n\n# get list of annotation documents\nannotation_docs = get_annotation_documents_from_contours(\n pm.new_contours.copy(), separate_docs_by_group=True,\n docnamePrefix='test',\n verbose=False, monitorPrefix=POST_SLIDE_ID + \": annotation docs\")\n\n# post annotations to slide -- make sure it posts without errors\nfor annotation_doc in annotation_docs:\n resp = gc.post(\n \"/annotation?itemId=\" + POST_SLIDE_ID, json=annotation_doc)",
"Now you can go to HistomicsUI and confirm that the posted annotations make sense\nand correspond to tissue boundaries and expected labels."
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
lionell/laboratories | decision_theory/lab2.ipynb | mit | [
"import numpy as np\nfrom scipy.stats import rankdata",
"Clusterized ranking",
"M = np.array([\n [5, 3, 1, 2, 8, 4, 6, 7],\n [5, 4, 3, 1, 8, 2, 6, 7],\n [1, 7, 5, 4, 8, 2, 3, 6],\n [6, 4, 2.5, 2.5, 8, 1, 7, 5],\n [8, 2, 4, 6, 3, 5, 1, 7],\n [5, 6, 4, 3, 2, 1, 7, 8],\n [6, 1, 2, 3, 5, 4, 8, 7],\n [5, 1, 3, 2, 7, 4, 6, 8],\n [6, 1, 3, 2, 5, 4, 7, 8],\n [5, 3, 2, 1, 8, 4, 6, 7],\n [7, 1, 3, 2, 6, 4, 5, 8],\n [1, 6, 5, 3, 8, 4, 2, 7]\n])\nn, m = M.shape",
"Here is how we find average ranking.",
"average_rank = rankdata(np.average(M, axis=0))\naverage_rank",
"And this way we can get median ranking.",
"median_rank = rankdata(np.median(M, axis=0))\nmedian_rank",
"Next we need to compute kernel of disagreement.",
"adj = np.zeros((m, m), dtype=np.bool)\nkernel = []\nfor i in range(m):\n for j in range(i + 1, m):\n if (average_rank[i] - average_rank[j])*(median_rank[i] - median_rank[j]) < 0:\n kernel.append([i, j])\n adj[i][j] = adj[j][i] = True\nkernel",
"Now that we have a graph of the disagreement, we can easily find a full component via Depth First Search.",
"def dfs(i, used):\n if i in used:\n return []\n used.add(i)\n \n res = [i]\n for j in range(m):\n if adj[i][j]:\n res += dfs(j, used)\n return res",
"Last thing to do, is to iterate in the correct order, and don't forget to print a whole cluster when needed.",
"order = sorted(range(m), key=lambda i: (average_rank[i], median_rank[i]))\norder\n\nresult = []\nused = set()\nfor i in order:\n cluster = dfs(i, used)\n if len(cluster) > 0:\n result.append(cluster)\nresult",
"Kemeny distance",
"rankings = np.array([\n [[1], [2, 3], [4], [5], [6, 7]],\n [[1, 3], [4], [2], [5], [7], [6]],\n [[1], [4], [2], [3], [6], [5], [7]],\n [[1], [2, 4], [3], [5], [7], [6]],\n [[2], [3], [4], [5], [1], [6], [7]],\n [[1], [3], [2], [5], [6], [7], [4]],\n [[1], [5], [3], [4], [2], [6], [7]]\n])\nn = rankings.shape[0]",
"We need to be able to build relation matrix out of the ranking.",
"def build(x):\n n = sum(map(lambda r: len(r), x)) # Total amount of objects\n m = np.zeros((n, n), dtype=np.bool)\n for r in x:\n for i in r:\n for j in range(n):\n if not m[j][i - 1] or j + 1 in r:\n m[i - 1][j] = True\n return m",
"Now we can calculate Kemedy distances between each two rankings.",
"dist = np.zeros((n, n))\nfor i in range(n):\n for j in range(n):\n dist[i][j] = np.sum(build(rankings[i]) ^ build(rankings[j]))\ndist",
"Let's find Kemeny median for the ranks.",
"median = np.argmin(np.sum(dist, axis=1))\nrankings[median]"
] | [
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
PBrockmann/ipython_ferretmagic | notebooks/ferretmagic_06_InteractWidget.ipynb | mit | [
"<hr>\nPatrick BROCKMANN - LSCE (Climate and Environment Sciences Laboratory)<br>\n<img align=\"left\" width=\"40%\" src=\"http://www.lsce.ipsl.fr/Css/img/banniere_LSCE_75.png\" ><br><br>\n<hr>\n\nUpdated: 2019/11/13\nLoad the ferret extension",
"%load_ext ferretmagic",
"\"Classic\" use with cell magic",
"%%ferret -s 600,400\nset text/font=arial\nuse monthly_navy_winds.cdf\nshow data/full\nplot uwnd[i=@ave,j=@ave,l=@sbx:12]",
"Explore interactive widgets",
"from ipywidgets import interact\n\n@interact(var=['uwnd','vwnd'], smooth=(1, 20), vrange=(0.5,5,0.5))\ndef plot(var='uwnd', smooth=5, vrange=1) :\n %ferret_run -s 600,400 'ppl color 6, 70, 70, 70; plot/grat=(dash,color=6)/vlim=-%(vrange)s:%(vrange)s %(var)s[i=@ave,j=@ave], %(var)s[i=@ave,j=@ave,l=@sbx:%(smooth)s]' % locals()\n",
"Another example with a map",
"# The line of code to make interactive\n%ferret_run -q -s 600,400 'cancel mode logo; \\\n ppl color 6, 70, 70, 70; \\\n shade/grat=(dash,color=6) %(var)s[l=%(lstep)s] ; \\\n go land' % {'var':'uwnd','lstep':'3'}\n\nimport ipywidgets as widgets\nfrom ipywidgets import interact\n\nplay = widgets.Play(\n value=1,\n min=1,\n max=10,\n step=1,\n description=\"Press play\",\n disabled=False\n)\nslider = widgets.IntSlider(\n min=1,\n max=10\n)\nwidgets.jslink((play, 'value'), (slider, 'value'))\na=widgets.HBox([play, slider])\n\n@interact(var=['uwnd','vwnd'], lstep=slider, lstep1=play)\ndef plot(var='uwnd', lstep=1, lstep1=1) :\n %ferret_run -q -s 600,400 'cancel mode logo; \\\n ppl color 6, 70, 70, 70; \\\n shade/grat=(dash,color=6)/lev=(-inf)(-10,10,2)(inf)/pal=mpl_Div_PRGn.spk %(var)s[l=%(lstep)s] ; \\\n go land' % locals()",
"More informations on ipython widgets from\n* https://github.com/ipython/ipywidgets"
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
mjlong/openmc | docs/source/pythonapi/examples/mgxs-part-i.ipynb | mit | [
"This IPython Notebook introduces the use of the openmc.mgxs module to calculate multi-group cross sections for an infinite homogeneous medium. In particular, this Notebook introduces the the following features:\n\nGeneral equations for scalar-flux averaged multi-group cross sections\nCreation of multi-group cross sections for an infinite homogeneous medium\nUse of tally arithmetic to manipulate multi-group cross sections\n\nNote: This Notebook illustrates the use of Pandas DataFrames to containerize multi-group cross section data. We recommend using Pandas >v0.15.0 or later since OpenMC's Python API leverages the multi-indexing feature included in the most recent releases of Pandas.\nIntroduction to Multi-Group Cross Sections (MGXS)\nMany Monte Carlo particle transport codes, including OpenMC, use continuous-energy nuclear cross section data. However, most deterministic neutron transport codes use multi-group cross sections defined over discretized energy bins or energy groups. An example of U-235's continuous-energy fission cross section along with a 16-group cross section computed for a light water reactor spectrum is displayed below.",
"from IPython.display import Image\nImage(filename='images/mgxs.png', width=350)",
"A variety of tools employing different methodologies have been developed over the years to compute multi-group cross sections for certain applications, including NJOY (LANL), MC$^2$-3 (ANL), and Serpent (VTT). The openmc.mgxs Python module is designed to leverage OpenMC's tally system to calculate multi-group cross sections with arbitrary energy discretizations for fine-mesh heterogeneous deterministic neutron transport applications.\nBefore proceeding to illustrate how one may use the openmc.mgxs module, it is worthwhile to define the general equations used to calculate multi-group cross sections. This is only intended as a brief overview of the methodology used by openmc.mgxs - we refer the interested reader to the large body of literature on the subject for a more comprehensive understanding of this complex topic.\nIntroductory Notation\nThe continuous real-valued microscopic cross section may be denoted $\\sigma_{n,x}(\\mathbf{r}, E)$ for position vector $\\mathbf{r}$, energy $E$, nuclide $n$ and interaction type $x$. Similarly, the scalar neutron flux may be denoted by $\\Phi(\\mathbf{r},E)$ for position $\\mathbf{r}$ and energy $E$. Note: Although nuclear cross sections are dependent on the temperature $T$ of the interacting medium, the temperature variable is neglected here for brevity.\nSpatial and Energy Discretization\nThe energy domain for critical systems such as thermal reactors spans more than 10 orders of magnitude of neutron energies from 10$^{-5}$ - 10$^7$ eV. The multi-group approximation discretization divides this energy range into one or more energy groups. In particular, for $G$ total groups, we denote an energy group index $g$ such that $g \\in {1, 2, ..., G}$. The energy group indices are defined such that the smaller group the higher the energy, and vice versa. The integration over neutron energies across a discrete energy group is commonly referred to as energy condensation.\nMulti-group cross sections are computed for discretized spatial zones in the geometry of interest. The spatial zones may be defined on a structured and regular fuel assembly or pin cell mesh, an arbitrary unstructured mesh or the constructive solid geometry used by OpenMC. For a geometry with $K$ distinct spatial zones, we designate each spatial zone an index $k$ such that $k \\in {1, 2, ..., K}$. The volume of each spatial zone is denoted by $V_{k}$. The integration over discrete spatial zones is commonly referred to as spatial homogenization.\nGeneral Scalar-Flux Weighted MGXS\nThe multi-group cross sections computed by openmc.mgxs are defined as a scalar flux-weighted average of the microscopic cross sections across each discrete energy group. This formulation is employed in order to preserve the reaction rates within each energy group and spatial zone. In particular, spatial homogenization and energy condensation are used to compute the general multi-group cross section $\\sigma_{n,x,k,g}$ as follows:\n$$\\sigma_{n,x,k,g} = \\frac{\\int_{E_{g}}^{E_{g-1}}\\mathrm{d}E'\\int_{\\mathbf{r} \\in V_{k}}\\mathrm{d}\\mathbf{r}\\sigma_{n,x}(\\mathbf{r},E')\\Phi(\\mathbf{r},E')}{\\int_{E_{g}}^{E_{g-1}}\\mathrm{d}E'\\int_{\\mathbf{r} \\in V_{k}}\\mathrm{d}\\mathbf{r}\\Phi(\\mathbf{r},E')}$$\nThis scalar flux-weighted average microscopic cross section is computed by openmc.mgxs for most multi-group cross sections, including total, absorption, and fission reaction types. 
These double integrals are stochastically computed with OpenMC's tally system - in particular, filters on the energy range and spatial zone (material, cell or universe) define the bounds of integration for both numerator and denominator.\nMulti-Group Scattering Matrices\nThe general multi-group cross section $\\sigma_{n,x,k,g}$ is a vector of $G$ values for each energy group $g$. The equation presented above only discretizes the energy of the incoming neutron and neglects the outgoing energy of the neutron (if any). Hence, this formulation must be extended to account for the outgoing energy of neutrons in the discretized scattering matrix cross section used by deterministic neutron transport codes. \nWe denote the incoming and outgoing neutron energy groups as $g$ and $g'$ for the microscopic scattering matrix cross section $\\sigma_{n,s}(\\mathbf{r},E)$. As before, spatial homogenization and energy condensation are used to find the multi-group scattering matrix cross section $\\sigma_{n,s,k,g \\to g'}$ as follows:\n$$\\sigma_{n,s,k,g\\rightarrow g'} = \\frac{\\int_{E_{g'}}^{E_{g'-1}}\\mathrm{d}E''\\int_{E_{g}}^{E_{g-1}}\\mathrm{d}E'\\int_{\\mathbf{r} \\in V_{k}}\\mathrm{d}\\mathbf{r}\\sigma_{n,s}(\\mathbf{r},E'\\rightarrow E'')\\Phi(\\mathbf{r},E')}{\\int_{E_{g}}^{E_{g-1}}\\mathrm{d}E'\\int_{\\mathbf{r} \\in V_{k}}\\mathrm{d}\\mathbf{r}\\Phi(\\mathbf{r},E')}$$\nThis scalar flux-weighted multi-group microscopic scattering matrix is computed using OpenMC tallies with both energy in and energy out filters.\nMulti-Group Fission Spectrum\nThe energy spectrum of neutrons emitted from fission is denoted by $\\chi_{n}(\\mathbf{r},E' \\rightarrow E'')$ for incoming and outgoing energies $E'$ and $E''$, respectively. Unlike the multi-group cross sections $\\sigma_{n,x,k,g}$ considered up to this point, the fission spectrum is a probability distribution and must sum to unity. The outgoing energy is typically much less dependent on the incoming energy for fission than for scattering interactions. As a result, it is common practice to integrate over the incoming neutron energy when computing the multi-group fission spectrum. The fission spectrum may be simplified as $\\chi_{n}(\\mathbf{r},E)$ with outgoing energy $E$.\nUnlike the multi-group cross sections defined up to this point, the multi-group fission spectrum is weighted by the fission production rate rather than the scalar flux. This formulation is intended to preserve the total fission production rate in the multi-group deterministic calculation. In order to mathematically define the multi-group fission spectrum, we denote the microscopic fission cross section as $\\sigma_{n,f}(\\mathbf{r},E)$ and the average number of neutrons emitted from fission interactions with nuclide $n$ as $\\nu_{n}(\\mathbf{r},E)$. The multi-group fission spectrum $\\chi_{n,k,g}$ is then the probability of fission neutrons emitted into energy group $g$. 
\nSimilar to before, spatial homogenization and energy condensation are used to find the multi-group fission spectrum $\\chi_{n,k,g}$ as follows:\n$$\\chi_{n,k,g'} = \\frac{\\int_{E_{g'}}^{E_{g'-1}}\\mathrm{d}E''\\int_{0}^{\\infty}\\mathrm{d}E'\\int_{\\mathbf{r} \\in V_{k}}\\mathrm{d}\\mathbf{r}\\chi_{n}(\\mathbf{r},E'\\rightarrow E'')\\nu_{n}(\\mathbf{r},E')\\sigma_{n,f}(\\mathbf{r},E')\\Phi(\\mathbf{r},E')}{\\int_{0}^{\\infty}\\mathrm{d}E'\\int_{\\mathbf{r} \\in V_{k}}\\mathrm{d}\\mathbf{r}\\nu_{n}(\\mathbf{r},E')\\sigma_{n,f}(\\mathbf{r},E')\\Phi(\\mathbf{r},E')}$$\nThe fission production-weighted multi-group fission spectrum is computed using OpenMC tallies with both energy in and energy out filters.\nThis concludes our brief overview on the methodology to compute multi-group cross sections. The following sections detail more concretely how users may employ the openmc.mgxs module to power simulation workflows requiring multi-group cross sections for downstream deterministic calculations.\nGenerate Input Files",
"import numpy as np\nimport matplotlib.pyplot as plt\n\nimport openmc\nimport openmc.mgxs as mgxs\n\n%matplotlib inline",
"First we need to define materials that will be used in the problem. Before defining a material, we must create nuclides that are used in the material.",
"# Instantiate some Nuclides\nh1 = openmc.Nuclide('H-1')\no16 = openmc.Nuclide('O-16')\nu235 = openmc.Nuclide('U-235')\nu238 = openmc.Nuclide('U-238')\nzr90 = openmc.Nuclide('Zr-90')",
"With the nuclides we defined, we will now create a material for the homogeneous medium.",
"# Instantiate a Material and register the Nuclides\ninf_medium = openmc.Material(name='moderator')\ninf_medium.set_density('g/cc', 5.)\ninf_medium.add_nuclide(h1, 0.028999667)\ninf_medium.add_nuclide(o16, 0.01450188)\ninf_medium.add_nuclide(u235, 0.000114142)\ninf_medium.add_nuclide(u238, 0.006886019)\ninf_medium.add_nuclide(zr90, 0.002116053)",
"With our material, we can now create a MaterialsFile object that can be exported to an actual XML file.",
"# Instantiate a MaterialsFile, register all Materials, and export to XML\nmaterials_file = openmc.MaterialsFile()\nmaterials_file.default_xs = '71c'\nmaterials_file.add_material(inf_medium)\nmaterials_file.export_to_xml()",
"Now let's move on to the geometry. This problem will be a simple square cell with reflective boundary conditions to simulate an infinite homogeneous medium. The first step is to create the outer bounding surfaces of the problem.",
"# Instantiate boundary Planes\nmin_x = openmc.XPlane(boundary_type='reflective', x0=-0.63)\nmax_x = openmc.XPlane(boundary_type='reflective', x0=0.63)\nmin_y = openmc.YPlane(boundary_type='reflective', y0=-0.63)\nmax_y = openmc.YPlane(boundary_type='reflective', y0=0.63)",
"With the surfaces defined, we can now create a cell that is defined by intersections of half-spaces created by the surfaces.",
"# Instantiate a Cell\ncell = openmc.Cell(cell_id=1, name='cell')\n\n# Register bounding Surfaces with the Cell\ncell.region = +min_x & -max_x & +min_y & -max_y\n\n# Fill the Cell with the Material\ncell.fill = inf_medium",
"OpenMC requires that there is a \"root\" universe. Let us create a root universe and add our square cell to it.",
"# Instantiate Universe\nroot_universe = openmc.Universe(universe_id=0, name='root universe')\nroot_universe.add_cell(cell)",
"We now must create a geometry that is assigned a root universe, put the geometry into a GeometryFile object, and export it to XML.",
"# Create Geometry and set root Universe\nopenmc_geometry = openmc.Geometry()\nopenmc_geometry.root_universe = root_universe\n\n# Instantiate a GeometryFile\ngeometry_file = openmc.GeometryFile()\ngeometry_file.geometry = openmc_geometry\n\n# Export to \"geometry.xml\"\ngeometry_file.export_to_xml()",
"Next, we must define simulation parameters. In this case, we will use 10 inactive batches and 40 active batches each with 2500 particles.",
"# OpenMC simulation parameters\nbatches = 50\ninactive = 10\nparticles = 2500\n\n# Instantiate a SettingsFile\nsettings_file = openmc.SettingsFile()\nsettings_file.batches = batches\nsettings_file.inactive = inactive\nsettings_file.particles = particles\nsettings_file.output = {'tallies': True, 'summary': True}\nbounds = [-0.63, -0.63, -0.63, 0.63, 0.63, 0.63]\nsettings_file.set_source_space('fission', bounds)\n\n# Export to \"settings.xml\"\nsettings_file.export_to_xml()",
"Now we are ready to generate multi-group cross sections! First, let's define a 2-group structure using the built-in EnergyGroups class.",
"# Instantiate a 2-group EnergyGroups object\ngroups = mgxs.EnergyGroups()\ngroups.group_edges = np.array([0., 0.625e-6, 20.])",
"We can now use the EnergyGroups object, along with our previously created materials and geometry, to instantiate some MGXS objects from the openmc.mgxs module. In particular, the following are subclasses of the generic and abstract MGXS class:\n\nTotalXS\nTransportXS\nAbsorptionXS\nCaptureXS\nFissionXS\nNuFissionXS\nScatterXS\nNuScatterXS\nScatterMatrixXS\nNuScatterMatrixXS\nChi\n\nThese classes provide us with an interface to generate the tally inputs as well as perform post-processing of OpenMC's tally data to compute the respective multi-group cross sections. In this case, let's create the multi-group total, absorption and scattering cross sections with our 2-group structure.",
"# Instantiate a few different sections\ntotal = mgxs.TotalXS(domain=cell, domain_type='cell', groups=groups)\nabsorption = mgxs.AbsorptionXS(domain=cell, domain_type='cell', groups=groups)\nscattering = mgxs.ScatterXS(domain=cell, domain_type='cell', groups=groups)",
"Each multi-group cross section object stores its tallies in a Python dictionary called tallies. We can inspect the tallies in the dictionary for our Absorption object as follows.",
"absorption.tallies",
"The Absorption object includes tracklength tallies for the 'absorption' and 'flux' scores in the 2-group structure in cell 1. Now that each MGXS object contains the tallies that it needs, we must add these tallies to a TalliesFile object to generate the \"tallies.xml\" input file for OpenMC.",
"# Instantiate an empty TalliesFile\ntallies_file = openmc.TalliesFile()\n\n# Add total tallies to the tallies file\nfor tally in total.tallies.values():\n tallies_file.add_tally(tally)\n\n# Add absorption tallies to the tallies file\nfor tally in absorption.tallies.values():\n tallies_file.add_tally(tally)\n\n# Add scattering tallies to the tallies file\nfor tally in scattering.tallies.values():\n tallies_file.add_tally(tally)\n \n# Export to \"tallies.xml\"\ntallies_file.export_to_xml()",
"Now we a have a complete set of inputs, so we can go ahead and run our simulation.",
"# Run OpenMC\nexecutor = openmc.Executor()\nexecutor.run_simulation()",
"Tally Data Processing\nOur simulation ran successfully and created statepoint and summary output files. We begin our analysis by instantiating a StatePoint object.",
"# Load the last statepoint file\nsp = openmc.StatePoint('statepoint.50.h5')",
"In addition to the statepoint file, our simulation also created a summary file which encapsulates information about the materials and geometry. This is necessary for the openmc.mgxs module to properly process the tally data. We first create a Summary object and link it with the statepoint.",
"# Load the summary file and link it with the statepoint\nsu = openmc.Summary('summary.h5')\nsp.link_with_summary(su)",
"The statepoint is now ready to be analyzed by our multi-group cross sections. We simply have to load the tallies from the StatePoint into each object as follows and our MGXS objects will compute the cross sections for us under-the-hood.",
"# Load the tallies from the statepoint into each MGXS object\ntotal.load_from_statepoint(sp)\nabsorption.load_from_statepoint(sp)\nscattering.load_from_statepoint(sp)",
"Voila! Our multi-group cross sections are now ready to rock 'n roll!\nExtracting and Storing MGXS Data\nLet's first inspect our total cross section by printing it to the screen.",
"total.print_xs()",
"Since the openmc.mgxs module uses tally arithmetic under-the-hood, the cross section is stored as a \"derived\" Tally object. This means that it can be queried and manipulated using all of the same methods supported for the Tally class in the OpenMC Python API. For example, we can construct a Pandas DataFrame of the multi-group cross section data.",
"df = scattering.get_pandas_dataframe()\ndf.head(10)",
"Each multi-group cross section object can be easily exported to a variety of file formats, including CSV, Excel, and LaTeX for storage or data processing.",
"absorption.export_xs_data(filename='absorption-xs', format='excel')",
"The following code snippet shows how to export all three MGXS to the same HDF5 binary data store.",
"total.build_hdf5_store(filename='mgxs', append=True)\nabsorption.build_hdf5_store(filename='mgxs', append=True)\nscattering.build_hdf5_store(filename='mgxs', append=True)",
"Comparing MGXS with Tally Arithmetic\nFinally, we illustrate how one can leverage OpenMC's tally arithmetic data processing feature with MGXS objects. The openmc.mgxs module uses tally arithmetic to compute multi-group cross sections with automated uncertainty propagation. Each MGXS object includes an xs_tally attribute which is a \"derived\" Tally based on the tallies needed to compute the cross section type of interest. These derived tallies can be used in subsequent tally arithmetic operations. For example, we can use tally artithmetic to confirm that the TotalXS is equal to the sum of the AbsorptionXS and ScatterXS objects.",
"# Use tally arithmetic to compute the difference between the total, absorption and scattering\ndifference = total.xs_tally - absorption.xs_tally - scattering.xs_tally\n\n# The difference is a derived tally which can generate Pandas DataFrames for inspection\ndifference.get_pandas_dataframe()",
"Similarly, we can use tally arithmetic to compute the ratio of AbsorptionXS and ScatterXS to the TotalXS.",
"# Use tally arithmetic to compute the absorption-to-total MGXS ratio\nabsorption_to_total = absorption.xs_tally / total.xs_tally\n\n# The absorption-to-total ratio is a derived tally which can generate Pandas DataFrames for inspection\nabsorption_to_total.get_pandas_dataframe()\n\n# Use tally arithmetic to compute the scattering-to-total MGXS ratio\nscattering_to_total = scattering.xs_tally / total.xs_tally\n\n# The scattering-to-total ratio is a derived tally which can generate Pandas DataFrames for inspection\nscattering_to_total.get_pandas_dataframe()",
"Lastly, we sum the derived scatter-to-total and absorption-to-total ratios to confirm that they sum to unity.",
"# Use tally arithmetic to ensure that the absorption- and scattering-to-total MGXS ratios sum to unity\nsum_ratio = absorption_to_total + scattering_to_total\n\n# The scattering-to-total ratio is a derived tally which can generate Pandas DataFrames for inspection\nsum_ratio.get_pandas_dataframe()"
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
mne-tools/mne-tools.github.io | 0.24/_downloads/5ac2a3ff8baa6aba4bf6dd1d047703e2/spm_faces_dataset_sgskip.ipynb | bsd-3-clause | [
"%matplotlib inline",
"From raw data to dSPM on SPM Faces dataset\nRuns a full pipeline using MNE-Python:\n- artifact removal\n- averaging Epochs\n- forward model computation\n- source reconstruction using dSPM on the contrast : \"faces - scrambled\"\n\n<div class=\"alert alert-info\"><h4>Note</h4><p>This example does quite a bit of processing, so even on a\n fast machine it can take several minutes to complete.</p></div>",
"# Authors: Alexandre Gramfort <[email protected]>\n# Denis Engemann <[email protected]>\n#\n# License: BSD-3-Clause\n\nimport matplotlib.pyplot as plt\n\nimport mne\nfrom mne.datasets import spm_face\nfrom mne.preprocessing import ICA, create_eog_epochs\nfrom mne import io, combine_evoked\nfrom mne.minimum_norm import make_inverse_operator, apply_inverse\n\nprint(__doc__)\n\ndata_path = spm_face.data_path()\nsubjects_dir = data_path + '/subjects'",
"Load and filter data, set up epochs",
"raw_fname = data_path + '/MEG/spm/SPM_CTF_MEG_example_faces%d_3D.ds'\n\nraw = io.read_raw_ctf(raw_fname % 1, preload=True) # Take first run\n# Here to save memory and time we'll downsample heavily -- this is not\n# advised for real data as it can effectively jitter events!\nraw.resample(120., npad='auto')\n\npicks = mne.pick_types(raw.info, meg=True, exclude='bads')\nraw.filter(1, 30, method='fir', fir_design='firwin')\n\nevents = mne.find_events(raw, stim_channel='UPPT001')\n\n# plot the events to get an idea of the paradigm\nmne.viz.plot_events(events, raw.info['sfreq'])\n\nevent_ids = {\"faces\": 1, \"scrambled\": 2}\n\ntmin, tmax = -0.2, 0.6\nbaseline = None # no baseline as high-pass is applied\nreject = dict(mag=5e-12)\n\nepochs = mne.Epochs(raw, events, event_ids, tmin, tmax, picks=picks,\n baseline=baseline, preload=True, reject=reject)\n\n# Fit ICA, find and remove major artifacts\nica = ICA(n_components=0.95, max_iter='auto', random_state=0)\nica.fit(raw, decim=1, reject=reject)\n\n# compute correlation scores, get bad indices sorted by score\neog_epochs = create_eog_epochs(raw, ch_name='MRT31-2908', reject=reject)\neog_inds, eog_scores = ica.find_bads_eog(eog_epochs, ch_name='MRT31-2908')\nica.plot_scores(eog_scores, eog_inds) # see scores the selection is based on\nica.plot_components(eog_inds) # view topographic sensitivity of components\nica.exclude += eog_inds[:1] # we saw the 2nd ECG component looked too dipolar\nica.plot_overlay(eog_epochs.average()) # inspect artifact removal\nica.apply(epochs) # clean data, default in place\n\nevoked = [epochs[k].average() for k in event_ids]\n\ncontrast = combine_evoked(evoked, weights=[-1, 1]) # Faces - scrambled\n\nevoked.append(contrast)\n\nfor e in evoked:\n e.plot(ylim=dict(mag=[-400, 400]))\n\nplt.show()\n\n# estimate noise covarariance\nnoise_cov = mne.compute_covariance(epochs, tmax=0, method='shrunk',\n rank=None)",
"Visualize fields on MEG helmet",
"# The transformation here was aligned using the dig-montage. It's included in\n# the spm_faces dataset and is named SPM_dig_montage.fif.\ntrans_fname = data_path + ('/MEG/spm/SPM_CTF_MEG_example_faces1_3D_'\n 'raw-trans.fif')\n\nmaps = mne.make_field_map(evoked[0], trans_fname, subject='spm',\n subjects_dir=subjects_dir, n_jobs=1)\n\nevoked[0].plot_field(maps, time=0.170)",
"Look at the whitened evoked daat",
"evoked[0].plot_white(noise_cov)",
"Compute forward model",
"src = data_path + '/subjects/spm/bem/spm-oct-6-src.fif'\nbem = data_path + '/subjects/spm/bem/spm-5120-5120-5120-bem-sol.fif'\nforward = mne.make_forward_solution(contrast.info, trans_fname, src, bem)",
"Compute inverse solution",
"snr = 3.0\nlambda2 = 1.0 / snr ** 2\nmethod = 'dSPM'\n\ninverse_operator = make_inverse_operator(contrast.info, forward, noise_cov,\n loose=0.2, depth=0.8)\n\n# Compute inverse solution on contrast\nstc = apply_inverse(contrast, inverse_operator, lambda2, method, pick_ori=None)\n# stc.save('spm_%s_dSPM_inverse' % contrast.comment)\n\n# Plot contrast in 3D with mne.viz.Brain if available\nbrain = stc.plot(hemi='both', subjects_dir=subjects_dir, initial_time=0.170,\n views=['ven'], clim={'kind': 'value', 'lims': [3., 6., 9.]})\n# brain.save_image('dSPM_map.png')"
] | [
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
darkomen/TFG | medidas/11082015/Análisis de datos.ipynb | cc0-1.0 | [
"Análisis de los datos obtenidos\nUso de ipython para el análsis y muestra de los datos obtenidos durante la producción.Se implementa un regulador experto. Los datos analizados son del día 11 de Agosto del 2015\nLos datos del experimento:\n* Hora de inicio: 14:27\n* Hora final : 15:08\n* Filamento extruido: 537cm\n* $T: 150ºC$\n* $V_{min} tractora: 1.5 mm/s$\n* $V_{max} tractora: 3.4 mm/s$\n* Los incrementos de velocidades en las reglas del sistema experto son las mismas.",
"#Importamos las librerías utilizadas\nimport numpy as np\nimport pandas as pd\nimport seaborn as sns\n\n#Mostramos las versiones usadas de cada librerías\nprint (\"Numpy v{}\".format(np.__version__))\nprint (\"Pandas v{}\".format(pd.__version__))\nprint (\"Seaborn v{}\".format(sns.__version__))\n\n#Abrimos el fichero csv con los datos de la muestra\ndatos = pd.read_csv('1119703.CSV')\n\n%pylab inline\n\n#Almacenamos en una lista las columnas del fichero con las que vamos a trabajar\ncolumns = ['Diametro X', 'RPM TRAC']\n\n#Mostramos un resumen de los datos obtenidoss\ndatos[columns].describe()\n#datos.describe().loc['mean',['Diametro X [mm]', 'Diametro Y [mm]']]",
"Representamos ambos diámetro y la velocidad de la tractora en la misma gráfica",
"datos.ix[:, \"Diametro X\":\"Diametro Y\"].plot(figsize=(16,10),ylim=(0.5,3)).hlines([1.85,1.65],0,3500,colors='r')\n#datos['RPM TRAC'].plot(secondary_y='RPM TRAC')\n\ndatos.ix[:, \"Diametro X\":\"Diametro Y\"].boxplot(return_type='axes')",
"En el boxplot, se ve como la mayoría de los datos están por encima de la media (primer cuartil). Se va a tratar de bajar ese porcentaje. La primera aproximación que vamos a realizar será la de hacer mayores incrementos al subir la velocidad en los tramos que el diámetro se encuentre entre $1.80mm$ y $1.75 mm$(caso 5) haremos incrementos de $d_v2$ en lugar de $d_v1$\nComparativa de Diametro X frente a Diametro Y para ver el ratio del filamento",
"plt.scatter(x=datos['Diametro X'], y=datos['Diametro Y'], marker='.')",
"Filtrado de datos\nLas muestras tomadas $d_x >= 0.9$ or $d_y >= 0.9$ las asumimos como error del sensor, por ello las filtramos de las muestras tomadas.",
"datos_filtrados = datos[(datos['Diametro X'] >= 0.9) & (datos['Diametro Y'] >= 0.9)]\n\n#datos_filtrados.ix[:, \"Diametro X\":\"Diametro Y\"].boxplot(return_type='axes')",
"Representación de X/Y",
"plt.scatter(x=datos_filtrados['Diametro X'], y=datos_filtrados['Diametro Y'], marker='.')",
"Analizamos datos del ratio",
"ratio = datos_filtrados['Diametro X']/datos_filtrados['Diametro Y']\nratio.describe()\n\nrolling_mean = pd.rolling_mean(ratio, 50)\nrolling_std = pd.rolling_std(ratio, 50)\nrolling_mean.plot(figsize=(12,6))\n# plt.fill_between(ratio, y1=rolling_mean+rolling_std, y2=rolling_mean-rolling_std, alpha=0.5)\nratio.plot(figsize=(12,6), alpha=0.6, ylim=(0.5,1.5))",
"Límites de calidad\nCalculamos el número de veces que traspasamos unos límites de calidad. \n$Th^+ = 1.85$ and $Th^- = 1.65$",
"Th_u = 1.85\nTh_d = 1.65\n\ndata_violations = datos[(datos['Diametro X'] > Th_u) | (datos['Diametro X'] < Th_d) |\n (datos['Diametro Y'] > Th_u) | (datos['Diametro Y'] < Th_d)]\n\ndata_violations.describe()\n\ndata_violations.plot(subplots=True, figsize=(12,12))"
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
netmanchris/PYHPEIMC | examples/.ipynb_checkpoints/Working with Network Assets-checkpoint.ipynb | apache-2.0 | [
"Serial Numbers, How I love thee...\nNo one really like serial numbers, but keeping track of them is one of the \"brushing your teeth\" activities that everyone needs to take care of. It's like eating your brussel sprouts. Or listening to your mom. You're just better of if you do it quickly as it just gets more painful over time.\nNot only is it just good hygene, but you may be subject to regulations, like eRate in the United States where you have to be able to report on the location of any device by serial number at any point in time. \n\nTrust me, having to play hide-and-go seek with an SSH session is not something you want to do when government auditors are looking for answers.\n\nI'm sure you've already guessed what I'm about to say, but I\"ll say it anyway...\n\nThere's an API for that!!!\n\nHPE IMC base platform has a great network assets function that automatically gathers all the details of your various devices, assuming of course they support RFC 4133, otherwise known as the Entity MIB. On the bright side, most vendors have chosen to support this standards based MIB, so chances are you're in good shape. \nAnd if they don't support it, they really should. You should ask them. Ok?\nSo without further ado, let's get started.\nImporting the required libraries\nI'm sure you're getting used to this part, but it's import to know where to look for these different functions. In this case, we're going to look at a new library that is specifically designed to deal with network assets, including serial numbers.",
"from pyhpeimc.auth import *\nfrom pyhpeimc.plat.netassets import *\nimport csv\n\n\nauth = IMCAuth(\"http://\", \"10.101.0.203\", \"8080\", \"admin\", \"admin\")\n\nciscorouter = get_dev_asset_details('10.101.0.1', auth.creds, auth.url)",
"How many assets in a Cisco Router?\nAs some of you may have heard, HPE IMC is a multi-vendor tool and offers support for many of the common devices you'll see in your daily travels. \nIn this example, we're going to use a Cisco 2811 router to showcase the basic function.\nRouters, like chassis switches have multiple components. As any one who's ever been the ~~victem~~ owner of a Smartnet contract, you'll know that you have individual components which have serial numbers as well and all of them have to be reported for them to be covered. So let's see if we managed to grab all of those by first checking out how many individual items we got back in the asset list for this cisco router.",
"len(ciscorouter)",
"What's in the box???\nNow we know that we've got an idea of how many assets are in here, let's take a look to see exactly what's in one of the asset records to see if there's anything useful in here.",
"ciscorouter[0]",
"What can we do with this?\nWith some basic python string manipulation we could easily print out some of the attributes that we want into what could easily turn into a nicely formated report. \nAgain realise that the example below is just a subset of what's available in the JSON above. If you want more, just add it to the list.",
"for i in ciscorouter:\n print (\"Device Name: \" + i['deviceName'] + \" Device Model: \" + i['model'] +\n \"\\nAsset Name is: \" + i['name'] + \" Asset Serial Number is: \" + \n i['serialNum']+ \"\\n\")",
"Why not just write that to disk?\nAlthough we could go directly to the formated report without a lot of extra work, we would be losing a lot of data which we may have use for later. Instead why don't we export all the available data from the JSON above into a CSV file which can be later opened in your favourite spreadsheet viewer and manipulated to your hearst content.\nPretty cool, no?",
"keys = ciscorouter[0].keys()\nwith open('ciscorouter.csv', 'w') as file:\n dict_writer = csv.DictWriter(file, keys)\n dict_writer.writeheader()\n dict_writer.writerows(ciscorouter)",
"Reading it back\nNow we'll read it back from disk to make sure it worked properly. When working with data like this, I find it useful to think about who's going to be consuming the data. For example, when looking at this remember this is a CSV file which can be easily opened in python, or something like Microsoft Excel to manipuate further. It's not realy intended to be read by human beings in this particular format. You'll need another program to consume and munge the data first to turn it into something human consumable.",
"with open('ciscorouter.csv') as file:\n print (file.read())",
"What about all my serial numbers at once?\nThat's a great question! I'm glad you asked. One of the most beautiful things about learning to automate things like asset gathering through an API is that it's often not much more work to do something 1000 times than it is to do it a single time. \nThis time instead of using the get_dev_asset_details function that we used above which gets us all the assets associated with a single device, let's grab ALL the devices at once.",
"all_assets = get_dev_asset_details_all(auth.creds, auth.url)\n\nlen (all_assets)",
"That's a lot of assets!\nExactly why we automate things. Now let's write the all_assets list to disk as well. \n**note for reasons unknown to me at this time, although the majority of the assets have 27 differnet fields, a few of them actually have 28 different attributes. Something I'll have to dig into later.",
"keys = all_assets[0].keys()\nwith open('all_assets.csv', 'w') as file:\n dict_writer = csv.DictWriter(file, keys)\n dict_writer.writeheader()\n dict_writer.writerows(all_assets)",
"Well That's not good....\nSo it looks like there are a few network assets that have a different number of attributes than the first one in the list. We'll write some quick code to figure out how big of a problem this is.",
"print (\"The length of the first items keys is \" + str(len(keys)))\nfor i in all_assets:\n if len(i) != len(all_assets[0].keys()):\n print (\"The length of index \" + str(all_assets.index(i)) + \" is \" + str(len(i.keys())))",
"Well that's not so bad\nIt looks like the items which don't have exactly 27 attribues have exactly 28 attributes. So we'll just pick one of the longer ones to use as the headers for our CSV file and then run the script again.\nFor this one, I'm going to ask you to trust me that the file is on disk and save us all the trouble of having to print out 1013 seperate assets into this blog post.",
"keys = all_assets[879].keys()\nwith open ('all_assets.csv', 'w') as file:\n dict_writer = csv.DictWriter(file, keys)\n dict_writer.writeheader()\n dict_writer.writerows(all_assets)",
"What's next?\nSo now that we've got all of our assets into a CSV file which is easily consumable by something like Excel, you can now chose what to do with the data.\nFor me it's interesting to see how vendors internalyl instrument their boxes. Some have serial numbers on power supplies and fans, some don't. Some use the standard way of doing things. Some don't. \nFrom an operations perspective, not all gear is created equal and it's nice to understand what's supported when trying to make a purchasing choice for something you're going to have to live with for the next few years. \nIf you're looking at your annual SMARTnet upgrade, at least you've now got a way to easily audit all of your discovered environment and figure out what line cards need to be tied to a particualr contract.\nOr you could just look at another vendor who makes your life easier. Entirely your choice. \n@netmanchris"
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
ES-DOC/esdoc-jupyterhub | notebooks/thu/cmip6/models/sandbox-2/land.ipynb | gpl-3.0 | [
"ES-DOC CMIP6 Model Properties - Land\nMIP Era: CMIP6\nInstitute: THU\nSource ID: SANDBOX-2\nTopic: Land\nSub-Topics: Soil, Snow, Vegetation, Energy Balance, Carbon Cycle, Nitrogen Cycle, River Routing, Lakes. \nProperties: 154 (96 required)\nModel descriptions: Model description details\nInitialized From: -- \nNotebook Help: Goto notebook help page\nNotebook Initialised: 2018-02-15 16:54:40\nDocument Setup\nIMPORTANT: to be executed each time you run the notebook",
"# DO NOT EDIT ! \nfrom pyesdoc.ipython.model_topic import NotebookOutput \n\n# DO NOT EDIT ! \nDOC = NotebookOutput('cmip6', 'thu', 'sandbox-2', 'land')",
"Document Authors\nSet document authors",
"# Set as follows: DOC.set_author(\"name\", \"email\") \n# TODO - please enter value(s)",
"Document Contributors\nSpecify document contributors",
"# Set as follows: DOC.set_contributor(\"name\", \"email\") \n# TODO - please enter value(s)",
"Document Publication\nSpecify document publication status",
"# Set publication status: \n# 0=do not publish, 1=publish. \nDOC.set_publication_status(0)",
"Document Table of Contents\n1. Key Properties\n2. Key Properties --> Conservation Properties\n3. Key Properties --> Timestepping Framework\n4. Key Properties --> Software Properties\n5. Grid\n6. Grid --> Horizontal\n7. Grid --> Vertical\n8. Soil\n9. Soil --> Soil Map\n10. Soil --> Snow Free Albedo\n11. Soil --> Hydrology\n12. Soil --> Hydrology --> Freezing\n13. Soil --> Hydrology --> Drainage\n14. Soil --> Heat Treatment\n15. Snow\n16. Snow --> Snow Albedo\n17. Vegetation\n18. Energy Balance\n19. Carbon Cycle\n20. Carbon Cycle --> Vegetation\n21. Carbon Cycle --> Vegetation --> Photosynthesis\n22. Carbon Cycle --> Vegetation --> Autotrophic Respiration\n23. Carbon Cycle --> Vegetation --> Allocation\n24. Carbon Cycle --> Vegetation --> Phenology\n25. Carbon Cycle --> Vegetation --> Mortality\n26. Carbon Cycle --> Litter\n27. Carbon Cycle --> Soil\n28. Carbon Cycle --> Permafrost Carbon\n29. Nitrogen Cycle\n30. River Routing\n31. River Routing --> Oceanic Discharge\n32. Lakes\n33. Lakes --> Method\n34. Lakes --> Wetlands \n1. Key Properties\nLand surface key properties\n1.1. Model Overview\nIs Required: TRUE Type: STRING Cardinality: 1.1\nOverview of land surface model.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.key_properties.model_overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"1.2. Model Name\nIs Required: TRUE Type: STRING Cardinality: 1.1\nName of land surface model code (e.g. MOSES2.2)",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.key_properties.model_name') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"1.3. Description\nIs Required: TRUE Type: STRING Cardinality: 1.1\nGeneral description of the processes modelled (e.g. dymanic vegation, prognostic albedo, etc.)",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.key_properties.description') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"1.4. Land Atmosphere Flux Exchanges\nIs Required: FALSE Type: ENUM Cardinality: 0.N\nFluxes exchanged with the atmopshere.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.key_properties.land_atmosphere_flux_exchanges') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"water\" \n# \"energy\" \n# \"carbon\" \n# \"nitrogen\" \n# \"phospherous\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"1.5. Atmospheric Coupling Treatment\nIs Required: TRUE Type: STRING Cardinality: 1.1\nDescribe the treatment of land surface coupling with the Atmosphere model component, which may be different for different quantities (e.g. dust: semi-implicit, water vapour: explicit)",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.key_properties.atmospheric_coupling_treatment') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"1.6. Land Cover\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nTypes of land cover defined in the land surface model",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.key_properties.land_cover') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"bare soil\" \n# \"urban\" \n# \"lake\" \n# \"land ice\" \n# \"lake ice\" \n# \"vegetated\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"1.7. Land Cover Change\nIs Required: FALSE Type: STRING Cardinality: 0.1\nDescribe how land cover change is managed (e.g. the use of net or gross transitions)",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.key_properties.land_cover_change') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"1.8. Tiling\nIs Required: TRUE Type: STRING Cardinality: 1.1\nDescribe the general tiling procedure used in the land surface (if any). Include treatment of physiography, land/sea, (dynamic) vegetation coverage and orography/roughness",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.key_properties.tiling') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"2. Key Properties --> Conservation Properties\nTODO\n2.1. Energy\nIs Required: FALSE Type: STRING Cardinality: 0.1\nDescribe if/how energy is conserved globally and to what level (e.g. within X [units]/year)",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.key_properties.conservation_properties.energy') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"2.2. Water\nIs Required: FALSE Type: STRING Cardinality: 0.1\nDescribe if/how water is conserved globally and to what level (e.g. within X [units]/year)",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.key_properties.conservation_properties.water') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"2.3. Carbon\nIs Required: FALSE Type: STRING Cardinality: 0.1\nDescribe if/how carbon is conserved globally and to what level (e.g. within X [units]/year)",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.key_properties.conservation_properties.carbon') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"3. Key Properties --> Timestepping Framework\nTODO\n3.1. Timestep Dependent On Atmosphere\nIs Required: TRUE Type: BOOLEAN Cardinality: 1.1\nIs a time step dependent on the frequency of atmosphere coupling?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.key_properties.timestepping_framework.timestep_dependent_on_atmosphere') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n",
"3.2. Time Step\nIs Required: TRUE Type: INTEGER Cardinality: 1.1\nOverall timestep of land surface model (i.e. time between calls)",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.key_properties.timestepping_framework.time_step') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"3.3. Timestepping Method\nIs Required: TRUE Type: STRING Cardinality: 1.1\nGeneral description of time stepping method and associated time step(s)",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.key_properties.timestepping_framework.timestepping_method') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"4. Key Properties --> Software Properties\nSoftware properties of land surface code\n4.1. Repository\nIs Required: FALSE Type: STRING Cardinality: 0.1\nLocation of code for this component.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.key_properties.software_properties.repository') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"4.2. Code Version\nIs Required: FALSE Type: STRING Cardinality: 0.1\nCode version identifier.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.key_properties.software_properties.code_version') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"4.3. Code Languages\nIs Required: FALSE Type: STRING Cardinality: 0.N\nCode language(s).",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.key_properties.software_properties.code_languages') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"5. Grid\nLand surface grid\n5.1. Overview\nIs Required: TRUE Type: STRING Cardinality: 1.1\nOverview of the grid in the land surface",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.grid.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"6. Grid --> Horizontal\nThe horizontal grid in the land surface\n6.1. Description\nIs Required: TRUE Type: STRING Cardinality: 1.1\nDescribe the general structure of the horizontal grid (not including any tiling)",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.grid.horizontal.description') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"6.2. Matches Atmosphere Grid\nIs Required: TRUE Type: BOOLEAN Cardinality: 1.1\nDoes the horizontal grid match the atmosphere?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.grid.horizontal.matches_atmosphere_grid') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n",
"7. Grid --> Vertical\nThe vertical grid in the soil\n7.1. Description\nIs Required: TRUE Type: STRING Cardinality: 1.1\nDescribe the general structure of the vertical grid in the soil (not including any tiling)",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.grid.vertical.description') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"7.2. Total Depth\nIs Required: TRUE Type: INTEGER Cardinality: 1.1\nThe total depth of the soil (in metres)",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.grid.vertical.total_depth') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"8. Soil\nLand surface soil\n8.1. Overview\nIs Required: TRUE Type: STRING Cardinality: 1.1\nOverview of soil in the land surface",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"8.2. Heat Water Coupling\nIs Required: TRUE Type: STRING Cardinality: 1.1\nDescribe the coupling between heat and water in the soil",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.heat_water_coupling') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"8.3. Number Of Soil layers\nIs Required: TRUE Type: INTEGER Cardinality: 1.1\nThe number of soil layers",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.number_of_soil layers') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"8.4. Prognostic Variables\nIs Required: TRUE Type: STRING Cardinality: 1.1\nList the prognostic variables of the soil scheme",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.prognostic_variables') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"9. Soil --> Soil Map\nKey properties of the land surface soil map\n9.1. Description\nIs Required: TRUE Type: STRING Cardinality: 1.1\nGeneral description of soil map",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.soil_map.description') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"9.2. Structure\nIs Required: FALSE Type: STRING Cardinality: 0.1\nDescribe the soil structure map",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.soil_map.structure') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"9.3. Texture\nIs Required: FALSE Type: STRING Cardinality: 0.1\nDescribe the soil texture map",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.soil_map.texture') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"9.4. Organic Matter\nIs Required: FALSE Type: STRING Cardinality: 0.1\nDescribe the soil organic matter map",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.soil_map.organic_matter') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"9.5. Albedo\nIs Required: FALSE Type: STRING Cardinality: 0.1\nDescribe the soil albedo map",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.soil_map.albedo') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"9.6. Water Table\nIs Required: FALSE Type: STRING Cardinality: 0.1\nDescribe the soil water table map, if any",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.soil_map.water_table') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"9.7. Continuously Varying Soil Depth\nIs Required: TRUE Type: BOOLEAN Cardinality: 1.1\nDoes the soil properties vary continuously with depth?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.soil_map.continuously_varying_soil_depth') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n",
"9.8. Soil Depth\nIs Required: FALSE Type: STRING Cardinality: 0.1\nDescribe the soil depth map",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.soil_map.soil_depth') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"10. Soil --> Snow Free Albedo\nTODO\n10.1. Prognostic\nIs Required: TRUE Type: BOOLEAN Cardinality: 1.1\nIs snow free albedo prognostic?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.snow_free_albedo.prognostic') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n",
"10.2. Functions\nIs Required: FALSE Type: ENUM Cardinality: 0.N\nIf prognostic, describe the dependancies on snow free albedo calculations",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.snow_free_albedo.functions') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"vegetation type\" \n# \"soil humidity\" \n# \"vegetation state\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"10.3. Direct Diffuse\nIs Required: FALSE Type: ENUM Cardinality: 0.1\nIf prognostic, describe the distinction between direct and diffuse albedo",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.snow_free_albedo.direct_diffuse') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"distinction between direct and diffuse albedo\" \n# \"no distinction between direct and diffuse albedo\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"10.4. Number Of Wavelength Bands\nIs Required: FALSE Type: INTEGER Cardinality: 0.1\nIf prognostic, enter the number of wavelength bands used",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.snow_free_albedo.number_of_wavelength_bands') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"11. Soil --> Hydrology\nKey properties of the land surface soil hydrology\n11.1. Description\nIs Required: TRUE Type: STRING Cardinality: 1.1\nGeneral description of the soil hydrological model",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.hydrology.description') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"11.2. Time Step\nIs Required: TRUE Type: INTEGER Cardinality: 1.1\nTime step of river soil hydrology in seconds",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.hydrology.time_step') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"11.3. Tiling\nIs Required: FALSE Type: STRING Cardinality: 0.1\nDescribe the soil hydrology tiling, if any.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.hydrology.tiling') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"11.4. Vertical Discretisation\nIs Required: TRUE Type: STRING Cardinality: 1.1\nDescribe the typical vertical discretisation",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.hydrology.vertical_discretisation') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"11.5. Number Of Ground Water Layers\nIs Required: TRUE Type: INTEGER Cardinality: 1.1\nThe number of soil layers that may contain water",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.hydrology.number_of_ground_water_layers') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"11.6. Lateral Connectivity\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nDescribe the lateral connectivity between tiles",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.hydrology.lateral_connectivity') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"perfect connectivity\" \n# \"Darcian flow\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"11.7. Method\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nThe hydrological dynamics scheme in the land surface model",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.hydrology.method') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Bucket\" \n# \"Force-restore\" \n# \"Choisnel\" \n# \"Explicit diffusion\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"12. Soil --> Hydrology --> Freezing\nTODO\n12.1. Number Of Ground Ice Layers\nIs Required: TRUE Type: INTEGER Cardinality: 1.1\nHow many soil layers may contain ground ice",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.hydrology.freezing.number_of_ground_ice_layers') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"12.2. Ice Storage Method\nIs Required: TRUE Type: STRING Cardinality: 1.1\nDescribe the method of ice storage",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.hydrology.freezing.ice_storage_method') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"12.3. Permafrost\nIs Required: TRUE Type: STRING Cardinality: 1.1\nDescribe the treatment of permafrost, if any, within the land surface scheme",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.hydrology.freezing.permafrost') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"13. Soil --> Hydrology --> Drainage\nTODO\n13.1. Description\nIs Required: TRUE Type: STRING Cardinality: 1.1\nGeneral describe how drainage is included in the land surface scheme",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.hydrology.drainage.description') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"13.2. Types\nIs Required: FALSE Type: ENUM Cardinality: 0.N\nDifferent types of runoff represented by the land surface model",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.hydrology.drainage.types') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Gravity drainage\" \n# \"Horton mechanism\" \n# \"topmodel-based\" \n# \"Dunne mechanism\" \n# \"Lateral subsurface flow\" \n# \"Baseflow from groundwater\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"14. Soil --> Heat Treatment\nTODO\n14.1. Description\nIs Required: TRUE Type: STRING Cardinality: 1.1\nGeneral description of how heat treatment properties are defined",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.heat_treatment.description') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"14.2. Time Step\nIs Required: TRUE Type: INTEGER Cardinality: 1.1\nTime step of soil heat scheme in seconds",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.heat_treatment.time_step') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"14.3. Tiling\nIs Required: FALSE Type: STRING Cardinality: 0.1\nDescribe the soil heat treatment tiling, if any.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.heat_treatment.tiling') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"14.4. Vertical Discretisation\nIs Required: TRUE Type: STRING Cardinality: 1.1\nDescribe the typical vertical discretisation",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.heat_treatment.vertical_discretisation') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"14.5. Heat Storage\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nSpecify the method of heat storage",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.heat_treatment.heat_storage') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Force-restore\" \n# \"Explicit diffusion\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"14.6. Processes\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nDescribe processes included in the treatment of soil heat",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.heat_treatment.processes') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"soil moisture freeze-thaw\" \n# \"coupling with snow temperature\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"15. Snow\nLand surface snow\n15.1. Overview\nIs Required: TRUE Type: STRING Cardinality: 1.1\nOverview of snow in the land surface",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.snow.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"15.2. Tiling\nIs Required: FALSE Type: STRING Cardinality: 0.1\nDescribe the snow tiling, if any.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.snow.tiling') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"15.3. Number Of Snow Layers\nIs Required: TRUE Type: INTEGER Cardinality: 1.1\nThe number of snow levels used in the land surface scheme/model",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.snow.number_of_snow_layers') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"15.4. Density\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nDescription of the treatment of snow density",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.snow.density') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"prognostic\" \n# \"constant\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"15.5. Water Equivalent\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nDescription of the treatment of the snow water equivalent",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.snow.water_equivalent') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"prognostic\" \n# \"diagnostic\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"15.6. Heat Content\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nDescription of the treatment of the heat content of snow",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.snow.heat_content') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"prognostic\" \n# \"diagnostic\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"15.7. Temperature\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nDescription of the treatment of snow temperature",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.snow.temperature') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"prognostic\" \n# \"diagnostic\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"15.8. Liquid Water Content\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nDescription of the treatment of snow liquid water",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.snow.liquid_water_content') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"prognostic\" \n# \"diagnostic\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"15.9. Snow Cover Fractions\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nSpecify cover fractions used in the surface snow scheme",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.snow.snow_cover_fractions') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"ground snow fraction\" \n# \"vegetation snow fraction\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"15.10. Processes\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nSnow related processes in the land surface scheme",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.snow.processes') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"snow interception\" \n# \"snow melting\" \n# \"snow freezing\" \n# \"blowing snow\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"15.11. Prognostic Variables\nIs Required: TRUE Type: STRING Cardinality: 1.1\nList the prognostic variables of the snow scheme",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.snow.prognostic_variables') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"16. Snow --> Snow Albedo\nTODO\n16.1. Type\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nDescribe the treatment of snow-covered land albedo",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.snow.snow_albedo.type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"prognostic\" \n# \"prescribed\" \n# \"constant\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"16.2. Functions\nIs Required: FALSE Type: ENUM Cardinality: 0.N\n*If prognostic, *",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.snow.snow_albedo.functions') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"vegetation type\" \n# \"snow age\" \n# \"snow density\" \n# \"snow grain type\" \n# \"aerosol deposition\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"17. Vegetation\nLand surface vegetation\n17.1. Overview\nIs Required: TRUE Type: STRING Cardinality: 1.1\nOverview of vegetation in the land surface",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.vegetation.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"17.2. Time Step\nIs Required: TRUE Type: INTEGER Cardinality: 1.1\nTime step of vegetation scheme in seconds",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.vegetation.time_step') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"17.3. Dynamic Vegetation\nIs Required: TRUE Type: BOOLEAN Cardinality: 1.1\nIs there dynamic evolution of vegetation?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.vegetation.dynamic_vegetation') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n",
"17.4. Tiling\nIs Required: FALSE Type: STRING Cardinality: 0.1\nDescribe the vegetation tiling, if any.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.vegetation.tiling') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"17.5. Vegetation Representation\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nVegetation classification used",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.vegetation.vegetation_representation') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"vegetation types\" \n# \"biome types\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"17.6. Vegetation Types\nIs Required: FALSE Type: ENUM Cardinality: 0.N\nList of vegetation types in the classification, if any",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.vegetation.vegetation_types') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"broadleaf tree\" \n# \"needleleaf tree\" \n# \"C3 grass\" \n# \"C4 grass\" \n# \"vegetated\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"17.7. Biome Types\nIs Required: FALSE Type: ENUM Cardinality: 0.N\nList of biome types in the classification, if any",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.vegetation.biome_types') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"evergreen needleleaf forest\" \n# \"evergreen broadleaf forest\" \n# \"deciduous needleleaf forest\" \n# \"deciduous broadleaf forest\" \n# \"mixed forest\" \n# \"woodland\" \n# \"wooded grassland\" \n# \"closed shrubland\" \n# \"opne shrubland\" \n# \"grassland\" \n# \"cropland\" \n# \"wetlands\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"17.8. Vegetation Time Variation\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nHow the vegetation fractions in each tile are varying with time",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.vegetation.vegetation_time_variation') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"fixed (not varying)\" \n# \"prescribed (varying from files)\" \n# \"dynamical (varying from simulation)\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"17.9. Vegetation Map\nIs Required: FALSE Type: STRING Cardinality: 0.1\nIf vegetation fractions are not dynamically updated , describe the vegetation map used (common name and reference, if possible)",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.vegetation.vegetation_map') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"17.10. Interception\nIs Required: TRUE Type: BOOLEAN Cardinality: 1.1\nIs vegetation interception of rainwater represented?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.vegetation.interception') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n",
"17.11. Phenology\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nTreatment of vegetation phenology",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.vegetation.phenology') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"prognostic\" \n# \"diagnostic (vegetation map)\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"17.12. Phenology Description\nIs Required: FALSE Type: STRING Cardinality: 0.1\nGeneral description of the treatment of vegetation phenology",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.vegetation.phenology_description') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"17.13. Leaf Area Index\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nTreatment of vegetation leaf area index",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.vegetation.leaf_area_index') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"prescribed\" \n# \"prognostic\" \n# \"diagnostic\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"17.14. Leaf Area Index Description\nIs Required: FALSE Type: STRING Cardinality: 0.1\nGeneral description of the treatment of leaf area index",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.vegetation.leaf_area_index_description') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"17.15. Biomass\nIs Required: TRUE Type: ENUM Cardinality: 1.1\n*Treatment of vegetation biomass *",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.vegetation.biomass') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"prognostic\" \n# \"diagnostic\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"17.16. Biomass Description\nIs Required: FALSE Type: STRING Cardinality: 0.1\nGeneral description of the treatment of vegetation biomass",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.vegetation.biomass_description') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"17.17. Biogeography\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nTreatment of vegetation biogeography",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.vegetation.biogeography') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"prognostic\" \n# \"diagnostic\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"17.18. Biogeography Description\nIs Required: FALSE Type: STRING Cardinality: 0.1\nGeneral description of the treatment of vegetation biogeography",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.vegetation.biogeography_description') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"17.19. Stomatal Resistance\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nSpecify what the vegetation stomatal resistance depends on",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.vegetation.stomatal_resistance') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"light\" \n# \"temperature\" \n# \"water availability\" \n# \"CO2\" \n# \"O3\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"17.20. Stomatal Resistance Description\nIs Required: FALSE Type: STRING Cardinality: 0.1\nGeneral description of the treatment of vegetation stomatal resistance",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.vegetation.stomatal_resistance_description') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"17.21. Prognostic Variables\nIs Required: TRUE Type: STRING Cardinality: 1.1\nList the prognostic variables of the vegetation scheme",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.vegetation.prognostic_variables') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"18. Energy Balance\nLand surface energy balance\n18.1. Overview\nIs Required: TRUE Type: STRING Cardinality: 1.1\nOverview of energy balance in land surface",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.energy_balance.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"18.2. Tiling\nIs Required: FALSE Type: STRING Cardinality: 0.1\nDescribe the energy balance tiling, if any.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.energy_balance.tiling') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"18.3. Number Of Surface Temperatures\nIs Required: TRUE Type: INTEGER Cardinality: 1.1\nThe maximum number of distinct surface temperatures in a grid cell (for example, each subgrid tile may have its own temperature)",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.energy_balance.number_of_surface_temperatures') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"18.4. Evaporation\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nSpecify the formulation method for land surface evaporation, from soil and vegetation",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.energy_balance.evaporation') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"alpha\" \n# \"beta\" \n# \"combined\" \n# \"Monteith potential evaporation\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"18.5. Processes\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nDescribe which processes are included in the energy balance scheme",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.energy_balance.processes') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"transpiration\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"19. Carbon Cycle\nLand surface carbon cycle\n19.1. Overview\nIs Required: TRUE Type: STRING Cardinality: 1.1\nOverview of carbon cycle in land surface",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.carbon_cycle.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"19.2. Tiling\nIs Required: FALSE Type: STRING Cardinality: 0.1\nDescribe the carbon cycle tiling, if any.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.carbon_cycle.tiling') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"19.3. Time Step\nIs Required: TRUE Type: INTEGER Cardinality: 1.1\nTime step of carbon cycle in seconds",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.carbon_cycle.time_step') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"19.4. Anthropogenic Carbon\nIs Required: FALSE Type: ENUM Cardinality: 0.N\nDescribe the treament of the anthropogenic carbon pool",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.carbon_cycle.anthropogenic_carbon') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"grand slam protocol\" \n# \"residence time\" \n# \"decay time\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"19.5. Prognostic Variables\nIs Required: TRUE Type: STRING Cardinality: 1.1\nList the prognostic variables of the carbon scheme",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.carbon_cycle.prognostic_variables') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"20. Carbon Cycle --> Vegetation\nTODO\n20.1. Number Of Carbon Pools\nIs Required: TRUE Type: INTEGER Cardinality: 1.1\nEnter the number of carbon pools used",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.carbon_cycle.vegetation.number_of_carbon_pools') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"20.2. Carbon Pools\nIs Required: FALSE Type: STRING Cardinality: 0.1\nList the carbon pools used",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.carbon_cycle.vegetation.carbon_pools') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"20.3. Forest Stand Dynamics\nIs Required: FALSE Type: STRING Cardinality: 0.1\nDescribe the treatment of forest stand dyanmics",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.carbon_cycle.vegetation.forest_stand_dynamics') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"21. Carbon Cycle --> Vegetation --> Photosynthesis\nTODO\n21.1. Method\nIs Required: FALSE Type: STRING Cardinality: 0.1\nDescribe the general method used for photosynthesis (e.g. type of photosynthesis, distinction between C3 and C4 grasses, Nitrogen depencence, etc.)",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.carbon_cycle.vegetation.photosynthesis.method') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"22. Carbon Cycle --> Vegetation --> Autotrophic Respiration\nTODO\n22.1. Maintainance Respiration\nIs Required: FALSE Type: STRING Cardinality: 0.1\nDescribe the general method used for maintainence respiration",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.carbon_cycle.vegetation.autotrophic_respiration.maintainance_respiration') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"22.2. Growth Respiration\nIs Required: FALSE Type: STRING Cardinality: 0.1\nDescribe the general method used for growth respiration",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.carbon_cycle.vegetation.autotrophic_respiration.growth_respiration') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"23. Carbon Cycle --> Vegetation --> Allocation\nTODO\n23.1. Method\nIs Required: TRUE Type: STRING Cardinality: 1.1\nDescribe the general principle behind the allocation scheme",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.carbon_cycle.vegetation.allocation.method') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"23.2. Allocation Bins\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nSpecify distinct carbon bins used in allocation",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.carbon_cycle.vegetation.allocation.allocation_bins') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"leaves + stems + roots\" \n# \"leaves + stems + roots (leafy + woody)\" \n# \"leaves + fine roots + coarse roots + stems\" \n# \"whole plant (no distinction)\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"23.3. Allocation Fractions\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nDescribe how the fractions of allocation are calculated",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.carbon_cycle.vegetation.allocation.allocation_fractions') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"fixed\" \n# \"function of vegetation type\" \n# \"function of plant allometry\" \n# \"explicitly calculated\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"24. Carbon Cycle --> Vegetation --> Phenology\nTODO\n24.1. Method\nIs Required: TRUE Type: STRING Cardinality: 1.1\nDescribe the general principle behind the phenology scheme",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.carbon_cycle.vegetation.phenology.method') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"25. Carbon Cycle --> Vegetation --> Mortality\nTODO\n25.1. Method\nIs Required: TRUE Type: STRING Cardinality: 1.1\nDescribe the general principle behind the mortality scheme",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.carbon_cycle.vegetation.mortality.method') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"26. Carbon Cycle --> Litter\nTODO\n26.1. Number Of Carbon Pools\nIs Required: TRUE Type: INTEGER Cardinality: 1.1\nEnter the number of carbon pools used",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.carbon_cycle.litter.number_of_carbon_pools') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"26.2. Carbon Pools\nIs Required: FALSE Type: STRING Cardinality: 0.1\nList the carbon pools used",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.carbon_cycle.litter.carbon_pools') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"26.3. Decomposition\nIs Required: FALSE Type: STRING Cardinality: 0.1\nList the decomposition methods used",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.carbon_cycle.litter.decomposition') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"26.4. Method\nIs Required: FALSE Type: STRING Cardinality: 0.1\nList the general method used",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.carbon_cycle.litter.method') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"27. Carbon Cycle --> Soil\nTODO\n27.1. Number Of Carbon Pools\nIs Required: TRUE Type: INTEGER Cardinality: 1.1\nEnter the number of carbon pools used",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.carbon_cycle.soil.number_of_carbon_pools') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"27.2. Carbon Pools\nIs Required: FALSE Type: STRING Cardinality: 0.1\nList the carbon pools used",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.carbon_cycle.soil.carbon_pools') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"27.3. Decomposition\nIs Required: FALSE Type: STRING Cardinality: 0.1\nList the decomposition methods used",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.carbon_cycle.soil.decomposition') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"27.4. Method\nIs Required: FALSE Type: STRING Cardinality: 0.1\nList the general method used",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.carbon_cycle.soil.method') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"28. Carbon Cycle --> Permafrost Carbon\nTODO\n28.1. Is Permafrost Included\nIs Required: TRUE Type: BOOLEAN Cardinality: 1.1\nIs permafrost included?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.carbon_cycle.permafrost_carbon.is_permafrost_included') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n",
"28.2. Emitted Greenhouse Gases\nIs Required: FALSE Type: STRING Cardinality: 0.1\nList the GHGs emitted",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.carbon_cycle.permafrost_carbon.emitted_greenhouse_gases') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"28.3. Decomposition\nIs Required: FALSE Type: STRING Cardinality: 0.1\nList the decomposition methods used",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.carbon_cycle.permafrost_carbon.decomposition') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"28.4. Impact On Soil Properties\nIs Required: FALSE Type: STRING Cardinality: 0.1\nDescribe the impact of permafrost on soil properties",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.carbon_cycle.permafrost_carbon.impact_on_soil_properties') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"29. Nitrogen Cycle\nLand surface nitrogen cycle\n29.1. Overview\nIs Required: TRUE Type: STRING Cardinality: 1.1\nOverview of the nitrogen cycle in the land surface",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.nitrogen_cycle.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"29.2. Tiling\nIs Required: FALSE Type: STRING Cardinality: 0.1\nDescribe the notrogen cycle tiling, if any.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.nitrogen_cycle.tiling') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"29.3. Time Step\nIs Required: TRUE Type: INTEGER Cardinality: 1.1\nTime step of nitrogen cycle in seconds",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.nitrogen_cycle.time_step') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"29.4. Prognostic Variables\nIs Required: TRUE Type: STRING Cardinality: 1.1\nList the prognostic variables of the nitrogen scheme",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.nitrogen_cycle.prognostic_variables') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"30. River Routing\nLand surface river routing\n30.1. Overview\nIs Required: TRUE Type: STRING Cardinality: 1.1\nOverview of river routing in the land surface",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.river_routing.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"30.2. Tiling\nIs Required: FALSE Type: STRING Cardinality: 0.1\nDescribe the river routing, if any.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.river_routing.tiling') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"30.3. Time Step\nIs Required: TRUE Type: INTEGER Cardinality: 1.1\nTime step of river routing scheme in seconds",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.river_routing.time_step') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"30.4. Grid Inherited From Land Surface\nIs Required: TRUE Type: BOOLEAN Cardinality: 1.1\nIs the grid inherited from land surface?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.river_routing.grid_inherited_from_land_surface') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n",
"30.5. Grid Description\nIs Required: FALSE Type: STRING Cardinality: 0.1\nGeneral description of grid, if not inherited from land surface",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.river_routing.grid_description') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"30.6. Number Of Reservoirs\nIs Required: TRUE Type: INTEGER Cardinality: 1.1\nEnter the number of reservoirs",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.river_routing.number_of_reservoirs') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"30.7. Water Re Evaporation\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nTODO",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.river_routing.water_re_evaporation') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"flood plains\" \n# \"irrigation\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"30.8. Coupled To Atmosphere\nIs Required: FALSE Type: BOOLEAN Cardinality: 0.1\nIs river routing coupled to the atmosphere model component?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.river_routing.coupled_to_atmosphere') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n",
"30.9. Coupled To Land\nIs Required: FALSE Type: STRING Cardinality: 0.1\nDescribe the coupling between land and rivers",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.river_routing.coupled_to_land') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"30.10. Quantities Exchanged With Atmosphere\nIs Required: FALSE Type: ENUM Cardinality: 0.N\nIf couple to atmosphere, which quantities are exchanged between river routing and the atmosphere model components?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.river_routing.quantities_exchanged_with_atmosphere') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"heat\" \n# \"water\" \n# \"tracers\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"30.11. Basin Flow Direction Map\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nWhat type of basin flow direction map is being used?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.river_routing.basin_flow_direction_map') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"present day\" \n# \"adapted for other periods\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"30.12. Flooding\nIs Required: FALSE Type: STRING Cardinality: 0.1\nDescribe the representation of flooding, if any",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.river_routing.flooding') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"30.13. Prognostic Variables\nIs Required: TRUE Type: STRING Cardinality: 1.1\nList the prognostic variables of the river routing",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.river_routing.prognostic_variables') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"31. River Routing --> Oceanic Discharge\nTODO\n31.1. Discharge Type\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nSpecify how rivers are discharged to the ocean",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.river_routing.oceanic_discharge.discharge_type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"direct (large rivers)\" \n# \"diffuse\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"31.2. Quantities Transported\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nQuantities that are exchanged from river-routing to the ocean model component",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.river_routing.oceanic_discharge.quantities_transported') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"heat\" \n# \"water\" \n# \"tracers\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"32. Lakes\nLand surface lakes\n32.1. Overview\nIs Required: TRUE Type: STRING Cardinality: 1.1\nOverview of lakes in the land surface",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.lakes.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"32.2. Coupling With Rivers\nIs Required: TRUE Type: BOOLEAN Cardinality: 1.1\nAre lakes coupled to the river routing model component?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.lakes.coupling_with_rivers') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n",
"32.3. Time Step\nIs Required: TRUE Type: INTEGER Cardinality: 1.1\nTime step of lake scheme in seconds",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.lakes.time_step') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"32.4. Quantities Exchanged With Rivers\nIs Required: FALSE Type: ENUM Cardinality: 0.N\nIf coupling with rivers, which quantities are exchanged between the lakes and rivers",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.lakes.quantities_exchanged_with_rivers') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"heat\" \n# \"water\" \n# \"tracers\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"32.5. Vertical Grid\nIs Required: FALSE Type: STRING Cardinality: 0.1\nDescribe the vertical grid of lakes",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.lakes.vertical_grid') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"32.6. Prognostic Variables\nIs Required: TRUE Type: STRING Cardinality: 1.1\nList the prognostic variables of the lake scheme",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.lakes.prognostic_variables') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"33. Lakes --> Method\nTODO\n33.1. Ice Treatment\nIs Required: TRUE Type: BOOLEAN Cardinality: 1.1\nIs lake ice included?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.lakes.method.ice_treatment') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n",
"33.2. Albedo\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nDescribe the treatment of lake albedo",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.lakes.method.albedo') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"prognostic\" \n# \"diagnostic\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"33.3. Dynamics\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nWhich dynamics of lakes are treated? horizontal, vertical, etc.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.lakes.method.dynamics') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"No lake dynamics\" \n# \"vertical\" \n# \"horizontal\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"33.4. Dynamic Lake Extent\nIs Required: TRUE Type: BOOLEAN Cardinality: 1.1\nIs a dynamic lake extent scheme included?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.lakes.method.dynamic_lake_extent') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n",
"33.5. Endorheic Basins\nIs Required: TRUE Type: BOOLEAN Cardinality: 1.1\nBasins not flowing to ocean included?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.lakes.method.endorheic_basins') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n",
"34. Lakes --> Wetlands\nTODO\n34.1. Description\nIs Required: FALSE Type: STRING Cardinality: 0.1\nDescribe the treatment of wetlands, if any",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.lakes.wetlands.description') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"©2017 ES-DOC"
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
jurgjn/relmapping | annot/notebooks/Fig2S3_import_Gu2012.ipynb | gpl-2.0 | [
"%run ~/relmapping/annot/notebooks/__init__.ipynb\ndef vp(fp): return os.path.join('annot/Fig2S3_tss/', fp) # \"verbose path\"",
"http://www.sciencedirect.com/science/article/pii/S0092867412014080\n\nTable S1. trans-Splice Sites, Transcription Start Sites, and csRNA Loci for Protein-Coding Genes and Transcription Start Sites for pri-miRNAs, Related to Figure 2. Analysis of C. elegans CapSeq and CIP-TAP, containing lists of trans-splice sites, transcription start sites, and sense and antisense csRNAs derived from protein coding genes. Also included is the list of the transcription start sites for pri-miRNAs.\nFor C. elegans analysis, reads were mapped to the genome (WormBase release WS215)",
"#!cd ~/relmapping/wget; wget -m --no-parent https://ars.els-cdn.com/content/image/1-s2.0-S0092867412014080-mmc1.xlsx\nfp_ = 'wget/ars.els-cdn.com/content/image/1-s2.0-S0092867412014080-mmc1_B._TS_sites_for_protein_genes.csv'\ndf_ = pd.read_csv(fp_, skiprows=11)\ndf_['assigned_to_an_annotation'] = df_['transcript'].map(lambda x: x == x)\nprint('%d records, ~20,000 not assigned to an annotation:' % (len(df_)))\nprint(df_['assigned_to_an_annotation'].value_counts())\n\ndf_.head()",
"Using a cutoff of one CapSeq read per 10 million total reads, and a requirement for a YR motif, our CapSeq data predicted approximately 64,000 candidate TS sites genome wide (Table S1B).",
"print(df_['transcript type'].value_counts())\nm_ = df_['transcript type'] == \"coding\"\ndf_ = df_.loc[m_].reset_index(drop=True)\nprint('%d records with annotated as \"coding\"' % (len(df_.query('transcript == transcript')),))\n\n# Raw (Gu et al., 2012) TSS sites (=many assigned to multiple transcripts)\ndf_gu = pd.DataFrame()\ndf_gu['chrom'] = 'chr' + df_['chromosome']\ndf_gu['start'] = df_['start']\ndf_gu['end'] = df_['start'] + 1\ndf_gu['name'] = df_['transcript']\ndf_gu['score'] = df_['reads']\ndf_gu['strand'] = df_['strand']\ndf_gu = df_gu.sort_values(['chrom', 'start', 'end', 'start']).reset_index(drop=True)\n\nfp_ = vp('Gu2012_tss.bed')\nwrite_gffbed(fp_,\n chrom = df_gu['chrom'],\n start = df_gu['start'],\n end = df_gu['end'],\n name = df_gu['name'],\n strand = df_gu['strand'],\n score = df_gu['score'],\n)\n!wc -l {fp_}\n\n# Collapse TSS annotations by chrom/start/end/strand (raw TSS assignments are to all \"compatible\" transcripts)\nfp_ = vp('Gu2012_tss_unique.bed')\ndf_gu.groupby(['chrom', 'start', 'end', 'strand']).agg({\n 'name': lambda l: os.path.commonprefix(list(l)).rstrip('.'),#lambda l: ','.join(sorted(set(l))),\n 'score': np.sum,\n})\\\n.reset_index().sort_values(['chrom', 'start', 'end', 'strand'])[['chrom', 'start', 'end', 'name', 'score', 'strand']]\\\n.to_csv(fp_, sep='\\t', index=False, header=False)\n!wc -l {fp_}\n\n# Cluster TSS annotations using single-linkage, strand-specific, using a distance cutoff of 50\ndf_gu_cluster50_ = BedTool.from_dataframe(df_gu).cluster(d=50, s=True).to_dataframe()\ndf_gu_cluster50_.columns = ('chrom', 'start', 'end', 'transcript_id', 'score', 'strand', 'cluster_id')\n\nfp_ = vp('Gu2012_tss_clustered.bed')\ndf_gu_cluster50 = df_gu_cluster50_.groupby('cluster_id').agg({\n 'chrom': lambda s: list(set(s))[0],\n 'start': np.min,\n 'end': np.max,\n 'transcript_id': lambda l: os.path.commonprefix(list(l)).rstrip('.'),#lambda l: ','.join(sorted(set(l))),\n 'score': np.sum,\n 'strand': lambda s: list(set(s))[0],\n})\\\n.sort_values(['chrom', 'start', 'end', 'strand']).reset_index(drop=True)\ndf_gu_cluster50.to_csv(fp_, sep='\\t', index=False, header=False)\n!wc -l {fp_}\n\n# Overlaps to TSS clusters\ndf_regl_ = regl_Apr27(flank_len=150)[['chrom', 'start', 'end', 'annot']]\n\ngv = yp.GenomicVenn2(\n BedTool.from_dataframe(df_regl_),\n BedTool.from_dataframe(df_gu_cluster50[yp.NAMES_BED3]),\n label_a='Accessible sites',\n label_b='(Gu et al., 2012)\\nTSS clusters',\n)\n\nplt.figure(figsize=(12,6)).subplots_adjust(wspace=0.5)\nplt.subplot(1,2,1)\ngv.plot()\n\nplt.subplot(1,2,2)\nannot_count_ = gv.df_a_with_b['name'].value_counts()[config['annot']]\nannot_count_.index = [\n 'coding_promoter',\n 'pseudogene_promoter',\n 'unknown_promoter',\n 'putative_enhancer',\n 'non-coding_RNA',\n '\\n\\nother_element'\n]\n#plt.title('Annotation of %d accessible sites that overlap a TSS from (Gu et al., 2012)' % (len(gv.df_a_with_b),))\nplt.pie(\n annot_count_.values,\n labels = ['%s (%d)' % (l, c) for l, c in annot_count_.iteritems()],\n colors=[yp.RED, yp.ORANGE, yp.YELLOW, yp.GREEN, '0.4', yp.BLUE],\n counterclock=False,\n startangle=70,\n autopct='%.1f%%',\n);\nplt.gca().set_aspect('equal')\n#plt.savefig('annot/Fig2S5_tss/Gu2012_annot.pdf', bbox_inches='tight', transparent=True)\nannot_count_\n\ndf_regl_ = regl_Apr27(flank_len=150)[['chrom', 'start', 'end', 'annot']]\n\ngv = yp.GenomicVenn2(\n BedTool.from_dataframe(df_regl_),\n BedTool.from_dataframe(df_gu_cluster50[yp.NAMES_BED3]),\n label_a='Accessible sites',\n label_b='(Gu et al., 2012)\\nTSS 
clusters',\n)\n\nplt.figure(figsize=(8,4)).subplots_adjust(wspace=0.2)\nplt.subplot(1,2,1)\nv = gv.plot(style='compact')\nv.get_patch_by_id('10').set_color(yp.RED)\nv.get_patch_by_id('01').set_color(yp.GREEN)\nv.get_patch_by_id('11').set_color(yp.YELLOW)\n\nplt.subplot(1,2,2)\nd_reduced_ = collections.OrderedDict([\n ('coding_promoter', 'coding_promoter, pseudogene_promoter'),\n ('pseudogene_promoter', 'coding_promoter, pseudogene_promoter'),\n ('unknown_promoter', 'unknown_promoter'),\n ('putative_enhancer', 'putative_enhancer'),\n ('non-coding_RNA', 'other_element, non-coding_RNA'),\n ('other_element', 'other_element, non-coding_RNA'),\n])\n\nd_colour_ = collections.OrderedDict([\n ('coding_promoter, pseudogene_promoter', yp.RED),\n ('unknown_promoter', yp.YELLOW),\n ('putative_enhancer', yp.GREEN),\n ('other_element, non-coding_RNA', yp.BLUE),\n])\n\ngv.df_a_with_b['name_reduced'] = [*map(lambda a: d_reduced_[a], gv.df_a_with_b['name'])]\nannot_count_ = gv.df_a_with_b['name_reduced'].value_counts()[d_colour_.keys()]\n\n#plt.title('Annotation of %d accessible sites that overlap a TSS from (Chen et al., 2013)' % (len(gv.df_a_with_b),))\n(patches, texts) = plt.pie(\n annot_count_.values,\n labels = yp.pct_(annot_count_.values),\n colors=d_colour_.values(),\n counterclock=False,\n startangle=45,\n);\nplt.gca().set_aspect('equal')\n#plt.savefig(vp('Gu2012_annot.pdf'), bbox_inches='tight', transparent=True)\nplt.savefig('annot_Apr27/Fig2S3D_Gu2012_annot.pdf', bbox_inches='tight', transparent=True)\n\n#fp_ = 'annot/Fig2S4_TSS/Gu2012_not_atac.bed'\n#gv.df_b_only.to_csv(fp_, header=False, sep='\\t', index=False)\n#!wc -l {fp_}"
] | [
"code",
"markdown",
"code",
"markdown",
"code"
] |
metpy/MetPy | v0.12/_downloads/0c4dbfdebeb6fcd2f5364a69f0c6d4a8/Skew-T_Layout.ipynb | bsd-3-clause | [
"%matplotlib inline",
"Skew-T with Complex Layout\nCombine a Skew-T and a hodograph using Matplotlib's GridSpec layout capability.",
"import matplotlib.gridspec as gridspec\nimport matplotlib.pyplot as plt\nimport pandas as pd\n\nimport metpy.calc as mpcalc\nfrom metpy.cbook import get_test_data\nfrom metpy.plots import add_metpy_logo, Hodograph, SkewT\nfrom metpy.units import units",
"Upper air data can be obtained using the siphon package, but for this example we will use\nsome of MetPy's sample data.",
"col_names = ['pressure', 'height', 'temperature', 'dewpoint', 'direction', 'speed']\n\ndf = pd.read_fwf(get_test_data('may4_sounding.txt', as_file_obj=False),\n skiprows=5, usecols=[0, 1, 2, 3, 6, 7], names=col_names)\n\n# Drop any rows with all NaN values for T, Td, winds\ndf = df.dropna(subset=('temperature', 'dewpoint', 'direction', 'speed'\n ), how='all').reset_index(drop=True)",
"We will pull the data out of the example dataset into individual variables and\nassign units.",
"p = df['pressure'].values * units.hPa\nT = df['temperature'].values * units.degC\nTd = df['dewpoint'].values * units.degC\nwind_speed = df['speed'].values * units.knots\nwind_dir = df['direction'].values * units.degrees\nu, v = mpcalc.wind_components(wind_speed, wind_dir)\n\n# Create a new figure. The dimensions here give a good aspect ratio\nfig = plt.figure(figsize=(9, 9))\nadd_metpy_logo(fig, 630, 80, size='large')\n\n# Grid for plots\ngs = gridspec.GridSpec(3, 3)\nskew = SkewT(fig, rotation=45, subplot=gs[:, :2])\n\n# Plot the data using normal plotting functions, in this case using\n# log scaling in Y, as dictated by the typical meteorological plot\nskew.plot(p, T, 'r')\nskew.plot(p, Td, 'g')\nskew.plot_barbs(p, u, v)\nskew.ax.set_ylim(1000, 100)\n\n# Add the relevant special lines\nskew.plot_dry_adiabats()\nskew.plot_moist_adiabats()\nskew.plot_mixing_lines()\n\n# Good bounds for aspect ratio\nskew.ax.set_xlim(-30, 40)\n\n# Create a hodograph\nax = fig.add_subplot(gs[0, -1])\nh = Hodograph(ax, component_range=60.)\nh.add_grid(increment=20)\nh.plot(u, v)\n\n# Show the plot\nplt.show()"
] | [
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
topix-hackademy/pandas-for-dummies | 01_SERIES/CSV-Reader.ipynb | mit | [
"Read Data From CSV\nMethod:\nread_csv",
"import pandas as pd\n\nasd = pd.read_csv(\"data/input.csv\")\nprint type(asd) \nasd.head()\n# This is a Dataframe because we have multiple columns!",
"To create a Series we need to set the column (using usecols) to use and set the parameter squeeze to True.",
"data = pd.read_csv(\"data/input.csv\", usecols=[\"name\"], squeeze=True)\nprint type(data)\ndata.head()\ndata.index",
"If the input file has only 1 column we don't need to provide the usecols argument.",
"data = pd.read_csv(\"data/input_with_one_column.csv\", squeeze=True)\nprint type(data)\n\n# HEAD\nprint data.head(2), \"\\n\"\n# TAIL\nprint data.tail()",
"On Series we can perform classic python operation using Built-In Functions!",
"list(data)\n\ndict(data)\n\nmax(data)\n\nmin(data)\n\ndir(data)\n\ntype(data)\n\nsorted(data)\n\ndata = pd.read_csv(\"data/input_with_two_column.csv\", index_col=\"name\", squeeze=True)\ndata.head()\n\n\ndata[[\"Alex\", \"asd\"]]\n\ndata[\"Alex\":\"Vale\"]"
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
Quadrocube/rep | howto/04-howto-folding.ipynb | apache-2.0 | [
"About\nThis notebook demonstrates stacking machine learning algorithm - folding, which physics use in their analysis.",
"%pylab inline",
"Loading data",
"import numpy, pandas\nfrom rep.utils import train_test_split\nfrom sklearn.metrics import roc_auc_score\n\nsig_data = pandas.read_csv('toy_datasets/toyMC_sig_mass.csv', sep='\\t')\nbck_data = pandas.read_csv('toy_datasets/toyMC_bck_mass.csv', sep='\\t')\n\nlabels = numpy.array([1] * len(sig_data) + [0] * len(bck_data))\ndata = pandas.concat([sig_data, bck_data])\n\ntrain_data, test_data, train_labels, test_labels = train_test_split(data, labels, train_size=0.7)",
"Training variables",
"variables = [\"FlightDistance\", \"FlightDistanceError\", \"IP\", \"VertexChi2\", \"pt\", \"p0_pt\", \"p1_pt\", \"p2_pt\", 'LifeTime', 'dira']\ndata = data[variables]",
"Folding strategy - stacking algorithm\nIt implements the same interface as all classifiers, but with some difference:\n\nall prediction methods have additional parameter \"vote_function\" (example folder.predict(X, vote_function=None)), which is used to combine all classifiers' predictions. By default \"mean\" is used as \"vote_function\"",
"from rep.estimators import SklearnClassifier\nfrom sklearn.ensemble import GradientBoostingClassifier",
"Define folding model",
"from rep.metaml import FoldingClassifier\n\nn_folds = 4\nfolder = FoldingClassifier(GradientBoostingClassifier(), n_folds=n_folds, features=variables)\nfolder.fit(train_data, train_labels)",
"Default prediction (predict i_th_ fold by i_th_ classifier)",
"folder.predict_proba(train_data)",
"Voting prediction (predict i-fold by all classifiers and take value, which is calculated by vote_function)",
"# definition of mean function, which combines all predictions\ndef mean_vote(x):\n return numpy.mean(x, axis=0)\n\nfolder.predict_proba(test_data, vote_function=mean_vote)",
"Comparison of folds\nAgain use ClassificationReport class to compare different results. For folding classifier this report uses only default prediction.\nReport training dataset",
"from rep.data.storage import LabeledDataStorage\nfrom rep.report import ClassificationReport\n# add folds_column to dataset to use mask\ntrain_data[\"FOLDS\"] = folder._get_folds_column(len(train_data))\nlds = LabeledDataStorage(train_data, train_labels)\n\nreport = ClassificationReport({'folding': folder}, lds)",
"Signal distribution for each fold\nUse mask parameter to plot distribution for the specific fold",
"for fold_num in range(n_folds):\n report.prediction_pdf(mask=\"FOLDS == %d\" % fold_num, labels_dict={1: 'sig fold %d' % fold_num}).plot()",
"Background distribution for each fold",
"for fold_num in range(n_folds):\n report.prediction_pdf(mask=\"FOLDS == %d\" % fold_num, labels_dict={0: 'bck fold %d' % fold_num}).plot()",
"ROCs (each fold used as test dataset)",
"for fold_num in range(n_folds):\n report.roc(mask=\"FOLDS == %d\" % fold_num).plot()",
"Report for test dataset\nNOTE: Here vote function is None, so default prediction is used",
"lds = LabeledDataStorage(test_data, test_labels)\n\nreport = ClassificationReport({'folding': folder}, lds)\n\nreport.prediction_pdf().plot(new_plot=True, figsize = (9, 4))\n\nreport.roc().plot(xlim=(0.5, 1))"
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
mne-tools/mne-tools.github.io | 0.13/_downloads/plot_objects_from_arrays.ipynb | bsd-3-clause | [
"%matplotlib inline",
"Creating MNE objects from data arrays\nIn this simple example, the creation of MNE objects from\nnumpy arrays is demonstrated. In the last example case, a\nNEO file format is used as a source for the data.",
"# Author: Jaakko Leppakangas <[email protected]>\n#\n# License: BSD (3-clause)\n\nimport numpy as np\nimport neo\n\nimport mne\n\nprint(__doc__)",
"Create arbitrary data",
"sfreq = 1000 # Sampling frequency\ntimes = np.arange(0, 10, 0.001) # Use 10000 samples (10s)\n\nsin = np.sin(times * 10) # Multiplied by 10 for shorter cycles\ncos = np.cos(times * 10)\nsinX2 = sin * 2\ncosX2 = cos * 2\n\n# Numpy array of size 4 X 10000.\ndata = np.array([sin, cos, sinX2, cosX2])\n\n# Definition of channel types and names.\nch_types = ['mag', 'mag', 'grad', 'grad']\nch_names = ['sin', 'cos', 'sinX2', 'cosX2']",
"Creation of info dictionary.",
"# It is also possible to use info from another raw object.\ninfo = mne.create_info(ch_names=ch_names, sfreq=sfreq, ch_types=ch_types)\n\nraw = mne.io.RawArray(data, info)\n\n# Scaling of the figure.\n# For actual EEG/MEG data different scaling factors should be used.\nscalings = {'mag': 2, 'grad': 2}\n\nraw.plot(n_channels=4, scalings=scalings, title='Data from arrays',\n show=True, block=True)\n\n# It is also possible to auto-compute scalings\nscalings = 'auto' # Could also pass a dictionary with some value == 'auto'\nraw.plot(n_channels=4, scalings=scalings, title='Auto-scaled Data from arrays',\n show=True, block=True)",
"EpochsArray",
"event_id = 1\nevents = np.array([[200, 0, event_id],\n [1200, 0, event_id],\n [2000, 0, event_id]]) # List of three arbitrary events\n\n# Here a data set of 700 ms epochs from 2 channels is\n# created from sin and cos data.\n# Any data in shape (n_epochs, n_channels, n_times) can be used.\nepochs_data = np.array([[sin[:700], cos[:700]],\n [sin[1000:1700], cos[1000:1700]],\n [sin[1800:2500], cos[1800:2500]]])\n\nch_names = ['sin', 'cos']\nch_types = ['mag', 'mag']\ninfo = mne.create_info(ch_names=ch_names, sfreq=sfreq, ch_types=ch_types)\n\nepochs = mne.EpochsArray(epochs_data, info=info, events=events,\n event_id={'arbitrary': 1})\n\npicks = mne.pick_types(info, meg=True, eeg=False, misc=False)\n\nepochs.plot(picks=picks, scalings='auto', show=True, block=True)",
"EvokedArray",
"nave = len(epochs_data) # Number of averaged epochs\nevoked_data = np.mean(epochs_data, axis=0)\n\nevokeds = mne.EvokedArray(evoked_data, info=info, tmin=-0.2,\n comment='Arbitrary', nave=nave)\nevokeds.plot(picks=picks, show=True, units={'mag': '-'},\n titles={'mag': 'sin and cos averaged'})",
"Extracting data from NEO file",
"# The example here uses the ExampleIO object for creating fake data.\n# For actual data and different file formats, consult the NEO documentation.\nreader = neo.io.ExampleIO('fakedata.nof')\nbl = reader.read(cascade=True, lazy=False)[0]\n\n# Get data from first (and only) segment\nseg = bl.segments[0]\ntitle = seg.file_origin\n\nch_names = list()\ndata = list()\nfor asig in seg.analogsignals:\n # Since the data does not contain channel names, channel indices are used.\n ch_names.append(str(asig.channel_index))\n asig = asig.rescale('V').magnitude\n data.append(asig)\n\nsfreq = int(seg.analogsignals[0].sampling_rate.magnitude)\n\n# By default, the channel types are assumed to be 'misc'.\ninfo = mne.create_info(ch_names=ch_names, sfreq=sfreq)\n\nraw = mne.io.RawArray(data, info)\nraw.plot(n_channels=4, scalings={'misc': 1}, title='Data from NEO',\n show=True, block=True, clipping='clamp')"
] | [
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
apagac/cfme_tests | notebooks/MultiProcessing.ipynb | gpl-2.0 | [
"Multiprocessing in Python 3\n### Threads vs Processes\n ### Thread/Process execution, timing\n ### Direct Thread/Process Instantiation\n ### Thread/Process Pools\n ### Iteration with complex function signatures\n ### Storing/Fetching data with Queues\nThreads vs Processes\n\nThread\nIs bound to processor that python process running on\n\nIs controlled by Global Interpreter Lock (GIL)\n\nSingle python bytecode executed at a time by any thread\n\n\n\nProcess\n\nUses multiple processors\nConcurrency between threads and processes (local and remote)\nIgnores GIL",
"from os import getpid, getppid\nfrom time import sleep\n\ndef printer(val, wait=0):\n sleep(wait)\n print('Pid: {}, PPid: {}, Value: {}'\n .format(getpid(), getppid(), val))\n\n\n",
"Process Instantiation\nLet's start with most basic example of spawning new process to run a function",
"from multiprocessing import Process\n\nprint('Starting demo...')\np = Process(target=printer, args=('hello demo',))\np.start()",
"Process timing\n\nUse printer's delay to see process timing\nTrack multiple process objects\nExecute code in main process while chile process is running\nUse Process.join() to wait for processes to finish",
"proc_list = []\nfor values in [('immediate', 0), ('delayed', 2), ('eternity', 5)]:\n p = Process(target=printer, args=values)\n proc_list.append(p)\n p.start() # start execution of printer\n\nprint('Not waiting for proccesses to finish...')\n \n[p.join() for p in proc_list]\n\nprint('After processes...')",
"Process Pool\n\nWorker processes instead of direct instantiation\nContext manager to handle starting/joining child processes\nPool.map() works like default python map(f, args) function\nPool.map() Does not unpack args",
"from multiprocessing.pool import Pool\n\nwith Pool(3) as pool:\n pool.map(printer, ['Its', ('A', 5), 'Race'])\n # each worker process executes one function",
"Process + args/kwargs iteration with starmap",
"with Pool(2) as pool:\n pool.starmap(printer, [('Its',), ('A', 2), ('Race',)])\n # one worker will execute 2 functions, one worker will execute the 'slow' function",
"Starmap is the bomb",
"def pretend_delete_method(provider, vm_name):\n print('Pretend delete: {} on {}. (Pid: {})'\n .format(vm_name, provider, getpid())) \n \n# Assuming we fetched a list of vm names on providers we want to cleanup...\nexample_provider_vm_lists = dict(\n vmware=['test_vm_1', 'test_vm_2'],\n rhv=['test_vm_3', 'test_vm_4'],\n osp=['test_vm_5', 'test_vm_6'],\n)\n\n# don't hate me for nested comprehension here - building tuples of provider+name\nfrom multiprocessing.pool import ThreadPool\n\n# Threadpool instead of process pool, same interface\nwith ThreadPool(6) as pool:\n pool.starmap(\n pretend_delete_method, \n [(key, vm) \n for key, vms \n in example_provider_vm_lists.items() \n for vm in vms]\n )",
"Locking\n\nsemaphore-type object that can be acquired and released\nWhen acquired, only thread that has the lock can run\nNecessary when using shared objects",
"# Printing is thread safe, but will sometimes print separate messages on the same line (above)\n# Use a lock around print\nfrom multiprocessing import Lock\n\nlock = Lock()\ndef safe_printing_method(provider, vm_name):\n with lock:\n print('Pretend delete: {} on {}. (Pid: {})'\n .format(vm_name, provider, getpid()))\n\nwith ThreadPool(6) as pool:\n pool.starmap(\n safe_printing_method, \n [(key, vm) for key, vms in example_provider_vm_lists.items() for vm in vms])",
"Queues\n\nStore data/objects in child thread/processes and retrieve in parent\n\nFIFO stack with put, get, and empty methods\n\n\nmultiprocessing.Queue\n\ncannot be pickled and thus can't be passed to Pool methods\ncan deadlock with improper join use\nmultiprocessing.Manager.Queue\nis proxy, can be pickled\ncan be shared between processes",
"from multiprocessing import Manager\nfrom random import randint\n\n# Create instance of manager\nmanager = Manager()\n\ndef multiple_output_method(provider, vm_name, fail_queue):\n # random success of called method\n if randint(0, 1):\n return True\n else:\n # Store our failure vm on the queue\n fail_queue.put(vm_name)\n return None\n\n# Create queue object to give to child processes\nqueue_for_failures = manager.Queue()\nwith Pool(2) as pool:\n results = pool.starmap(\n multiple_output_method, \n [(key, vm, queue_for_failures)\n for key, vms\n in example_provider_vm_lists.items()\n for vm in vms]\n )\n\nprint('Results are in: {}'.format(results))\n\nfailed_vms = []\n# get items from the queue while its not empty\nwhile not queue_for_failures.empty():\n failed_vms.append(queue_for_failures.get())\n \nprint('Failures are in: {}'.format(failed_vms))"
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |